The confusion here is that you created 3 class mappings for a single global (not a good practice), and your understanding is that they're just "views"; however, all three of them are full classes/tables with the same properties and ownership of the data. With that in mind, a SQL DROP statement has the same effect for any of the three tables/classes (if DDL were allowed). As others have pointed out, you can accomplish what you're trying to do with DROP ... %NODELDATA or by removing the class definition (either from Studio or programmatically).
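To illustrate the %NODELDATA route, a minimal sketch (the table name is hypothetical, standing in for one of the three mappings over the shared global):

```objectscript
// Drop one table/class definition but keep the underlying global's data.
// %NODELDATA tells Caché not to delete the data when the table is dropped,
// so the other mappings over the same global keep working.
Set tResult = $SYSTEM.SQL.Execute("DROP TABLE Sample.PersonAlias %NODELDATA")
```

Without %NODELDATA, the DROP would delete the shared global's data out from under the other two mappings.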

If you wanted to declare views instead of tables, then you need to do so via the SQL CREATE VIEW statement.
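For example, a real view would look something like this (table, view, and column names are hypothetical):

```objectscript
// Create an actual SQL view that projects a subset of an existing table.
// Unlike a second class mapping, a view owns no data of its own.
Set tResult = $SYSTEM.SQL.Execute(
    "CREATE VIEW Sample.AdultPerson AS "_
    "SELECT Name, Age FROM Sample.Person WHERE Age >= 18")
```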

I like them in the following order:

1) Class Queries

2) Dynamic Queries

3) Embedded SQL

There was a time when embedded SQL was way faster than dynamic queries; however, I don't think the difference is that significant anymore. The readability provided by dynamic queries (especially for people new to Caché) outweighs the extra performance embedded SQL could provide. Of course, each situation is different, and maybe your application would benefit greatly from that extra performance.
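For reference, a minimal dynamic query with %SQL.Statement looks like this (the table and column names are hypothetical):

```objectscript
// Prepare and run a parameterized dynamic query.
Set tStatement = ##class(%SQL.Statement).%New()
Set tStatus = tStatement.%Prepare("SELECT Name, Age FROM Sample.Person WHERE Age > ?")
If $$$ISERR(tStatus) { Do $SYSTEM.Status.DisplayError(tStatus) Quit }

Set tResult = tStatement.%Execute(30)
While tResult.%Next() {
    Write tResult.%Get("Name"), " - ", tResult.%Get("Age"), !
}
```

The intent of the SQL is visible at a glance, which is the readability point above; the equivalent embedded SQL ties the statement to the routine's compiled code.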

One "drawback" of embedded SQL is that if it's used within routines, you need to [manually] compile those routines when pushing changes to a class/table that is part of the embedded SQL. This is true even if the new changes don't directly affect that routine or its embedded SQL. Classes are recompiled automatically depending on the flags you use when compiling the affected table; otherwise you need to [manually] compile them as well.

You can remove the Calculated keyword and use a combination of the SqlComputeOnChange and ReadOnly keywords.

What you accomplish:

- Removing the Calculated keyword makes the column persistent.

- Adding SqlComputeOnChange gives you control over when you want the value to be calculated (using %%INSERT or %%UPDATE) and based on which columns (don't add any if the calculation must happen for all inserts regardless of which fields are being inserted).

- Adding ReadOnly just provides an extra "safety" guarantee that the value can't be changed via SQL or object access.

 Property readonlyprop As %String [ ReadOnly, SqlComputeCode = { {*} = {ID}_(+$H) }, SqlComputed, SqlComputeOnChange = %%INSERT ];

I'm not sure there's a Caché-specific document for designs like this. There are multiple ways to separate data in any database (schemas, instances, namespaces, etc.). Caché provides extra features, such as namespace mappings, that make sharing simpler, but before you get to a Caché-specific implementation you need a high-level design of your application. Regardless of database and programming language: how should the application work? How do you plan to sell your product? How flexible do you want it to be? As Dmitry pointed out, are there any regulations/limitations around sharing data between customers? All this should be based on your business model and has nothing to do with the technology stack.

As far as I understand, even SQL operations are captured in the journals. But I'd agree any solution would have to account for scenarios where data is not journaled, since processes can disable journaling "at will". Journaling can also be disabled at the database level.

Going on the assumption that "everything" is captured in the journals, it should be possible to write a process that reads the journal file and converts its entries to another format.
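A rough sketch of that idea using the %SYS.Journal classes (run from the %SYS namespace; the file path is hypothetical, and the exact property names should be verified against your version's class reference):

```objectscript
// Walk a journal file record by record and emit the global changes,
// which could just as easily be written out as JSON, CSV, etc.
Set jrnFile = ##class(%SYS.Journal.File).%OpenId("/cachesys/mgr/journal/20230101.001")
Set rec = jrnFile.FirstRecord
While $IsObject(rec) {
    // Only SET/KILL records carry data changes; skip markers, commits, etc.
    If (rec.TypeName = "SET") || (rec.TypeName = "KILL") {
        Write rec.TypeName, " ", rec.GlobalNode, !
    }
    Set rec = rec.Next
}
```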

For Caché/IRIS, the journal files would be the closest option to CDC. There isn't any "generic" product out there that can read and publish the content of journal files for things like streaming or other uses. You can certainly write something to read and publish/stream the changes, but it'll be a custom solution.

Caché/IRIS does this internally for Caché-to-Caché sync/mirroring, so it is possible to follow the same logic for other uses. Maybe check with InterSystems whether they have had requests like this before. Either they already have a utility or they can help you build one.

Besides dynamically changing the MAXLEN, what else are you trying to accomplish/avoid? These values are used at compile time to generate the SQL catalog and become part of your contract (for clients calling the procedures). If you were to dynamically change the values, 1) you'd be breaking the contract (e.g., you could change a value to a more restrictive size), and 2) you would still need to compile the class holding the stored proc/query.

The only benefit I can think of for this approach right now is avoiding pushing code to prod (assuming that process is cumbersome); however, you still need the compile step. It would be better to define and modify the stored proc using DDL (maybe a faster change in prod, depending on your company's rules). Since this change is a database object change, it should be treated as code and thus (hopefully) kept in source control (for audit and replication purposes).
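For instance, a query-style stored procedure defined purely through DDL might look like the sketch below (the procedure, table, and parameter names are hypothetical, and the exact clause syntax should be checked against your version's SQL reference). Changing a parameter's declared size later is then a scripted DROP/CREATE instead of a class edit plus compile:

```objectscript
// Define a stored procedure via DDL instead of editing a class definition.
Set tResult = $SYSTEM.SQL.Execute(
    "CREATE PROCEDURE Sample.PersonsByAge(IN minAge INT) "_
    "RESULT SETS "_
    "BEGIN "_
    "SELECT Name, Age FROM Sample.Person WHERE Age >= :minAge ; "_
    "END")
```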

I agree with your "usefulness" statement. Making a class non-extendable breaks all recommended development practices, for example TDD: I won't be able to mock the class for testing purposes and will be forced to test against your class only. (Granted, Caché is not strongly typed, so in theory I have ways to bypass this limitation, but still.)

Another recommended principle is to favor composition over inheritance, so in theory I could do exactly what I wanted to do with your class (after you "lock it down") by making it part of another class without extending it. If a developer wants to get around your class (I don't know the reasons why), he/she can by other means, unless you plague your code with a bunch of type checks (effectively making Caché strongly typed).
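As a sketch of that composition approach (Vendor.LockedClass and its SomeMethod are hypothetical names for the locked-down class):

```objectscript
Class App.Wrapper Extends %RegisteredObject
{

/// Hold the sealed class as a member instead of subclassing it
Property Inner As Vendor.LockedClass;

/// Delegate to the vendor class, adding whatever behavior
/// the Final/non-extendable definition won't let us override
Method DoWork() As %String
{
    Quit ..Inner.SomeMethod()
}

}
```

Nothing about Final or deployed code prevents this kind of wrapping, which is the point: locking down inheritance doesn't really lock down usage.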

Even if a developer gets as far as extending your class, the fact that you can't overload methods in Caché gives you a level of restriction when combined with deployed code and Final: there's "no way" a developer can change your implementation, so your code will still call your own methods' implementation even on a subclass.

All in all, it seems you're going to extremes for no real benefit, but since we don't know the whole picture, maybe the effort is worth it.

Code placement depends on what you're trying to do and when: if you're processing the event from JavaScript (an AJAX call), then your code must be separate from OnPreHTTP(); if you're processing on the server (after a page submit), then your code should be in OnPreHTTP() or called from it. (This last part depends on your coding style: jam everything into a single method, or separate things into multiple smaller methods.)

In simple terms (as Robert pointed out), OnPreHTTP() needs to handle any processing the server must perform prior to "displaying"/"printing" the page in the browser.
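A minimal sketch of server-side pre-processing in a CSP page (the session key and login page are illustrative placeholders, not part of any standard API):

```objectscript
Class App.MyPage Extends %CSP.Page
{

/// Runs before any page content is written; returning 0 aborts the page.
ClassMethod OnPreHTTP() As %Boolean
{
    // Example: require a logged-in session before rendering the page
    If $Get(%session.Data("UserID")) = "" {
        Set %response.Redirect = "login.csp"
    }
    Quit 1
}

}
```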

For large files I use %XML.TextReader.ParseFile(). It allows me to pass a "concrete" global instead of using the Caché default (usually CACHETEMP, which is more limited than a concrete global). With this I'm able to process larger files. Granted, it's a little more work (code) since you have to traverse the results element by element, but it gets the job done.
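Basic usage looks like this (the file path is hypothetical; check your version's class reference for the ParseFile() argument that names the scratch global):

```objectscript
// Stream-parse a large XML file node by node with %XML.TextReader.
Set tSC = ##class(%XML.TextReader).ParseFile("/data/bigfile.xml", .reader)
If $$$ISERR(tSC) { Do $SYSTEM.Status.DisplayError(tSC) Quit }

While reader.Read() {
    If reader.NodeType = "element" {
        Write "element: ", reader.Name, !
    } ElseIf reader.NodeType = "chars" {
        Write "  value: ", reader.Value, !
    }
}
```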

There's certainly a performance gap; its significance depends on what you're trying to do, and for most jobs it'll be very insignificant. In general, SQL is a little faster than object access; however, object access provides other benefits in terms of code clarity and reusability. When interacting with Caché from other languages (e.g., Java/Spring), SQL will be preferred for lots of reasons, the main one being that SQL, as Robert Cemper said, is a well-supported and widely known language.