Where do you see the <![CDATA[ ?

CDATA is part of an XML document: "a CDATA section is a piece of element content that is marked up to be interpreted literally, as textual data, not as marked-up content" (from Wikipedia).

If it's contained in a string property (is that what you mean by "field"?), then evidently that value was assigned to the string property.

But I suspect your question is missing some context...

I'm not sure if this applies to your case, but in the past we found that a very old database (20+ years), upgraded many times over the years, had bitmap indices that were "not compacted"; we gained a lot of space and, more importantly, a huge performance improvement running %SYS.Maint.Bitmap, documented here:

This utility is used to compact bitmap/bitslice indices. Over time in a volatile table (think lots of INSERTs and DELETEs) the storage for a bitmap index may become less efficient. To a lesser extent index value changes, i.e. UPDATES, can also degrade bitmap performance.
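As a minimal sketch of how to run it (class and namespace names are hypothetical, and the parameter order is from memory, so check the %SYS.Maint.Bitmap class reference for the exact signatures):

    // Compact the bitmap/bitslice indices of a single class:
    Do ##class(%SYS.Maint.Bitmap).OneClass("MyApp.MyTable", 0, 1)

    // Or run it for every class in a namespace:
    Do ##class(%SYS.Maint.Bitmap).Namespace("MYAPP", 0, 1)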

I've used ObjectScript to update linked tables projected as IRIS/Caché classes, like in your sample, for a very long time and it works.

As the error says, your issue is that some property/column cannot be set/modified; I'm pretty sure the same issue arises if you use SQL to update/insert the same column.

Without the table definition it's impossible to guess which field it is and why that column cannot be set.
Maybe some of the fields are part of a primary key that includes other fields that are not set?
Make sure that the table is properly linked; the link table wizard sometimes needs "guidance" to link tables properly, particularly in defining primary keys...

Anyway, if properly linked, you can definitely treat/manage/manipulate linked tables the same way as native IRIS/Caché classes.
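For example, a minimal sketch using the standard persistent-class API on a linked table (the class name, property, and ID here are hypothetical):

    // Open a row of the linked table by ID, change a column, save.
    Set row = ##class(MyApp.LinkedTable).%OpenId(42)
    Set row.Status = "DONE"
    Set sc = row.%Save()
    If $$$ISERR(sc) Write $System.Status.GetErrorText(sc),!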

Ciao Pietro,

As said, %DynamicAbstractObject has excellent performance and can easily handle very large JSON streams/files.
Depending on your system settings, for large JSON you may need to increase per-process memory; fortunately, you can adjust it in your code at runtime, so you can write robust code that does not depend on system configuration parameters.
Note that the default value of "Maximum Per-Process Memory" has changed over time, so a new installation and an upgraded installation may have different values.
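A minimal sketch of the runtime adjustment using the $ZSTORAGE special variable (the value is in KB; pick a limit appropriate for your payloads):

    // Raise the memory limit of the current process only; this
    // overrides the instance-wide "Maximum Per-Process Memory".
    Set $ZSTORAGE = 1048576  // 1 GB, expressed in KB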

IMHO the real question here is: on which side of the JSON processing is your code?

Are you generating the JSON or are you receiving the JSON from a third party?

If you are receiving the JSON, then I don't think there is much you can do about it; just load it and IRIS will handle it.
I'm pretty sure that any attempt to "split" the loading of the JSON stream/file will result in worse performance and more consumed resources.
To split a large JSON you need to parse it anyway...
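Loading is essentially a one-liner; a minimal sketch (the file path is hypothetical):

    // %FromJSON accepts a string or a stream; for large payloads
    // pass a stream so the raw JSON is never held in a local variable.
    Set stream = ##class(%Stream.FileCharacter).%New()
    Do stream.LinkToFile("/data/payload.json")
    Set obj = ##class(%DynamicAbstractObject).%FromJSON(stream)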

If you are generating the JSON, then depending on the project and specification constraints, you may split your payload into chunks; for example, in FHIR the server can choose to break a bundle up into "pages".

I'm not sure if your question is about loading the JSON file/stream into a %DynamicAbstractObject or about processing the large %DynamicAbstractObject once it has been imported from JSON.
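If it's the processing side, a minimal sketch of walking the imported object with %GetIterator (assuming obj is a %DynamicObject or %DynamicArray):

    // Iterate the top-level members without copying them.
    Set iter = obj.%GetIterator()
    While iter.%GetNext(.key, .value) {
        Write key, " = ", value, !
    }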

What's your problem and what's your goal?

An integrity check also treats it as if it were a single file. This is only going to be more of an issue as the max size increases.

There is no other option for the integrity check, because in a multi-volume database a global can span multiple volumes/files, and a global structure must be fully scanned to check its integrity.

If this is a concern, maybe splitting a global across multiple "small" (the concept of small may vary! 😁) databases using subscript-level mapping can be an option.
The integrity check is performed at the database/global level, not at the namespace level.
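A hedged sketch of defining such a mapping programmatically with Config.MapGlobals (run in %SYS; the namespace, global, subscript range, and database names are all hypothetical, and the exact subscript-range syntax is worth verifying in the class reference before applying):

    // Map ^BigData("A")..("M") of namespace MYAPP into its own database.
    New $Namespace
    Set $Namespace = "%SYS"
    Set props("Database") = "MYAPP-DATA1"
    Set sc = ##class(Config.MapGlobals).Create("MYAPP", "BigData(""A""):(""M"")", .props)
    If $$$ISERR(sc) Write $System.Status.GetErrorText(sc),!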