In 2020.2 we added the $system.OBJ.GenerateEmbedded() function, which allows all universal cached queries to be compiled on a customer system even if all the routines/classes are deployed. So this function can be run after installation to prepare the SQL queries before you bring the system up.
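So right after installing the deployed code you could run something like the following (a sketch; the exact arguments the function accepts may vary by version, so check the %SYSTEM.OBJ class reference on your instance):

Do $SYSTEM.OBJ.GenerateEmbedded()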

The behavior you are seeing is due to the way chained '.' syntax handles null objects. For example, if you:

Set person=##class(Sample.Person).%New()

Write person.Name.AnythingYouLike

It will succeed and return "", but if you:

Set tmp=person.Name

Write tmp.AnythingYouLike

It will fail with an INVALID OREF error, as would 'Write (person.Name).AnythingYouLike'.

This behavior is inconsistent, so I will not defend it, but it is how the product works.
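If you do need the intermediate value in a variable, a defensive pattern is to test it with $IsObject before dotting into it (a sketch, not from the original answer; AnythingYouLike stands in for any property name):

Set tmp=person.Name
If $IsObject(tmp) { Write tmp.AnythingYouLike } Else { Write "" }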

It sounds like you mapped the data global but not the ID counter global, so the other namespace cannot see the current ID counter value and will therefore overwrite the data you already inserted.
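To confirm which global holds the counter, look at the class's storage definition: the <IdLocation> element names the ID counter global, and it can differ from <DataLocation>. A hypothetical fragment with invented global names:

<Storage name="Default">
<DataLocation>^App.PersonD</DataLocation>
<IdLocation>^App.PersonC</IdLocation>
</Storage>

Both globals would need to be mapped for the second namespace to see a consistent picture.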

It looks like this was already fixed in 2017.1 by MAK4670, which says:

Correct bug in %Collection.ListOfObj:FindOref where it could return "" when the oref is present

Streams support the idea of writing to them without changing the previous stream content, so you can either accept the newly changed stream value or discard it, depending on whether you call %Save. In order to support this, when you attach to an existing file and then append some data, you are actually making a copy of the original file and appending the data to that copy. When you %Save this, we remove the original file and rename the copy, so it becomes the version you see. However, as you can see, copying a file is a potentially expensive operation, especially as the file gets large, so using a stream here is probably not what you want.

As you just want to append the data and do not want file copies made, I would open the file directly in append mode (using either the 'Open' command directly or the %File class) and write the data you wish to append, avoiding the stream behavior.
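For example, using the Open command (a minimal sketch; the filename and data are hypothetical):

Set filename="c:\temp\data.log"
Open filename:("WAS"):10 ; W=write, A=append, S=stream format, 10-second timeout
If '$Test { Write "Unable to open file",! Quit }
Use filename
Write "data to append",!
Close filename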

Can you provide the code you are currently using so we have something definitive to base comments on? In the meantime, have you tried using $translate and reading the data in big chunks, e.g. 16k at a time?

While 'binarystream.AtEnd {
    Set sc=outputstream.Write($translate(binarystream.Read(16000),badchars,goodchars))
    If $$$ISERR(sc) { Quit }
}

Here binarystream is your binary input stream, outputstream is the output stream that receives the converted data, badchars is a string of the bad characters you need to convert, and goodchars is the string of characters you want them converted into ($translate maps them position for position).
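For example, to map a few control characters to spaces (a sketch; the specific character values are invented):

Set badchars=$Char(0)_$Char(1)_$Char(2)
Set goodchars="   " ; one replacement per bad character, matched by position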

The biggest issue I saw is that when you call %Save() you return the status code into the variable 'Status', which is good, but then this variable is completely ignored. So if you save an object that fails, for example, datatype validation, %Save will return an error in Status, but the caller will never know that the save failed or why.

In addition, %DeleteId does not return an oref; it returns a %Status code, so you need to check this as well and report any error to the caller.
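A typical pattern looks something like this (a sketch; the class name and variable names are hypothetical):

Set Status=person.%Save()
If $$$ISERR(Status) {
    Do $System.Status.DisplayError(Status)
    Quit Status
}
Set Status=##class(Sample.Person).%DeleteId(id)
If $$$ISERR(Status) {
    Quit Status ; pass the failure back to the caller
}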

You are basically correct. We do try to minimize branch points in the tree, so an array with one subscript, e.g. array("very long subscript"), has only a single element in it rather than one branch point per byte of the subscript. This reduces the cost to less than k, depending on the distribution of the keys.

The key bit of information here is the 'service unavailable' error being returned rather than, say, a 'not authorized' or 'not found' error. By default, when we are out of licenses we return the service unavailable error, so if anyone else sees this they should check license usage as a first step. If you get 'not authorized' errors, it is probably a security issue, so check the audit log, as this often shows the exact problem.
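A quick way to check current license usage from a terminal (assuming the %SYSTEM.License API available on your version):

Do $SYSTEM.License.ShowSummary()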