Assuming you have a sync mirror established, adding a new database to the mirror is as simple as:

  1. Create DB on Primary.
  2. Run SYS.Mirror:AddDatabase. It returns a %Status; check that it is OK with $$$ISOK(sc), which should evaluate to 1.
  3. Dismount database on Primary (using SYS.Database:DismountDatabase) OR Freeze IRIS (Backup.General:ExternalFreeze).
  4. Copy IRIS.DAT to Backup.
  5. Mount database on Primary (using SYS.Database:MountDatabase) OR Thaw IRIS (Backup.General:ExternalThaw).
  6. Mount database on Backup.
  7. Activate database on Backup (SYS.Mirror:ActivateMirroredDatabase).
  8. Catchup database on Backup (SYS.Mirror:CatchupDB).

Please note that some of these methods accept a database name, most accept a database directory, and others a database SFN, so check each method's signature before calling it (see the sketch below).
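A minimal ObjectScript sketch of steps 2-8 (run in the %SYS namespace; the directory value and the exact argument form each method expects — name vs. directory vs. SFN — are assumptions here, so verify the signatures on your version):

// assumed directory of the new mirrored database
set dir = "/data/db/NEWDB/"

// 2. Add the database to the mirror (on the Primary)
set sc = ##class(SYS.Mirror).AddDatabase(dir)
write $system.Status.IsOK(sc)   // same check as $$$ISOK(sc); should print 1

// 3./5. Either dismount/mount the database on the Primary...
set sc = ##class(SYS.Database).DismountDatabase(dir)
// ...copy IRIS.DAT to the Backup member at the OS level...
set sc = ##class(SYS.Database).MountDatabase(dir)

// ...or freeze/thaw the whole instance instead:
// set sc = ##class(Backup.General).ExternalFreeze()
// set sc = ##class(Backup.General).ExternalThaw()

// 6.-8. On the Backup member: mount, activate, catch up
set sc = ##class(SYS.Database).MountDatabase(dir)
set sc = ##class(SYS.Mirror).ActivateMirroredDatabase(dir)
set sc = ##class(SYS.Mirror).CatchupDB(dir)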

Eduard, is the rate at which one BS can process known, or is it variable based on the data unit to be processed?

It is variable based on the data unit to be processed. Data unit (file) size varies between 1 KB and 100 MB.

Similarly, is the rate of arrival known or possible to detect?

The IRIS BS pulls messages, so as soon as a BS job is done with a message, the next message is pulled from the external queue (AWS SQS).

Is the design to have all the processing of a data unit in the BS rather than passing it to a BP/BO?

Yes, it's a stateless app, so I need to process each message and report success/error immediately, since the container can be reprovisioned at any time.
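To illustrate the pull model described above, here is a rough sketch of such a Business Service. Only the Ens.BusinessService / Ens.InboundAdapter skeleton is standard; the PullFromSQS and ProcessDataUnit helpers are hypothetical stand-ins for the AWS SQS client and the actual processing logic:

Class Test.SQS.Service Extends Ens.BusinessService
{

/// Standard polling adapter; OnProcessInput runs on every CallInterval tick
Parameter ADAPTER = "Ens.InboundAdapter";

Method OnProcessInput(pInput As %RegisteredObject, pOutput As %RegisteredObject) As %Status
{
	// Pull the next message from the external queue (hypothetical helper)
	set msg = ..PullFromSQS()
	quit:msg="" $$$OK
	// Process the data unit right here in the BS and report the status immediately,
	// since the container can be reprovisioned at any time
	quit ..ProcessDataUnit(msg)
}

/// Hypothetical stub: real code would call the AWS SQS ReceiveMessage API
Method PullFromSQS() As %String
{
	quit ""
}

/// Hypothetical stub: parse/transform/store the data unit and return a %Status
Method ProcessDataUnit(msg As %String) As %Status
{
	quit $$$OK
}

}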

As process spawning is expensive, parallel queries are processed using the Work Queue Manager. It works by having a queue-managing process spawn worker processes at system startup. When a new parallel query needs to be executed, it is distributed to those workers.

That is why the $zparent value is not what you expected: it is the job id of the work queue managing process.

You can use that value instead, as it's relatively constant.

Before executing your main query, run this query to get the Work Queue Manager JobId:

Class Test.Parallel
{

Query Test() As %SQLQuery
{
SELECT Test.Parallel_Parent() UNION %PARALLEL
SELECT Test.Parallel_Parent()
}

/// Exposed to SQL as Test.Parallel_Parent(); returns the $zparent of the evaluating process
ClassMethod Parent() As %Integer [ CodeMode = expression, SqlProc ]
{
$zparent
}

/// do ##class(Test.Parallel).Try()
ClassMethod Try()
{
	set rs = ..TestFunc()
	do rs.%Next()
	write "Work Queue Manager Job: ", rs.%GetData(1)
}

}

Then store your data subscripted by the Work Queue Manager JobId, and all work queue workers can pick it up using $zparent.
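For example (a sketch, assuming the two methods below are added to the Test.Parallel class above; the ^Test.ParallelContext global name is made up for illustration):

/// Call before running the main query: stash per-request data under the WQM JobId
ClassMethod StoreContext(data As %String)
{
	set rs = ..TestFunc()
	do rs.%Next()
	set wqmJob = rs.%GetData(1)
	set ^Test.ParallelContext(wqmJob) = data
}

/// Callable from worker processes (e.g. as an SqlProc inside the parallel query):
/// for a worker, $zparent is the WQM JobId, so the same subscript resolves the data
ClassMethod LoadContext() As %String [ SqlProc ]
{
	quit $get(^Test.ParallelContext($zparent))
}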

Parameter values are static values* and are the same for all objects.

Property values are different for each object.

Small example:

Class Circle Extends %RegisteredObject
{

Parameter PI = 3.14;

Property Radius;

Method GetArea()
{
    quit ..#PI * ..Radius * ..Radius
}

ClassMethod Test()
{
    set circle = ..%New()
    set circle.Radius = 25
    write circle.GetArea()
}
}

* Parameter values can be calculated, but cannot be set by the user.

A database is a physical file containing globals.

A namespace is a logical combination of one or more databases.

By default, a namespace has two databases:

  • GLOBALS - default location for globals
  • ROUTINES - default location for code (classes/routines/includes)

In addition to that, a namespace can have any number of mappings. A mapping makes specified globals/code from a specified database visible in the namespace.

When you try to access a global, global mappings are searched first; if no mapping is found, the GLOBALS database is used.

When you try to access some code, package/routine mappings are searched first; if no mapping is found, the ROUTINES database is used.
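To see where a given global actually resolves, you can query the namespace configuration. A quick sketch (I believe %SYS.Namespace exposes this lookup, but check the class reference on your version; MyGlobal is a placeholder):

// Returns the dataset ("system^directory") where ^MyGlobal resolves in the current
// namespace: a mapped database if a global mapping matches, otherwise the
// namespace's default GLOBALS database.
write ##class(%SYS.Namespace).GetGlobalDest($namespace, "MyGlobal")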

To split data into a separate DB:

  1. Create the target DB.
  2. Make the source DB read-only (for example, by forcing users out and/or marking the database as read-only).
  3. Copy data/code from the source DB to the target DB (for globals, use ^GBLOCKCOPY; see the sketch below).
  4. Create a mapping from the target DB to your namespace.
  5. Confirm that the data is present.
  6. Delete the data from the source DB.
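A rough sketch of steps 1, 3, and 4 in ObjectScript (run in the %SYS namespace; the database names, directories, namespace, and the ^MyData global are placeholders, and since ^GBLOCKCOPY is interactive, a plain MERGE via an extended global reference is shown instead):

// 1. Create the target database file and its configuration entry
set targetDir = "/data/db/TARGET/"
set sc = ##class(SYS.Database).CreateDatabase(targetDir)
set props("Directory") = targetDir
set sc = ##class(Config.Databases).Create("TARGET", .props)

// 3. Copy one global from the source DB to the target DB
merge ^|"^^"_targetDir|MyData = ^|"^^/data/db/SOURCE/"|MyData

// 4. Map the global from the target DB into your namespace
set map("Database") = "TARGET"
set sc = ##class(Config.MapGlobals).Create("MYAPP", "MyData", .map)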