You have to distinguish between "journal" and "mirror-journal" files. The first ensure instance DB integrity (recovery from DB corruption) in case of a failure. The second ensure proper mirror failover and keep any async members up to date.

When LIVETC01 (as a backup) is caught up, it is a good source for copying the .DAT files to the LIVEDR.
It is also safe to delete its mirror-journals.
The steps you took to catch up the LIVEDR are correct. (I assume you ran "activate" & "catch up" on the LIVEDR after that.)

After the IRIS.DAT copy (of all mirrored DBs) from LIVETC01 to LIVEDR, and once both are caught up, it is safe to delete the mirror-journals up to the point of the copy on your primary LIVETC02.
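
Before deleting anything, you can double-check the member's type and status: in the terminal, in the %SYS namespace, run D ^MIRROR and look at the mirror status option, or programmatically (a sketch only; I am assuming the $SYSTEM.Mirror helper methods here, so verify the names in the %SYSTEM.Mirror class reference):

   W $SYSTEM.Mirror.GetMemberType(),!     ; member type, e.g. "Failover"
   W $SYSTEM.Mirror.GetMemberStatus(),!   ; member status, e.g. "Backup"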

Hello,

The IRIS Management Portal is a web application.
It connects to the IRIS database through a component called the "CSP Gateway" (which can be installed on various web servers: IIS, Apache, Nginx). This "CSP Gateway" enables you to develop web applications that interact with the DB directly through this component. You may run "AJAX-like" code on the server side and have all your web pages fully dynamic (generated on the server at run time).
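
For example, a minimal server-generated page (a sketch only; the class name is made up) would extend %CSP.Page and build its HTML at request time:

Class MyApp.HelloPage Extends %CSP.Page
{
ClassMethod OnPage() As %Status
{
   ; everything written here is generated on the server, per request
   W "<html><body>"
   W "<p>Server time is: ",$ZDT($H),"</p>"
   W "</body></html>"
   Q $$$OK
}
}

Such a class is then served through a web application path configured in the Management Portal (Security > Applications > Web Applications).
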
To work with the Management Portal remotely, you may take advantage of its security mechanism (defining users, roles, and services), of the ability to authenticate against entities external to IRIS (e.g., LDAP against Active Directory), and also of two-factor authentication, for better security.

Any external database or other tool that can do ODBC or JDBC can communicate with IRIS and get data out of it (for example, a JDBC client connects with a URL of the form jdbc:IRIS://host:1972/NAMESPACE).

Hello, as the documentation says, the call to %OnSaveFinally() is done as the last step of %Save(), after the data has been written to the database and the transaction has already been committed.

I did a test with a class:

Class USER.TestClass Extends %Persistent
{
   Property Name As %String;
   ClassMethod %OnSaveFinally(oref As %ObjectHandle, status As %Status)
   {
      S ^TestGlobal="This is a test "_$ZDT($H)
   }
}

Then I saved some new data into the class:

USER>s obj=##class(USER.TestClass).%New(), obj.Name="Name", sc=obj.%Save() w !,sc
1
USER>zw ^USER.TestClassD
^USER.TestClassD=1
^USER.TestClassD(1)=$lb("","Name")

and then checked the journal entries to see the behavior.
It shows that %OnSaveFinally() was called just after a successful save, right after the CT (close transaction) record:

1675920  2312 BT
1675936  2312 S  +\iris\mgr\user\ USER.TestClassD = 1
1675992  2312 ST +\iris\mgr\user\ USER.TestClassD(1) = $lb("","Name")       
1676052  2312 CT
1676068  2312 S  +\iris\mgr\user\ TestGlobal = "This is a test 07/15/202+

Adding a soft delete is a good idea, but then the indices will have to be changed as well to support it.
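
For example (a sketch with made-up names): a soft delete usually means a flag property, and the indices should then include that flag so that queries for active rows remain efficient:

Class USER.MyTable Extends %Persistent
{
   Property Name As %String;
   ; soft-delete flag: rows are marked instead of physically deleted
   Property IsDeleted As %Boolean [ InitialExpression = 0 ];
   ; the flag leads the index, so "active rows only" queries can use it
   Index ActiveName On (IsDeleted, Name);
}

Application queries then filter on IsDeleted = 0 instead of relying on rows being physically gone.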

If none of your 5 places in the code are being called, and records keep "disappearing", then it might be SQL that is run by a user or developer. I would recommend to:

- have a detailed audit on that table/class to see its deletions
- check all ODBC/JDBC users, to see if their DELETE permission can be removed
- possibly have code that scans the journal files to find that class/global, the PID, and the date/time stamp, and stores them in a separate table or global that can be examined later (a rough sketch follows this list)
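
For the journal-scan option, a minimal sketch using the %SYS.Journal classes (run in the %SYS namespace). The journal file path, the MyTableD global, and the ^DelAudit target are placeholders, and the record property names are as I recall them from the class reference, so please verify them there:

S jfile=##class(%SYS.Journal.File).%OpenId("/iris/mgr/journal/20240101.001")
S rec=jfile.FirstRecord
While $IsObject(rec) {
   ; keep only KILL records that touch the class's data global
   If (rec.TypeName="KILL")&&(rec.GlobalNode["MyTableD") {
      S ^DelAudit($I(^DelAudit))=$LB(rec.TimeStamp,rec.ProcessID,rec.GlobalNode)
   }
   S rec=rec.Next
}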

Hello,


The best way to do it is to use the dictionary to loop over the properties of the original class and create a new class which is identical, but with a different storage. The cloning is done by using %ConstructClone.
Usually the new backup class does not need to have methods, indices, or triggers, so those can be "cleaned" before saving it.

Get the original class object and clone it to create the destination:

S OrigCls=##class(%Dictionary.CompiledClass).%OpenId(Class)
S DestCls=OrigCls.%ConstructClone(1)

You should give the destination class a name and type:

S DestCls.Name="BCK."_Class , DestCls.Super="%Persistent"

Usually the destination class does not need to have anything other than the properties, so in case there are methods, triggers or indices that need to be removed from the destination class, you may do (note that the loops run backwards, because RemoveAt shifts the remaining items down):

F i=DestCls.Methods.Count():-1:1 D DestCls.Methods.RemoveAt(i)      ; clear methods/classmethods
F i=DestCls.Triggers.Count():-1:1 D DestCls.Triggers.RemoveAt(i)    ; clear triggers
F i=DestCls.Indices.Count():-1:1 D DestCls.Indices.RemoveAt(i)      ; clear indices

Setting the new class storage (global names are significant to 31 characters only, so long names are shortened by dropping their first package segment):

S StoreGlo=$E(OrigCls.Storages.GetAt(1).DataLocation,2,*)
S StoreBCK="^BCK."_$S($L(StoreGlo)>27:$P(StoreGlo,".",2,*),1:StoreGlo)

S DestCls.Storages.GetAt(1).DataLocation=StoreBCK
S DestCls.Storages.GetAt(1).IdLocation=StoreBCK
S DestCls.Storages.GetAt(1).IndexLocation=$E(StoreBCK,1,*-1)_"I"
S DestCls.Storages.GetAt(1).StreamLocation=$E(StoreBCK,1,*-1)_"S"
S DestCls.Storages.GetAt(1).DefaultData=$P(Class,".",*)_"DefaultData"

Then just save the DestCls:

S sc=DestCls.%Save()
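
If the backup class should be usable right away, you will probably also want to compile it, e.g.:

S sc=$SYSTEM.OBJ.Compile("BCK."_Class,"ck")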

Actually, I have always used $Zorder ($ZO), which was "invented" 30 years ago, before $Query (popular in MSM, DSM, etc.).
Another difference: $Order is a "vertical" way of looping through an array/global/PPG, while $Query (or $ZO) loops in a "horizontal" way (the same order in which ZWRITE prints it).

It has the same functionality and is very easy to use:
Set node = "^TestGlobal(""Not Configured"")" W !,node
^TestGlobal("Not Configured")
F  S node=$ZO(@node) Q:node=""  w !,node,"=",@node
^TestGlobal("Not Configured","Value 1")=value 1
^TestGlobal("Not Configured","Value 2")=value 2