Adding a soft delete is a good idea, but then indices will have to be changed as well to support that.
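For example (just a sketch, the class and property names here are made up), a soft delete is usually a flag on the class, and the indices behind your "active records only" queries would then need to include that flag:

Class MyApp.Customer Extends %Persistent
{

Property Name As %String;

/// Soft-delete flag: records are marked as deleted instead of being removed
Property Deleted As %Boolean [ InitialExpression = 0 ];

/// Indices used by "active only" queries may need the flag added
Index ActiveName On (Deleted, Name);

}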

If none of your 5 places in the code are called, and records keep "disappearing", then it might be SQL that is run by a user or developer. I would recommend to:

- have a detailed audit on that table/class to see its deletions
- check all ODBC/JDBC users, to see if DELETE permissions can be removed
- possibly have some code that scans the journal files to find that class/global, pid and date-time stamp, and stores them in a separate table or global that can be examined later (see the sketch after this list)
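A rough sketch of such a scan with the %SYS.Journal classes (the global name and the scratch global ^AuditKill are only examples, and the record properties used here should be verified against the class reference of your version):

; scan one journal file for KILLs on ^MyApp.CustomerD (example global)
Set jrn=##class(%SYS.Journal.File).%OpenId(jrnFile)   ; jrnFile = full path of a journal file
Set rec=jrn.FirstRecord
While $IsObject(rec) {
    If (rec.TypeName="KILL") && (rec.GlobalNode["MyApp.CustomerD") {
        ; keep the killed node, its time stamp and the journal address for later examination
        Set ^AuditKill($I(^AuditKill))=$LB(rec.GlobalNode,rec.TimeStamp,rec.Address)
    }
    Set rec=rec.Next
}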

Hello,


The best way to do it is to use the dictionary to take the definition of the original class and create a new class which is identical, but with a different storage. The cloning is done with %ConstructClone.
Usually the new backup class does not need to have methods, indices or triggers, so those can be "cleaned" before saving it.

Open the original class and clone it into the destination class object:

S OrigCls=##class(%Dictionary.CompiledClass).%OpenId(Class)
S DestCls=OrigCls.%ConstructClone(1)

You should give the destination class a name and type:

S DestCls.Name="BCK."_Class,DestCls.Super="%Persistent"

Usually the destination class does not need anything other than the properties, so if there are methods, triggers or indices that need to be removed from the destination class, you may do:

F i=DestCls.Methods.Count():-1:1 D DestCls.Methods.RemoveAt(i)    ; clear methods/classmethods (loop backwards so RemoveAt does not skip items)
F i=DestCls.Triggers.Count():-1:1 D DestCls.Triggers.RemoveAt(i)   ; clear triggers
F i=DestCls.Indices.Count():-1:1 D DestCls.Indices.RemoveAt(i)     ; clear indices

Setting the new class storage (strip the leading "^" from the original data global, and keep the new global name within the 31-character limit):

S StoreGlo=$E(OrigCls.Storages.GetAt(1).DataLocation,2,*)                ; original data global without the leading "^"
S StoreBCK="^BCK."_$S($L(StoreGlo)>27:$P(StoreGlo,".",2,*),1:StoreGlo)   ; "BCK."_name must stay within the 31-character global name limit

S DestCls.Storages.GetAt(1).DataLocation=StoreBCK
S DestCls.Storages.GetAt(1).IdLocation=StoreBCK
S DestCls.Storages.GetAt(1).IndexLocation=$E(StoreBCK,1,*-1)_"I"
S DestCls.Storages.GetAt(1).StreamLocation=$E(StoreBCK,1,*-1)_"S"
S DestCls.Storages.GetAt(1).DefaultData=$P(Class,".",*)_"DefaultData"

Then just save DestCls:

S sc=DestCls.%Save()
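Depending on how you intend to fill the backup class (SQL or objects), you will probably also want to compile it after saving, e.g.:

S sc=$SYSTEM.OBJ.Compile("BCK."_Class,"ck")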

Actually I was always using $Zorder ($ZO), which was "invented" 30 years ago, before $Query (popular in MSM, DSM etc.).
Another thing is that $Order is a "vertical" way of looping through an array/global/PVC, while $Query (or $ZO) is meant to loop in a "horizontal" way (the same order in which ZWRITE shows it).

It has the same functionality and is very easy to use:
Set node = "^TestGlobal(""Not Configured"")" W !,node
^TestGlobal("Not Configured")
F  S node=$ZO(@node) Q:node=""  w !,node,"=",@node
^TestGlobal("Not Configured","Value 1")=value 1
^TestGlobal("Not Configured","Value 2")=value 2

Usually GREFs are the total number of global references (per second). A single process can do only a limited number of such operations per second (this is tied to the CPU clock speed).
When there are bottlenecks, there are tools that can tell you which part of your system (or code) can be improved. Monitoring with SAM or other tools can give you some numbers to work with, and the line-by-line monitor (^%SYS.MONLBL) can help you improve your code.
Storage is also a consideration: sometimes a DB can be optimized to store data in a more compact way and save I/O (especially in the cloud, where disks are often slower than what you have on premises).
One easy improvement is to run the "heavy" parts of your system (e.g. reports, massive data manipulations etc.) in parallel. This can be done with the work queue manager or with the %PARALLEL keyword for SQL queries, as sketched below.
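A minimal sketch with the work queue manager (the class, method and region names are made up; each queued classmethod should return a %Status):

; build one "heavy" report per region in parallel
Set queue=$SYSTEM.WorkMgr.%New()
For region="North","South","East","West" {
    Set sc=queue.Queue("##class(MyApp.Report).BuildRegion",region)
}
Set sc=queue.WaitForComplete()   ; wait for all queued work to finish

For SQL, the %PARALLEL keyword goes in the FROM clause, e.g. SELECT ... FROM %PARALLEL MyApp.Transactions WHERE ...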
A more complex way to go is to scale the system vertically or horizontally, or even use sharding.

Hello,

We are using this on an async mirror member that we want to keep "behind" by X minutes (a parameter).

Here is some sample code:

s sc=1
    try {
        ; only relevant on a non-failover (i.e. async) member
        I ##class(%SYSTEM.Mirror).GetMemberType()'="Failover" {
            ; ..GetDiff() (not shown here) returns how many minutes this member is behind
            S diff=..GetDiff(debug)
            I diff<minutes {
                ; not far enough behind yet - pause dejournaling so the lag can grow
                I $zcvt(##class(SYS.Mirror).AsyncDejournalStatus(),"U")="RUNNING" {
                    D ##class(SYS.Mirror).AsyncDejournalStop()
                }
            } else {
                ; far enough behind - resume dejournaling to catch up
                I $zcvt(##class(SYS.Mirror).AsyncDejournalStatus(),"U")'="RUNNING" {
                    D ##class(SYS.Mirror).AsyncDejournalStart()
                }
            }
        }
    } catch e {
        ; any error trap you want
    }
    Quit sc

Hello Peter,

Welcome back to the community.

I also have almost 30 years (since 1991) of experience with InterSystems technology, starting with PDP11, VAX & Alpha machines (anyone else miss the VMS OS like I do?) with DSM, then the first PCs with MSM, and then (1999) came Caché, which became IRIS... a very long, interesting and challenging way!

I remember, back in 1992, customers with 50-100 users running MSM on 286 machines, and it was fast (!). In those days, when disks were slow, small & expensive, developers used to define their globals in a very compact & normalized way, so applications flew on that platform.

Today, when disks (as Robert said) are practically without limits, fast & relatively cheap, some "new" developers tend to neglect proper DB design, which becomes noticeable on big databases of a few TBs.
You can get a significant improvement when these are optimized (this has been part of my job in the past few years), and this is not just about the data structure, but also about writing optimized code.

In my opinion, the multi-level tree of blocks that holds a global is limited by the DB itself, which is limited by the OS file system. With global mapping and sharding (a newer technology), even a DB of petabytes could easily be handled, and remain very efficient, as this technology always has been.

Very good article, Laurel.

To me, as someone with many years of experience in software development, it is obvious.

However, I also learned in my career that a good & precise design of your data model, using correct (and compact) storage of values, building the relationships correctly (in the case of an OO approach), and using proper indices & fine-tuning queries (in the case of an SQL approach) can make a huge difference between a "normal" and a "high-end" system.
All of those become a major concern when we are talking about very "high volume" systems that need to process hundreds of millions of transactions per day.

Going to "compact" data model with a reliable code will save you also a lot in your capacity planning both in a cloud hosting or on premise.

DB tuning has many aspects:
1. When the DB is a SQL one (persistent classes) then, as some said, a "Tune Table" might help, but this will mostly help SQL queries, by optimizing the use of (correct) indices.
There are other SQL tools in the portal (query plan, SQL index usage etc.) that might help to see if the indices are correct.

2. Use the correct data types for your data (e.g. using the %Date data type to store dates in $Horolog format is much more efficient than a %String (YYYY-MM-DD) on large-scale databases, especially with indices, where cache memory cannot hold most of the DB, or on systems with a huge number of transactions).
BTW, this is true for both persistent classes (that you access with SQL) and "raw" globals that you access with COS (or another language).

3. In large-scale DBs, if some of your queries are "heavy", code optimization might also be a consideration (e.g. replacing SQL with COS that does a direct $Order on the index globals; see the sketch below).
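A rough sketch of points 2 and 3 (the class, property and index names are made up, and the index global layout shown is the usual default one, so check the actual storage definition of your class):

Class MyApp.Visit Extends %Persistent
{

/// %Date stores the date as a $Horolog integer - smaller and cheaper to index than a "YYYY-MM-DD" string
Property VisitDate As %Date;

Property Patient As %String;

Index VisitDateIdx On VisitDate;

}

; instead of a "heavy" SQL range query, loop directly on the index global
; (the default layout is usually ^MyApp.VisitI("VisitDateIdx",<value>,<id>)="")
Set from=$ZDateH("2024-01-01",3),to=$ZDateH("2024-01-31",3)
Set date=from-1
For {
    Set date=$Order(^MyApp.VisitI("VisitDateIdx",date))
    Quit:date=""
    Quit:date>to
    Set id=""
    For {
        Set id=$Order(^MyApp.VisitI("VisitDateIdx",date,id))
        Quit:id=""
        ; ... process the record with this id ...
    }
}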

Usually you do not mix your "own" persistent (class) data with Interoperability (aka Ensemble) request & response messages (which are also persistent).

When you purge Ensemble data, the request/response messages are also purged!

A best practice is for each component to have its own request/response messages, e.g.
PROD.BO.ComponentName.Msg.Request & PROD.BO.ComponentName.Msg.Response
where BO = business operation (so you could have BS for business service and BP for business process)
and ComponentName is the name of your components.
(sometimes a few components can share the same request & response messages, which is totally fine!)
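For example, a minimal message class pair following that naming convention (the properties are just placeholders for whatever your component actually needs):

Class PROD.BO.ComponentName.Msg.Request Extends Ens.Request
{

Property PatientId As %String;

}

Class PROD.BO.ComponentName.Msg.Response Extends Ens.Response
{

Property Status As %String;

}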