Yaron Munz · Jul 8, 2022

Actually, I was always using $ZORDER ($ZO), which was "invented" 30 years ago, before $QUERY existed (popular on MSM, DSM, etc.).
Another thing: $ORDER is a "vertical" way of looping through an array/global/PPG, while $QUERY (or $ZO) loops in a "horizontal" way (the same order in which ZWRITE displays it).

It has the same functionality and is very easy to use:
Set node = "^TestGlobal(""Not Configured"")" W !,node
^TestGlobal("Not Configured")
F  S node=$ZO(@node) Q:node=""  W !,node,"=",@node
^TestGlobal("Not Configured","Value 1")=value 1
^TestGlobal("Not Configured","Value 2")=value 2
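For comparison, the same traversal with $QUERY looks almost identical. A minimal sketch against the same hypothetical ^TestGlobal:

```objectscript
 ; start from the root node and walk it in ZWRITE order
 Set node="^TestGlobal(""Not Configured"")"
 For {
     Set node=$QUERY(@node)      ; next node in "horizontal" order
     Quit:node=""                ; "" means we walked past the subtree
     Write !,node,"=",@node
 }
```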

Yaron Munz · Jul 8, 2022

All answers are correct.

I personally prefer the "array of %String" approach, which is much more efficient and gives better performance (in case your table grows to a huge size of hundreds of GB).
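For context, a minimal sketch of what the "array of %String" approach looks like (the class and property names here are hypothetical):

```objectscript
/// Hypothetical example: the array collection is stored as subscripted
/// nodes of one global, which stays cheap to read even at very large sizes.
Class Demo.Settings Extends %Persistent
{
/// key -> value pairs, e.g. ..Values.SetAt("on","FeatureX")
Property Values As array Of %String;
}
```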

Yaron Munz · Jul 8, 2022

Usually the GREF is the total number of global references (per second). A given process can do a limited number of I/O operations per second (due to the CPU clock speed).
When there are bottlenecks, there are tools that can tell you which part of your system (or code) can be improved. Monitoring with SAM or other tools can give you some numbers to work with. There is also %SYS.MONLBL, which can help you improve your code.
Storage is also a consideration; sometimes a DB can be optimized to store data in a more compact way and save I/O (especially in the cloud, where disks are somewhat slower than on premises).
One easy improvement is to run the "heavy" parts of your system (e.g. reports, massive data manipulations, etc.) in parallel. This can be done with the "queue manager" or with the %PARALLEL keyword for SQL queries.
A more complex way to go is a vertical or horizontal scale of the system, or even sharding.
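As a small illustration of the %PARALLEL option, here is a sketch using %SQL.Statement (the table and column names are hypothetical; %PARALLEL only asks the optimizer to consider splitting the work, it does not guarantee it):

```objectscript
 ; hypothetical heavy query, hinted for parallel execution
 Set stmt=##class(%SQL.Statement).%New()
 Set sc=stmt.%Prepare("SELECT %PARALLEL Name FROM Demo.Person WHERE City = ?")
 If sc=1 {
     Set rs=stmt.%Execute("Boston")
     While rs.%Next() { Write !,rs.Name }
 }
```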

Yaron Munz · Jul 8, 2022

Hello,

We are using this on an async mirror member that we want to keep "behind" by X minutes (a parameter).

Here is some sample code:

    Set sc=1
    Try {
        If ##class(%SYSTEM.Mirror).GetMemberType()'="Failover" {
            Set diff=..GetDiff(debug)
            If diff < minutes {
                If $ZCVT(##class(SYS.Mirror).AsyncDejournalStatus(),"U")="RUNNING" {
                    Do ##class(SYS.Mirror).AsyncDejournalStop()
                }
            } Else {
                If $ZCVT(##class(SYS.Mirror).AsyncDejournalStatus(),"U")'="RUNNING" {
                    Do ##class(SYS.Mirror).AsyncDejournalStart()
                }
            }
        }
    } Catch e {
        ; any error trap you want
    }
    Quit sc

Yaron Munz · Feb 28, 2022

I also suggest that you try %PARALLEL; sometimes it helps.

We have found that for some very heavy queries, a good option in some cases is a stored procedure (SP) that gets the query request parameters and runs it in parallel "segments" using the built-in "queue manager".
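A minimal sketch of the "segments" idea with the queue manager ($SYSTEM.WorkMgr); RunSegment is assumed to be your own classmethod that handles one slice of the data:

```objectscript
 ; hypothetical: split the work into 4 segments and run them in worker jobs
 Set queue=$SYSTEM.WorkMgr.Initialize(,.sc)
 For seg=1:1:4 {
     Set sc=queue.Queue("##class(MyApp.Report).RunSegment",seg)
 }
 Set sc=queue.WaitForComplete()   ; block until every segment has finished
```

Each segment would typically cover a disjoint ID range, so the workers never touch the same rows.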

Yaron Munz · Feb 28, 2022

Hello Peter,

Welcome back to the community.

I also have almost 30 years (since 1991) of experience with InterSystems technology, starting with PDP-11, VAX & Alpha machines (anyone else miss the VMS OS like I do?) running DSM, then the first PCs with MSM, and then (1999) came Caché, which became IRIS... a very long, interesting and challenging way!

I remember customers in 1992 with 50-100 users running MSM on 286 machines, and it was fast(!). In those days, when disks were slow, small & expensive, developers used to define globals in a very compact & normalized way, so applications flew on that platform.

Today, when disks (as Robert said) are without limits, fast & relatively cheap, some "new" developers tend to neglect correct DB design, which becomes noticeable on big databases of a few TB.
You can get a significant improvement when those are optimized (this has been part of my job in the past few years), and this is not just the data structure, but also correctly optimized code.

In my opinion, the multi-level tree of blocks that holds a global is limited by the DB itself, which is limited by the OS file system. With global mapping and sharding (a new technology), even databases of petabytes could easily be handled, and very efficiently, as this technology always has been.

Yaron Munz · Jan 28, 2022

Here is a test for running all those solutions.
It looks like the first solution (with $TRANSLATE) is the fastest.
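The actual solutions being compared are in the parent thread; for reference, a hypothetical harness of the kind used here, timing one candidate with $ZHOROLOG:

```objectscript
 ; time one formatting approach over many iterations (sketch; iteration
 ; count and the $TRANSLATE template are assumptions, not from the thread)
 Set date=20201121090000,n=1000000
 Set t=$ZHOROLOG
 For i=1:1:n {
     ; each letter a-n maps to the digit in the same position of date
     Set new=$TRANSLATE("abcd-ef-gh ij:kl:mn","abcdefghijklmn",date)
 }
 Write !,"$TRANSLATE: ",$ZHOROLOG-t," sec"
```

Repeating the loop body with each candidate gives comparable timings on the same machine.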

Yaron Munz · Nov 15, 2021

Very good article, Laurel.

To me, as someone with many years of experience in software development, it is obvious.

However, I have also learned in my career that a good & precise design of your data model, using correct (and compact) storage of values, building the relationships correctly (in the OO approach), and using proper indices & fine-tuned queries (in the SQL approach) can make the difference between a "normal" and a "high-end" system.
All of those become a major concern when we are talking about very high-volume systems that need to process hundreds of millions of transactions per day.

Moving to a compact data model with reliable code will also save you a lot in your capacity planning, whether on cloud hosting or on premises.

Yaron Munz · Nov 15, 2021

Hello,

Will the "Built-in license for InterSystems SAM Manager" be affected as well?

The "Expiration Date" is 0, so I guess it is not, but I just want to be sure...

Yaron Munz · Sep 20, 2021

DB tuning has many aspects:
1. When the DB is an SQL one (persistent classes), then, as some said, a Tune Table might help, but this will help mostly with SQL queries, by optimizing the use of (correct) indices.
There are other SQL tools on the portal (query plan, SQL index usage, etc.) that might help you see whether the indices are correct.

2. Use the correct data types for your data (e.g. using the %Date data type to store dates in $HOROLOG format is much more efficient than a %String (YYYY-MM-DD) on large-scale databases, especially with indices, where cache memory cannot hold most of the DB, or on systems with a huge number of transactions).
BTW, this is true for both persistent classes (that you access with SQL) and "raw" globals that you access with COS (or another language).

3. In large-scale DBs, if some of your queries are "heavy", code optimization might also be a consideration (e.g. replacing SQL with COS that does a direct $ORDER on the index globals).
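A sketch of point 3, assuming a hypothetical index global laid out as ^Demo.PersonI("CityIdx",city,id) (the real subscript layout depends on your storage definition, so check it with ZWRITE first):

```objectscript
 ; walk the index subtree for one key value instead of running SQL
 Set city="Boston",id=""
 For {
     Set id=$ORDER(^Demo.PersonI("CityIdx",city,id))
     Quit:id=""
     Write !,"matching row id: ",id   ; open the object only if needed
 }
```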

Yaron Munz · Aug 24, 2021

quine ; a quine is a routine that prints its own content
 X "ZL "_$T(+0)_" ZP"
 Q

Yaron Munz · Aug 24, 2021

To avoid hardcoding the actual routine name, you may do:
 X "ZL "_$T(+0)_" ZP"
($T(+0) returns the name of the current routine.)

Yaron Munz · Aug 10, 2021

Mario,

The OS is totally irrelevant here!

The problem is with the IRIS %Library.File class, which uses $ZSEARCH to scan the directory and throws a <STORE> error because it is unable to fully scan the directory.

Yaron Munz · Jul 29, 2021

Usually you do not mix your "own" persistent (class) data with Interoperability (aka Ensemble) request & response messages (which are also persistent).

When you purge Ensemble data, the request/response messages are also purged!

A best practice is for each component to have its own request/response messages, e.g.
PROD.BO.ComponentName.Msg.Request & PROD.BO.ComponentName.Msg.Response
where BO = business operation (so you would have BS for business service and BP for business process)
and ComponentName is the name of your component.
(Sometimes a few components can share the same request & response messages, which is totally fine!)
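Following that naming convention, a pair of message classes might look like this (the property names are just placeholders):

```objectscript
/// hypothetical request message for one business operation
Class PROD.BO.ComponentName.Msg.Request Extends Ens.Request
{
Property PatientId As %String;
}

/// and its matching response
Class PROD.BO.ComponentName.Msg.Response Extends Ens.Response
{
Property Status As %String;
}
```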

Yaron Munz · Jul 27, 2021

Consider also that even when you use DISABLE^%SYS.NOJRN, globals will still be journaled if you are within a transaction (e.g. when doing %Save() on an object).

To prevent this you may use:
$system.OBJ.SetTransactionMode(0) - to disable transactions
$system.OBJ.SetTransactionMode(1) - to enable transactions again (later)
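Put together, a minimal sketch of the sequence (use with care; this trades durability for speed, and the surrounding object/variable names are assumptions):

```objectscript
 Do DISABLE^%SYS.NOJRN                  ; stop journaling for this process
 Do $system.OBJ.SetTransactionMode(0)   ; keep %Save() from opening a transaction
 Set sc=obj.%Save()                     ; now saved without journal records
 Do $system.OBJ.SetTransactionMode(1)   ; restore normal transaction behavior
 Do ENABLE^%SYS.NOJRN                   ; restore journaling
```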

Yaron Munz · Jul 1, 2021

Hello Subramaniyan,

If you can have downtime for the DB, then you could write a script that dismounts the DB, FTPs it to another server and mounts it again. This of course depends on the DB size and your network speed.

If downtime is not possible, I would recommend doing a "hot backup", then copying (or FTPing) it to the other server and restoring it. Another option is to use an "external backup", using "freeze" & "thaw" to ensure data integrity.

Further information:

https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cl…

Yaron Munz · Jul 1, 2021

There is a standalone Studio called "IRIS Studio" that can be installed independently (no need to install IRIS).

If you download the latest version, it is backward compatible with all versions of Caché/IRIS.

Yaron Munz · Jun 2, 2021

There are a few considerations here:

1. If you are using a failover (sync) mirror, you will need that Ens.* data on the backup so a switch can be done seamlessly, and the Ensemble production can start from the exact point where the failure occurred.

2. Mapping globals into a non-journaled database will still put them in the journal when the process is "in transaction", and an Ensemble production writes everything in-transaction.

3. Mapping Ens.* to another DB is a good idea if you have several disks and you want to improve I/O on your system (specifically within cloud VMs, which usually have somewhat slower disks than on-premises). Another option is to let the system purge Ensemble data regularly, and maybe keep fewer days of history.

4. Mirror journals are regularly purged after all sync/async mirror members have received them.

Yaron Munz · May 31, 2021

In my opinion, it is much better & faster to store binary files outside the database. I have an application with hundreds of thousands of images. To get faster access on a Windows OS, they are stored in YYMM folders (to prevent having too many files in one folder, which might slow access), while the file path & file name are of course stored inside the database for quick access (using indices). As those images are read many times, I did not want to "waste" the cache buffers on those reads, so storing them outside the database was the perfect solution.
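A sketch of the YYMM folder idea (the base directory is hypothetical):

```objectscript
 ; derive the YYMM folder from today's date and create it if missing
 Set ts=$ZDATE($HOROLOG,8)                    ; format 8 -> YYYYMMDD
 Set dir="C:\Images\"_$EXTRACT(ts,3,6)_"\"    ; e.g. ...\2011\ for Nov 2020
 If '##class(%Library.File).DirectoryExists(dir) {
     Do ##class(%Library.File).CreateDirectoryChain(dir)
 }
 ; the full path dir_filename is what gets stored (and indexed) in the DB
```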

Yaron Munz · Nov 30, 2020

Using $EXTRACT will be 17% faster(!)
Set date=20201121090000  
Set newDate=$E(date,1,4)_"-"_$E(date,5,6)_"-"_$E(date,7,8)_" "_$E(date,9,10)_":"_$E(date,11,12)_":"_$E(date,13,14)
 

Yaron Munz · Aug 28, 2020

Thank you Robert,

Excellent article.
In fact, we use this approach when we need to gain speed for queries that run on indexes of hundreds of millions of records and need to check only a few items, so we avoid reading the "base" class by using [ Data = ... ] on the index.

In addition, to gain even more speed, which is so essential in huge queries, we use the (good, old) $ORDER on the index global itself. This is much faster than normal SQL.
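A minimal sketch of an index that carries data (class, property and index names are hypothetical):

```objectscript
/// Storing Name inside the index means a query that filters on City and
/// reads only Name never has to touch the master (base) data map.
Class Demo.Person Extends %Persistent
{
Property Name As %String;
Property City As %String;
Index CityIdx On City [ Data = Name ];
}
```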

Yaron Munz · Aug 14, 2020

Hello,

I recommend one of two ways:

1. Copy the *.DAT file to the new server: for this you need to dismount the DB on the source before the copy (so you have downtime), mount it on the target, and do a merge between the two DBs/namespaces.

2. Do a (hot) backup of the DB that has the table = NO downtime(!), then copy the *.BCK file to the other server, restore it to a NEW DB, and merge.
This is slower, but downtime is 0.
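The merge step can be done with an extended global reference; a sketch, where the global name and the restored DB's directory are placeholders for your own:

```objectscript
 ; pull the data global of the table from the restored DB into the
 ; current namespace (repeat for the index global if needed)
 Merge ^MyTableD = ^["^^/data/restored/"]MyTableD
```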

Yaron Munz · Aug 14, 2020

Analytics (aka DeepSee) has it built in.

Also... a great advantage for InterSystems itself (to earn more from license upgrades).

Yaron Munz · Aug 14, 2020

Agree.

It would be nice to have a connector for ASB (Azure Service Bus).

Yaron Munz · Oct 23, 2019

The last version of Caché/Ensemble that supports OpenVMS is 2017.1. IRIS does not support it.

Yaron Munz · Oct 21, 2019

Hello,

I have done many migrations from DSM to Caché in the past, so I can share some of that knowledge here.

Basically there are two phases in the process:
1. Migrating the DB itself (i.e. globals). For this you can use a utility called %DSMCVT, which reads the globals from a DSM DB into a Caché/IRIS DB. Sometimes a 7/8-bit -> Unicode conversion is also part of the process, and appropriate code for that should be written.
2. Migrating the code, which, as mentioned before, is the complex part of the process, as you need to rewrite/modify the parts of your code that deal with OS-specific issues:
a. File system - all device code is different (i.e. OPEN/USE/CLOSE commands) when working with files/directories.
b. Calls to VMS-specific variables/functions need to be converted to "standard" code.
c. Some systems rely on VMS "batch queues" for scheduling jobs or for printing reports. This should be addressed as well.
d. Some systems rely on VMS FTP functionality. In this case you need to implement the same functionality inside Caché/IRIS.
e. Tools, like "screen generator" tools that use non-standard CHUI ANSI codes for on-screen attributes, might need to be "adapted" to standard ANSI codes.

As mentioned before, the time/effort for such a migration highly depends on the nature of your application and the way it was designed. For example, if you have one central function to print a report (let's say it is in a global), then you only need to modify that one function to have all your reports working on the new system. This also applies to "central" functions for reading/writing files, date calculations, etc.