go to post Yaron Munz · Jul 8, 2022 All answers are correct. I personally prefer the "array of %String" approach, which is much more efficient and gives better performance (in case your table grows to a huge size of hundreds of GBs).
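A minimal sketch of what the "array of %String" approach can look like as a class property (the class and property names here are hypothetical, since the original question is not shown):

 Class Demo.Order Extends %Persistent
 {

 /// "array of %String" storage: each element is stored as its own
 /// subscripted node under the object's data global
 Property Items As array Of %String;

 }

Usage is then simply Do order.Items.SetAt("some value","key") followed by %Save().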
go to post Yaron Munz · Jul 8, 2022 Usually the GREF is the total number of global references (per second). A given process can do a limited number of I/O operations per second (this is due to the CPU clock speed). When there are bottlenecks, there are some tools that can tell you which part of your system (or code) can be improved. Monitoring with SAM or other tools can give you some numbers to work with; there is also %SYS.MONLBL, which can help you improve your code. Storage is also a consideration: sometimes a DB can be optimized to store data in a more compact way and save I/O (especially in the cloud, where disks are somewhat slower than what you have on premises). One easy improvement is to run some "heavy" parts of your system (e.g. reports, massive data manipulations etc.) in parallel. This can be done using the "queue manager" or with the %PARALLEL keyword for SQL queries. A more complex way to go is a vertical or horizontal scale of the system, or even sharding.
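A hedged sketch of the %PARALLEL keyword via dynamic SQL (the table and column names are made up for illustration):

 ; run a heavy aggregate with %PARALLEL so the SQL engine may split the work
 Set sql = "SELECT COUNT(*), AVG(Amount) FROM %PARALLEL Billing.Invoice WHERE InvoiceDate >= ?"
 Set rs = ##class(%SQL.Statement).%ExecDirect(, sql, "2022-01-01")
 If rs.%Next() { Write "Count: ", rs.%GetData(1), "  Avg: ", rs.%GetData(2), ! }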
go to post Yaron Munz · Jul 8, 2022 Hello, We are using this on an async mirror member that we want to be "behind" by X minutes (parameter). Here is a sample code:

 Set sc=1
 Try {
     If ##class(%SYSTEM.Mirror).GetMemberType()'="Failover" {
         Set diff=..GetDiff(debug)
         If diff < minutes {
             If $ZCVT(##class(SYS.Mirror).AsyncDejournalStatus(),"U")="RUNNING" {
                 Do ##class(SYS.Mirror).AsyncDejournalStop()
             }
         }
         Else {
             If $ZCVT(##class(SYS.Mirror).AsyncDejournalStatus(),"U")'="RUNNING" {
                 Do ##class(SYS.Mirror).AsyncDejournalStart()
             }
         }
     }
 }
 Catch e {
     ; any error trap you want
 }
 Quit sc
go to post Yaron Munz · Feb 28, 2022 I also suggest that you try to use %PARALLEL; sometimes it helps. We have experienced that for some very heavy queries a good option is to have an SP (stored procedure) that gets the query request parameters and runs it in parallel "segments" using the built-in "queue manager".
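A hedged sketch of that "segments" idea with the built-in work queue manager; My.Report.RunSegment is a hypothetical class method that processes one range of the data:

 Set queue = $SYSTEM.WorkMgr.%New()
 For seg = 1:1:8 {
     ; each worker runs one "segment" of the query, e.g. one range of IDs
     Set sc = queue.Queue("##class(My.Report).RunSegment", seg)
 }
 Set sc = queue.WaitForComplete()   ; block until all workers are done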
go to post Yaron Munz · Feb 28, 2022 Hello Peter, Welcome back to the community. I also have almost ~30 years (since 1991) of experience with InterSystems technology, starting with some PDP-11, VAX & Alpha machines (anyone else miss the VMS OS like I do?) with DSM, then the first PCs with MSM, and then (1999) came Caché, which became IRIS... a very long, interesting and challenging way! I remember customers in 1992 with 50-100 users running MSM on 286 machines, and it was fast (!) In those days, when disks were slow, small & expensive, developers used to define the globals in a very compact & normalized way, so applications flew on that platform. Today, when disks (as Robert said) are without limits, fast & relatively cheap, some "new" developers tend to neglect a correct design of the DB, which is noticeable on big databases of a few TBs. You can get a significant improvement when these are optimized (this has been part of my job in the past few years), and it is not just the data structure, but also correctly optimized code. In my opinion, the multi-level tree of blocks that holds a global is limited by the DB itself, which is limited by the OS file system. With global mapping and sharding (a newer technology), even a DB of petabytes could easily be held, and very efficiently, as this technology has always been.
go to post Yaron Munz · Jan 28, 2022 Here is a test for running all those code samples. It looks like the 1st solution (with $translate) is the fastest.
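A hedged sketch of the kind of timing loop used for such a comparison; the input string and the second solution's method are placeholders, not the original code:

 Set input = "abcabcabc", n = 1000000
 Set start = $ZHOROLOG
 For i = 1:1:n { Set x = $TRANSLATE(input, "abc", "xyz") }
 Write "1st solution ($TRANSLATE): ", $ZHOROLOG - start, " sec", !
 Set start = $ZHOROLOG
 For i = 1:1:n { Set x = ##class(My.Util).Solution2(input) }
 Write "2nd solution: ", $ZHOROLOG - start, " sec", !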
go to post Yaron Munz · Nov 15, 2021 Very good article, Laurel. To me, as someone with many years of experience in software development, it is obvious. However, I also learned in my career that a good & precise design of your data model, using correct (and compact) storage of values, building the relationships correctly (in case of an OO approach), and using proper indices & fine-tuned queries (in case of an SQL approach) can make a huge difference between "normal" and "high-end" systems. All of those become a major concern when we are talking about very "high volume" systems that need to process hundreds of millions of transactions per day. Going to a "compact" data model with reliable code will also save you a lot in your capacity planning, both in cloud hosting and on premises.
go to post Yaron Munz · Nov 15, 2021 Hello, Will the "Built-in license for InterSystems SAM Manager" be affected as well? The "Expiration Date" is 0 - so I guess it is not, but I just want to be sure...
go to post Yaron Munz · Sep 20, 2021 DB tuning has many aspects:
1. When the DB is an SQL one (persistent classes) then, as some said, a "Tune Table" might help, but this will help mostly with SQL queries, by optimizing the use of (correct) indices. There are other SQL tools on the portal (query plan, SQL index usage etc.) that might help to see if the indices are correct.
2. Use the correct data types for your data (e.g. using a %Date data type to store dates in $horolog format is much more effective than a %String (YYYY-MM-DD) on large-scale databases, especially with indices, where cache memory cannot hold most of the DB, or on systems with a huge number of transactions). BTW, this is true for both persistent classes (that you access with SQL) and "raw" globals that you access with COS (or another language).
3. In large-scale DBs, if some of your queries are "heavy", code optimization might also be a consideration (e.g. replacing SQL with COS that does a direct $Order on the index globals); see the sketch below.
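A hedged sketch of replacing an SQL range scan with a direct $ORDER walk over an index global; ^My.DataI("DateIdx",date,id) is a hypothetical index structure:

 Set date = ""
 For {
     Set date = $ORDER(^My.DataI("DateIdx", date))
     Quit:date=""
     Set id = ""
     For {
         Set id = $ORDER(^My.DataI("DateIdx", date, id))
         Quit:id=""
         ; process the row, e.g. read its data node ^My.DataD(id)
     }
 }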
go to post Yaron Munz · Aug 24, 2021
quine ; a quine routine is a routine that prints its own content
 X "ZL "_$T(+0)_" ZP"
 Q
go to post Yaron Munz · Aug 24, 2021 To avoid hardcoding the actual routine name, you may do: X "ZL "_$T(+0)_" ZP"
go to post Yaron Munz · Aug 10, 2021 Mario, the OS is totally irrelevant here! The problem is with the IRIS %Library.File class, which uses $ZSEARCH to scan the directory and gives a <STORE> error due to being unable to fully scan the directory.
go to post Yaron Munz · Jul 29, 2021 Usually you do not mix your "own" persistent (classes) data with Interoperability (aka Ensemble) request & response messages (which are also persistent). When you purge Ensemble data, the request/response messages are also purged! A best practice is to have, for each component, its own request/response messages, e.g. PROD.BO.ComponentName.Msg.Request & PROD.BO.ComponentName.Msg.Response, where BO = business operation (so you could have BS for business service and BP for business process) and ComponentName is the name of your component. (Sometimes a few components can share the same request & response messages, which is totally fine!) A minimal sketch of such classes is shown below.
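A minimal sketch of that naming convention; the properties are hypothetical, the point is only the per-component classes extending Ens.Request / Ens.Response:

 Class PROD.BO.ComponentName.Msg.Request Extends Ens.Request
 {

 Property PatientId As %String;

 }

 Class PROD.BO.ComponentName.Msg.Response Extends Ens.Response
 {

 Property Status As %String;

 }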
go to post Yaron Munz · Jul 27, 2021 Consider also that even when you use DISABLE^%SYS.NOJRN, if you are within a transaction, globals will still be journaled (e.g. when doing a %Save() on an object). To prevent this you may use: $system.OBJ.SetTransactionMode(0) - to disable transactions; $system.OBJ.SetTransactionMode(1) - to enable transactions (later).
go to post Yaron Munz · Jul 14, 2021 I agree with Eduard. However, there is an option to use global mapping (by subscript) to different databases, which can be located on multiple disks, to get better I/O: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GG...
go to post Yaron Munz · Jul 1, 2021 Hello Subramaniyan, If you can have downtime for the DB, then you could write a script that dismounts the DB, FTPs it to another server and mounts it again. This of course depends on the DB size and your network speed. If downtime is not possible, I would recommend doing a "hot backup", then copying (or FTPing) it to another server and restoring it. Another option is to use an "external backup" with "freeze" & "thaw" to ensure data integrity (see the sketch below). Further information: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...
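A hedged sketch of the external-backup pattern, run in the %SYS namespace: freeze writes, take the snapshot or file copy with an OS-level tool, then thaw:

 Set sc = ##class(Backup.General).ExternalFreeze()
 If $SYSTEM.Status.IsOK(sc) {
     ; ... take the disk snapshot / copy the database files here ...
     Set sc = ##class(Backup.General).ExternalThaw()
 }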
go to post Yaron Munz · Jul 1, 2021 There is a standalone studio called "IRIS Studio" that can be installed independently (no need for an IRIS install). If you download the latest version, it is backward compatible with all versions of Caché/IRIS.
go to post Yaron Munz · Jun 2, 2021 Hello, I assume you are looking for something like CDC (change data capture) for IRIS. The basic idea is to programmatically read journal files record by record and analyze the SET/KILL ones (according to some dictionary you build to determine which globals or classes need the CDC capability). I have done something similar using ^JRNUTIL: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GC...
go to post Yaron Munz · Jun 2, 2021 There are a few considerations here:
1. If you are using a failover (sync) mirror, you will need to have that Ens.* data on the backup so a switch can be done seamlessly, and the Ensemble production can start from the exact same point where the failure occurred.
2. Mapping any globals into a non-journaled database will still put them in the journal when the process is "in transaction", and an Ensemble production writes everything "in transaction".
3. Mapping Ens.* to another DB is a good idea if you have several disks and want to improve I/O on your system (specifically within cloud VMs, which usually have somewhat slower disks than on-premise). Another option is to let the system purge Ensemble data regularly and maybe keep fewer days of history.
4. Mirror journals are regularly purged after all sync/async mirror members have received them.