Question
· Jun 29, 2022

Measuring Global References

I had a customer ask the other day how GREFs are measured, and how the host, storage, and DB engine together influence that number, which is typically quoted in the hundreds of thousands or millions. Is there any good documentation that explains this and/or assists with calculating a GREF count?

Thank you in advance...

Tom

Discussion (3)

The new IRIS 2022.2 release has a new feature, Columnar Storage, about which the documentation says the following:

Choosing a storage layout is not an exact science. You might need to experiment with multiple layouts and run multiple query tests to find the optimal one.

Therefore, you are unlikely to find an exact answer to your question.

Usually, the more efficient the query and the more appropriate the indexes, the lower the GREF count and, accordingly, the shorter the execution time. But many factors influence this, not just the ones above: see the InterSystems SQL Optimization Guide.
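For illustration only (the class, property, and index names below are made up), this is roughly what "correct" indexes mean at the storage level: with an index defined, a query that filters on that property can walk the much smaller index global instead of scanning every data node, which is exactly what brings the gref count down.

```
Class Demo.Person Extends %Persistent
{

Property Name As %String;

Property City As %String;

/// With this index in place, a query such as
///   SELECT Name FROM Demo.Person WHERE City = 'Boston'
/// can read the index global instead of scanning every row in the
/// data global, which directly lowers the number of grefs it needs.
Index CityIdx On City;

}
```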

The host, storage and DB engine don't influence the gref count at all; only what the code does.
If you do a set, kill or write of a global, that is a gref (see the sketch below).
The host, storage and DB engine determine the limit of grefs per second.
Faster storage is always better.
More memory (larger global/routine buffer) is always better.
Faster cores are always better. More cores are better if there is work for them to do.
Newer versions of IRIS (DB engine) are always better.
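A minimal ObjectScript sketch (the global name is made up) of what gets counted: every one of these statements touches a global, so every one of them is a gref, no matter what hardware it runs on.

```
 // Each statement below touches a global, so each one counts as a
 // global reference (gref), regardless of the hardware it runs on.
 set ^Demo.Log("2022-06-29", 1) = "entry"        // write a node
 write ^Demo.Log("2022-06-29", 1), !             // read a node
 set exists = $data(^Demo.Log("2022-06-29", 1))  // test a node
 set sub = $order(^Demo.Log("2022-06-29", ""))   // walk subscripts
 kill ^Demo.Log("2022-06-29", 1)                 // delete a node
```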


GLOSTAT will give you some numbers
https://docs.intersystems.com/iris20221/csp/docbook/Doc.View.cls?KEY=GCM_glostat
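For example, from an IRIS Terminal session (assuming you have access to the %SYS namespace) you can run the utility directly; global references are among the statistics it reports:

```
 // In Terminal: switch to the %SYS namespace and run the utility.
 // It reports counts such as global references, either since startup
 // or per interval if you ask for a continuous display.
 zn "%SYS"
 do ^GLOSTAT
```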

Vertical Scaling IRIS
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GSCALE_vertical

Usually GREF means the total number of global references (per second). A given process can only perform a limited number of these operations per second (largely bounded by CPU clock speed).
When there are bottlenecks, there are tools that can tell you which part of your system (or code) can be improved. Monitoring with SAM or other tools can give you some numbers to work with. There is also ^%SYS.MONLBL, a line-by-line monitor that can help you improve your code.
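For instance, the line-by-line monitor is started from the Terminal in the namespace whose code you want to profile; it is menu-driven, so the sketch below only shows the entry point:

```
 // Start the line-by-line monitor in the namespace that holds the
 // code you want to profile, select the routines to watch, run your
 // workload, then print the report (it includes per-line gref counts).
 do ^%SYS.MONLBL
```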
Storage is also a consideration: sometimes a DB can be optimized to store data in a more compact way and save I/O (especially in the cloud, where disks are often slower than what you have on premises).
One easy improvement is to run the "heavy" parts of your system (e.g. reports, massive data manipulations, etc.) in parallel. This can be done using the Work Queue Manager or the %PARALLEL keyword for SQL queries.
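As a rough sketch of the %PARALLEL route via dynamic SQL (the table, columns and alias are invented for the example; %PARALLEL is only a hint, and the optimizer decides whether to actually parallelize):

```
 // Dynamic SQL with the %PARALLEL keyword in the FROM clause.
 set stmt = ##class(%SQL.Statement).%New()
 set sc = stmt.%Prepare("SELECT City, COUNT(*) AS Cnt FROM %PARALLEL Demo.Person GROUP BY City")
 if 'sc { do $system.Status.DisplayError(sc) quit }
 set rs = stmt.%Execute()
 while rs.%Next() { write rs.%Get("City"), ": ", rs.%Get("Cnt"), ! }
```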
A more complex route is to scale the system vertically or horizontally, or even to use sharding.