Inspired by the Shared code execution speed question/discussion, I dare to ask another one that has been annoying me and my colleagues for several weeks.
We have a routine called Lib that comprises 200 $$-functions, 1,500 lines of code in total. We noticed that after calling _any_ function of another, rather big routine (1,900 functions, 32,000 lines), the next call of $$someFunction^Lib(x) is 10-20% slower than the previous call of the same function. This effect doesn't depend on:
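A minimal sketch of a timing harness that reproduces the measurement; `anyFunc^BigRtn` stands in for an arbitrary entry point of the large routine and is a hypothetical name:

```
measure ; time $$someFunction^Lib before and after touching the big routine
    new i,x,t
    set t=$zh
    for i=1:1:100000 set x=$$someFunction^Lib(i)
    write "before: ",$zh-t," s",!
    set x=$$anyFunc^BigRtn(1)   ; hypothetical: any single call into the big routine
    set t=$zh
    for i=1:1:100000 set x=$$someFunction^Lib(i)
    write "after:  ",$zh-t," s",!
    quit
```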
I have created this script that does a lot of writes to a single global. DB write performance is much slower than expected (compared to other, similar systems).
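The script itself isn't shown above; as an illustrative stand-in (not the author's script), a single-global write loop along these lines, where the global name and record size are hypothetical:

```
benchWrite ; illustrative single-global write benchmark
    new i,t
    kill ^BenchTest             ; hypothetical scratch global
    set t=$zh
    for i=1:1:1000000 set ^BenchTest(i)=$justify(i,100)
    write "1,000,000 sets in ",$zh-t," s",!
    quit
```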
Let's suppose two different routines use one and the same chunk of code. From the object-oriented point of view, the natural decision is to put this chunk of code in a separate class and have both routines call it. However, whenever you call code outside the current routine, as opposed to calling code within the same routine, some execution speed is lost. For reports churning through millions of transactions this lost speed can be noticeable. Any advice on how to optimize specifically for speed?
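A rough way to quantify the cost is to time the same loop against a same-routine label, an external routine, and a class method; `SharedLib` and `SharedLib.Util` below are hypothetical names:

```
compare ; sketch: relative cost of different call targets
    new i,t
    set t=$zh
    for i=1:1:1000000 do local(i)
    write "same-routine label: ",$zh-t," s",!
    set t=$zh
    for i=1:1:1000000 do chunk^SharedLib(i)                ; hypothetical routine
    write "external routine:   ",$zh-t," s",!
    set t=$zh
    for i=1:1:1000000 do ##class(SharedLib.Util).Chunk(i)  ; hypothetical class
    write "class method:       ",$zh-t," s",!
    quit
local(x) ; the shared chunk inlined into this routine
    quit
```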
I'm facing issues replicating data from my Caché 2016 database to a PostgreSQL database. I need to handle around 300 data updates per minute, and whenever certain tables are modified, those changes must be reflected in other databases.
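One possible sketch of the capture side: an AFTER trigger that queues each change into a global, with a separate background process (not shown) draining the queue and applying the changes to PostgreSQL, for example over JDBC or the SQL Gateway. Class, property, and global names here are hypothetical:

```
/// Minimal sketch, not a full replicator: queue row changes for a consumer.
Class Demo.Person Extends %Persistent
{

Property Name As %String;

Trigger QueueChange [ Event = INSERT/UPDATE/DELETE, Foreach = row/object, Time = AFTER ]
{
    // %oper holds "INSERT", "UPDATE", or "DELETE"; {ID} is the affected row ID
    set seq = $increment(^ReplQueue)
    set ^ReplQueue(seq) = $listbuild(+$horolog, %oper, {ID})
}

}
```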
I have a queue that I need to traverse, performing operations on each item. To distribute the workload across multiple processes, I used the Work Queue Manager to process it. Since I'm not expecting any status back from it, I skipped Sync/WaitForComplete in my implementation.
The subroutine ^routine is not executed while the queue is being processed by WorkMgr; however, it works when defined as a function. Is it mandatory to define subroutine^routine as a function for it to execute properly?
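For context, a minimal sketch of the setup described, with status checks and WaitForComplete included; `work^MyJob` is a hypothetical worker entry point:

```
run ; caller
    new sc,queue,i
    set queue=$system.WorkMgr.Initialize(,.sc)
    if $system.Status.IsError(sc) quit
    for i=1:1:10 {
        ; queued entries are invoked as functions and are expected to
        ; return a %Status, i.e. "$$work^MyJob" rather than "work^MyJob"
        set sc=queue.Queue("$$work^MyJob",i)
        if $system.Status.IsError(sc) quit
    }
    ; without WaitForComplete the parent may move on (or exit) before
    ; the workers have finished, or even started, the queued items
    set sc=queue.WaitForComplete()
    quit

 ; in routine MyJob:
work(n) ; invoked by the work queue; must return a %Status
    set ^MyJobResult(n)=n*n
    quit 1   ; 1 = $$$OK
```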
In IRIS, every time a request needs to be processed, a specific IRIS process (IRISDB.EXE) needs to be assigned to handle it.
If there is no spare IRIS process at that time, a new process must be created (and later destroyed). On some Windows systems (especially those with active security or antimalware solutions), creating new processes can be slow, which can result in delays during peak times (due to an inrush of requests).
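If the expensive part is process creation for JOB'd background work, one possible mitigation (an assumption about your workload, not a universal fix for every request type) is to pre-start a pool of job servers via the `jobservers` setting in the CPF, so a JOB reuses an existing process instead of creating one under load:

```
[config]
; illustrative value: keep 20 job server processes around so JOB commands
; can reuse them instead of paying process-creation cost at peak time
jobservers=20
```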