Having been inspired by the Shared code execution speed question/discussion, I dare to ask another one that has been annoying me and my colleagues for several weeks.
We have a routine called Lib that comprises 200 $$-functions totaling 1,500 lines of code. We noticed that after calling _any_ function of another rather big routine (1,900 functions, 32,000 lines), the next call of $$someFunction^Lib(x) is 10-20% slower than the previous call of the same function. This effect doesn't depend on:
When I test the Native API for Node.js from the documentation, I notice (if I'm correct) that all methods and calls are synchronous. By default, due to the nature of Node.js, there is only one thread of execution, and normally all JavaScript methods and calls should be asynchronous, returning their result through either a callback function (the "old way"), promises, or the async/await construct, e.g.:
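The original example isn't reproduced here; this is the kind of asynchronous pattern I mean, sketched with Node's built-in fs module rather than the Native API:

```javascript
const fs = require('fs');

// the "old way": a callback receives the result asynchronously
fs.readFile('data.txt', 'utf8', (err, data) => {
  if (err) throw err;
  console.log(data);
});

// the promise / async-await way
async function main() {
  const data = await fs.promises.readFile('data.txt', 'utf8');
  console.log(data);
}
main();
```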
Let's suppose two different routines use one and the same chunk of code. From the object-oriented point of view, a good decision is to put this chunk of code in a separate class and have both routines call it. However, whenever you call code outside of the routine, as opposed to calling code in the same routine, some execution speed is lost. For reports churning through millions of transactions this lost speed might be noticeable. Any advice on how to optimize specifically for speed?
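To make the comparison concrete, here is a minimal timing sketch, assuming a hypothetical Shared.Utils class holding the extracted chunk; only the relative numbers matter:

```objectscript
 ; time N calls to a label in the same routine vs. the same logic in a class method
 set n = 1000000
 set t = $zhorolog
 for i=1:1:n { do localChunk(i) }
 write "same routine:  ", $zhorolog - t, " s", !
 set t = $zhorolog
 for i=1:1:n { do ##class(Shared.Utils).Chunk(i) }   ; hypothetical class method
 write "class method:  ", $zhorolog - t, " s", !
 quit
localChunk(x)
 ; the shared chunk of code, kept inside this routine
 quit
```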
I'm trying to find the fastest way to get data from a class, and I find it very slow compared to traditional globals. So I hope some of you can bring some light to me :-)
I have thousands of records in a class, and to access them quickly I walk the index with $ORDER. From there, I get the values using $LISTGET(). Something like this:
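Roughly like this (a simplified sketch; ^MyApp.RecordD and ^MyApp.RecordI stand in for the class's default-storage data and index globals):

```objectscript
 ; walk the index global with $ORDER and pull fields from the data node with $LISTGET
 set key = ""
 for {
     set key = $order(^MyApp.RecordI("ByKey", key))
     quit:key=""
     set id = ""
     for {
         set id = $order(^MyApp.RecordI("ByKey", key, id))
         quit:id=""
         set data = ^MyApp.RecordD(id)
         set name = $listget(data, 2)   ; position depends on the storage definition
     }
 }
```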
I have created this script that does a lot of writes to a single global. DB write performance is much slower than expected (compared to other, similar systems).
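The script itself isn't reproduced here, but it is essentially of this shape (the ^TestWrite global and value size are illustrative):

```objectscript
 ; write a large number of nodes into a single global and report the elapsed time
 set n = 1000000
 set t = $zhorolog
 for i=1:1:n {
     set ^TestWrite(i) = $justify(i, 100)   ; roughly 100-byte value per node
 }
 write n, " sets in ", $zhorolog - t, " seconds", !
```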
I want to process more requests per second in Ensemble 2015 (SOAP service). My problem is in a business process that performs a large transformation. I thought I could set its pool size to 4 (the current value is 1), or create 4 business processes and apply, for example, a round-robin algorithm. Which alternative is better?
I'm working on a project with my client. They have a visit table which has about 7,000,000 records. The table is used in a search page where 20+ conditions can be combined arbitrarily. The table is defined as below:
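The actual definition is not included in the post; as a purely hypothetical sketch of the shape of such a table (property and index names are illustrative):

```objectscript
/// Hypothetical sketch of the visit table; the real definition is not shown in the post
Class App.Visit Extends %Persistent
{

Property PatientID As %Integer;

Property VisitDate As %Date;

Property Department As %String;

Property DoctorID As %Integer;

Property Status As %String;

// ...plus the other columns used by the 20+ search conditions

Index PatientIdx On PatientID;

Index VisitDateIdx On VisitDate;

}
```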
I recently encountered an issue with Caché and I can't figure out where the problem is coming from.
I noticed that the license limit (200) is reached whenever I open Studio (or so it seems). When this occurs, I restart Caché (via the Cube in the taskbar), and license usage drops back to 1%, but then grows again. The time it takes before the license count grows back looks pretty random.
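For context, this is roughly how I check license usage from a terminal (assuming the standard $SYSTEM.License interface):

```objectscript
 ; print the overall license summary and the per-user/per-type counts
 do $system.License.ShowSummary()
 do $system.License.ShowCounts()
```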
I’m facing issues with replicating data from my Caché 2016 database to a PostgreSQL database. I need to handle around 300 data updates per minute, and whenever certain tables are modified, those changes must be reflected in other databases.
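For what it's worth, the pattern I'm considering looks roughly like this (only a sketch: the class, trigger name, and queue global are hypothetical): a row trigger records each changed ID in a queue global, and a separate background task drains that queue into PostgreSQL over ODBC/JDBC.

```objectscript
/// Hypothetical trigger on one of the replicated classes: it queues the changed
/// row ID and operation so a background task can push the change to PostgreSQL.
Trigger QueueChange [ Event = INSERT/UPDATE/DELETE, Foreach = row/object ]
{
    set ^MyApp.ReplQueue($increment(^MyApp.ReplQueue)) = $listbuild({%%ID}, %oper)
}
```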
Can someone direct me to where in the documentation we can find how global storage consumption may be calculated?
Caché Version: 2010.1
Operating System: HP OpenVMS 8.4
EDIT: After receiving some responses, it seems I was unclear in my initial inquiry. I am looking to determine our rate of storage consumption; however, I am having some difficulty doing that.
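One approach I am considering is along these lines (a rough sketch; the directory path and logging global are placeholders): sample the CACHE.DAT size and its free space periodically, and take the growth rate as the difference between samples. ^%GSIZE gives a per-global breakdown when run interactively.

```objectscript
 ; record the database file size and its free space once a day;
 ; consumption rate = difference between consecutive samples
 set dir = "/cachesys/mgr/mydb/"                              ; placeholder directory
 set size = ##class(%File).GetFileSize(dir_"CACHE.DAT")       ; bytes on disk
 do ##class(%SYS.DatabaseQuery).GetDatabaseFreeSpace(dir, .freeMB)
 set ^ConsumptionLog($horolog) = $listbuild(size, freeMB)
```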
Suppose we need to store millions of values temporarily; that is, we don't care if we lose them, but our application uses them to get real-time information. Should I use CACHETEMP or some other database with journaling disabled? If the answer is CACHETEMP, wouldn't it be a problem if we decide to scale using an application server + ECP? I'm not sure what would happen with the app logic in such an architecture, as I guess I couldn't map and share CACHETEMP...
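For reference, the two non-journaled options I'm weighing, sketched with illustrative global names:

```objectscript
 ; option 1: a scratch global mapped to CACHETEMP (not journaled, cleared at restart);
 ; globals whose names start with CacheTemp are mapped there by default
 set ^CacheTemp.MyApp($job, "price", item) = value

 ; option 2: a process-private global, visible only to this process and never journaled
 set ^||price(item) = value
```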
Currently, we are receiving an alert that states, "Write Daemon still on pass 31". It's been that way for a few hours.
I was wondering if it is possible to identify what the write daemon still has left to work on, so that we can see how to reduce it and possibly identify whether there are issues with the way something is being written.
I have a queue that I need to traverse and perform operations on. To distribute the workload across multiple processes, I used the Work Queue Manager to process it. Since I'm not expecting any status back from the workers, I skipped Sync / WaitForComplete in my implementation.
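Simplified, the code looks like this (the queue global and worker method are placeholders):

```objectscript
 ; queue each item for a worker process; fire-and-forget, no WaitForComplete
 set queue = $system.WorkMgr.%New()
 set key = ""
 for {
     set key = $order(^MyQueue(key))
     quit:key=""
     set sc = queue.Queue("##class(MyApp.Worker).ProcessItem", key)
 }
 ; intentionally omitted: do queue.WaitForComplete()
```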
I work in a small development company that uses Caché as a database. In some support cases I suspect that the client's infrastructure environment is affecting Caché's response time. Reading a bit about comparing installations in different environments (production, testing, and homologation), I understood that TPC-E is a benchmarking method accepted in the market.
We are experimenting with IIS, as the PWS (private web server) will be gone in newer versions.
The code being executed takes 15 ms to run. If we execute it through the PWS (REST), there is some overhead and the total execution time is 40 ms, which is acceptable. However, if we go through IIS, it takes 150 ms or sometimes even more.
Both PWS and IIS are running on the same server as IRIS in this case. No optimisations have been done on IIS.
Any suggestions on where to look/what to optimize on IIS?
We are running Caché 5.0.21 64-bit on Windows Server 2016 in a virtual environment. I'm trying to understand why single-process disk read speed (simple SQL data walks) caps at around ~20 MB/s, while two such tasks running in parallel on different data areas can reach 19 MB/s each, four can reach 17 MB/s each (about 70 MB/s total), and so on. Also, a simple file copy to NUL on that system reaches ~400 MB/s.
What can keep a single query on an idle system from reaching, for example, 200 MB/s? Virtualization? Windows? Caché? Processor usage is below 1-3%.
Some time ago, I changed the SQL Runtime Statistics configuration to "Turn on Stats code generation to gather stats at the Open and Close of a query". With this change, the CACHE database (cache/mgr/cache/) has grown a lot, reaching 198 GB.
Yesterday, I returned the SQL Runtime Statistics configuration to the default, which is "Turn off Stats code generation", and the CACHE database is no longer growing.
I would like to know if an encrypted Caché database can run significantly slower than a normal "unencrypted" database, in a way that is noticeable to the end user (e.g. slower response times for most pages, especially the ones that rely on reading/writing globals).
I searched the InterSystems knowledge base and couldn't find anything related. I'm looking for possible before/after benchmarks.
A subroutine in ^routine is not executed while the queue is being processed by WorkMgr; however, it works when defined as a function. Is it mandatory to define subroutine^routine as a function for it to execute properly?
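For clarity, this is the shape of what I am comparing (the entry-point and routine names are illustrative); the function form returns a value, the subroutine form does not:

```objectscript
 ; caller: queue a routine entry point through the Work Queue Manager
 set queue = $system.WorkMgr.%New()
 set sc = queue.Queue("process^routine", 1)     ; subroutine form: does not run for me
 set sc = queue.Queue("$$process^routine", 1)   ; function form: runs as expected

 ; inside ^routine
process(x)
 ; ... do the work ...
 quit 1   ; returning a value makes this callable as a $$ function
```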