Recently, I've been working on a Business Process that handles a large FHIR JSON message containing up to 50k requests in a single array.
Currently, the code imports the JSON as a dynamic object from the original message stream, obtains an iterator from it, and processes each request one at a time in a loop.
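Roughly, the pattern looks like this (a minimal sketch, not the actual code; the "entry" property name and the ProcessEntry helper are assumptions):

```objectscript
// Minimal sketch: parse the message stream once, then iterate the array.
// Assumes pRequest.Stream holds the original JSON and the requests sit
// in an "entry" array.
Set tBundle = ##class(%DynamicObject).%FromJSON(pRequest.Stream)
Set tIter = tBundle.%Get("entry").%GetIterator()
While tIter.%GetNext(.tKey, .tEntry) {
    // tKey is the array index, tEntry a %DynamicObject for one request
    Do ..ProcessEntry(tEntry)   // hypothetical per-request handler
}
```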
We are experimenting with IIS, as the private web server (PWS) will be gone in newer versions.
The code being executed takes 15 ms to run. Through the PWS (REST) there is some overhead and the total execution time is 40 ms, which is acceptable. Through IIS, however, it takes 150 ms or sometimes even more.
Both PWS and IIS are running on the same server as IRIS in this case. No optimisations have been done on IIS.
Any suggestions on where to look/what to optimize on IIS?
The online documentation says: TUNE TABLE updates the SQL table definition (and therefore requires privileges to alter the table definition). Commonly, TUNE TABLE also updates the corresponding persistent class definition. This allows the gathered statistics to be used by the query optimizer without requiring a class compilation.
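For reference, a typical invocation looks like this (the table name is hypothetical, and the exact signature may vary by version):

```objectscript
// Gather statistics for the table and save them to the definition;
// "MyApp.Visit" is a hypothetical table name. The second argument (1)
// requests that the gathered statistics be written back, as the
// documentation quoted above describes.
Do $SYSTEM.SQL.TuneTable("MyApp.Visit", 1)
```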
Is there a difference in outcome between the two screengrabs below?
In both cases, when certain conditions are met, a transformation is called and the output is sent on to two targets. In the first case we surmise the transformation is called twice, with the output of the first run sent to the first target and the output of the second run to the second target. In the second case we surmise the transformation is called once, and the output is duplicated and sent to the two targets.
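Expressed as a hypothetical ObjectScript sketch rather than the actual rule actions (the DTL class and target names are invented), the two surmised behaviours would be:

```objectscript
// Surmised behaviour of the first screengrab: the DTL runs once per target.
Do ##class(MyPkg.MyDTL).Transform(request, .tOut1)
Do ..SendRequestAsync("TargetA", tOut1)
Do ##class(MyPkg.MyDTL).Transform(request, .tOut2)
Do ..SendRequestAsync("TargetB", tOut2)

// Surmised behaviour of the second screengrab: the DTL runs once,
// and the single output object is sent to both targets.
Do ##class(MyPkg.MyDTL).Transform(request, .tOut)
Do ..SendRequestAsync("TargetA", tOut)
Do ..SendRequestAsync("TargetB", tOut)
```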
I would like to get a list of all globals that have been read or written during a given context. In the Management Portal, the dashboard has counters that give the overall number of global reads/writes.
What I am looking for:
- some handler (e.g., like $ZTRAP) that is called every time a global is read or written;
- a "global log mode" to activate in the Portal that dumps information to a file (as ^ISCSOAP does for SOAP requests).
I'm working on a project with my client. They have a visit table which has about 7,000,000 records. The table is used in an ad-hoc search page which combines 20+ conditions. The table is defined as below:
In terms of general throughput design and long-term support, I'm considering the "best approach" for creating multiple batch files in a few different layouts from the same data sets.
Testing the Native API for Node.js from the documentation, I noticed that (if I'm correct) all methods and calls are synchronous. By the nature of Node.js there is only one thread of execution, and normally all JavaScript methods and calls should be asynchronous, returning their result through either a callback function (the "old way"), a promise, or the async/await construct, e.g.:
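Something along these lines, where getGlobal is a made-up stand-in rather than a real Native API method:

```javascript
// Generic illustration of the three asynchronous patterns only --
// getGlobal is hypothetical, not part of the Native API.
const { promisify } = require('util');

function getGlobal(key, callback) {            // the "old way": callback
  setImmediate(() => callback(null, `value of ${key}`));
}

const getGlobalAsync = promisify(getGlobal);   // promise-based variant

(async () => {
  getGlobal('^demo', (err, v) => console.log('callback:', v));
  console.log('await:', await getGlobalAsync('^demo'));  // async/await
})();
```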
After what seemed like weeks, I finally got SSL/TLS enabled on both the Apache web server and IRIS using the Web Gateway. However, while we can now use HTTPS to connect to our development instance of IRIS, I am running into several errors when others try to access the Management Portal via HTTPS.
On my local environment, calling Foo() is instantaneous (a few ms). On the production/test servers (which have much better hardware), calling this function is slow, taking between 200 and 800 ms. Evidently, starting a new process with the "job" command takes a lot of time in those environments.
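To isolate the spawn cost itself, a quick measurement could look like this (MyApp.Utils.DoNothing is a hypothetical empty class method):

```objectscript
// Measure JOB overhead alone; since the jobbed method does nothing,
// the elapsed time is essentially the process-spawn cost.
Set t0 = $ZHOROLOG
Job ##class(MyApp.Utils).DoNothing()
Write "JOB returned after ", ($ZHOROLOG - t0) * 1000, " ms", !
```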
I would like to know whether an encrypted Caché database can run significantly slower than a normal, unencrypted database, in a way that is noticeable to the end user (e.g. slower response times for most pages, especially those that rely on reading/writing globals).
I searched the InterSystems knowledge base and couldn't find anything related. I'm looking for possible before/after benchmarks.
The index we want to use is called "iFilter". Currently we use the following technique of ignoring all other indices because the automatically chosen index is always too slow.
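Roughly, the technique is to list every index except iFilter in %IGNOREINDEX so the optimizer has no choice left but iFilter (table, field, and index names other than iFilter below are hypothetical):

```objectscript
// Hedged sketch: force iFilter by telling the optimizer to ignore
// the other indices. MyApp.Visit, iDate, iName and FilterField are
// invented names standing in for the real ones.
Set tStmt = ##class(%SQL.Statement).%New()
Do tStmt.%Prepare("SELECT ID FROM %IGNOREINDEX iDate,iName MyApp.Visit WHERE FilterField = ?")
Set tRS = tStmt.%Execute("somevalue")
While tRS.%Next() { Write tRS.%Get("ID"), ! }
```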
Some time ago, I changed the SQL Runtime Statistics configuration to "Turn on Stats code generation to gather stats at the Open and Close of a query". With this change, the CACHE database (cache/mgr/cache/) grew a lot, reaching 198 GB.
Yesterday, I returned the SQL Runtime Statistics configuration to the default, "Turn off Stats code generation", and the CACHE database is no longer growing.
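If it helps anyone else, I believe the same setting can be toggled programmatically (assuming the API matches your version; 0 being the default "off" level):

```objectscript
// Assumed API: set SQL Runtime Statistics back to the default level
// (0 = turn off stats code generation).
Do $SYSTEM.SQL.SetSQLStats(0)
```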
We are seeing more and more customers being lured by the latest infrastructure technologies, particularly Composable Infrastructure, which comes with all sorts of data center consolidation and cost savings.
The question is: are there any concerns about running HealthShare/TrakCare on these platforms, or things to look out for? Is anyone out there already on these platforms?
To be more specific, this is HPE Synergy with 480 compute blades booting as bare metal.
I want to perform SNMP performance monitoring of Caché 2010 on AIX 5.3. Since the SNMP service that ships with AIX does not support AgentX, it cannot be extended to support the Caché database. Therefore, I plan to deploy net-snmp on AIX first, then enable AgentX, and finally configure Caché's subagent. Is this workable? Any documents? Thanks!
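For the net-snmp side of that plan, the master agent is enabled with a single snmpd.conf directive (the file path may differ on AIX):

```
# snmpd.conf -- run snmpd as the AgentX master so subagents
# (such as the Caché SNMP subagent) can register with it
master agentx
```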
In the context of IKO (the IRIS Kubernetes Operator), the question of the Service not redirecting dynamically to the correct Pod is still pending. In production this can be dangerous, since an overload (or any other, simpler problem) can change which Pod is primary and leave the application inoperable until we intervene.
InterSystems support warned that this is still an open issue with IKO, but there are some possibilities that I am studying.
To explore an idea I had, I would like the help of this Forum to answer the following question:
Running Caché 5.0.21 64-bit on Windows Server 2016 in a virtual environment. I'm trying to understand why the disk read speed of every single process (simple SQL data walks) caps at around ~20 MB/s, while two such tasks run in parallel on different data areas can reach 19 MB/s each, four reach 17 MB/s each (about 70 MB/s total), and so on. Also, a simple file copy to nul on that system reaches ~400 MB/s.
What can keep a single query on an idle system from reaching, say, 200 MB/s? Virtualization? Windows? Caché? The processors are at 1-3%.
A long time ago I enabled Activity Monitoring to save myself headaches in the future when looking at the performance of various message routes through our productions. It has served its purpose of answering questions such as how many messages we process a week, but I had not had the chance to really dig into the stats for specific message types or destinations to pinpoint issues.