If I test the Native API for Node.js from the documentation, I notice (if I'm correct) that all methods and calls are synchronous. By default, due to the nature of Node.js, there is only one thread of execution, and normally all JavaScript methods and calls should be asynchronous, returning their result through either a callback function (the "old way"), a promise, or the async/await construct, e.g.:
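A minimal illustration of the usual asynchronous patterns, using Node's built-in fs module (the file name is made up):

    const fs = require('fs');

    // The "old way": a callback receives the result later
    fs.readFile('data.txt', 'utf8', (err, data) => {
        if (err) throw err;
        console.log(data);
    });

    // The modern way: promises consumed with async/await
    async function main() {
        const data = await fs.promises.readFile('data.txt', 'utf8');
        console.log(data);
    }
    main();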
I'm trying to find the fastest way to get data out of a class, and I find it very slow compared to traditional globals. So, I hope some of you can shed some light on this :-)
I have thousands of records in a class, and to access them quickly I walk the data with $ORDER. From there, I get the values using $LISTGET(). Something like this:
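Roughly like the following (the global name is made up; the real data global comes from the class's storage definition):

    SET id = ""
    FOR {
        // ^MyApp.DataD is the (hypothetical) data global of the class
        SET id = $ORDER(^MyApp.DataD(id))
        QUIT:id=""
        SET rec = ^MyApp.DataD(id)
        // each property is one piece of the $LIST record
        SET name = $LISTGET(rec, 2)
        SET date = $LISTGET(rec, 3)
    }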
I want to process more requests per second in Ensemble 2015 (a SOAP service). My bottleneck is a business process that performs a large transformation. I thought I could either raise its pool size to 4 (the current value is 1), or create 4 copies of the business process and distribute messages among them, for example with a round-robin algorithm (see the sketch below). Which alternative is better?
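For the second alternative, a minimal round-robin dispatcher might look like this (the class name, target names, and counter global are all hypothetical):

    Class Demo.RoundRobinRouter Extends Ens.BusinessProcess
    {
    Method OnRequest(pRequest As Ens.Request, Output pResponse As Ens.Response) As %Status
    {
        // rotate a counter through 1..4 and pick one of four identical processes
        SET n = $INCREMENT(^Demo.RRCounter) # 4 + 1
        QUIT ..SendRequestAsync("TransformProcess" _ n, pRequest)
    }
    }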
I recently encountered an issue with Caché and I can't figure out where the problem is coming from.
I noticed that the license limit (200) was reached whenever I opened my Studio (or so it seems). When this occurs, I restart Caché (with the Cube in the taskbar) and the number of licenses used drops back to 1%, but it grows again afterwards. The time it takes before the license count climbs again seems fairly random.
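Before restarting, it may be worth inspecting the license counters from a terminal; the $SYSTEM.License API can show which users and connections are consuming units:

    // license units used/available for this instance
    DO $SYSTEM.License.ShowCounts()
    // summary including per-user connection counts
    DO $SYSTEM.License.ShowSummary()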
I'm working on a project with my client. They have a visit table with about 7,000,000 records. The table is used in an ad-hoc search page which combines 20+ conditions. The table is defined as below:
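(A hypothetical sketch of such a table; all names below are invented for illustration, not the poster's actual definition:)

    Class Demo.Visit Extends %Persistent
    {
    Property PatientId As %String;
    Property VisitDate As %Date;
    Property DeptCode As %String;
    // ...20+ further filterable columns...

    Index PatientIdx On PatientId;
    Index VisitDateIdx On VisitDate;
    }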
I’m facing issues with replicating data from my Caché 2016 database to a PostgreSQL database. I need to handle around 300 data updates per minute, and whenever certain tables are modified, those changes must be reflected in other databases.
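One common pattern for this is change capture with a class trigger that feeds a queue global, which a separate worker job drains into PostgreSQL over JDBC/ODBC. A sketch, added to the persistent class being watched (the queue global name is hypothetical; the trigger syntax and {%%ID}/{%%OPERATION} pseudo-fields are standard Caché):

    Trigger LogChange [ Event = INSERT/UPDATE/DELETE, Foreach = row/object ]
    {
        // queue the affected row ID and operation; a worker replays
        // queued entries against PostgreSQL
        SET ^ReplQueue($INCREMENT(^ReplQueue)) = $LISTBUILD({%%ID}, {%%OPERATION}, $HOROLOG)
    }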
Can someone direct me to where in the documentation we can find how consumption is calculated for global storage?
Caché Version: 2010.1
Operating System: HP OpenVMS 8.4
EDIT: After receiving some responses, it seems I was unclear in my initial inquiry. I am looking to determine our rate of storage consumption; however, I am having some difficulty doing that.
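A crude way to measure the rate is to snapshot free space on a schedule and diff the values. A sketch, run from the %SYS namespace (the directory and logging global are hypothetical; adjust the path for OpenVMS):

    // record free space once a day; the difference between two
    // snapshots is the consumption over that period
    SET dir = "/cache/mgr/proddb/"
    SET sc = ##class(%SYS.DatabaseQuery).GetDatabaseFreeSpace(dir, .freeMB)
    IF sc SET ^ConsumptionLog(+$HOROLOG) = freeMB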
Suppose we need to store millions of values temporarily; that is, we don't care if we lose them, but our application uses them to serve real-time information. Should I use CACHETEMP or some other database with journaling disabled? If the answer is CACHETEMP, wouldn't there be a problem if we decide to scale using an app server + ECP? I'm not sure what would happen to the application logic in such an architecture, as I guess I couldn't map and share CACHETEMP...
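For context, any global whose name begins with ^CacheTemp is mapped to CACHETEMP by default, which is unjournaled and cleared at instance restart (the subscripts below are made up):

    // fast scratch storage: not journaled, wiped when the instance restarts
    SET ^CacheTempQuotes($JOB, symbol) = price
    WRITE $GET(^CacheTempQuotes($JOB, symbol))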
I work in a small development company that uses Caché as its database. In some support cases I suspect that the client's infrastructure environment is affecting Caché's response time. Reading a bit about comparing installations across environments (production as well as testing and homologation/staging environments), I understood that TPC-E is a benchmarking method accepted in the market.
Running Caché 5.0.21 (64-bit) on Windows Server 2016 in a virtual environment. I'm trying to understand why the disk read speed of any single process (simple SQL data walks) caps at around ~20 MB/s, yet two such tasks in parallel on different data areas reach 19 MB/s each, four reach 17 MB/s each (70 MB/s total), and so on. Meanwhile, a simple file copy to NUL on the same system reaches ~400 MB/s.
What can keep a single query on an idle system from reaching, say, 200 MB/s? Virtualization? Windows? Caché? CPU usage is below 1-3%.
Currently, we are receiving an alert that states, "Write Daemon still on pass 31". It's been that way for a few hours.
I was wondering if it is possible to identify what the write daemon still has left to work on, so that we can see how to reduce it and possibly identify whether there are issues with the way something is being written.
Please excuse my ignorance. I am trying to identify which areas of the System Dashboard (for Caché 2010.2) are best to review for database performance issues. The system seems to be running slower than usual, and I am trying to find the best way to identify what the issue is.
The following are captures from the System Dashboard.
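Alongside the dashboard, the stock terminal utilities give the raw counters behind it; both ship with Caché 2010.2 and are run from the %SYS namespace:

    // global references and physical reads/writes per sample interval
    DO ^GLOSTAT
    // snapshot of all processes and their current states
    DO ^%SS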
A long time ago I enabled Activity Monitoring to save myself headaches in the future when looking at the performance of various message routes through our productions. It's served its purpose of answering questions like how many messages we process a week, but I had not had the chance to really dig down into the stats for specific message types or destinations to pinpoint issues.
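The rollups can also be queried directly with SQL, which makes slicing by host or message type easier (table name as in recent Ensemble versions; verify it against your instance):

    // peek at the daily rollup table collected by Activity Monitoring
    SET rs = ##class(%SQL.Statement).%ExecDirect(,
        "SELECT TOP 10 * FROM Ens_Activity_Data.Days")
    DO rs.%Display()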
We have 100,000 (1 lakh) records in a table, and a SQL SELECT statement takes 9 to 12 minutes to return them. Could you please suggest how to fix this performance issue, and how to keep queries fast as the record count grows?
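For what it's worth, a 100,000-row query should take seconds, not minutes, so the first suspects are a missing index on the WHERE columns and stale table statistics. A sketch (table, column, and index names are hypothetical):

    // add an index on the column(s) your WHERE clause filters on
    DO ##class(%SQL.Statement).%ExecDirect(,
        "CREATE INDEX MRNIdx ON Demo.Patient (MRN)")
    // refresh optimizer statistics (1 = save them), then drop stale plans
    DO $SYSTEM.SQL.TuneTable("Demo.Patient", 1)
    DO $SYSTEM.SQL.Purge()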
We are seeing more and more customers being lured by the latest infrastructure technologies, particularly Composable Infrastructure, which comes with promises of all sorts of data center consolidation and cost savings.
The question is: are there any concerns with running HealthShare/TrakCare on these platforms, or things to look out for? Is anyone out there already on these platforms?
To be more specific, this is HPE Synergy with 480 compute blades booting as bare metal.
Some time ago, I changed the SQL Runtime Statistics setting to "Turn on Stats code generation to gather stats at the Open and Close of a query". With this change, the CACHE database (cache/mgr/cache/) grew considerably, reaching 198 GB.
Yesterday, I returned SQL Runtime Statistics to its default, "Turn off Stats code generation", and the CACHE database is no longer growing.
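To see what is actually holding the space (and what can be purged) before attempting to shrink the file, the stock size utility helps:

    // from %SYS, point the interactive ^%GSIZE utility at cache/mgr/cache/
    // and ask for a full report; the stats data should show up as the
    // dominant globals
    DO ^%GSIZE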
I would like to know if an encrypted Caché database can run significantly slower than a normal, unencrypted database, in a way that is noticeable to the end user (e.g. slower response times on most pages, especially those that rely on reading/writing globals).
I searched the InterSystems knowledge base and couldn't find anything related. I'm looking for possible before/after benchmarks.
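Absent published numbers, a crude micro-benchmark run against one encrypted and one unencrypted database would settle it for a given workload (the global name is made up):

    // time a burst of random global writes; run this in a namespace
    // mapped to each database and compare the elapsed times
    SET n = 100000, t0 = $ZHOROLOG
    FOR i=1:1:n { SET ^BenchTest($RANDOM(n)) = $JUSTIFY(i, 20) }
    WRITE "elapsed: ", $ZHOROLOG - t0, " s", !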
In my local environment, calling Foo() is near-instantaneous (a few ms). On the production/test servers (which have much better hardware than my local machine), calling this function is slow and takes between 200 ms and 800 ms. Evidently, starting a new job with the JOB command takes a lot of time in those environments.
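To confirm that the overhead really is process creation rather than the work itself, the JOB call can be timed in isolation (the routine label is hypothetical):

    // JOB returns once the child process has been spawned, so this
    // measures process-creation overhead only
    SET t0 = $ZHOROLOG
    JOB DoNothing^MyRoutine
    WRITE "JOB took: ", $ZHOROLOG - t0, " s", !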
Has anyone got any experience of using the Microsoft diskspd utility to test the storage infrastructure in a HealthShare/Ensemble environment?
I am interested in getting some figures to highlight any issues with different approaches to provisioning the disks on our new environment.
I am at a loss as to what parameters I should use to give a reasonable synthetic load that will give me any indication of potential issues. Any pointers would be greatly appreciated!
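As a starting point only (every parameter below should be questioned for your environment), a pattern sometimes used to approximate a database's random 8 KB reads against a large test file:

    diskspd.exe -b8K -d120 -r -w0 -t4 -o8 -Sh -L -c64G E:\iotest\testfile.dat

That is: 8 KB blocks, 120 s duration, random access, 0% writes, 4 threads, 8 outstanding I/Os per thread, software and hardware write caching disabled, latency statistics collected, against a 64 GB test file.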
I would like to get a list of all globals that have been read or written during a given context. In the Portal, the dashboard has counters that give the overall number of global reads/writes.
What I am looking for:
- some handler (e.g. like $ZTRAP) that will be called every time something is read from or written to a global;
- a way to activate a "global log mode" in the Portal that will dump some information to a file (like ^ISCSOAP does for SOAP requests).
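As far as I know, neither hook exists as such, but the stock line-by-line monitor gets close: it counts global references per line of the routines you select, which reveals which code touches which globals during a workload. Run from a %SYS terminal:

    // interactive: pick the routines/classes to monitor, run the
    // workload, then view per-line counters including GloRef
    DO ^%SYS.MONLBL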