If I were trying to access an index of a global, what time complexity would this operation have? My understanding of languages like Java/C++ is that arrays are stored as contiguous blocks of memory, so x[15] would have a lookup time complexity of O(1): it just goes to (base address of the array + 15) and retrieves the value stored there.
How does this work in Caché, where the subscript of a global isn't necessarily an integer value? Say I had a global like the following:
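```
; illustrative example: a global subscripted by strings rather than integers
set ^Users("smith", "address") = "123 Main St"
set ^Users("smith", "phone") = "555-0100"
write ^Users("smith", "phone")
```
Does looking up ^Users("smith","phone") still cost O(1) the way array indexing does, or does the lookup work differently under the hood?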
In terms of general throughput design and long-term support, I'm considering what would be the best approach for creating multiple batch files in a few different layouts from the same data sets.
Please excuse my ignorance. I am trying to identify which areas of the System Dashboard (for Caché 2010.2) would be best to review for database performance issues. The system seems to be running slower than usual, and I am trying to find the best way to identify what the issue is.
The following are captures from the System Dashboard.
I would like to get a list of all globals that have been read or written during a given context. In the Portal, there are counters on the dashboard that give the number of reads/writes to globals in general.
What I am looking for:
- some handler (e.g., like $ZTRAP) that will be called every time something is read from or written to a global;
- a way to activate a "global log mode" in the Portal that will dump information to a file (like ^ISCSOAP does for SOAP requests).
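The closest workaround I can think of is to funnel every access through wrapper code that logs each touch; a sketch (the routine labels and log path are hypothetical):

```
LogSet(gref, value) ; record a write, then perform it; gref is a global reference string like "^Customers(42)"
    do Log("SET", gref)
    set @gref = value
    quit
LogGet(gref) ; record a read, then return the value
    do Log("GET", gref)
    quit @gref
Log(op, gref) ; append one line per access to a log file
    set file = "/tmp/globallog.txt"  ; hypothetical log path
    open file:("AWS"):1
    use file
    write $zdatetime($ztimestamp, 3), " ", op, " ", gref, !
    close file
    quit
```

But that means touching every piece of code that reads or writes a global, which is exactly what I am hoping to avoid.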
Has anyone done any kind of integration with Dynatrace, a JVM transaction-monitoring tool? Our organization uses it extensively with our Java and .NET applications, and we wanted to know whether it is even possible.
Some time ago, I changed the configuration in SQL Runtime Statistics to "Turn on Stats code generation to gather stats at the Open and Close of a query". With this change, the CACHE database (cache/mgr/cache/) has grown a lot, reaching 198 GB.
Yesterday, I returned the SQL Runtime Statistics configuration to the default, "Turn off Stats code generation", and the CACHE database is no longer growing.
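To confirm what is actually occupying the space, the ^%GSIZE utility should help (a minimal sketch; run it against the CACHE database, e.g. from the %SYS namespace, and answer the prompts):

```
; interactive utility: lists the globals in the current database with their sizes
do ^%GSIZE
```

My remaining question is how to reclaim the 198 GB, since I assume the database file does not shrink on its own once the stats data is removed.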
I'm trying to find the fastest way to get data out of a class, and I find it very slow compared to traditional globals. So I hope some of you can shed some light on this :-)
I have thousands of records in a class, and to access them quickly I walk the index with $ORDER. From there, I get the values using $LISTGET(). Something like the following sketch:
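```
; illustrative sketch: walk the iFilter index with $ORDER, then $LISTGET the data node
set filterValue = "something"          ; hypothetical index value
set id = ""
for {
    set id = $order(^MyApp.PersonI("iFilter", filterValue, id))
    quit:id=""
    set rec = ^MyApp.PersonD(id)       ; the $LIST holding the row
    set name = $listget(rec, 2)        ; list positions depend on the storage definition
}
```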
The index we want to use is called "iFilter". Currently we use the following technique of ignoring all other indices, because the automatically chosen index is always too slow.
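It looks roughly like this (column names are placeholders); %NOINDEX in front of a condition tells the optimizer not to use an index for it:

```sql
SELECT ID, VisitDate
FROM MyApp.MyTable
WHERE FilterField = ?        -- satisfied by the iFilter index
  AND %NOINDEX Status = ?    -- %NOINDEX: keep these from driving index selection
  AND %NOINDEX Location = ?
```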
Has anyone got any experience of using the Microsoft diskspd utility to test the storage infrastructure in a HealthShare/Ensemble environment?
I am interested in getting some figures to highlight any issues with the different approaches to provisioning the disks in our new environment.
I am at a loss as to what parameters I should use to generate a reasonable synthetic load that will give any indication of potential issues. Any pointers would be greatly appreciated!
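For reference, the kind of invocation I have been experimenting with looks like this (the parameters are guesses at approximating a Caché-style load: 8 KB transfers to match the default database block size, mixed random read/write):

```
diskspd -b8K -d120 -t4 -o8 -r -w30 -Sh -L -c20G D:\iotest\testfile.dat
```

(-b8K = 8 KB block size, -d120 = 120-second run, -t4 = 4 threads, -o8 = 8 outstanding I/Os per thread, -r = random access, -w30 = 30% writes, -Sh = disable software and hardware caching, -L = capture latency statistics, -c20G = create a 20 GB test file.)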
On my local environment, calling Foo() is instantaneous (a few ms). On the production/test servers (which have much better hardware than my local machine), calling this function is slow, taking between 200 ms and 800 ms. Evidently, starting a new process with the JOB command takes a long time in those environments.
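A minimal sketch of how I am measuring it (the routine name is hypothetical):

```
; time just the JOB dispatch to confirm where the delay is
set t = $zhorolog
job Foo^MyRoutine
write "JOB dispatch took ", $zhorolog - t, " s", !
```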
Currently, namespace Alpha is configured to use database AlphaDB as its global database. How would we go about configuring namespace Alpha to keep using AlphaDB as its global database, except that global ^Customers(CustomerId), for CustomerId greater than 10M, should be redirected to database BetaDB?
In other words, ^|"AlphaDB"|Customers would contain all customers from 1 to 10,000,000, and ^|"BetaDB"|Customers would contain all customers greater than 10,000,000. Any help would be appreciated.
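From the documentation this looks like subscript-level global mapping; a sketch of how I imagine setting it up programmatically, assuming the Config.MapGlobals class in %SYS (the exact range notation, and whether the endpoints are inclusive, need to be verified against the class reference for your version):

```
; run in the %SYS namespace; the range syntax here is a best guess
set props("Database") = "AlphaDB"
set sc = ##class(Config.MapGlobals).Create("ALPHA", "Customers(1):(10000000)", .props)
set props("Database") = "BetaDB"
set sc = ##class(Config.MapGlobals).Create("ALPHA", "Customers(10000000):(END)", .props)
```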
Suppose we need to store millions of values temporarily; that is, we don't care if we lose them, but our application uses them to get real-time information. Should I use CACHETEMP or some other database with journaling disabled? If the answer is CACHETEMP, wouldn't it be a problem if we decide to scale using an App Server + ECP? I'm not sure what would happen to the app logic in such an architecture, as I guess I couldn't map and share CACHETEMP...
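The usage I have in mind relies on the default mapping where globals whose names begin with ^CacheTemp land in CACHETEMP:

```
; ^CacheTemp* globals are mapped to CACHETEMP by default: not journaled, wiped at restart
set ^CacheTempMyApp(sessionId, key) = value
write ^CacheTempMyApp(sessionId, key)
```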
Can someone direct me to where in the documentation we can find how storage consumption for globals may be calculated?
Caché Version: 2010.1
Operating System: HP OpenVMS 8.4
EDIT: After receiving some responses, it seems I was unclear in my initial inquiry. I am looking to determine our rate of storage consumption; however, I am having some difficulty doing so.
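The best I can do by hand is sample free space periodically and diff the numbers, e.g. with the ^%FREECNT utility run from %SYS, but I am hoping the documentation describes a proper way to calculate this:

```
; interactive utility: reports free blocks per database;
; recording the output daily and diffing gives a crude consumption rate
do ^%FREECNT
```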
After what seemed like weeks, I finally got SSL/TLS enabled on both the Apache web server and IRIS using the Web Gateway. However, while we can now use HTTPS to connect to our development instance of IRIS, I am running into several errors when others try to access the Management Portal via HTTPS.
If I test the Native API for Node.js from the documentation, I notice (if I'm correct) that all methods and calls are synchronous. By default, due to the nature of Node.js, there is only one thread of execution, and normally all JavaScript methods and calls should be asynchronous, using either a callback function (the "old way"), promises, or the async/await construct to return their result, e.g.:
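The best I can come up with is wrapping the synchronous call myself, which only fakes asynchrony (a hypothetical sketch; iris.get() stands in for whatever Native API call is being made):

```javascript
// Hypothetical async wrapper around a synchronous Native API call.
// Note: this provides async syntax, but the call still blocks the main
// thread when it runs; truly non-blocking use would need worker_threads.
function getAsync(iris, glob, ...subs) {
  return new Promise((resolve, reject) => {
    setImmediate(() => {
      try {
        resolve(iris.get(glob, ...subs)); // synchronous Native API call
      } catch (err) {
        reject(err);
      }
    });
  });
}

// usage: const value = await getAsync(irisObject, "^MyGlobal", "sub1");
```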
While I can see the benefits that $ZSTORAGE could have if used properly, I have not seen it used in the environments I have worked in. I was wondering whether there are any developers who promote its usage.
If used properly, I would imagine it could be highly effective at maximizing free memory, since some processes will never go over a given amount while others may well need much more.
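For example (values illustrative), a single heavyweight job could raise its own ceiling while everything else stays at the instance default:

```
; $ZSTORAGE is the per-process memory limit in KB and can be set at runtime
set $zstorage = 262144   ; allow this one process up to 256 MB
write $zstorage, " KB", !
```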
I have been asked to assist in planning a new server for our database; we will be changing operating systems from OpenVMS to Linux (Red Hat). However, it's difficult to find material on what is recommended, which is likely due to the database being proprietary.
Looking at the information provided below, and hoping to decrease processing time, could anyone recommend the type of configuration we should have for the new Linux server? Please feel free to ask any clarifying questions.
I'm working on a project with my client. They have a visit table with about 7,000,000 records. The table is used on a random-search page which combines 20+ conditions. The table is defined as below:
A long time ago I enabled Activity Monitoring to save myself headaches in the future when looking at the performance of various message routes through our productions. It has served its purpose of answering questions about how many messages we process a week, etc., but I had not had the chance to really dig down into the stats for specific message types or destinations to pinpoint issues.
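If I understand it correctly, the rollups live in the Ens.Activity.Data tables, so drilling into a specific destination should be a plain SQL query; a sketch from memory (table and column names need checking against the class reference for your version):

```sql
SELECT TimeSlot, HostName, SUM(TotalCount) AS Msgs, SUM(TotalDuration) AS Secs
FROM Ens_Activity_Data.Days
WHERE HostName = 'SomeOperation'   -- hypothetical host name
GROUP BY TimeSlot, HostName
ORDER BY TimeSlot
```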
I work in a small development company that uses Caché as its database. In some support cases I suspect that the client's infrastructure environment is affecting Caché's response time. Reading a bit about comparing installations across different environments (production, testing, and staging), I understood that TPC-E is a benchmarking method accepted in the market.
I recently encountered an issue with Caché, and I can't figure out where the problem is coming from.
I noticed that the license limit (200) is reached whenever I open my Studio (or so it seems). When this occurs, I restart Caché (with the Cube in the taskbar), and the number of licenses used goes back to 1%, but then grows again. The time it takes before the license count grows back seems pretty random.
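Rather than restarting, I assume the right first step is to see what is actually holding the licenses; this is what I run in a terminal (these %SYSTEM.License calls are in the class reference):

```
; show license usage instead of restarting
do $system.License.ShowSummary()   ; totals: used, available, by connection type
do $system.License.ShowCounts()    ; per-user counts from the local license table
```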