I recently encountered an issue with Caché and I can't figure out where the problem is coming from.
I noticed that the license limit (200) was reached whenever I opened my Studio (or so it seems). When this occurs, I restart Caché (with the Cube in the taskbar), and the number of licenses used drops back to 1%, but then grows again. The time it takes before the license count starts climbing again seems fairly random.
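For reference, the only way I know to watch the counts from a terminal is something like the sketch below (I believe these %SYSTEM.License calls exist, but please correct me if the names differ in your version):

    ; show a breakdown of license units in use, per user and connection type
    DO $SYSTEM.License.ShowCounts()
    ; number of license units currently consumed
    WRITE $SYSTEM.License.LUConsumed()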
I have been asked to assist in planning a new server for our database, as we will be changing operating systems from OpenVMS to Linux (Red Hat distribution). However, it's difficult to find material on what would be recommended, likely because the database is proprietary.
Looking at the information provided below, and hoping to decrease processing time, would anyone be able to recommend the type of configuration we should have for the new Linux server? Please feel free to ask any clarifying questions.
Currently, we have an application running in one namespace ("Database B") that has globals and routines mapped to another database ("Database A"). After cleaning up Database A, we found that 90% of the disk is free. We would like to compact Database A and release the unused space. However, we are running OpenVMS, which seems to be the issue.
For databases consisting of only globals, we are able to use ^GBLOCKCOPY; however, we need to ensure that the routines and mappings are also copied.
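For what it's worth, my understanding (please verify) is that routines themselves live in globals (^ROUTINE for .INT source, ^rMAC for .MAC source, ^rINC for include files, ^rOBJ for compiled code), so ^GBLOCKCOPY could carry them across as well; the namespace mappings, however, are configuration rather than data and would have to be recreated on the target. A sketch of what I mean:

    ; run from the %SYS namespace; ^GBLOCKCOPY is interactive
    ZN "%SYS"
    ; when prompted, select the source and target databases and include the
    ; routine globals (^ROUTINE, ^rMAC, ^rINC, ^rOBJ) alongside the data globals
    DO ^GBLOCKCOPY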
I am designing the software architecture for an Ensemble/HealthShare production to be deployed on Amazon AWS EC2 servers (2 mirrored m4.large instances - 4 vCPUs / 16 GiB RAM - running Red Hat Linux 3.10.0-327.el7.x86_64 and HealthShare for RHEL 64-bit 2016.2.1). It's a rather CPU-intensive production involving massive XSLT 2.0 transformations (massive both in size and volume). I was wondering if anyone has experience configuring Ensemble productions on EC2 servers. My question, or concern, has to do with the following statement in the Ensemble documentation:
Suppose we need to store millions of values temporarily; that is, we don't mind losing them, but our application uses them to serve real-time information. Should I use CACHETEMP or some other database with journaling disabled? If the answer is CACHETEMP, would there be a problem if we decide to scale out using an application server + ECP? I'm not sure what would happen to the application logic in such an architecture, as I guess I couldn't map and share CACHETEMP...
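For reference, what I have in mind is relying on the default mapping where any global whose name starts with CacheTemp is stored in CACHETEMP (the global name below is hypothetical):

    ; ^CacheTemp* globals are mapped to the CACHETEMP database by default,
    ; which is not journaled and is emptied at restart
    SET ^CacheTempMyApp("session",12345) = "transient value"
    WRITE ^CacheTempMyApp("session",12345)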
Please excuse my ignorance. I am trying to identify which areas of the System Dashboard (for Caché 2010.2) would be best to review for database performance issues. The system seems to be running slower than usual, and I am trying to find the best way to identify what the issue is.
The following are captures from the System Dashboard.
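In case it frames the question better, the only command-line checks I know of are sketched below (assuming ^GLOSTAT and ^pButtons are available in 2010.2; corrections welcome):

    ; run from the %SYS namespace; both utilities are interactive
    ZN "%SYS"
    DO ^GLOSTAT    ; samples global references, updates, and block reads/writes
    DO ^pButtons   ; collects a full performance profile for later review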
If I were trying to access an index of a global variable, what time complexity would this operation have? My understanding of languages like Java/C++ is that arrays are stored as contiguous blocks of memory, so that accessing x[15] has a lookup time complexity of O(1): the runtime just goes to (address of the array + 15) and retrieves the value stored there.
How does this work in Caché, where the subscript of a variable isn't necessarily an integer value? If I were to have a variable like the following:
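(An illustrative example; the exact subscripts and values are arbitrary.)

    SET x("abc") = 1
    SET x(3.14) = 2
    SET x("abc","xyz") = 3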
Currently, namespace Alpha is configured to use database AlphaDB as its global database. How would we go about configuring namespace Alpha to keep using AlphaDB as its global database, except that references to ^Customers(CustomerId) with a CustomerId greater than 10M should be redirected to database BetaDB?
In other words, ^|"AlphaDB"|Customers contains all customers between 1 and 10,000,000; and ^|"BetaDB"|Customers contains all customers greater than 10,000,000. Any help would be appreciated.
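From what I've read, this looks like subscript-level global mapping, configured per namespace (Management Portal: System Administration > Configuration > System Configuration > Namespaces > Global Mappings). A sketch of the ranges I have in mind is below; I'm not sure which endpoint of each range is inclusive, so please verify the exact syntax against the documentation for your version:

    Namespace: Alpha
    Global: Customers, subscripts (BEGIN):(10000000)  ->  database AlphaDB
    Global: Customers, subscripts (10000000):(END)    ->  database BetaDB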
I want to process more requests per second in Ensemble 2015 (SOAP service). My bottleneck is a business process that performs a large transformation. I thought I could either raise its pool size to 4 (the current value is 1), or create 4 business processes and distribute requests among them with, for example, a round-robin algorithm. Which alternative is better?
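For the first alternative, I believe the setting is the Pool Size on the business process item in the production definition; a sketch of what I mean (class and item names are hypothetical):

    XData ProductionDefinition
    {
    <Production Name="MyPkg.MyProduction">
      <Item Name="MyTransformProcess" ClassName="MyPkg.MyTransformProcess"
            PoolSize="4" Enabled="true" />
    </Production>
    }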
We have 100,000 (1 lakh) records in a table, and a SQL SELECT statement takes 9 to 12 minutes to return them. Could you please suggest how to optimize this, and how it will hold up if we have more records?
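In case it helps to know what we've already considered: the two suggestions I've seen are adding an index on the column(s) in the WHERE clause and running TuneTable so the optimizer has up-to-date statistics (table and column names below are hypothetical):

    ; add an index on the filtered column (names are hypothetical)
    SET rs = ##class(%SQL.Statement).%ExecDirect(, "CREATE INDEX IdxCity ON MyApp.Person (City)")
    ; refresh table statistics so the query optimizer can pick a better plan
    DO $SYSTEM.SQL.TuneTable("MyApp.Person")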
I work in a small development company that uses Caché as a database. In some support cases I have doubts about whether the client's infrastructure environment is affecting Caché's response time. Reading a bit about comparing installations across different environments (production as well as testing and homologation), I understand that TPC-E is a benchmarking method accepted in the market.
In terms of general throughput design and long-term support, I'm considering the best approach for creating multiple batch files in a few different layouts from the same datasets.
While I can see the benefits that $ZSTORAGE could have if used properly, I have not seen it used in the environments I have worked in. I was wondering if there are any developers who promote its usage.
If used properly, I would imagine it could be highly effective in maximizing free memory, since some processes will never exceed a certain amount while others may need much more.
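For concreteness, the usage I'm imagining is per-process caps along these lines (the 256 MB figure is arbitrary):

    ; $ZSTORAGE holds the per-process private memory limit, in kilobytes
    SET $ZSTORAGE = 262144    ; cap this process at 256 MB
    WRITE $STORAGE            ; bytes still available to the current process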