I have a ZEN page with nine tablepanes. Each tablepane queries a table in the same SQL Server database. I have a single SQL Gateway (ODBC) connection to this SQL Server database. I need better performance when I query all nine tables at the same time. Would performance improve if I had nine SQL Gateways (nine ODBC configurations/connections), one for each query? I would appreciate any and all suggestions for getting the very best performance when using SQL Gateways. Thank you.
Has anyone got any experience of using the Microsoft diskspd utility to test the storage infrastructure in a HealthShare/Ensemble environment?
I am interested in getting some figures to highlight any issues with different approaches to provisioning the disks on our new environment.
I am at a loss as to what parameters I should use to give a reasonable synthetic load that will give me any indication of potential issues. Any pointers would be greatly appreciated!
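For reference, a database-style starting point I have seen suggested looks something like the command below. The parameter values and the file path are illustrative assumptions only, not a validated recommendation; adjust the block size, thread count, queue depth, and write ratio to reflect your own workload.

    diskspd -b8K -d300 -W30 -t4 -o8 -r -w30 -Sh -L -c50G D:\iotest\testfile.dat

Here -b8K uses 8 KB random I/O (in line with the common 8 KB Caché database block size), -d300 runs for five minutes after a 30-second warm-up (-W30), -t4 -o8 gives four threads with eight outstanding I/Os each, -w30 makes 30% of the I/O writes, -Sh disables software and hardware write caching, -L captures latency statistics, and -c50G creates a 50 GB test file.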
I'm trying to find the fastest way to get data out of a class, and I'm finding it very slow compared to traditional globals. So, I hope some of you can shed some light on this :-)
I have thousands of records in a class, and to access them quickly I use $ORDER on the index global and then read the values with $LISTGET(). Something like this:
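The original snippet was not included, so here is a minimal sketch of the pattern being described; the globals, index name, and list positions are all hypothetical and depend on the class's storage definition:

    ; walk an index for one key value, then read fields from the data node
    set id = ""
    for {
        set id = $order(^MyApp.DataI("ByKey", keyValue, id))  ; hypothetical index global
        quit:id=""
        set rec = $get(^MyApp.DataD(id))                      ; hypothetical data global
        set field1 = $listget(rec, 2)                         ; list positions depend on the storage map
        set field2 = $listget(rec, 3)
        ; ... use field1 / field2 ...
    }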
The InterSystems technology architect team is often asked about recommended storage arrays or storage technologies. To share this information with a wider audience as a reference, we are starting a new series presenting some of the results we have seen with various storage technologies. As a general recommendation, all-flash storage is advised for all InterSystems products, to provide the lowest latency and predictable IOPS capability.
The first in the series covers the most recently tested array, the NetApp AFF A300. This is a mid-tier storage array with several higher models above it. The A300 supports anything from a minimal configuration of only a few drives up to hundreds of drives per HA pair, and it can also be clustered with multiple controller pairs to reach tens of PBs of disk capacity and hundreds of thousands of IOPS or more.
Currently, we are receiving an alert that states, "Write Daemon still on pass 31". It's been that way for a few hours.
Is it possible to identify what the write daemon still has left to work on, so that we can see how to reduce the backlog and possibly determine whether something is being written in a problematic way?
In the last post we scheduled 24-hour collections of performance metrics using pButtons. In this post we are going to look at a few of the key metrics being collected and how they relate to the underlying system hardware. We will also start to explore the relationship between Caché (or any InterSystems Data Platform) metrics and system metrics, and how you can use these metrics to understand the daily beat rate of your systems and diagnose performance problems.
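As a quick reminder from the last post, a collection using the built-in 24-hour profile can be started from the %SYS namespace as shown below (this assumes the standard profile name shipped with pButtons):

    zn "%SYS"
    do run^pButtons("24hours")   ; start a collection using the 24-hour profile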
Currently, we are running 2014.1 on two different servers (OpenVMS, RHEL). The plan is to transition from OpenVMS to RHEL, but our Write Daemon is in a Troubled state on both servers.
On the OpenVMS server, we have a WIJ file that's 26G and can grow to 40G (size of database cache). Since it hasn't grown to 40G, we don't believe the size of the WIJ file to be the issue.
What else should we be looking at regarding the performance of the Write Daemon?
https://www.youtube.com/embed/87eG_zAbb9Y
https://www.youtube.com/embed/q9Vbx8WDww0
While going through and trying to figure out why our CACHE.DAT has grown over the past 18 days, I found that EnsLib_HL7.Message is still retaining messages dating back to 2014, even though our purge is set to 10 days. Has anyone else experienced this?
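One quick check is to see how far back the retained messages actually go. The sketch below assumes the TimeCreated column on EnsLib_HL7.Message is populated; it is only meant to confirm the age and volume of what is left behind. A common cause of this symptom is orphaned message bodies left behind when the purge task runs without the option to delete message bodies.

    set stmt = ##class(%SQL.Statement).%New()
    set sc = stmt.%Prepare("SELECT MIN(TimeCreated) AS Oldest, COUNT(*) AS Total FROM EnsLib_HL7.Message")
    if sc {
        set rs = stmt.%Execute()
        if rs.%Next() { write "Oldest: ", rs.%Get("Oldest"), "  Total: ", rs.%Get("Total"), ! }
    }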
I want to process more requests per second in Ensemble 2015 (a SOAP service). My bottleneck is a business process that performs a large transformation. I thought I could either set its pool size to 4 (the current value is 1), or create 4 business processes and distribute requests among them, for example with a round-robin algorithm. Which alternative is better?
A practical guide to using the tools PERFMON and MONLBL.
Introduction
When investigating performance problems, I often use the utilities ^PERFMON and ^%SYS.MONLBL to identify exactly which pieces of application code are taking a long time to execute.
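Both utilities are started from a Terminal prompt; for example:

    do ^PERFMON        ; system-wide counters by routine, global, and process (run from %SYS)
    do ^%SYS.MONLBL    ; line-by-line monitoring of selected routines

Both are menu-driven from there; ^%SYS.MONLBL is typically run in the namespace containing the routines you want to profile.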
I recently encountered an issue with Caché and I can't figure out where the problem is coming from.
I noticed that the license limit (200) is reached whenever I open Studio (or so it seems). When this occurs, I restart Caché (via the Cube in the taskbar) and the number of licenses used drops back to 1%, but then it grows again. The time it takes before the license count climbs back up seems fairly random.
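One way to see what is holding the license units at that moment is to dump the license counts from a Terminal session using the %SYSTEM.License API, for example:

    do $System.License.ShowSummary()   ; summary of license units in use versus authorized
    do $System.License.ShowCounts()    ; counts from the local license table, by user

If connections are being opened under different user identities, each identity can consume its own license unit, so these counts are a good first place to look.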
We have 100,000 (1 lakh) records in a table, and a SQL SELECT statement takes 9 to 12 minutes to return them. Could you please suggest how to address this performance issue, and how to keep queries fast as the number of records grows?
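Without the table definition and the query it is hard to be specific, but two common first checks are: (1) look at the query plan (for example with Show Plan in the Management Portal) to confirm an index is being used rather than a full table scan, and (2) make sure table statistics are up to date, e.g. with TuneTable. The table name below is a hypothetical placeholder:

    do $SYSTEM.SQL.TuneTable("MySchema.MyTable", 1)   ; recompute statistics; the second argument saves them to the class definition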
https://www.youtube.com/embed/IINsLJ2g9E8
A key part of Application Performance Management (APM) is recording the volume and performance of user activity. For many web applications, the closest you can get to this is to record the CSP pages or CSP-based services being dispatched.
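As a tiny illustration of the idea (a generic sketch only, not necessarily the approach developed later in this series; the global and variable names are hypothetical), the dispatch of a page or service can be timed with $ZHOROLOG and the counts and elapsed time accumulated in a global for later reporting:

    set start = $zhorolog                         ; high-resolution timer, in seconds
    ; ... dispatch the CSP page or service ...
    set elapsed = $zhorolog - start
    set day = $zdate($horolog, 8)                 ; e.g. 20180315
    set page = %request.URL                       ; %request is available in a CSP context
    set ^MyAPM(day, page) = $get(^MyAPM(day, page)) + 1
    set ^MyAPM(day, page, "time") = $get(^MyAPM(day, page, "time")) + elapsed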
I work in a small development company that uses Caché as its database. In some support cases I am unsure whether the client's infrastructure environment is affecting Caché's response time. Reading a bit about comparing installations across different environments (production, testing, and staging/acceptance), I understand that TPC-E is a benchmark widely accepted in the market.
I am designing the software architecture for an Ensemble/HealthShare production to be deployed on Amazon AWS EC2 servers (two mirrored m4.large instances: 4 vCPUs / 16 GiB RAM, running Red Hat Linux 3.10.0-327.el7.x86_64 and HealthShare for RHEL 64-bit 2016.2.1). It is a rather CPU-intensive production involving massive XSLT 2.0 transformations (massive both in size and in volume). I was wondering if anyone has experience configuring Ensemble productions on EC2 servers. My question or concern has to do with the following statement in the Ensemble documentation:
https://www.youtube.com/embed/Yjql-j8cGJo
This post will guide you through the process of sizing shared memory requirements for database applications running on InterSystems data platforms. It will cover key aspects such as global and routine buffers, gmheap, and locksize, providing you with a comprehensive understanding. Additionally, it will offer performance tips for configuring servers and virtualizing IRIS applications. Please note that when I refer to IRIS, I include all the data platforms (Ensemble, HealthShare, iKnow, Caché, and IRIS).
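As a back-of-the-envelope illustration of how the pieces add up, shared memory is approximately the sum of the global buffers, routine buffers, and gmheap, plus some overhead. The figures below are made-up examples to show the arithmetic, not sizing recommendations, and the 5% overhead factor is only an assumption:

    ; illustrative arithmetic only
    set globalsGB = 48, routinesGB = 1, gmheapGB = 0.5
    write "Approximate shared memory: ", (globalsGB + routinesGB + gmheapGB) * 1.05, " GB", !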
Hyper-Converged Infrastructure (HCI) solutions have been gaining traction for the last few years, with the number of deployments now increasing rapidly. IT decision makers are considering HCI when scoping new deployments or hardware refreshes, especially for applications already virtualised on VMware. Reasons for choosing HCI include: dealing with a single vendor; validated interoperability between all hardware and software components; high performance, especially I/O; simple scalability by adding hosts; simplified deployment; and simplified management.
Please excuse my ignorance. I am trying to identify which areas of the System Dashboard (in Caché 2010.2) are best to review for database performance issues. The system seems to be running slower than usual, and I am trying to find the best way to identify what the issue is.
The following are captures from the System Dashboard.
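Alongside the dashboard, the standard terminal monitoring routines can give a quick picture of database activity; for example, ^GLOSTAT (run from the %SYS namespace) reports global references, updates, and physical reads/writes over an interval, and similar data is collected by pButtons via mgstat.

    do ^GLOSTAT    ; global statistics report, run from the %SYS namespace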