Announcement
Bob Kuszewski · May 15, 2024
InterSystems is pleased to announce the general availability of:
InterSystems IRIS Data Platform 2024.1.0.267.2
InterSystems IRIS for Health 2024.1.0.267.2
HealthShare Health Connect 2024.1.0.267.2
This release adds support for the Ubuntu 24.04 operating system. Ubuntu 24.04 includes Linux kernel 6.8, security improvements, and installer and user interface improvements. InterSystems IRIS IntegratedML is not yet available on Ubuntu 24.04.
Additionally, this release addresses two defects for all platforms:
A fix for some SQL queries using “NOT %INLIST” returning incorrect results. We previously issued an alert on this error.
A fix for incomplete stack traces in certain circumstances.
How to get the software
As usual, Extended Maintenance (EM) releases come with classic installation packages for all supported platforms, as well as container images in Docker format. For a complete list, refer to the Supported Platforms page.
Classic installation packages
Installation packages are available from the WRC's Extended Maintenance Releases page. Kits can also be found on the Evaluation Services website.
Containers
Container images for both Enterprise and Community Editions of InterSystems IRIS and IRIS for Health and all corresponding components are available from the InterSystems Container Registry web interface.
Containers are tagged as both "2024.1" and "latest-em".
Article
Developer Community Admin · Oct 21, 2015
InterSystems Caché 2015.1 soars from 6 million to more than 21 million end-user database accesses per second on the Intel® Xeon® processor E7 v2 family compared to Caché 2013.1 on the Intel® Xeon® processor E5 family
Overview
With data volumes soaring and the opportunities to derive value from data rising, database scalability has become a crucial challenge for a wide range of industries. In healthcare, the rising demands for healthcare services and significant changes in the regulatory and business climates can make the challenges particularly acute. How can organizations scale their databases in an efficient and cost-effective way?
The InterSystems Caché 2015.1 data platform offers a solution. Identified as a Leader in the Gartner Magic Quadrant for Operational Database Management Systems,1 Caché combines advanced data management, integration, and analytics. Caché 2015.1 is optimized to take advantage of modern multi-core architectures and represents a new generation of ultra-high-performance database technology. Running on the Intel® Xeon® processor E7 v2 family, Caché 2015.1 provides a robust, affordable solution for scalable, data-intensive computing.
To examine the scalability of Caché 2015.1, InterSystems worked with performance engineers at Epic, whose electronic medical records (EMRs) and other healthcare applications are deployed by some of the world's largest hospitals, delivery systems, and other healthcare organizations. The test team found that Caché 2015.1 with Enterprise Cache Protocol® (ECP®) technology on the Intel Xeon processor E7 v2 family achieved more than 21 million end-user database accesses per second (known in the Caché environment as Global References per Second or GREFs) while maintaining excellent response times. This was more than triple the load levels of 6 million GREFs achieved by Caché 2013.1 on the Intel® Xeon® processor E5 family.
"The scalability and performance improvements of Caché version 2015.1 are terrific. Almost doubling the scalability, this version provides a key strategic advantage for our user organizations who are pursuing large-scale medical informatics programs as well as aggressive growth strategies in preparation for the volume-to-value transformation in healthcare." – Carl Dvorak, President, Epic
Article
Developer Community Admin · Oct 21, 2015
Providing a reliable infrastructure for rapid, unattended, automated failover
Technology Overview
Traditional availability and replication solutions often require substantial capital investments in infrastructure, deployment, configuration, software licensing, and planning. Caché Database Mirroring (Mirroring) is designed to provide an economical solution for rapid, reliable, robust, automatic failover between two Caché systems, making mirroring the ideal automatic failover high-availability solution for the enterprise.
In addition to providing an availability solution for unplanned downtime, mirroring offers the flexibility to incorporate certain planned downtimes on a particular Caché system while minimizing the impact on the organization's overall SLAs. Combining InterSystems Enterprise Cache Protocol (ECP) application servers with mirroring provides an additional level of availability. Application servers allow processing to seamlessly continue on the new system once the failover is complete, thus greatly minimizing workflow and user disruption. Configuring the two mirror members in separate data centers offers additional redundancy and protection from catastrophic events.
Key Features and Benefits
- Economical high availability database solution with automatic failover
- Redundant components minimize shared-resource related risks
- Logical data replication minimizes risks of carry-forward physical corruption
- Provides a solution for both planned and unplanned downtime
- Provides business continuity benefits via a geographically dispersed disaster recovery configuration
- Provides Business Intelligence and reporting benefits via a centralized Enterprise Data Warehouse configuration
Traditional availability solutions that rely on shared resources (such as shared disk) are often susceptible to a single point of failure with respect to that shared resource. Mirroring reduces that risk by maintaining independent components on the primary and backup mirror systems. Further, by utilizing logical data replication, mirroring reduces the potential risks associated with physical replication, such as out-of-order updates and carry-forward corruption, which are possible with other replication technologies such as SAN-based replication.
Finally, mirroring allows for a special Async Member, which can be configured to receive updates from multiple mirrors across the enterprise. This allows a single system to act as a comprehensive enterprise data store, enabling - through the use of InterSystems DeepSee - real-time business intelligence that uses enterprise-wide data. The async member can also be deployed in a Disaster Recovery model in which a single mirror can update up to six geographically-dispersed async members; this model provides a robust framework for distributed data replication, thus ensuring business continuity benefits to the organization. The async member can also be configured as a traditional reporting system so that application reporting can be offloaded from the main production system.
Article
Developer Community Admin · Oct 21, 2015
Introduction
To overcome the performance limitations of traditional relational databases, applications - ranging from those running on a single machine to large, interconnected grids - often use in-memory databases to accelerate data access. While in-memory databases and caching products increase throughput, they suffer from a number of limitations including lack of support for large data sets, excessive hardware requirements, and limits on scalability.
InterSystems Caché is a high-performance object database with a unique architecture that makes it suitable for applications that typically use in-memory databases. Caché's performance is comparable to that of in-memory databases, but Caché also provides:
Persistence - data is not lost when a machine is turned off or crashes
Rapid access to very large data sets
The ability to scale to hundreds of computers and tens of thousands of users
Simultaneous data access via SQL and objects: Java, C++, .NET, etc.
This paper explains why Caché is an attractive alternative to in-memory databases for companies that need high-speed access to large amounts of data.
superior-alternative-to-in-memory-databases-key-value-stores.pdf
Article
Murray Oldfield · Mar 8, 2016
Your application is deployed and everything is running fine. Great, hi-five! Then out of the blue the phone starts to ring off the hook – it’s users complaining that the application is sometimes ‘slow’. But what does that mean? Sometimes? What tools do you have and what statistics should you be looking at to find and resolve this slowness? Is your system infrastructure up to the task of the user load? What infrastructure design questions should you have asked before you went into production? How can you capacity plan for new hardware with confidence and without over-spec'ing? How can you stop the phone ringing? How could you have stopped it ringing in the first place?
A list of other posts in this series is here
This will be a journey
This is the first post in a series that will explore the tools and metrics available to monitor, review, and troubleshoot system performance, as well as system and architecture design considerations that affect performance. Along the way we will head off down quite a few tracks to understand performance for Caché, operating systems, hardware, virtualization, and other areas that become topical from your feedback in the comments.
We will follow the feedback loop where performance data gives a lens to view the advantages and limitations of the applications and infrastructure that is deployed, and then back to better design and capacity planning.
It should go without saying that you should be reviewing performance metrics constantly; it is unfortunate how often customers are surprised by performance problems that would have been visible for a long time, if only they had been looking at the data. But of course the question is: what data? We will start the journey by collecting some basic Caché and system metrics so we can get a feel for the health of your system today. In later posts we will dive into the meaning of key metrics.
There are many options available for system monitoring, both from within Caché and from external tools, and we will explore a lot of them in this series.
To start we will look at my favorite go-to tool for continuous data collection which is already installed on every Caché system – ^pButtons.
To make sure you have the latest copy of pButtons please review the following post:
https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-how-update-pbuttons
Collecting system performance metrics - ^pButtons
The Caché pButtons utility generates a readable HTML performance report from log files it creates. Performance metrics output by pButtons can easily be extracted, charted and reviewed.
Data collected in the pButtons html file includes:
Caché setup: configuration, drive mappings, etc.
mgstat: Caché performance metrics - most values are average per second.
Unix: vmstat and iostat: Operating system resource and performance metrics.
Windows: performance monitor: Windows resource and performance metrics.
Other metrics that will be useful.
pButtons data collection has very little impact on system performance; the metrics are already being collected by the system, and pButtons simply packages them for easy filing and transport.
To keep a baseline, for trend analysis, and for troubleshooting, it is good practice to collect a 24-hour pButtons run (midnight to midnight) every day for a complete business cycle. A business cycle could be a month or more, for example to capture data from end-of-month processing. If you do not have any other external performance monitoring or collection you can run pButtons year-round.
The following key points should be noted:
Change the log directory to a location away from production data to store the accumulated output files and avoid disk-full problems!
Run an operating system script, or otherwise compress and archive the pButtons files regularly; this is especially important on Windows as the files can be large.
Review the data regularly!
In the event of a problem needing immediate analysis, pButtons data can be previewed (collected immediately) while metrics continue to be stored for collection at the end of the day's run.
For more information on pButtons, including previewing, stopping a run, and adding custom data gathering, please see the Caché Monitoring Guide in the most recent Caché documentation:
http://docs.intersystems.com
The pButtons HTML file data can be separated and extracted (to CSV files, for example) for processing into graphs or other analysis by scripting or simple cut and paste. We will see examples of the output in graphs in the next post.
Of course if you have urgent performance problems contact the WRC.
Schedule 24 hour pButtons data collection
^pButtons can be started manually from the terminal prompt or scheduled. To schedule a 24-hour daily collection:
1. Start Caché terminal, switch to %SYS namespace and run pButtons manually once to set up pButtons file structures:
%SYS>d ^pButtons
Current log directory: /db/backup/benchout/pButtonsOut/
Available profiles:
1 12hours - 12 hour run sampling every 10 seconds
2 24hours - 24 hour run sampling every 10 seconds
3 30mins - 30 minute run sampling every 1 second
4 4hours - 4 hour run sampling every 5 seconds
5 8hours - 8 hour run sampling every 10 seconds
6 test - A 5 minute TEST run sampling every 30 seconds
Select option 6 for test, the 5 minute TEST run sampling every 30 seconds. Note that your numbering may be different, but the test option should be obvious.
During the run, run Collect^pButtons (as shown below); you will see information including the runid, in this case "20160303_1851_test".
%SYS>d Collect^pButtons
Current Performance runs:
20160303_1851_test ready in 6 minutes 48 seconds nothing available to collect at the moment.
%SYS>
Notice that this 5 minute run has 6 minutes and 48 seconds to go? pButtons adds a 2 minute grace period to all runs to allow time for collection and collation of the logs into html format.
2. IMPORTANT! Change the pButtons log output directory – the default output location is the <cache install path>/mgr folder. For example, on UNIX the path to the log directory may look like this:
do setlogdir^pButtons("/somewhere_with_lots_of_space/perflogs/")
Ensure Caché has write permissions for the directory and there is enough disk space available for accumulating the output files.
3. Create a new 24 hour profile with 30 second intervals by running the following:
write $$addprofile^pButtons("My_24hours_30sec","24 hours 30 sec interval",30,2880)
Check the profile has been added to pButtons:
%SYS>d ^pButtons
Current log directory: /db/backup/benchout/pButtonsOut/
Available profiles:
1 12hours - 12 hour run sampling every 10 seconds
2 24hours - 24 hour run sampling every 10 seconds
3 30mins - 30 minute run sampling every 1 second
4 4hours - 4 hour run sampling every 5 seconds
5 8hours - 8 hour run sampling every 10 seconds
6 My_24hours_30sec- 24 hours 30 sec interval
7 test - A 5 minute TEST run sampling every 30 seconds
select profile number to run:
Note: You can vary the collection interval – 30 seconds is fine for routine monitoring. I would not go below 5 seconds for a routine 24 hour run (…”,5,17280) as the output files can become very large, because pButtons collects data at every tick of the interval. If you are troubleshooting a particular time of day and want more granular data, use one of the default profiles or create a new custom profile with a shorter time period, for example 1 hour with a 5 second interval (…”,5,720). Multiple pButtons runs can happen at the same time, so you could have a short pButtons with a 5 second interval running at the same time as the 24-hour pButtons.
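As a sketch of that last suggestion, a 1 hour profile sampling every 5 seconds (720 samples) can be created with the same addprofile^pButtons entry point used in step 3; the profile name and description below are only placeholders:
write $$addprofile^pButtons("My_1hour_5sec","1 hour 5 sec interval",5,720)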
4. Tip: For UNIX sites, review the disk commands. The default parameters used with the 'iostat' command may not include disk response times. First, display which disk commands are currently configured:
%SYS>zw ^pButtons("cmds","disk")
^pButtons("cmds","disk")=2
^pButtons("cmds","disk",1)=$lb("iostat","iostat ","interval"," ","count"," > ")
^pButtons("cmds","disk",2)=$lb("sar -d","sar -d ","interval"," ","count"," > ")
To collect disk statistics, edit the command syntax as appropriate for your UNIX installation. Note the trailing space. Here are some examples:
LINUX: set $li(^pButtons("cmds","disk",1),2)="iostat -xt "
AIX: set $li(^pButtons("cmds","disk",1),2)="iostat -sadD "
VxFS: set ^pButtons("cmds","disk",3)=$lb("vxstat","vxstat -g DISKGROUP -i ","interval"," -c ","count"," > ")
Running both the iostat and sar commands can create very large pButtons html files. For regular performance reviews I usually use only iostat. To configure only one command:
set ^pButtons("cmds","disk")=1
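To double-check the result you can display the node again with zw, as in the tip above; and, assuming the top node simply holds the command count (as the earlier zw output suggests), setting it back to 2 should restore both commands:
zw ^pButtons("cmds","disk")
set ^pButtons("cmds","disk")=2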
More details on configuring pButtons are in the online documentation.
5. Schedule pButtons to start at midnight in the Management Portal > System Operation > Task Manager:
Namespace: %SYS
Task Type: RunLegacyTask
ExecuteCode: Do run^pButtons("My_24hours_30sec")
Task Priority: Normal
User: superuser
How often: Once daily at 00:00:01
Collecting pButtons data
pButtons shipped with more recent versions of InterSystems data platforms includes automatic collection. To manually collect and collate the data into an html file, run the following command in the %SYS namespace to generate any outstanding pButtons html output files:
do Collect^pButtons
The html file will be in the logdir you set at step 2 (if you did not set it, go and do it now!). Otherwise the default location is <Caché install dir>/mgr.
Files are named <hostname_instance_Name_date_time_profileName.html> e.g. vsan-tc-db1_H2015_20160218_0255_test.html
Windows Performance Monitor considerations
If the operating system is Windows, then Windows Performance Monitor (perfmon) can be used to collect data in sync with the other metrics collected. On older Caché distributions of pButtons, Windows perfmon needs to be configured manually. If there is demand in the post comments, I will write a post about creating a perfmon template to define the performance counters to monitor and scheduling it to run for the same period and interval as pButtons.
Summary
This post got us started collecting some data to look at. Later in the week I will start to look at some sample data and what it means. You can follow along with data you have collected on your own systems. See you then.
http://docs.intersystems.com
Great article Murray. I'm looking forward to reading subsequent ones.
Thanks for this Murray. I am just sending to our system manager! (We don't run pButtons all the time but perhaps we should)
Thank you, Murray! Great article! People! See also other articles in the InterSystems Data Platform Blog and subscribe so you don't miss new ones.
When that's not enough, there are a ton of tools to go deeper and look at the OS side of things: http://www.brendangregg.com/linuxperf.html
A nice intro, moving on to part two. Thank you.
This one is so popular it's showing up three times in the posting list. Will try to figure out why...
Just adding my 2c to "4. Tip For UNIX sites": All necessary unix/linux packages should be installed before the first invocation of
Do run^pButtons(<any profile>)
otherwise some commands may be missed in ^pButtons("cmds"). I recently faced this at a server where sysstat wasn't installed: the `sar -d` and `sar -u` commands were absent. If you decide to install it later (`sudo yum install sysstat` in my case), ^pButtons("cmds") will not be automatically updated without a little help from you: just kill it before calling run^pButtons().
This applies at least to pButtons v.1.15c-1.16c and v.5 (which recently appeared on ftp://ftp.intersys.com/pub/performance/), in Caché 2015.1.2.
Excellent article. I have a question about options for running pButtons. Please note: it has been a while since I last used pButtons, but I can attest that it is a very useful tool. My question: is there an option to run pButtons in "basic mode" versus "advanced mode"? I thought I recalled such a feature, but cannot seem to remember how to select the run mode. And from what I recall, basic mode collects less data/information than advanced mode. This can be helpful for support teams and others when they only need the high-level details. Thanks! And I look forward to reading the next 5 articles in this series.
Basic and advanced mode were in an old version of another tool named ^Buttons. With ^pButtons you have the option to reduce the number of OS commands being performed, as was shown in Tip #4.
It is a pretty good idea to run pButtons all the time. That way you know you'll have data for any acutely weird performance behavior during the time of the problem. The documentation has an example of setting up a 24 hour pButtons run every week at a specific time using Task Manager: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_pbuttons#GCM_pButtons_runsmp You can also just set the task to run every day rather than weekly. If you're worried about space, the reports don't take up too much room, and (since they're fairly repetitive) they zip pretty well.
Fantastic article. Congrats.
thanks :)
Hi @Murray.Oldfield, we have such a need for Caché on Windows OS. Really appreciate if you can help do one. Thx!
Question
sansa stark · Aug 24, 2016
Hi All, I have installed Caché 5.0 but the terminal will not open. Does anyone know the default username and password for Caché 5.0?
Try username SYS, password XXX. You can change the password using Control Panel | Security | User accounts (TRM: account). Thanks.
As I remember, you could create a user TRM or TELNET, with routine %PMODE, and get access without authentication. I don't remember how it was in 5.0, but currently the default login/password is _SYSTEM/SYS
Announcement
Janine Perkins · Oct 12, 2016
Learn to successfully design your HL7 production.
After you have built your first HL7 production in a test environment, you can start applying what you have learned and begin building in a development environment. Take this course to learn to create namespaces and databases for your production, apply recommended design principles to your production, and use appropriate naming conventions in your production.
Learn More.
Announcement
Evgeny Shvarov · Oct 19, 2016
Hi, Community!
I'm glad to announce UK Technology Summit 2016 started!
The #ISCTech2016 tag will help you keep in touch with everything happening at the Summit.
You are very welcome to discuss the Summit here in comments too!
Announcement
Janine Perkins · Feb 2, 2016
Do you need to quickly build a web page to interact with your database?
Take a look at these two courses to learn how Zen Mojo can help you display collections and make your collections respond to user interactions.
Displaying Collections and Using the Zen Mojo Documentation
Learn the steps for displaying a collection of Caché data on a Zen Mojo page, find crucial information in the Zen Mojo documentation, and find sample code in the Widget Reference. Learn More.
Zen Mojo: Handling Events and Updating Layouts
This is an entirely hands-on course devoted to event handling and updating the display of a Zen Mojo page in response to user interaction. Learn to create a master-detail view that responds to user selections. Learn More.
Article
Murray Oldfield · Mar 11, 2016
In the last post we scheduled 24-hour collections of performance metrics using pButtons. In this post we are going to be looking at a few of the key metrics that are being collected and how they relate to the underlying system hardware. We will also start to explore the relationship between Caché (or any of the InterSystems Data Platforms) metrics and system metrics. And how you can use these metrics to understand the daily beat rate of your systems and diagnose performance problems.
[A list of other posts in this series is here](https://community.intersystems.com/post/capacity-planning-and-performance-series-index)
***Edited Oct 2016...***
*[Example of script to extract pButtons data to a .csv file is here.](https://community.intersystems.com/post/extracting-pbuttons-data-csv-file-easy-charting)*
***Edited March 2018...***
Images had disappeared, added them back in.
# Hardware food groups

As you will see as we progress through this series of posts the server components affecting performance can be itemised as:
- CPU
- Memory
- Storage IO
- Network IO
If any of these components is under stress then system performance and user experience will likely suffer. These components are all related to each other; changes to one component can affect another, sometimes with unexpected consequences. I have seen an example where fixing an IO bottleneck in a storage array caused CPU usage to jump to 100%, resulting in even worse user experience as the system was suddenly free to do more work but did not have the CPU resources to service the increased user activity and throughput.
We will also see how Caché system activity has a direct impact on server components. If storage IO resources are limited, a positive change that can be made is increasing system memory and increasing memory for __Caché global buffers__, which in turn can lower __system storage read IO__ (but perhaps increase CPU!).
One of the most obvious system metrics to monitor regularly, or to check when users report problems, is CPU usage -- for example with _top_ or _nmon_ on Linux or AIX, or _Windows Performance Monitor_. Because most system administrators look at CPU data regularly, especially if it is presented graphically, a quick glance gives you a good feel for the current health of your system -- what is normal, and what is a sudden spike in activity that might be abnormal or indicate a problem. In this post we are going to look quickly at CPU metrics but will concentrate on Caché metrics; we will start by looking at _mgstat_ data and how looking at the data graphically can give a feel for system health at a glance.
# Introduction to mgstat
mgstat is one of the Caché commands included and run in pButtons. mgstat is a great tool for collecting basic performance metrics to help you understand your system's health. We will look at mgstat data collected from a 24 hour pButtons run, but if you want to capture data outside pButtons, mgstat can also be run on demand interactively or as a background job from Caché terminal.
To run mgstat on demand from the %SYS namespace, the general format is:
do ^mgstat(sample_time,number_of_samples,"/file_path/file.csv",page_length)
For example, to run a background job for a one hour run with a 5 second sample period and output to a csv file:
job ^mgstat(5,720,"/data/mgstat_todays_date_and_time.csv")
For example, to display to the screen but drop some columns, use the dsp132 entry. I will leave it as homework for you to check the output to understand the difference.
do dsp132^mgstat(5,720,"",60)
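If you want a standalone 24-hour collection outside pButtons, the same entry point can be jobbed with a larger sample count; 24 hours at 5 second samples is 17,280 samples, and the output path below is only an example:
job ^mgstat(5,17280,"/data/mgstat_daily.csv")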
> Detailed information of the columns in mgstat can be found in the _Caché Monitoring Guide_ in the most recent Caché documentation:
> [InterSystems online documentation](https://docs.intersystems.com)
# Looking at mgstat data
pButtons has been designed to be collated into a single HTML file for easy navigation and packaging for sending to WRC support specialists to diagnose performance problems. However, when you run pButtons for yourself and want to graphically display the data, it can be separated again into a csv file for processing into graphs, for example with Excel, by command line script or simple cut and paste.
In this post we will dig into just a few of the mgstat metrics to show how even a quick glance at the data can give you a feel for whether the system is performing well or whether there are current or potential problems that will affect the user experience.
## Glorefs and CPU
The following chart shows database server CPU usage at a site running a hospital application at a high transaction rate. Note the morning peak in activity when there are a lot of outpatient clinics, with a drop-off at lunch time, then tailing off in the afternoon and evening. In this case the data came from the Windows Performance Monitor counter (_Total) % Processor Time - the shape of the graph fits the working day profile - no unusual peaks or troughs, so this is normal for this site. By doing the same for your site you can start to get a baseline for "normal". A big spike, especially an extended one, can be an indicator of a problem; there is a future post that focuses on CPU.

As a reference, this database server is a Dell R720 with two E5-2670 8-core processors; the server has 128 GB of memory and 48 GB of global buffers.
The next chart shows more data from mgstat — Glorefs (Global references), or database accesses, for the same day as the CPU graph. Glorefs indicates the amount of work that is occurring on behalf of the current workload; although global references consume CPU time, they do not always consume other system resources, such as physical reads, because of the way Caché uses the global memory buffer pool.

Typical of Caché applications there is a very strong correlation between Glorefs and CPU usage.
>Another way of looking at this CPU and gloref data is to say that _reducing glorefs will reduce CPU utilisation_, enabling deployment on lower core count servers or further scaling on existing systems. There may be ways to reduce global references by making an application more efficient; we will revisit this concept in later posts.
## PhyRds and Rdratio
The shape of the data from graphing the mgstat metrics _PhyRds_ (Physical Reads) and _Rdratio_ (Read ratio) can also give you an insight into what to expect of system performance and help you with capacity planning. We will dig deeper into storage IO for Caché in future posts.
_PhyRds_ are simply physical read IOPS from disk to the Caché databases; you should see the same values reflected in operating system metrics for logical and physical disks. Remember that operating system IOPS may include IOPS coming from non-Caché applications as well. Sizing storage without accounting for expected IOPS is a recipe for disaster; you need to know what IOPS your system is doing at peak times for proper capacity planning. The following graph shows _PhyRds_ between midnight and 15:30.

Note the big jump in reads between 05:30 and 10:00, with other shorter peaks at 11:00 and just before 14:00. What do you think these are caused by? Do you see these types of peaks on your servers?
_Rdratio_ is a little more interesting — it is the ratio of logical block reads to physical block reads: that is, how many reads are satisfied from global buffers (logical reads from memory) versus how many come from disk, which is orders of magnitude slower. For example, if the system performs 200,000 logical block reads per second and 500 physical block reads per second, Rdratio is 400. A high _Rdratio_ is a good thing; dropping close to zero for extended periods is not good.

Note that at the same time as the high reads, _Rdratio_ drops close to zero. At this site I was asked to investigate when the IT department started getting phone calls from users reporting the system was slow for extended periods. This had been going on, seemingly at random, for several weeks when I was asked to look at the system.
> _**Because pButtons had been scheduled for daily 24-hour runs it was relatively simple to go back through several weeks' data to see a pattern of high PhyRds and low Rdratio which correlated with support calls.**_
After further analysis the cause was tracked to a new shift worker who was running several reports with 'bad' parameters, combined with badly written queries without appropriate indexes, causing the high database reads. This accounted for the seemingly random slowness. Because these long running reports read data into global buffers, interactive users' data ends up being fetched from physical storage rather than memory, while the storage is also stressed servicing the reads.
Monitoring _PhyRds_ and _Rdratio_ will give you an idea of the beat rate of your systems and may allow you to track down bad reports or queries. There may be valid reasons for high _PhyRds_ -- perhaps a report must be run at a certain time. With modern 64-bit operating systems and servers with large physical memory capacity you should be able to minimise _PhyRds_ on your production systems.
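If you want to watch these metrics live while users are reporting slowness, the same ^mgstat entry point shown earlier can be run for a short, fine-grained sample; the 1 second interval for 5 minutes and the file path below are just an example:
do ^mgstat(1,300,"/tmp/mgstat_incident.csv")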
> If you do see high _PhyRds_ on your system there are a couple of strategies you can consider:
> - Improve the performance by increasing the number of database (global) buffers (and system memory).
> - Long running reports or extracts can be moved out of business hours.
> - Long running read only reports, batch jobs or data extracts can be run on a separate shadow server or asynchronous mirror to minimise the impact on interactive users and to offload system resource use such as CPU and IOPS.
Usually low _PhyRds_ is a good thing and it's what we aim for when we size systems. However, if you have low _PhyRds_ and users are complaining about performance, there are still things that can be checked to ensure storage is not a bottleneck - the reads may be low because the system cannot service any more. We will look at storage more closely in a future post.
# Summary
In this post we looked at how graphing the metrics collected in pButtons can give a health check at a glance. In upcoming posts I will dig deeper into the relationship between the system and Caché metrics and how you can use these to plan for the future.
Murray, thank you for the series of articles. A couple of questions I have. 1) Documentation (2015.1) states that Rdratio is a ratio of physical block reads to logical block reads, while one can see in the mgstat log Rdratio values >> 1 (usually 1000 and more). Don't you think that the definition should be reversed? 2) You wrote that: "If you do see high PhyRds on your system there are a couple of strategies you can consider: ... Long running read only reports, batch jobs or data extracts can be run on a separate shadow server or asynchronous mirror to minimise impact on interactive users and to offload system resource use such as CPU and IOPS." I have heard this advice many times, but how do you return report results back to the primary member? (ECP) mounting of a remote database that resides on the primary member is prohibited on the backup member, and vice versa. Or do these restrictions not apply to asynchronous members (never played with them yet)?
Murray, thanks for your articles. But I think metrics related to the Write Daemon should be mentioned too, such as WDphase and WDQsz. Sometimes when our system seems to be working too slowly, it may depend on how quickly our disks can write, and in that case these are very useful metrics. In my own experience, one ordinary day our server started to work slowly and we saw that the write daemon was in phase 8 the whole time; with PhyWrs we could count how many blocks were really written to disk, and it was not such a big count at that time, so we found a problem in our storage, something related to snapshots. When the storage was reconfigured, our write daemon continued to work as quickly as before.
I believe the correct statement is that Rdratio is the ratio of logical block reads to physical block reads, but is zero if physical block reads is zero.
Thanks! Yes, I wrote it the wrong way around in the post. I have fixed this now.
The latest Caché documentation has details and examples for setting up a read only or read/write asynchronous report mirror. The async reporting mirror is special because it is not used for high availability. For example, it is not a DR server. At the highest level, running reports or extracts on a shadow is possible simply because the data exists on the other server in near real time. Operational or time-critical reports should be run on the primary servers. The suggestion is that resource heavy reports or extracts can use the shadow or reporting server. While setting up a shadow or reporting async mirror is part of Caché, how a report or extract is scheduled or run is an application design question, and not something I can answer - hopefully someone else can jump in here with some advice or experience. Possibilities may include web services, or, if you use ODBC, your application could direct queries to the shadow or a reporting async mirror. For batch reports or extracts, routines could be scheduled on the shadow/reporting async mirror via task manager. Or you may have a separate application module for this type of reporting. If you need to have results returned to the application on the primary production system, that is also application dependent. You should also consider how to handle (e.g. via global mapping) any read/write application databases such as audit or logs which may be overwritten by the primary server. If you are going to do reporting on a shadow server, search the online documentation for special considerations for "Purging Cached Queries". There are several more articles to come before we are done with storage IO; I will focus more on IOPS and writes in coming weeks, and will show some examples and solutions to the type of problem you mentioned.
Thanks for the comment. I have quite a few more articles (in my head) for this series; I will be using the comments to help me decide which topics you all are interested in.
Rdratio is a little more interesting — it is the ratio of logical block reads to physical block reads. Don't you think that zero values of Rdratio are a special case, as David Marcus mentioned? In the mgstat (per second) logs I have at hand I've found them always accompanied by zero values of PhyRds.
Just one thing; one good tool to use on Linux is dstat. It is not installed by default, but once you have it (apt-get install dstat on Debian and derivatives, yum install dstat on RHEL), you can observe the live behavior of your system as a whole with: dstat -ldcymsn. It gives quite a lot of information!
Would it be possible to fix the image links to this post?
Hi Mack, sorry about that, the images are back!
Announcement
Evgeny Shvarov · Mar 14, 2017
Hi Community!
We want to invite you to join the InterSystems Gamification Platform called Global Masters Advocate Hub!
The Global Masters Advocacy Hub is our customer engagement platform where you will be invited to have some fun completing entertaining challenges, earning badges for the contribution to Developer Community, communicating with other advocates, and accumulating points which you can redeem for a variety of rewards and special honors.
In addition to the challenges we prepared special Global Masters rewards for you.
How to get rewards? It is simple — just redeem your points and get the reward you want.
Here are some prizes from our Rewards Catalog:
What's more?
There is also Global Masters Leaderboard which counts your advocacy activity and contribution to InterSystems Developer Community.
Update: Join Global Masters now: use your InterSystems SSO credentials to access the program.
See you on InterSystems Global Masters today! There is a problem with the global masters link - the certificate is invalid. Thanks, Jon!I think I've fixed that I also have a problem with the certificate. Nope, certificate error still remains. Our web security policy blocks access to sites with certificate errors.
Error details are as follows
VERIFY DENY: depth=0, CommonName "*.influitive.com" does not match URL "globalmasters.intersystems.com"
Hi, Stephen!Thanks for the comment!We are looking into that. Should this post be a sticky post on the homepage?Should there be a link to it in main menu? I assume Evgeny's initial fix was to change the hyperlink in the article so it's an http one rather than an https one. Hi,I'd like to join.Cheers,Juergen I would like a join code. Thanks.Stephen No more certificate errors. I believe the issue with the certificate is now resolved. Great! The invitation sent. Hi, Stephen!You are invited! Stephen, are you still using the original https URL Evgeny posted, i.e. https://globalmasters.intersystems.com/ ? My browser still reports an issue with the certificate for that. Hi, John and Steve!The issue is not resolved yet, but would be solved in a few days. Thanks, Jon!The link to Global Masters introduced to the Community menu. I would like a join code too, I'm very interested Hi, Francisco!You're invited! I would like to join :) I would like to join. I would like to join! My e-mail on the community is Amir.Samary@intersystems.com. Hi, Chris! You are invited! Hi, Scott! You are invited! Hi, Amir! You are invited! I'll give it a try! Hi, Jack!You are invited! I would like a join code. Thanks.Dietmar Hi, Deitmar! You are invited! I want to join to gain knowledge on cache. I would like a join code.Thanks Hi, Kishan!You are invited! Hi, Henrique!See the invitation link in your box! I am not able to get in the global masters is there any code to get me log in like that? Hi, Kishan!I sent the personal invitation to your email. Maybe your anti-spam filter is too cruel?Resent it now. I would like a join code. Thanks. Hi, Felix!You are invited! I would like a join code as well. Thanks! If still available I'd like to join.Many thanks! Hi, Stephen!You are invited! Hi, John!It is and you are invited.Welcome to the club! I would like a join code. Thanks. Hi, Charles!You are invited! I'd like to join as well. Thx Hi, Josef!You are invited! I'd like to joinThanks Hi, Thiago!You are invited! I'd like to join! Hi, Anna!You are invited! Welcome to Global Masters! Hi, could I join the Global Masters Lonely Hearts Club Band? Thx. Michal Hi, Michal!Sure! The invitation has been sent. Welcome to the club! ) I'd like to join as well, Evgeny. Thank you. Sure, Jeffrey! You are invited. Hi,I would like to join too ! Hi,I would like to join the community. I have got two invites previously but i couldn't join can you please help me.ThanksKishan R I'd like to join please :) Hi, Paul!You are invited! Hi, Kishan!You are invited! If still available I'd like to join. Hi, Thiago! It is!You are invited! Hi EvgenyCould you send me a join code?RegardsCristiano José da Silva Hi, Cristiano!You are invited! Hi Evgeny,Could you send me again the invitation?Thanks. Hi, Cristiano! Did it again) I'd like to join as well. Thanks :D Hi, Wesley! You are invited! Hi. I'd like to join! Hi, Gilberto!You are invited! Hi, Community!We introduced Single Sign On for Global Masters (GM).So you can use your WRC credentials to join globalmasters now. First time it would also ask you for a GM specific password.How it works:For those who have no active WRC account yet the invite is a way to join! Comment this post to get the personal invite to the Intersystems Global Masters Advocates Hub.And you can use your login and password if you are already a member. Please add me tooPeter Hi, Peter!You are invited! Hi,I am in, can you please add me. Thanks Hi, Aman!You are invited. I'd like to join, Evgeny. Thanks. Thank you Evgeny! Hi, Joe! 
You are invited!And you can use your WRC account if you want to join GM. Hi,I would like to joinThanks. Hi, Guilherme!You are invited!Also you are able to join with your WRC account, as it is shown here.Welcome to the club! HII'd like to join. Hi, Josnei!You are invited! I would like to join. Thank you
i want to join. Very Nice ! :) I will get some informations on the site. Hi, Minsu!Sent you the invite to Global Masters. I want to join to gain knowledge on cache. I want to join to gain knowledge on cache. Hi, Ron!You are invited!And to gain knowledge on InterSystems Caché I recommend you InterSystems online Learning and this Developer Community: ask your questions! Hi,I want join global masters.Thanks Hi, Kuldeep!You are invited. Hi.I'd like to join! Hi, Derek!You are invited! I want to join You are invited! I would like to join Global Masters.Regards I joined Global Masters ages ago, but still can't find a way how to suppress it's notifications. How can I do it? Hi Goran! You are invited Alexey, sorry for disturbing with GM notifications.You can turn it off in Profile settings. See the gif: Evgeny --Thanks, I've already tried it before writing here. My current settings were: Prospect Accepted Unsubscribe from all except privacy, security, and account emailswhile I still received eMails about new challenges, etc. Just tried to turn off "Prospect Accepted", guessing that it can help, but this setting seems to be unswitchable (still active). It is hardly the source of a "problem" as I was never notified on the "prospect" stuff, even don't know its meaning.All these eMails are not a great deal of disturbance, I just dislike when I can't control their frequency. I don't insist on correction, if it causes extra efforts of your team - just sign me off from GM. I would like to Join :) Sounds fun.. May I have the link. Thanks Hi Jimmy!You're invited. Please check you mail! Hi, Thank you for your invitation. Cheers I want to join Global Masters. Thanks! Hi Rayana!You are invited! Hi Rayana,You're invited. Please check you mail! Hi. If it's possible, I would like a join code too.Thanks!Félix Hi Félix,You're invited, please check you mail. Welcome to Global Masters! I would like to join Thanks Evgeny! I did it. Please add me. Hi Suman,You are invited. Welcome to the club! :) Hi Walter,You are invited. Please check you mail! :) Please send an invite, thanks! Hi Drew,Welcome to the club! Please invite me ;) Hi Jan,
You're invited. Please check your mail! I would like yo join. Thanks! Hi Daniel, you are invited! Please check your mail I'd like to join Hi Relton,
Please check your mail and welcome to the club! 😉 I just got a badge! Evidently a post of mine was added to favorites 10 times.
How does one know which post this was though?
Best,
Mike Hi, Mike!
I think we have a problem with this badge. Investigating. Thanks for the feedback! hi,
i would like to join to global masters . please guide me.
thanks
hasmathulla I would like to join. I would like a join code too, I'm very interested.
Announcement
Anastasia Dyubaylo · Mar 23, 2018
Hi Community!
Please welcome the InterSystems Developer Community SubReddit! Here you will find the most interesting announcements from the InterSystems Developer Community: useful articles, discussions, announcements, and events.
A little about Reddit: it is a social news aggregation, web content rating, and discussion website.
See how the DC SubReddit looks:
Subscribe to the DC SubReddit and maybe it will become one of your favorites!
Announcement
Paul Gomez · Apr 11, 2016
Please use the following link to see all Global Summit 2016 sessions and links to additional content, including session materials.
https://community.intersystems.com/global-summit-2016