Article
Murray Oldfield · Mar 11, 2016
In the last post we scheduled 24-hour collections of performance metrics using pButtons. In this post we are going to look at a few of the key metrics being collected and how they relate to the underlying system hardware. We will also start to explore the relationship between Caché (or any of the InterSystems Data Platforms) metrics and system metrics, and how you can use these metrics to understand the daily beat rate of your systems and diagnose performance problems.
[A list of other posts in this series is here](https://community.intersystems.com/post/capacity-planning-and-performance-series-index)
***Edited Oct 2016...***
*[Example of script to extract pButtons data to a .csv file is here.](https://community.intersystems.com/post/extracting-pbuttons-data-csv-file-easy-charting)*
***Edited March 2018...***
Images had disappeared, added them back in.
# Hardware food groups

As you will see as we progress through this series of posts the server components affecting performance can be itemised as:
- CPU
- Memory
- Storage IO
- Network IO
If any of these components is under stress then system performance and user experience will likely suffer. These components are also related to each other; changes to one component can affect another, sometimes with unexpected consequences. I have seen an example where fixing an IO bottleneck in a storage array caused CPU usage to jump to 100%, resulting in even worse user experience, because the system was suddenly free to do more work but did not have the CPU resources to service the increased user activity and throughput.
We will also see how Caché system activity has a direct impact on server components. If storage IO resources are limited, one positive change is to increase system memory and allocate more memory to __Caché global buffers__, which in turn can lower __system storage read IO__ (but perhaps increase CPU!).
One of the most obvious system metrics to monitor regularly, or to check when users report problems, is CPU usage -- for example by looking at _top_ or _nmon_ on Linux or AIX, or _Windows Performance Monitor_. Because most system administrators look at CPU data regularly, especially if it is presented graphically, a quick glance gives you a good feel for the current health of your system -- what is normal, or a sudden spike in activity that might be abnormal or indicate a problem. In this post we are going to look quickly at CPU metrics, but will concentrate on Caché metrics. We will start by looking at _mgstat_ data and how viewing the data graphically can give a feel for system health at a glance.
# Introduction to mgstat
mgstat is one of the Caché commands included and run in pButtons. mgstat is a great tool for collecting basic performance metrics to help you understand your system's health. We will look at mgstat data collected from a 24-hour pButtons run, but if you want to capture data outside pButtons, mgstat can also be run on demand interactively or as a background job from Caché terminal.
To run mgstat on demand from the %SYS namespace, the general format is:

```
do ^mgstat(sample_time,number_of_samples,"/file_path/file.csv",page_length)
```

For example, to run a background job for one hour with a 5 second sample period and output to a csv file:

```
job ^mgstat(5,720,"/data/mgstat_todays_date_and_time.csv")
```

To display to the screen, dropping some columns, use the dsp132 entry. I will leave it as homework for you to check the output and understand the difference:

```
do dsp132^mgstat(5,720,"",60)
```
> Detailed information of the columns in mgstat can be found in the _Caché Monitoring Guide_ in the most recent Caché documentation:
> [InterSystems online documentation](https://docs.intersystems.com)
# Looking at mgstat data
pButtons has been designed to be collated into a single HTML file for easy navigation and packaging for sending to WRC support specialists to diagnose performance problems. However, when you run pButtons for yourself and want to display the data graphically, it can be separated out again to a csv file for processing into graphs, for example with Excel, by command line script or simple cut and paste.
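As a rough illustration of the "separate it out again" step, here is a minimal Python sketch that pulls one section out of a pButtons HTML report into CSV rows. The marker comments (`beg_mgstat` / `end_mgstat`) are an assumption, not taken from the post -- check your own report and adjust the markers to match your pButtons version.

```python
# Sketch: pull the mgstat section out of a pButtons HTML report into CSV rows.
# Assumption (not from the post): the mgstat block is delimited by marker
# comments such as "<!-- beg_mgstat -->" / "<!-- end_mgstat -->".
import csv
import io

def extract_section(html_text, begin_marker, end_marker):
    """Return the non-markup lines between two marker strings."""
    start = html_text.index(begin_marker) + len(begin_marker)
    end = html_text.index(end_marker, start)
    lines = (ln.strip() for ln in html_text[start:end].splitlines())
    return [ln for ln in lines if ln and not ln.startswith("<")]

# A tiny stand-in for a real pButtons report:
sample = """<pre>
<!-- beg_mgstat -->
Date,Time,Glorefs,PhyRds,Rdratio
01/01/16,00:00:10,125000,35,3500
01/01/16,00:00:20,131000,40,3200
<!-- end_mgstat -->
</pre>"""

rows = extract_section(sample, "<!-- beg_mgstat -->", "<!-- end_mgstat -->")
out = io.StringIO()
csv.writer(out).writerows(r.split(",") for r in rows)  # header + data rows
```

From there the csv can be opened directly in Excel or fed to any charting script.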
In this post we will dig into just a few of the mgstat metrics to show how even a quick glance at the data can give you a feel for whether the system is performing well, or whether there are current or potential problems that will affect the user experience.
## Glorefs and CPU
The following chart shows database server CPU usage at a site running a hospital application at a high transaction rate. Note the morning peak in activity when there are a lot of outpatient clinics, with a drop-off at lunch time, then tailing off in the afternoon and evening. In this case the data came from Windows Performance Monitor `(_Total)\% Processor Time`. The shape of the graph fits the working day profile -- no unusual peaks or troughs, so this is normal for this site. By doing the same for your site you can start to get a baseline for "normal". A big spike, especially an extended one, can be an indicator of a problem; a future post focuses on CPU.

As a reference this database server is a Dell R720 with two E5-2670 8-core processors, the server has 128 GB of memory, and 48 GB of global buffers.
The next chart shows more data from mgstat — Glorefs (Global references), or database accesses, for the same day as the CPU graph. Glorefs indicates the amount of work that is occurring on behalf of the current workload; although global references consume CPU time, they do not always consume other system resources, such as physical reads, because of the way Caché uses the global memory buffer pool.

Typical of Caché applications, there is a very strong correlation between Glorefs and CPU usage.
>Another way of looking at this CPU and gloref data is to say that _reducing glorefs will reduce CPU utilisation_, enabling deployment on lower core count servers or further scaling on existing systems. There may be ways to reduce global references by making an application more efficient; we will revisit this concept in later posts.
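To put a number on that correlation for your own site, you can compute a Pearson coefficient between the two series once both are in csv form. The values below are made up for illustration only; in practice you would read the Glorefs column from your extracted mgstat data and the processor time column from perfmon for the same day.

```python
# Sketch: quantify the Glorefs/CPU correlation with a Pearson coefficient.
# These numbers are invented for illustration -- substitute your own columns.
import statistics

glorefs = [110_000, 250_000, 420_000, 600_000, 380_000, 150_000]  # per second
cpu_pct = [8.0, 19.0, 33.0, 48.0, 30.0, 11.0]                     # % processor time

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(glorefs, cpu_pct)  # close to 1.0 for a strongly correlated workload
```

A coefficient near 1.0 over a full day is what the chart above shows visually; a sudden drop in correlation can itself be a hint that something other than normal application work is consuming CPU.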
## PhyRds and Rdratio
Graphing the mgstat metrics _PhyRds_ (Physical Reads) and _Rdratio_ (Read ratio) can also give you an insight into what to expect of system performance and help you with capacity planning. We will dig deeper into storage IO for Caché in future posts.
_PhyRds_ are simply physical read IOPS from disk to the Caché databases; you should see the same values reflected in operating system metrics for logical and physical disks. Remember that operating system IOPS may include IOPS coming from non-Caché applications as well. Sizing storage without accounting for expected IOPS is a recipe for disaster; you need to know what IOPS your system is doing at peak times for proper capacity planning. The following graph shows _PhyRds_ between midnight and 15:30.

Note the big jump in reads between 05:30 and 10:00, with other shorter peaks at 11:00 and just before 14:00. What do you think these are caused by? Do you see these types of peaks on your servers?
_Rdratio_ is a little more interesting — it is the ratio of logical block reads to physical block reads: that is, how many reads are satisfied from global buffers (logical, in memory) versus how many come from disk, which is orders of magnitude slower. A high _Rdratio_ is a good thing; dropping close to zero for extended periods is not good.
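The relationship can be sketched in a few lines. The zero special case reflects the behaviour discussed in the comments below this article: when there are no physical reads in an interval, mgstat reports a Rdratio of zero rather than attempting a division. The sample values are invented for illustration.

```python
# Sketch: how Rdratio relates to logical and physical block reads, including
# the special case of zero physical reads (mgstat reports 0, not an error).
def rdratio(logical_reads, physical_reads):
    if physical_reads == 0:
        return 0.0  # no disk reads this interval: everything came from buffers
    return logical_reads / physical_reads

healthy = rdratio(400_000, 200)      # 2000.0 -> almost all reads from global buffers
stressed = rdratio(400_000, 90_000)  # ~4.4   -> storage is doing real work
```

So a Rdratio of zero is ambiguous on its own: it is excellent when _PhyRds_ is also zero, and alarming when _PhyRds_ is high, which is why the two metrics are best graphed together.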

Note that at the same time as the high reads, _Rdratio_ drops close to zero. At this site I was asked to investigate when the IT department started getting phone calls from users reporting that the system was slow for extended periods. This had been going on, seemingly at random, for several weeks when I was asked to look at the system.
> _**Because pButtons had been scheduled for daily 24-hour runs it was relatively simple to go back through several weeks data to start seeing a pattern of high _PhyRds_ and low _Rdratio_ which correlated with support calls.**_
After further analysis the cause was tracked to a new shift worker who was running several reports with 'bad' parameters, combined with badly written queries without appropriate indexes, causing the high database reads. This accounted for the seemingly random slowness. Because these long running reports were reading data into global buffers, interactive users' data was being fetched from physical storage rather than memory, and storage was being stressed to service the reads.
Monitoring _PhyRds_ and _Rdratio_ will give you an idea of the beat rate of your systems and may allow you to track down bad reports or queries. There may be valid reasons for high _PhyRds_ -- perhaps a report must be run at a certain time. With modern 64-bit operating systems and servers with large physical memory capacity you should be able to minimise _PhyRds_ on your production systems.
> If you do see high _PhyRds_ on your system there are a couple of strategies you can consider:
> - Improve the performance by increasing the number of database (global) buffers (and system memory).
> - Long running reports or extracts can be moved out of business hours.
> - Long running read only reports, batch jobs or data extracts can be run on a separate shadow server or asynchronous mirror to minimise the impact on interactive users and to offload system resource use such as CPU and IOPS.
Usually low _PhyRds_ is a good thing and it's what we aim for when we size systems. However, if you have low _PhyRds_ and users are complaining about performance, there are still things that can be checked to ensure storage is not a bottleneck -- the reads may be low because the system cannot service any more. We will look closer at storage in a future post.
# Summary
In this post we looked at how graphing the metrics collected in pButtons can give a health check at a glance. In upcoming posts I will dig deeper into the relationship between the system and Caché metrics and how you can use these to plan for the future.

Murray, thank you for the series of articles. A couple of questions I have.

1) The documentation (2015.1) states that Rdratio is a ratio of physical block reads to logical block reads, while one can see in mgstat logs Rdratio values >> 1 (usually 1000 and more). Don't you think that the definition should be reversed?

2) You wrote that: "If you do see high PhyRds on your system there are a couple of strategies you can consider: ... Long running read only reports, batch jobs or data extracts can be run on a separate shadow server or asynchronous mirror to minimise impact on interactive users and to offload system resource use such as CPU and IOPS." I have heard this advice many times, but how do you return report results back to the primary member? (ECP) mounting of a remote database that resides on the primary member is prohibited on the backup member, and vice versa. Or do these restrictions not apply to asynchronous members (I have never played with them yet)?

Murray, thanks for your articles. But I think metrics related to the Write Daemon should be mentioned too, such as WDphase and WDQsz. Sometimes when our system seems to be working too slowly, it can depend on how quickly our disks can write, and in that case these are very useful metrics. In my own experience, one ordinary day our server started to work slowly, and we saw that the write daemon was in phase 8 the whole time. With PhyWrs we could count how many blocks were really being written to disk, and it was not a large count at the time, so we found a problem in our storage, something related to snapshots. When the storage was reconfigured, our write daemon went back to working as quickly as before.
I believe the correct statement is that Rdratio is the ratio of logical block reads to physical block reads, but is zero if physical block reads is zero.

Thanks! Yes, I wrote it the wrong way around in the post. I have fixed this now.

The latest Caché documentation has details and examples for setting up a read-only or read/write asynchronous report mirror. The async reporting mirror is special because it is not used for high availability; for example, it is not a DR server. At the highest level, running reports or extracts on a shadow is possible simply because the data exists on the other server in near real time. Operational or time-critical reports should be run on the primary servers; the suggestion is that resource-heavy reports or extracts can use the shadow or reporting server. While setting up a shadow or reporting async mirror is part of Caché, how a report or extract is scheduled or run is an application design question, and not something I can answer -- hopefully someone else can jump in here with some advice or experience. Possibilities may include web services, or if you use ODBC your application could direct queries to the shadow or a reporting async mirror. For batch reports or extracts, routines could be scheduled on the shadow/reporting async mirror via task manager. Or you may have a separate application module for this type of reporting. If you need to have results returned to the application on the primary production system, that is also application dependent. You should also consider how to handle (e.g. via global mapping) any read/write application databases such as audit or logs which may be overwritten by the primary server. If you are going to do reporting on a shadow server, search the online documentation for special considerations for "Purging Cached Queries". There are several more articles to come before we are done with storage IO; I will focus more on IOPS and writes in coming weeks, and will show some examples and solutions to the type of problem you mentioned.

Thanks for the comment. I have quite a few more articles (in my head) for this series; I will be using the comments to help me decide which topics you all are interested in.

"Rdratio is a little more interesting — it is the ratio of logical block reads to physical block reads." Don't you think that zero values of Rdratio are a special case, as David Marcus mentioned? In the mgstat (per second) logs I have at hand, I have found them always accompanied by zero values of PhyRds.

Just one thing; one good tool to use on Linux is dstat. It is not installed by default, but once you have it (apt-get install dstat on Debian and derivatives, yum install dstat on RHEL), you can observe the live behavior of your system as a whole with:

```
dstat -ldcymsn
```

It gives quite a lot of information!

Would it be possible to fix the image links in this post?

Hi Mack, sorry about that, the images are back!
Announcement
Paul Gomez · Apr 11, 2016
Please use the following link to see all Global Summit 2016 sessions and links to additional content, including session materials: https://community.intersystems.com/global-summit-2016
Announcement
Janine Perkins · Apr 13, 2016
Troubleshooting Productions: Learn how to start troubleshooting productions, with a focus on locating and understanding some of the key Ensemble Management Portal pages when troubleshooting. Learn More.
Question
sansa stark · Aug 24, 2016
Hi All,

Now that I have installed Caché 5.0, the terminal does not open. Does anyone know the default username and password for Caché 5.0?

Try username SYS, password XXX. You can change the password using Control Panel | Security | User accounts (TRM: account). Thanks.

As I remember, you could create user TRM or TELNET, with routine %PMODE, and get access without authentication. I don't remember how it was in 5.0, but currently the default login/password is _SYSTEM/SYS
Announcement
Evgeny Shvarov · Mar 14, 2017
Hi Community!
We want to invite you to join the InterSystems Gamification Platform called Global Masters Advocate Hub!
The Global Masters Advocacy Hub is our customer engagement platform where you will be invited to have some fun completing entertaining challenges, earning badges for the contribution to Developer Community, communicating with other advocates, and accumulating points which you can redeem for a variety of rewards and special honors.
In addition to the challenges we prepared special Global Masters rewards for you.
How to get rewards? It is simple — just redeem your points and get the reward you want.
Here are some prizes from our Rewards Catalog:
What's more?
There is also Global Masters Leaderboard which counts your advocacy activity and contribution to InterSystems Developer Community.
Update: Join Global Masters now — use your InterSystems SSO credentials to access the program.
See you on InterSystems Global Masters today!

There is a problem with the Global Masters link — the certificate is invalid.

Thanks, Jon! I think I've fixed that.

I also have a problem with the certificate.

Nope, the certificate error still remains. Our web security policy blocks access to sites with certificate errors.
Error details are as follows
VERIFY DENY: depth=0, CommonName "*.influitive.com" does not match URL "globalmasters.intersystems.com"
Hi, Stephen! Thanks for the comment! We are looking into that.

Should this post be a sticky post on the homepage? Should there be a link to it in the main menu?

I assume Evgeny's initial fix was to change the hyperlink in the article so it's an http one rather than an https one.

No more certificate errors. I believe the issue with the certificate is now resolved.

Stephen, are you still using the original https URL Evgeny posted, i.e. https://globalmasters.intersystems.com/? My browser still reports an issue with the certificate for that.

Hi, John and Steve! The issue is not resolved yet, but will be solved in a few days.

Thanks, Jon! The link to Global Masters has been added to the Community menu.
Hi, Community! We introduced Single Sign On for Global Masters (GM), so you can use your WRC credentials to join Global Masters now. The first time, it will also ask you for a GM-specific password. For those who have no active WRC account yet, the invite is a way to join: comment on this post to get a personal invite to the InterSystems Global Masters Advocates Hub. And you can use your login and password if you are already a member.

Hi, Ron! You are invited! And to gain knowledge on InterSystems Caché I recommend InterSystems online learning and this Developer Community: ask your questions!

I joined Global Masters ages ago, but still can't find a way to suppress its notifications. How can I do it?

Alexey, sorry for disturbing you with GM notifications. You can turn them off in Profile settings. See the gif:

Evgeny -- Thanks, I had already tried it before writing here. My current settings were: "Prospect Accepted / Unsubscribe from all except privacy, security, and account emails", while I still received emails about new challenges, etc. I just tried to turn off "Prospect Accepted", guessing that it could help, but this setting seems to be unswitchable (still active). It is hardly the source of the "problem", as I was never notified about the "prospect" stuff and don't even know its meaning. All these emails are not a great deal of disturbance, I just dislike it when I can't control their frequency. I don't insist on a correction if it causes extra effort for your team -- just sign me off from GM.
I just got a badge! Evidently a post of mine was added to favorites 10 times.
How does one know which post this was, though?
Best,
Mike

Hi, Mike! I think we have a problem with this badge. Investigating. Thanks for the feedback!
Announcement
Janine Perkins · Oct 12, 2016
Learn to successfully design your HL7 production. After you have built your first HL7 production in a test environment, you can start applying what you have learned and begin building in a development environment. Take this course to learn to create namespaces and databases for your production, apply recommended design principles to your production, and use appropriate naming conventions in your production. Learn More.
Article
Murray Oldfield · Mar 8, 2016
Your application is deployed and everything is running fine. Great, hi-five! Then out of the blue the phone starts to ring off the hook – it’s users complaining that the application is sometimes ‘slow’. But what does that mean? Sometimes? What tools do you have and what statistics should you be looking at to find and resolve this slowness? Is your system infrastructure up to the task of the user load? What infrastructure design questions should you have asked before you went into production? How can you capacity plan for new hardware with confidence and without over-spec'ing? How can you stop the phone ringing? How could you have stopped it ringing in the first place?
A list of other posts in this series is here
This will be a journey
This is the first post in a series that will explore the tools and metrics available to monitor, review and troubleshoot systems performance, as well as system and architecture design considerations that affect performance. Along the way we will head off down quite a few tracks to understand performance for Caché, operating systems, hardware, virtualization and other areas that become topical from your feedback in the comments.
We will follow the feedback loop where performance data gives a lens to view the advantages and limitations of the applications and infrastructure that is deployed, and then back to better design and capacity planning.
It should go without saying that you should be reviewing performance metrics constantly; it is unfortunate how often customers are surprised by performance problems that would have been visible for a long time, if only they had been looking at the data. But of course the question is -- what data? We will start the journey by collecting some basic Caché and system metrics so we can get a feel for the health of your system today. In later posts we will dive into the meaning of key metrics.
There are many options available for system monitoring -- from within Caché and from external tools -- and we will explore a lot of them in this series.
To start we will look at my favorite go-to tool for continuous data collection which is already installed on every Caché system – ^pButtons.
To make sure you have the latest copy of pButtons please review the following post:
https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-how-update-pbuttons
Collecting system performance metrics - ^pButtons
The Caché pButtons utility generates a readable HTML performance report from log files it creates. Performance metrics output by pButtons can easily be extracted, charted and reviewed.
Data collected in the pButtons html file includes:
- Caché set up: configuration, drive mappings, etc.
- mgstat: Caché performance metrics - most values are average per second.
- Unix: vmstat and iostat: operating system resource and performance metrics.
- Windows: performance monitor: Windows resource and performance metrics.
- Other metrics that will be useful.
pButtons data collection has very little impact on system performance; the metrics are already being collected by the system, and pButtons simply packages them for easy filing and transport.
To keep a baseline, for trend analysis and for troubleshooting it is good practice to collect a 24-hour pButtons (midnight to midnight) every day for a complete business cycle. A business cycle could be a month or more, for example to capture data from end of month processing. If you do not have any other external performance monitoring or collection you can run pButtons year-round.
The following key points should be noted:
- Change the log directory to a location away from production data to store accumulated output files and avoid disk full problems!
- Run an operating system script or otherwise compress and archive the pButtons files regularly. This is especially important on Windows as the files can be large.
- Review the data regularly!
- In the event of a problem needing immediate analysis, pButtons data can be previewed (collected immediately) while metrics continue to be stored for collection at the end of the day's run.
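The compress-and-archive point above can be sketched as a small housekeeping script. The directory path and retention period below are assumptions for illustration -- substitute the location you configured with setlogdir^pButtons and whatever retention suits your site.

```python
# Sketch: compress pButtons HTML reports older than a few days so the log
# directory does not fill the disk. Path and retention are assumptions.
import gzip
import os
import shutil
import time

LOG_DIR = "/somewhere_with_lots_of_space/perflogs"  # your pButtons log directory
MAX_AGE_DAYS = 7

def compress_old_reports(log_dir=LOG_DIR, max_age_days=MAX_AGE_DAYS):
    cutoff = time.time() - max_age_days * 86400
    for entry in os.scandir(log_dir):
        if entry.name.endswith(".html") and entry.stat().st_mtime < cutoff:
            with open(entry.path, "rb") as src, \
                 gzip.open(entry.path + ".gz", "wb") as dst:
                shutil.copyfileobj(src, dst)
            os.remove(entry.path)  # keep only the compressed copy
```

Scheduled daily (cron, Windows Task Scheduler, or the Caché task manager calling out to the OS), this keeps a long baseline of runs without the disk full problems warned about above.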
For more information on pButtons including preview, stopping a run and adding custom data gathering please see the Caché Monitoring Guide in the most recent Caché documentation:
http://docs.intersystems.com
The pButtons HTML file data can be separated and extracted (to CSV files, for example) for processing into graphs or other analysis by scripting or simple cut and paste. We will see examples of the output in graphs in the next post.
Of course if you have urgent performance problems contact the WRC.
Schedule 24 hour pButtons data collection
^pButtons can be started manually from the terminal prompt or scheduled. To schedule a 24-hour daily collection:
1. Start Caché terminal, switch to %SYS namespace and run pButtons manually once to set up pButtons file structures:
```
%SYS>d ^pButtons
Current log directory: /db/backup/benchout/pButtonsOut/
Available profiles:
     1  12hours  - 12 hour run sampling every 10 seconds
     2  24hours  - 24 hour run sampling every 10 seconds
     3  30mins   - 30 minute run sampling every 1 second
     4  4hours   - 4 hour run sampling every 5 seconds
     5  8hours   - 8 hour run sampling every 10 seconds
     6  test     - A 5 minute TEST run sampling every 30 seconds
```
Select option 6 for the 5 minute TEST run sampling every 30 seconds. Note that your numbering may be different, but the test should be obvious.
During the run, run Collect^pButtons (as shown below); you will see information including the runid, in this case "20160303_1851_test".
```
%SYS>d Collect^pButtons
Current Performance runs:
20160303_1851_test ready in 6 minutes 48 seconds nothing available to collect at the moment.
%SYS>
```
Notice that this 5 minute run has 6 minutes and 48 seconds to go? pButtons adds a 2 minute grace period to all runs to allow time for collection and collation of the logs into html format.
2. IMPORTANT! Change the pButtons log output directory -- the default output location is the <cache install path>/mgr folder. For example, on unix setting the log directory may look like this:

```
do setlogdir^pButtons("/somewhere_with_lots_of_space/perflogs/")
```
Ensure Caché has write permissions for the directory and there is enough disk space available for accumulating the output files.
3. Create a new 24 hour profile with 30 second intervals by running the following:

```
write $$addprofile^pButtons("My_24hours_30sec","24 hours 30 sec interval",30,2880)
```
Check that the profile has been added to pButtons:

```
%SYS>d ^pButtons
Current log directory: /db/backup/benchout/pButtonsOut/
Available profiles:
     1  12hours           - 12 hour run sampling every 10 seconds
     2  24hours           - 24 hour run sampling every 10 seconds
     3  30mins            - 30 minute run sampling every 1 second
     4  4hours            - 4 hour run sampling every 5 seconds
     5  8hours            - 8 hour run sampling every 10 seconds
     6  My_24hours_30sec  - 24 hours 30 sec interval
     7  test              - A 5 minute TEST run sampling every 30 seconds
select profile number to run:
```
Note: You can vary the collection interval -- 30 seconds is fine for routine monitoring. I would not go below 5 seconds for a routine 24 hour run (…",5,17280) as the output files can become very large, since pButtons collects data at every tick of the interval. If you are troubleshooting a particular time of day and want more granular data, use one of the default profiles or create a new custom profile with a shorter time period, for example 1 hour with a 5 second interval (…",5,720). Multiple pButtons runs can happen at the same time, so you could have a short pButtons run with a 5 second interval running at the same time as the 24-hour pButtons run.
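The number_of_samples argument in these profiles is just the run length divided by the sample interval, so a quick sanity check avoids creating a profile that silently stops short of a full day. A minimal sketch of that arithmetic:

```python
# Sketch: sample-count arithmetic for a pButtons profile.
# run length (hours) / interval (seconds) -> number_of_samples argument.
def samples_for(run_hours, interval_seconds):
    total_seconds = run_hours * 3600
    if total_seconds % interval_seconds:
        raise ValueError("interval does not divide the run length evenly")
    return total_seconds // interval_seconds

samples_for(24, 30)  # 2880, the 24 hour / 30 second profile created above
samples_for(1, 5)    # 720, the one hour / 5 second example
samples_for(24, 5)   # 17280, the routine run interval I would not go below
```

The same check works in reverse when reading an unfamiliar profile: interval times sample count tells you how long the run will actually last before the 2 minute collection grace period.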
4. Tip: For UNIX sites, review the disk commands. The default parameters used with the 'iostat' command may not include disk response times. First display what disk commands are currently configured:
```
%SYS>zw ^pButtons("cmds","disk")
^pButtons("cmds","disk")=2
^pButtons("cmds","disk",1)=$lb("iostat","iostat ","interval"," ","count"," > ")
^pButtons("cmds","disk",2)=$lb("sar -d","sar -d ","interval"," ","count"," > ")
```
To collect disk statistics, use the appropriate command to edit the syntax for your UNIX installation. Note the trailing space. Here are some examples:

```
LINUX: set $li(^pButtons("cmds","disk",1),2)="iostat -xt "
AIX:   set $li(^pButtons("cmds","disk",1),2)="iostat -sadD "
VxFS:  set ^pButtons("cmds","disk",3)=$lb("vxstat","vxstat -g DISKGROUP -i ","interval"," -c ","count"," > ")
```
You can create very large pButtons html files by having both the iostat and sar commands running. For regular performance reviews I usually use only iostat. To configure only one command:
set ^pButtons("cmds","disk")=1
More details on configuring pButtons are in the online documentation.
5. Schedule pButtons to start at midnight in the Management Portal > System Operation > Task Manager:
Namespace: %SYS
Task Type: RunLegacyTask
ExecuteCode: Do run^pButtons("My_24hours_30sec")
Task Priority: Normal
User: superuser
How often: Once daily at 00:00:01
Collecting pButtons data
pButtons as shipped in more recent versions of InterSystems data platforms includes automatic collection. To manually collect and collate the data into an html file, run the following command in the %SYS namespace to generate any outstanding pButtons html output files:
do Collect^pButtons
The html file will be in the logdir you set at step 2 (if you did not set it, go and do it now!). Otherwise the default location is <Caché install dir>/mgr.
Files are named <hostname_instance_Name_date_time_profileName.html> e.g. vsan-tc-db1_H2015_20160218_0255_test.html
Windows Performance Monitor considerations
If the operating system is Windows, then Windows Performance Monitor (perfmon) can be used to collect data in sync with the other metrics collected. On older Caché distributions of pButtons, Windows perfmon needs to be configured manually. If there is demand from the post comments, I will write a post about creating a perfmon template to define the performance counters to monitor, scheduled to run for the same period and interval as pButtons.
Summary
This post got us started collecting some data to look at. Later in the week I will start to look at some sample data and what it means. You can follow along with data you have collected on your own systems. See you then.
http://docs.intersystems.com
Great article Murray. I'm looking forward to reading subsequent ones.
Thanks for this Murray. I am just sending it to our system manager! (We don't run pButtons all the time, but perhaps we should.)
Thank you, Murray! Great article! People! See also other articles in the InterSystems Data Platform Blog and subscribe so you don't miss new ones.
When that's not enough, there are a ton of tools to go deeper and look at the OS side of things: http://www.brendangregg.com/linuxperf.html
A nice intro; moving on to part two. Thank you.
This one is so popular it's showing up three times in the posting list. Will try to figure out why...
Just adding my 2c to "4. Tip For UNIX sites": all necessary unix/linux packages should be installed before the first invocation of
Do run^pButtons(<any profile>)
otherwise some commands may be missed in ^pButtons("cmds"). I recently faced this on a server where sysstat wasn't installed: the `sar -d` and `sar -u` commands were absent. If you decide to install it later (`sudo yum install sysstat` in my case), ^pButtons("cmds") will not be automatically updated without a little help from you: just kill it before calling run^pButtons().
This applies at least to pButtons v1.15c-1.16c and v5 (which recently appeared on ftp://ftp.intersys.com/pub/performance/), in Caché 2015.1.2.
Excellent article. I have a question about options for running pButtons. Please note: it has been a while since I last used pButtons, but I can attest that it is a very useful tool. My question: is there an option to run pButtons in "basic mode" versus "advanced mode"? I thought I recalled such a feature, but cannot seem to remember how to select the run mode. From what I recall, basic mode collects less data/information than advanced mode. This can be helpful for support teams and others when they only need the high-level details. Thanks! I look forward to reading the next 5 articles in this series.
Basic and advanced mode were in an old version of another tool named ^Buttons. With ^pButtons you have an option to reduce the number of OS commands being performed, as was shown in Tip #4.
It is a pretty good idea to run pButtons all the time. That way you know you'll have data for any acutely weird performance behavior during the time of the problem. The documentation has an example of setting up a 24-hour pButtons run every week at a specific time using Task Manager: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_pbuttons#GCM_pButtons_runsmp You can also just set the task to run every day rather than weekly. If you're worried about space, the reports don't take up too much room, and (since they're fairly repetitive) they zip pretty well.
Fantastic article. Congrats.
Thanks :)
Hi @Murray.Oldfield, we have such a need for Caché on Windows OS. Really appreciate it if you can help do one. Thx!
Question
Evgeny Shvarov · Apr 24, 2016
Hi!
Suppose I have full access to Caché database instance A and want to export a consistent part of the data and import it into another Caché instance B. The classes are identical on both.
What are the most general and convenient options for me?
TIA!
Thank you, Kenneth! But what if you need only part of the data? Say, only the records from the current year or from a particular customer? And what if you need not all the classes but only some of them - what globals should I choose to export? I believe in these cases we should use SQL to gather the data. The question is how to export/import it.
If so, create a new database on instance A and then use GBLOCKCOPY to copy from the existing database to the new one. Then just move the new database to instance B.
That can help sometimes. Thank you. Just move - you mean unmount and download the cache.dat file?
Is this a one-time migration of data from instance A to instance B?
My question is a request for general approaches. But my task now is to extract some part of consistent data from the large database to use it as test data in my local database for development purposes.
One approach would be to use %XML.DataSet to convert SQL results into XML:
Set result=##class(%XML.DataSet).%New()
Do result.Prepare("SELECT TOP 3 ID, Name FROM Sample.Person")
Do result.Execute()
Do result.WriteXML("root",,,,,1)
Outputs:
<root>
<s:schema id="DefaultDataSet" xmlns="" attributeFormDefault="qualified" elementFormDefault="qualified" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
<s:element name="DefaultDataSet" msdata:IsDataSet="true">
<s:complexType>
<s:choice maxOccurs="unbounded">
<s:element name="SQL">
<s:complexType>
<s:sequence>
<s:element name="ID" type="s:long" minOccurs="0" />
<s:element name="Name" type="s:string" minOccurs="0" />
</s:sequence>
</s:complexType>
</s:element>
</s:choice>
</s:complexType>
<s:unique name="Constraint1" msdata:PrimaryKey="true">
<s:selector xpath=".//SQL" />
<s:field xpath="ID" />
</s:unique>
</s:element>
</s:schema>
<diffgr:diffgram xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1">
<DefaultDataSet xmlns="">
<SQL diffgr:id="SQL1" msdata:rowOrder="0">
<ID>96</ID>
<Name>Adam,Wolfgang F.</Name>
</SQL>
<SQL diffgr:id="SQL2" msdata:rowOrder="1">
<ID>188</ID>
<Name>Adams,Phil H.</Name>
</SQL>
<SQL diffgr:id="SQL3" msdata:rowOrder="2">
<ID>84</ID>
<Name>Ahmed,Edward V.</Name>
</SQL>
</DefaultDataSet>
</diffgr:diffgram>
</root>
There is also the %SQL.Export.Mgr class, which does SQL export.
Thank you, Ed. And I can import the result on instance B with class .... ?
%XML.Reader and %SQL.Import.Mgr respectively.
I am glad to see that you subsequently posted this as a new question here and it has already received an answer.
You could use %GOF for export and %GIF for import from Terminal. These tools export block-level data. The ultimate size of the export will be much less than with other tools.
Is this a one-time migration of data from instance A to instance B? If so, create a new database on instance A and then use GBLOCKCOPY to copy from the existing database to the new one. Then just move the new database to instance B.
If it is more complex to determine the data set, because you have specific parameters in mind, it makes sense to select the data via SQL and insert the selected records into the other instance via SQL. You can either use linked tables, which allows you to write this simple logic in Caché Object Script, or you can write a simple Java application and go directly via JDBC. Obviously, any supported client-side language can solve this challenge; Java is just one option. In case you have to migrate data where the model includes foreign-key constraints, you have to use the %NOCHECK keyword in your SQL INSERT statement: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_insert This approach is definitely more work than just exporting/importing the data, but it allows you to easily add simple logic with some benefits, e.g. anonymization, batch-loading and parallelization. Depending on your use case, some of these topics may be relevant.
Hi, another option is to try the SQL Data Migration Wizard. You can copy just the data and/or create the schema as well. To select the data for a specific year, customer, etc., you can create a view on the source side and then use the migration wizard to import the data. I hope it helps. Fábio.
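The linked-table approach with %NOCHECK described above could be sketched in embedded SQL roughly as follows; the linked-table name LinkedSchema.Person, the column list, and the WHERE filter are illustrative assumptions, not from the thread:

```objectscript
 // Hypothetical sketch of the linked-table approach described above.
 // "LinkedSchema.Person" is an assumed linked-table name pointing at
 // instance A; Sample.Person is the local target table on instance B.
 // %NOCHECK skips foreign-key validation during the INSERT.
 &sql(INSERT %NOCHECK INTO Sample.Person (Name, DOB)
      SELECT Name, DOB FROM LinkedSchema.Person
      WHERE DOB >= '2016-01-01')
 If SQLCODE'=0 { Write "Insert failed, SQLCODE=",SQLCODE,! }
```

The WHERE clause is where the "part of the data" selection (current year, particular customer, etc.) would go.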
Hi All, I need urgent help. I want to export the values from a global to a CSV file.
The values in the global are:
^Global1(1)="1,2,3,4"
^Global1(2)="5,6,7,8"
...
^Global1(n)="n,n,n,n"
I want the output in the CSV file as:
1,2,3,4
5,6,7,8
...
n,n,n,n
I made a class:
ClassMethod ExportNewSchemaGlobals(pFile)
{
 Set ary("^Global1")=""
 Set pFile = "C:/Test.csv"
 Set ary = ##class(%Library.Global).Export(,.ary,pFile)
}
But it is not giving the expected output.
Cannot make an answer on my own question. Anyway, here are some answers from the Russian forum: DbVisualizer and Caché Monitor can export/import InterSystems Caché data partially via SQL queries. There is also the %Global class wrapper for the %GI, %GIF, etc. routines, which can help to export/import global nodes partially. Documentation.
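For the question above: one likely reason the class does not produce a CSV is that ##class(%Library.Global).Export writes globals in Caché's global-export formats rather than plain text. A minimal sketch of a direct approach, walking the subscripts with $Order and writing each node to a file stream (the class and method names are illustrative, not from the thread):

```objectscript
Class App.GlobalCSV
{

/// Write each node of ^Global1 as one line of a CSV file.
/// The global values already hold comma-separated data,
/// so each node maps directly to one CSV row.
ClassMethod ExportGlobalToCSV(pFile As %String = "C:/Test.csv") As %Status
{
    Set file = ##class(%Stream.FileCharacter).%New()
    Set sc = file.LinkToFile(pFile)
    Quit:$$$ISERR(sc) sc
    // Walk the subscripts of ^Global1 in order
    Set key = ""
    For {
        Set key = $Order(^Global1(key))
        Quit:key=""
        Do file.WriteLine($Get(^Global1(key)))
    }
    Quit file.%Save()
}

}
```

Calling Do ##class(App.GlobalCSV).ExportGlobalToCSV("C:/Test.csv") should then produce one CSV row per global node.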
Announcement
Evgeny Shvarov · Oct 19, 2016
Hi, Community!
I'm glad to announce that the UK Technology Summit 2016 has started!
The #ISCTech2016 tag will help you keep in touch with everything happening at the Summit.
You are very welcome to discuss the Summit here in comments too!
Announcement
Anastasia Dyubaylo · Mar 20, 2020
Hi Developers,
Please welcome the new video from Global Summit 2019 on InterSystems Developers YouTube:
⏯ InterSystems IRIS Cloud Roadmap
This video outlines "what's new and what's next" for InterSystems support of cloud deployment. We will discuss our current product offerings and reference architectures, and talk about some capabilities that are on the drawing board.
🗣 Presenter: @Luca.Ravazzolo, Product Manager, InterSystems
You can find additional materials for this video in this InterSystems Online Learning Course.
➡️ In addition, check out the Cloud Deployment Resource Guide.
Enjoy and stay tuned! 👍🏼
Announcement
Evgeny Shvarov · Mar 22, 2020
Hi Community!
The registration phase for the InterSystems Online Programming Contest ends today, and we will start the voting week!
We now have 20 applications - so you have a good set of applications to choose from!
How to vote?
This is easy: you have one vote, and it goes either to the Experts nomination or to the Community nomination.
Experts Nomination
If you are an InterSystems Product Manager, a DC moderator, or a Global Master at Expert level or above, your vote goes to the Experts nomination.
Community Nomination
If you have ever contributed to DC (posts or replies) and it was not considered spam, you can vote for applications in the Community nomination.
Voting
Voting takes place on Open Exchange Contest page and you need to sign in to Open Exchange - you can do it with your DC account credentials.
If you changed your mind, cancel the choice and give your vote to another application - you have 7 days to choose.
Contest participants are allowed to fix the bugs and make improvements to their applications during the voting week, so don't miss and subscribe to application releases!
How to test
According to the requirements, developers should use Docker version of InterSystems IRIS Community edition or InterSystems IRIS Community Edition for Health.
So every solution could be launched as:
$ git clone https://github.com/repository
$ cd repository
$ docker-compose up -d
$ docker-compose exec iris iris session iris
And then you'll see the IRIS Terminal where you can follow the application instruction to test its functionality.
Winner criteria
Choose the application you like most. But the general criteria are:
Idea and value - the app makes the world a better place, or at least makes the life of a developer better;
Functionality and usability - how well and how much the application/library does;
The beauty of code - readable, high-quality ObjectScript code.
Give the vote to the best solution on InterSystems IRIS! You decide!
The post is updated accordingly.
Ok!
After the first day of the voting we have:
Expert Nomination, Top 3
BlocksExplorer - 2
IDP DV - 1
sql-builder - 1
______________
The leaderboard.
Community Nomination, Top 3
sql-builder - 6
isc-generate-db - 4
declarative-objectscript - 3
______________
The leaderboard.
Developers! Support the applications you like!
Participants! Improve and promote your solutions!
Hey Developers,
Here are the results after 2 days of voting:
Expert Nomination, Top 3
BlocksExplorer - 2
ISC DEV - 2
IDP DV - 1
➡️ The leaderboard.
Community Nomination, Top 3
sql-builder - 6
isc-generate-db - 5
BlocksExplorer - 4
➡️ The leaderboard.
So, the voting continues! Full speed ahead!
Voting for the IRIS Programming Contest goes ahead!
And here're the results at the moment:
Expert Nomination, Top 3
BlocksExplorer - 4
sql-builder - 3
isc-utils - 3
➡️ The leaderboard.
Community Nomination, Top 3
sql-builder - 9
isc-generate-db - 6
BlocksExplorer - 5
➡️ The leaderboard.
Who will lead tomorrow? Make your bets!
Hey Developers!
3 days left before the end of voting!
Please check out the Contest Board and vote for the applications you like! 👍🏼
Please check out today's voting results:
Expert Nomination, Top 3
BlocksExplorer - 4
sql-builder - 3
isc-utils - 3
➡️ The leaderboard.
Community Nomination, Top 3
sql-builder - 10
BlocksExplorer - 6
isc-generate-db - 6
➡️ The leaderboard.
Support the best apps with your votes!
Happy weekend!
Hi Developers!
And this is the last day of voting in the InterSystems Online Contest before we learn the names of the winners!
Please check out today's voting results:
Expert Nomination, Top 3
ISC DEV - 8
BlocksExplorer - 7
isc-utils - 7
➡️ The leaderboard.
Community Nomination, Top 3
sql-builder - 10
BlocksExplorer - 10
isc-generate-db - 6
➡️ The leaderboard.
Make your choice!
Announcement
Anastasia Dyubaylo · Jun 2, 2020
Hey Everyone!
At InterSystems, the developer community is an integral part of our ecosystem, and its role is pivotal in the development of our products and services. In recognition of this, we wanted to undertake market research to discover how businesses can best support developers. We have already asked 300 developers for their thoughts, but we wanted to extend this survey to reach more people and also get your views on the current climate, as we have all had to adapt our working environments.
➡️ Please click this link to take part in our survey!
Note: The survey will take less than 5 minutes to complete.
Come on!
COS what else?
Hey developers,
We want to hear from you! What resources do you use to help you in your developments/app builds?
✅ Please take part in our 5-min survey.
Article
Yuri Marx · Jun 3, 2020
SAP offers broad support for OData across its products, so OData can be an excellent option for exchanging data between SAP and InterSystems IRIS.
Follow the instructions in the article https://community.intersystems.com/post/odata-and-intersystems-iris to expose your IRIS data as REST OData services.
To consume InterSystems IRIS data from SAP using OData, follow these steps (credits to the next steps of this tutorial: https://sapyard.com/sapui5-for-abapers-consuming-odata-service-from-sapui5-application-crud-operations/) :
Create a new SAPUI5 application by the name crud_demo.
Create an XML view 'crud_demo.view' and write the code below in it.
<core:View xmlns:core="sap.ui.core" xmlns:mvc="sap.ui.core.mvc" xmlns="sap.m"
controllerName="crud_demo.crud_demo" xmlns:html="http://www.w3.org/1999/xhtml" xmlns:l="sap.ui.commons.layout">
<Page title="CRUD Operations">
<content>
<l:AbsoluteLayout width="10rem" height="10rem"></l:AbsoluteLayout>
<VBox xmlns="sap.m" id="vboxid">
<items>
<HBox xmlns="sap.m">
<items>
<l:AbsoluteLayout width="20px" height="20px"></l:AbsoluteLayout>
<Button xmlns="sap.m" id="cbtn" press="oDataCall" text="Create"></Button>
<l:AbsoluteLayout width="20px" height="20px"></l:AbsoluteLayout>
<Button xmlns="sap.m" id="rbtn" press="oDataCall" text="Read"></Button>
<l:AbsoluteLayout width="20px" height="20px"></l:AbsoluteLayout>
<Button xmlns="sap.m" id="ubtn" press="oDataCall" text="Update"></Button>
<l:AbsoluteLayout width="20px" height="20px"></l:AbsoluteLayout>
<Button xmlns="sap.m" id="dbtn" press="oDataCall" text="Delete"></Button>
</items>
</HBox>
<HBox xmlns="sap.m">
<items>
<l:AbsoluteLayout width="20px" height="20px"></l:AbsoluteLayout>
<Input xmlns="sap.m" id="uniqueid" placeholder="ID" value="1"></Input>
<l:AbsoluteLayout width="20px" height="20px"></l:AbsoluteLayout>
<Input xmlns="sap.m" id="nameid" placeholder="Name" value="test"></Input>
<l:AbsoluteLayout width="20px" height="20px"></l:AbsoluteLayout>
<Input xmlns="sap.m" id="emailid" placeholder="Email" value="test@gmail.com"></Input>
<l:AbsoluteLayout width="20px" height="20px"></l:AbsoluteLayout>
<Input xmlns="sap.m" id="mobid" placeholder="Mobile" value="8888888888"></Input>
</items>
</HBox>
<HBox xmlns="sap.m">
<items>
<l:AbsoluteLayout width="20px" height="20px"></l:AbsoluteLayout>
<Table xmlns="sap.m"
id="userdatatable" headerText="User Data">
<items>
<ListItemBase xmlns="sap.m" id="id1"></ListItemBase>
</items>
<columns> <!-- sap.m.Column -->
<Column xmlns="sap.m"> <header> <Text xmlns="sap.m" text="Id" ></Text></header></Column>
<Column xmlns="sap.m"> <header> <Text xmlns="sap.m" text="Name" ></Text></header></Column>
<Column xmlns="sap.m"> <header> <Text xmlns="sap.m" text="Email" ></Text></header></Column>
<Column xmlns="sap.m"> <header> <Text xmlns="sap.m" text="Mobile" ></Text></header></Column>
</columns>
</Table>
</items>
</HBox>
</items> <!-- sap.ui.core.Control -->
</VBox>
</content>
</Page>
</core:View>
Create crud_demo.controller.js and write the code below in it.
onInit: function() {
that = this;
// Create Model Instance of the oData service
var oModel = new sap.ui.model.odata.v2.ODataModel("/sap/opu/odata/sap/ZCRUD_DEMO_SRV");
sap.ui.getCore().setModel(oModel, "myModel");
},
oDataCall:function(oEvent)
{
// call oData service's function based on which button is clicked.
debugger;
var myModel = sap.ui.getCore().getModel("myModel");
myModel.setHeaders({
"X-Requested-With" : "X"
});
// CREATE******************
if ('Create' == oEvent.oSource.mProperties.text) {
var obj = {};
obj.id = that.getView().byId("uniqueid").getValue();
obj.name = that.getView().byId("nameid").getValue();
obj.email = that.getView().byId("emailid").getValue();
obj.mobile = that.getView().byId("mobid").getValue();
myModel.create('/userdataSet', obj, {
success : function(oData, oResponse) {
debugger;
alert('Record Created Successfully...');
},
error : function(err, oResponse) {
debugger;
alert('Error while creating record - '
.concat(err.response.statusText));
}
});
}
// READ******************
else if ('Read' == oEvent.oSource.mProperties.text) {
var readurl = "/userdataSet?$filter=(id eq '')";
myModel.read(readurl, {
success : function(oData, oResponse) {
debugger;
var userdata = new sap.ui.model.json.JSONModel({
"Result" : oData.results
});
var tab = that.getView().byId("userdatatable");
tab.setModel(userdata);
var i = 0;
tab.bindAggregation("items", {
path : "/Result",
template : new sap.m.ColumnListItem({
cells : [ new sap.ui.commons.TextView({
text : "{id}",
design : "H5",
semanticColor : "Default"
}), new sap.ui.commons.TextView({
text : "{name}",
design : "H5",
semanticColor : "Positive"
}), new sap.ui.commons.TextView({
text : "{email}",
design : "H5",
semanticColor : "Positive"
}), new sap.ui.commons.TextView({
text : "{mobile}",
design : "H5",
semanticColor : "Positive"
}), ]
})
});
},
error : function(err) {
debugger;
}
});
}
// UPDATE******************
if ('Update' == oEvent.oSource.mProperties.text) {
var obj = {};
obj.id = that.getView().byId("uniqueid").getValue();
obj.email = that.getView().byId("emailid").getValue();
var updateurl = "/userdataSet(id='"
+ that.getView().byId("uniqueid").getValue() + "')";
myModel.update(updateurl, obj, {
success : function(oData, oResponse) {
debugger;
alert('Record Updated Successfully...');
},
error : function(err, oResponse) {
debugger;
alert('Error while updating record - '
.concat(err.response.statusText));
}
});
}
// DELETE******************
if ('Delete' == oEvent.oSource.mProperties.text) {
var delurl = "/userdataSet(id='"
+ that.getView().byId("uniqueid").getValue() + "')";
myModel.remove(delurl, {
success : function(oData, oResponse) {
debugger;
alert('Record Removed Successfully...');
},
error : function(err, oResponse) {
debugger;
alert('Error while removing record - '
.concat(err.response.statusText));
}
});
}
}
Save, deploy and run the application. You should be able to run it using a URL like http://hostname:8000/sap/bc/ui5_ui5/sap/zcrud_demo/index.html.
Output:
Announcement
Evgeny Shvarov · Jun 4, 2020
Hi Developers!
I want to share with you what we did in May 2020 to improve the InterSystems Developer Community. Here are the new features:
* New, better design for DC notification letters.
* Editor in markdown: images import with hosting, decoration, etc.
* Unanswered questions management improvements.
Here we go!
## Better DC notifications design
Since this release, we send you emails with an improved design which makes DC letters more readable, clear, accurate, and even beautiful. If this is not what you experience, please send your feedback in this post or into the issues section for DC.
Here is an example of the notification:
## Markdown Editor Enhancements
Since this release, markdown is a first-class citizen on DC: image import and hosting, code highlighting, teaser breaks. Images can be drag-and-dropped with the new markdown editor. All that you asked for!
Even the Editor toolbar!
Bravo DC Developers team!
Moreover, I am writing this announcement in Markdown :) because I can now do it with pleasure. I hope you will like this new feature as much as I do.
## Engagement for unanswered questions
Developers! When we ask questions and get answers, we often forget to mark replies as answers. Please don't forget to do this if a reply satisfies you and you can accept it as an answer. This makes a difference: everybody sees that the problem is solved, and the developer who replied understands that their answer was helpful, and thus the world becomes a little bit better.
Since this release, we have introduced a couple of new email notifications (very beautiful, of course) which remind you that you have questions with new replies. We have also added a new menu to your Member profile which gives you quick access to unaccepted answers:
We also introduced a number of other features and solved a lot of bugs.
You can check this in [May's kanban](https://github.com/intersystems-community/developer-community/projects/26).
And here is a new [June 2020 kanban. ](https://github.com/intersystems-community/developer-community/projects/27)
You are very welcome to submit new [enhancement requests and bug reports](https://github.com/intersystems-community/developer-community/issues)!
Stay tuned!
Announcement
Evgeny Shvarov · Aug 11, 2020
Hi Developers!
The InterSystems FHIR contest has started, and in this post I want to introduce several topics you may find interesting for building FHIR applications with InterSystems IRIS for Health.
Amongst them are:
a frontend mobile or web app using the FHIR server's FHIR REST API or SQL;
backend apps to perform complex queries and maintain rules;
working with different HL7 data standards.
See the details below.
Mobile or Web application
Mobile and web frontend applications are perhaps the most typical use cases for working with a FHIR server. If you use InterSystems IRIS for Health, e.g. with the template we provide for the contest, you get a FHIR server which exposes a REST API ready to service calls to any FHIR resources according to the FHIR R4 documentation. The related task. Tagging @Patrick.Jamieson3621 to provide more information.
Also, you can consume FHIR data using SQL, working with the HSFHIR_I0001_R schema for full FHIR resources and the HSFHIR_I0001_S schema for SQL queries against particular entities of resources.
E.g. you can visualize a patient's blood pressure change over time; the related task request.
Another idea for an application is to maintain an electronic immunization history; the request. Tagging @Yuri.Gomes to provide more info.
Examples of complex queries
Medical information is very complex, and thus implementations and examples of queries over FHIR data supporting different real-life scenarios are helpful and in demand.
E.g. the example of a query which looks for patients that have diabetes medications but no diabetes diagnosis. Tagging @Qi.Li to provide more information.
Backend app upon rules
You could build a backend application which analyzes FHIR data and sends alerts or performs different business operations. For example, consider a rule where a patient meets certain criteria (a new lab result and a new diagnosis) and a notification is sent, e.g. for COVID-19 positive results. Related task requests: One, two. Tagging @Rasha.Sadiq and @Qi.Li for more details.
Data transformation apps
As you know, there are numerous legacy and comprehensive health data formats, and all of them, even the oldest, are still in use or may be encountered. Thus an application which can deal with them and transform them into the FHIR R4 format and back will always be in demand.
InterSystems IRIS for Health supports the majority of health standards and their data transformations, so use it and introduce apps that can work with them handily and robustly.
The related task request, tagging @Udo.Leimberger3230 for more details.
CDA to FHIR data transformation
iPhone can export health data to CDA, see the details. Transform this data into FHIR format using IRIS for Health and submit to the FHIR server.
Here is an example of how you can transform CDA to FHIR data using IRIS for Health. Tagging @Guillaume.Rongier7183 for more details.
A current set of proposed topics can be found here.
Great topics! Thanks, @Evgeny.Shvarov, for organizing everything!
Here is the new topic for the FHIR contest: CDA to FHIR data transformation (see the section above). The main announcement is updated too.
Hey @Evgeny.Shvarov, I had an idea about creating a heat map using Google or Bing maps based on FHIR data that shows a geographic representation of certain observation results.
Looks like a great idea, please go ahead, @Rehman.Masood!