Question
Vishnu G · Dec 14, 2022
Does InterSystems have CDS Hooks implementations?
If yes, where could I get the details?

What's CDS? I suppose this is it.

Hi Vishnu!
InterSystems has recently released a beta of the Healthcare Action Engine for limited customer testing. The Healthcare Action Engine is an extended healthcare decision support service; it includes workflows for implementing your own CDS Hooks services and for brokering connections between clients, data sources, and third-party CDS Hooks Services.
The Healthcare Action Engine is a stand-alone product, but it is designed to integrate seamlessly with InterSystems HealthShare product ecosystem. Because of this, documentation for the product is included as part of our HealthShare documentation, here. (Note that, for security reasons, you must authenticate as a registered HealthShare customer with your WRC credentials to access HealthShare documentation.)
If you'd like to learn more about the Healthcare Action Engine, I encourage you to contact your organization's InterSystems sales associate. As the technical writer responsible for documenting the Healthcare Action Engine as it continues to develop, I'm also happy to answer further questions.

Following up on Shawn's response, these resources might also be helpful in the meantime, and perhaps for others:
A Global Summit 2022 session titled Healthcare Action Engine and CDS Hooks: Sneak Peek (includes PDF slides and recording).
An online exercise titled Configuring Alerts for Clinicians with the Healthcare Action Engine.
"See how to use key features of the Healthcare Action Engine to set up real-time alerts for clinicians. In this exercise, you will build decision support, design a notification using a CDS Hooks card, and write a rule to deliver it."
[I believe the same comment Shawn mentioned about being required to be a HealthShare customer in order to access this content is relevant here as well.]

Thank you very much. I got it.
Announcement
Anastasia Dyubaylo · Feb 23, 2023
Hello Community,
After the last heated programming contest, we're happy to announce the next InterSystems technical article writing competition!
✍️ Tech Article Contest: InterSystems IRIS Tutorials ✍️
Write an article that serves as a tutorial for InterSystems IRIS programmers of any level (beginner, middle, or senior) between March 1st and March 31st.
🎁 Prizes for everyone: A special prize pack for each author who takes part in the competition!
🏆 Main Prizes: There are 6 prizes to choose from.
Prizes
1. Everyone is a winner in InterSystems Tech Article Contest! Any member who writes an article during the competition period will receive special prizes:
🎁 Branded Organic Canvas Tote Bag
🎁 Moleskine Lined Notebook
2. Expert Awards – articles will be judged by InterSystems experts:
🥇 1st place: Mars Pro Bluetooth speakers / AirPods Max
🥈 2nd place: Apple AirPods Pro with Wireless Charging Case /
JBL Pulse 4 Light Show Speaker
🥉 3rd place: Magic Keyboard Folio for iPad / Bose Soundlink Micro Bluetooth Speaker
Or, as an alternative, any winner can choose a prize from a lower prize tier than their own.
3. Developer Community Award – article with the most likes. The winner will have the option to choose one of the following prizes:
🎁 Magic Keyboard Folio for iPad
🎁 Bose Soundlink Micro Bluetooth Speaker
Note:
An author can only be awarded once per category (in total, an author can win 2 prizes: one for the Expert award and one for the Community award)
In the event of a tie, the number of votes of the experts for the tied articles will be considered as a tie-breaking criterion.
Who can participate?
Any Developer Community member, except for InterSystems employees. Create an account!
Contest period
📝 March 1st - March 31st: Publication of articles and voting time.
Publish an article(s) throughout this period. DC members can vote for published articles with Likes – votes in the Community award.
Note: The sooner you publish the article(s), the more time you will have to collect both Expert & Community votes.
What are the requirements?
❗️ Any article written during the contest period and satisfying the requirements below will automatically* enter the competition:
The article must be a tutorial** on an InterSystems IRIS topic. It can be aimed at beginner, middle, or senior developers.
The article must be in English (incl. inserting code, screenshots, etc.).
The article must be 100% new (it can be a continuation of an existing article).
The article cannot be a translation of an article already published in other communities.
The article should contain only correct and reliable information about InterSystems technology.
The article has to contain the Tutorial tag.
Article size: 400 words minimum (links and code are not counted towards the word limit).
Max 3 entries from the same author are allowed.
Articles on the same topic but with dissimilar examples from different authors are allowed.
* Articles will be moderated by our experts. Only valid content will be eligible to enter the contest.
** Tutorials provide step-by-step instructions that a developer can follow to complete a specific task or set of tasks.
🎯 EXTRA BONUSES
This time we decided to add extra bonuses to help you win a prize! Please welcome:
Bonus | Nominal | Details
Topic bonus | 5 | If your article is on a topic from the list of proposed topics (listed below), you will receive a bonus of 5 Expert votes (vs. 1st place selected by an Expert = 3 votes).
Video bonus | 3 | Besides publishing the article, make an explanatory video presenting its content.
Discussion bonus | 1 | Awarded to the article with the most useful discussion, as decided by InterSystems experts. Only 1 article will get this bonus.
Translation bonus | 1 | Publish a translation of your article on any of the regional Communities. Learn more. (Note: only 1 vote per article.)
New member bonus | 3 | If you haven't participated in the previous contests, your article(s) will get 3 Expert votes.
Proposed topics
Here's a list of proposed topics that will give your article extra bonuses:
✔️ Working with IRIS from C#
✔️ Working with IRIS from Java
✔️ Working with IRIS from Python
✔️ Using Embedded Python
✔️ Using Python API
✔️ Using Embedded SQL
✔️ Using ODBC/JDBC
✔️ Working with %Query/%SQLQuery
✔️ Using indexes
✔️ Using triggers
✔️ Using JSON
✔️ Using XML
✔️ Using REST
✔️ Using containers
✔️ Using Kubernetes
Note: Articles on the same topic from different authors are allowed.
➡️ Join InterSystems Discord to chat about the rules, topics & bonuses.
So,
It's time to show off your writing skills! Good luck ✨
Important note: Delivery of prizes varies by country and may not be possible for some of them. A list of countries with restrictions can be requested from @Liubka.Zelenskaia
Great, I love to write tutorials. Will the winner get a badge on GM too?

I'm uncovering my keyboard and stretching my fingers.

Is the article word count a minimum or maximum?

Neither; all articles must be exactly 400 words. JK; that needs clarification.

400 words minimum – updated the announcement ;)
Thanks for the heads up, guys!

Hi Devs)

What great news! 🤩
5 new articles have appeared on the Contest page:
Tutorial - Streams in Pieces by @Robert.Cemper1003
Quick sample database tutorial by @Heloisa.Paiva
Tutorial - Working with %Query #1 by @Robert.Cemper1003
Tutorial - Working with %Query #2 by @Robert.Cemper1003
Tutorial - Working with %Query #3 by @Robert.Cemper1003
Waiting for other interesting articles) 😎

Of course! There will be special badges on Global Masters ;)
Example:
Robert Cemper is so powerful

Hey Devs!
8 new articles have been added to the contest!
SQLAlchemy - the easiest way to use Python and SQL with IRIS's databases by @Heloisa.Paiva
Creating an ODBC connection - Step to Step by @Heloisa.Paiva
Tutorial - Develop IRIS using SSH by @wang.zhe
Getting Started with InterSystems IRIS: A Beginner's Guide by @A.R.N.H.Hafeel
Creating a DataBase, namespace, inserting data and visualizing data in the front end. by @A.R.N.H.Hafeel
InterSystems Embedded Python in glance by @Muhammad.Waseem
Query as %Query or Query based on ObjectScript by @Irene.Mikhaylova
Setting up VS Code to work with InterSystems technologies by @Maria.Gladkova
Important note:
We do not prohibit authors from using AI when creating content; however, articles must contain only correct and reliable information about InterSystems technology. Before entering the contest, articles are moderated by our experts.
In this regard, we have updated the contest requirements.

Hi, Community!
Seven more tutorials have been added to the contest!
Tutorial: Improving code quality with the visual debug tool's color-coded logs by @Yone.Moreno
Kinds of properties in IRIS by @Irene.Mikhaylova
Backup and rebuilding procedure for the IRIS server by @Akio.Hashimoto1419
Stored Procedures the Swiss army knife of SQL by @Daniel.Aguilar
Tutorial how to analyze requests and responses received and processed in webgateway pods by @Oliver.Wilms
InterSystems's Embedded Python with Pandas by @Rizmaan.Marikar2583
Check them out!

Hey, Developers!
Two more articles have been added to the contest!
Tutorial - Creating a HL7 TCP Operation for Granular Error Handling by @Julian.Matthews7786
Tutorial for Middle/Senior Level Developer: General Query Solution by @姚.鑫
There are already 19 tutorials uploaded! Almost one week is left until the end of the contest, and we are looking forward to more articles!
Developers!
A lot of tutorials have been added to the contest!🤩
And only four days are left until the end of publication and voting!
Hurry up and upload your articles! 🚀

Community!
Three more articles have been added to the contest!
Perceived gaps to GPT assisted COS development automatons by @Zhong.Li7025
SQL IRIS Editor and IRIS JAVA CONNECTION by @Jude.Mukkadayil
Set up an IRIS docker image on a Raspberry Pi 4 by @Roger.Merchberger
Devs, only one day is left until the end of the contest! Upload your tutorials and join the contest!
Our Tech Article contest is over! Thank you all for participating in this writing competition :)
As a result, 🔥 24 AMAZING ARTICLES 🔥
Now, let's add an element of intrigue...
The winners will be announced on Monday! Stay tuned for the updates.
Article
Anastasia Dyubaylo · Apr 7, 2023
Hi Community!
We know that sometimes you may need to find info or people on our Community! So to make it easier, here is a post on how to use different kinds of searches:
find something on the Community
find something in your own posts
find a member by name
➡️ Find some info on the Community
Use the search bar at the top of every page - a quick DC search. Enter a word or phrase, or a tag, or a DC member name, and you'll get the results in a drop-down list:
Also, you can press Enter or click on the magnifying glass, and the search results will open.
On this page, you can refine your results:
You can choose if you want to search only in Questions, Articles, etc, or your own posts.
You can look for posts of a particular member.
You can look for posts with specific tags.
You can set up a time limit and sort them by date/relevance.
➡️ Find something in your own posts
Go to your profile, choose Posts on the left-hand side, and after the page refreshes, just type what you want to find in the search bar:
➡️ Find a Community member
If you know his or her name or e-mail, open the Menu in the top left:
and click on Members:
This will open a table with all the members of this Community, and at the top of it there is a Search box:
Hope you'll find these explanations useful.
Happy searching! ;)
Article
Mikhail Khomenko · May 15, 2017
Prometheus is one of the monitoring systems adapted for collecting time-series data.
Its installation and initial configuration are relatively easy. The system has a built-in graphing subsystem called PromDash for visualizing data, but the developers recommend using a free third-party product called Grafana. Prometheus can monitor a lot of things (hardware, containers, various DBMSs), but in this article I would like to take a look at the monitoring of a Caché instance (to be exact, it will be an Ensemble instance, but the metrics will be from Caché). If you are interested, read along.
In our extremely simple case, Prometheus and Caché will live on a single machine (Fedora Workstation 24 x86_64). Caché version:
%SYS>write $zv
Cache for UNIX (Red Hat Enterprise Linux for x86-64) 2016.1 (Build 656U) Fri Mar 11 2016 17:58:47 EST
Installation and configuration
Let's download a suitable Prometheus distribution package from the official site and save it to the /opt/prometheus folder.
Unpack the archive, modify the template config file according to our needs and launch Prometheus. By default, Prometheus will be displaying its logs right in the console, which is why we will be saving its activity records to a log file.
Launching Prometheus
# pwd
/opt/prometheus
# ls
prometheus-1.4.1.linux-amd64.tar.gz
# tar -xzf prometheus-1.4.1.linux-amd64.tar.gz
# ls
prometheus-1.4.1.linux-amd64  prometheus-1.4.1.linux-amd64.tar.gz
# cd prometheus-1.4.1.linux-amd64/
# ls
console_libraries  consoles  LICENSE  NOTICE  prometheus  prometheus.yml  promtool
# cat prometheus.yml
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.

scrape_configs:
  - job_name: 'isc_cache'
    metrics_path: '/metrics/cache'
    static_configs:
      - targets: ['localhost:57772']
# ./prometheus > /var/log/prometheus.log 2>&1 &
[1] 7117
# head /var/log/prometheus.log
time="2017-01-01T09:01:11+02:00" level=info msg="Starting prometheus (version=1.4.1, branch=master, revision=2a89e8733f240d3cd57a6520b52c36ac4744ce12)" source="main.go:77"
time="2017-01-01T09:01:11+02:00" level=info msg="Build context (go=go1.7.3, user=root@e685d23d8809, date=20161128-09:59:22)" source="main.go:78"
time="2017-01-01T09:01:11+02:00" level=info msg="Loading configuration file prometheus.yml" source="main.go:250"
time="2017-01-01T09:01:11+02:00" level=info msg="Loading series map and head chunks..." source="storage.go:354"
time="2017-01-01T09:01:11+02:00" level=info msg="23 series loaded." source="storage.go:359"
time="2017-01-01T09:01:11+02:00" level=info msg="Listening on :9090" source="web.go:248"
The prometheus.yml configuration is written in the YAML language, which doesn't like tabulation symbols, which is why you should use spaces only. We have already mentioned that metrics will be downloaded from http://localhost:57772 and we'll be sending requests to /metrics/cache (the name of the application is arbitrary), i.e. the destination address for collecting metrics will be http://localhost:57772/metrics/cache. A "job=isc_cache" tag will be added to each metric. A tag, very roughly, is the equivalent of WHERE in SQL. In our case, it won't be used, but will do just fine for more than one server. For example, names of servers (and/or instances) can be saved to tags and you can then use tags to parameterize requests for drawing graphs. Let's make sure that Prometheus is working (we can see the port it's listening to in the output above – 9090):
A web interface opens, which means that Prometheus is working. However, it doesn't see Caché metrics yet (let's check it by clicking Status → Targets):
Preparing metrics
Our task is to make metrics available to Prometheus in a suitable format at http://localhost:57772/metrics/cache. We'll be using the REST capabilities of Caché because of their simplicity. It should be noted that Prometheus only “understands” numeric metrics, so we will not export string metrics. To get the metrics, we will use the API of the SYS.Stats.Dashboard class. These metrics are used by Caché itself for displaying the System toolbar:
Example of the same in the Terminal:
%SYS>set dashboard = ##class(SYS.Stats.Dashboard).Sample()

%SYS>zwrite dashboard
dashboard=<OBJECT REFERENCE>[2@SYS.Stats.Dashboard]
+----------------- general information ---------------
|      oref value: 2
|      class name: SYS.Stats.Dashboard
| reference count: 2
+----------------- attribute values ------------------
|  ApplicationErrors = 0
|        CSPSessions = 2
|    CacheEfficiency = 2385.33
|      DatabaseSpace = "Normal"
|          DiskReads = 14942
|         DiskWrites = 99278
|       ECPAppServer = "OK"
|      ECPAppSrvRate = 0
|      ECPDataServer = "OK"
|     ECPDataSrvRate = 0
|            GloRefs = 272452605
|      GloRefsPerSec = "70.00"
|            GloSets = 42330792
|     JournalEntries = 16399816
|       JournalSpace = "Normal"
|      JournalStatus = "Normal"
|         LastBackup = "Mar 26 2017 09:58AM"
|     LicenseCurrent = 3
|  LicenseCurrentPct = 2
. . .
The USER namespace will be our sandbox. To begin with, let's create a REST application called /metrics. To add some very basic security, let's protect our log-in with a password and associate the web application with a resource – let's call it PromResource. We need to disable public access to the resource, so let's do the following:
%SYS>write ##class(Security.Resources).Create("PromResource", "Resource for Metrics web page", "")
1
Our web app settings:
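(If you prefer to script this step rather than click through the Management Portal, here is a minimal sketch run in the %SYS namespace. The property names and the AutheEnabled value of 32 for password authentication are assumptions to verify against the Security.Applications class reference for your version; DispatchClass points at the my.Metrics REST class shown below.)

// Hedged sketch: create the /metrics web application programmatically.
// Property names and AutheEnabled=32 (password authentication) are
// assumptions; verify against the Security.Applications documentation.
new $namespace
set $namespace = "%SYS"
set props("DispatchClass") = "my.Metrics"  // the REST class defined below
set props("NameSpace") = "USER"
set props("Resource") = "PromResource"
set props("AutheEnabled") = 32
write ##class(Security.Applications).Create("/metrics", .props)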
We will also need a user with access to this resource. The user should also be able to read from our database (USER in our case) and save data to it. Apart from this, this user will need read rights to the CACHESYS system database, since we will be switching to the %SYS namespace later in the code. We'll follow the standard scheme, i.e. create a PromRole role with these rights and then create a PromUser user assigned to this role. For the password, let's use "Secret":
%SYS>write ##class(Security.Roles).Create("PromRole","Role for PromResource","PromResource:U,%DB_USER:RW,%DB_CACHESYS:R")
1
%SYS>write ##class(Security.Users).Create("PromUser","PromRole","Secret")
1
It is this PromUser user that we will use for authentication in the Prometheus config. Once done, we'll re-read the config by sending a SIGHUP signal to the server process.
A safer config
# cat /opt/prometheus/prometheus-1.4.1.linux-amd64/prometheus.yml
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.

scrape_configs:
  - job_name: 'isc_cache'
    metrics_path: '/metrics/cache'
    static_configs:
      - targets: ['localhost:57772']
    basic_auth:
      username: 'PromUser'
      password: 'Secret'
#
# kill -SIGHUP $(pgrep prometheus) # or kill -1 $(pgrep prometheus)
Prometheus can now successfully authenticate to the web application that serves the metrics.
Metrics will be provided by the my.Metrics request processing class. Here is the implementation:
Class my.Metrics Extends %CSP.REST
{
Parameter ISCPREFIX = "isc_cache";
Parameter DASHPREFIX = {..#ISCPREFIX_"_dashboard"};
XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/cache" Method="GET" Call="getMetrics"/>
</Routes>
}
/// Output should obey the Prometheus exposition formats. Docs:
/// https://prometheus.io/docs/instrumenting/exposition_formats/
///
/// The protocol is line-oriented. A line-feed character (\n) separates lines.
/// The last line must end with a line-feed character. Empty lines are ignored.
ClassMethod getMetrics() As %Status
{
set nl = $c(10)
do ..getDashboardSample(.dashboard)
do ..getClassProperties(dashboard.%ClassName(1), .propList, .descrList)
for i=1:1:$ll(propList) {
set descr = $lg(descrList,i)
set propertyName = $lg(propList,i)
set propertyValue = $property(dashboard, propertyName)
// Prometheus is a time-series database,
// so if we get an empty value (for example, backup metrics)
// or a non-numeric value, we just omit it.
if ((propertyValue '= "") && ('$match(propertyValue, ".*[-A-Za-z ]+.*"))) {
set metricsName = ..#DASHPREFIX_..camelCase2Underscore(propertyName)
set metricsValue = propertyValue
// Write a description (help) for each metric
// in the format that Prometheus requires.
// Multiline descriptions have to be joined into one string.
write "# HELP "_metricsName_" "_$replace(descr,nl," ")_nl
write metricsName_" "_metricsValue_nl
}
}
write nl
quit $$$OK
}
ClassMethod getDashboardSample(Output dashboard)
{
new $namespace
set $namespace = "%SYS"
set dashboard = ##class(SYS.Stats.Dashboard).Sample()
}
ClassMethod getClassProperties(className As %String, Output propList As %List, Output descrList As %List)
{
new $namespace
set $namespace = "%SYS"
set propList = "", descrList = ""
set properties = ##class(%Dictionary.ClassDefinition).%OpenId(className).Properties
for i=1:1:properties.Count() {
set property = properties.GetAt(i)
set propList = propList_$lb(property.Name)
set descrList = descrList_$lb(property.Description)
}
}
/// Converts a camel-case metric name to an underscore name in lower case.
/// Sample: input = WriteDaemon, output = _write_daemon
ClassMethod camelCase2Underscore(metrics As %String) As %String
{
set result = metrics
set regexp = "([A-Z])"
set matcher = ##class(%Regex.Matcher).%New(regexp, metrics)
while (matcher.Locate()) {
set result = matcher.ReplaceAll("_"_"$1")
}
// To lower case
set result = $zcvt(result, "l")
// _e_c_p (_c_s_p) to _ecp (_csp)
set result = $replace(result, "_e_c_p", "_ecp")
set result = $replace(result, "_c_s_p", "_csp")
quit result
}
}
Let's use the console to check that our efforts have not been in vain (we added the --silent flag so that curl doesn't clutter the output with its progress bar):
# curl --user PromUser:Secret --silent -XGET 'http://localhost:57772/metrics/cache' | head -20
# HELP isc_cache_dashboard_application_errors Number of application errors that have been logged.
isc_cache_dashboard_application_errors 0
# HELP isc_cache_dashboard_csp_sessions Most recent number of CSP sessions.
isc_cache_dashboard_csp_sessions 2
# HELP isc_cache_dashboard_cache_efficiency Most recently measured cache efficiency (Global references / (physical reads + writes))
isc_cache_dashboard_cache_efficiency 2378.11
# HELP isc_cache_dashboard_disk_reads Number of physical block read operations since system startup.
isc_cache_dashboard_disk_reads 15101
# HELP isc_cache_dashboard_disk_writes Number of physical block write operations since system startup
isc_cache_dashboard_disk_writes 106233
# HELP isc_cache_dashboard_ecp_app_srv_rate Most recently measured ECP application server traffic in bytes/second.
isc_cache_dashboard_ecp_app_srv_rate 0
# HELP isc_cache_dashboard_ecp_data_srv_rate Most recently measured ECP data server traffic in bytes/second.
isc_cache_dashboard_ecp_data_srv_rate 0
# HELP isc_cache_dashboard_glo_refs Number of Global references since system startup.
isc_cache_dashboard_glo_refs 288545263
# HELP isc_cache_dashboard_glo_refs_per_sec Most recently measured number of Global references per second.
isc_cache_dashboard_glo_refs_per_sec 273.00
# HELP isc_cache_dashboard_glo_sets Number of Global Sets and Kills since system startup.
isc_cache_dashboard_glo_sets 44584646
We can now check the same in the Prometheus interface:
And here is the list of our metrics:
We won't focus on viewing them in Prometheus here. You can select the necessary metric and click the "Execute" button. Select the "Graph" tab to see the graph (this one shows cache efficiency):
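As a side note, the expression field accepts full PromQL, not just bare metric names. Since isc_cache_dashboard_glo_refs is cumulative ("since system startup"), a query along these lines (a sketch; adjust the range window as needed) would plot its per-second rate, filtered by the job label we configured earlier:

# per-second rate of global references over the last minute
rate(isc_cache_dashboard_glo_refs{job="isc_cache"}[1m])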
Visualization of metrics
For visualization purposes, let’s install Grafana. For this article, I chose installation from a tarball. However, there are other installation options, from packages to a container. Let's perform the following steps (after creating the /opt/grafana folder and switching to it):
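The original command listing is not reproduced here, and the exact commands depend on the Grafana version that is current when you read this; as a rough sketch (the version number and URL below are hypothetical placeholders, check grafana.com for the real ones):

# wget https://grafanarel.s3.amazonaws.com/builds/grafana-4.1.1.linux-x64.tar.gz  # hypothetical version/URL
# tar -xzf grafana-4.1.1.linux-x64.tar.gz --strip-components=1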
Let's leave the config unchanged for now. As the last step, we launch Grafana in background mode. We'll be saving Grafana's log to a file, just like we did with Prometheus:
# ./bin/grafana-server > /var/log/grafana.log 2>&1 &
By default, Grafana’s web interface is accessible via port 3000. Login/password: admin/admin.
Detailed instructions on making Prometheus work with Grafana are available here. In short, we need to add a new Data Source of the Prometheus type. Select your option for direct/proxy access:
Once done, we need to add a dashboard with the necessary panels. The test sample of a dashboard is publicly available, along with the code of the metrics collection class. A dashboard can be simply imported to Grafana (Dashboards → Import):
We'll get the following after import:
Save the dashboard:
Time range and update period can be selected in the top right corner:
Examples of monitoring types
Let's test the monitoring of calls to globals:
USER>for i=1:1:1000000 {set ^prometheus(i) = i}

USER>kill ^prometheus
We can see that the number of references to globals per second has increased, while cache efficiency has dropped (the ^prometheus global hasn't been cached yet):
Let's check our license usage. To do this, let's create a primitive CSP page called PromTest.csp in the USER namespace:
<html>
<head><title>Prometheus Test Page</title></head>
<body>Monitoring works fine!</body>
</html>
And visit it a number of times (we assume that the /csp/user application is not password-protected):
# ab -n77 http://localhost:57772/csp/user/PromTest.csp
We'll see the following picture for license usage:
Conclusions
As we can see, implementing the monitoring functionality is not hard at all. Even after these few initial steps, we can get important information about the work of the system, such as license usage, the efficiency of globals caching, and application errors. We used the SYS.Stats.Dashboard class for this tutorial, but other classes in the SYS, %SYSTEM, and %SYS packages also deserve attention. You can also write your own class that will supply custom metrics for your own application – for instance, the number of documents of a particular type. Some useful metrics will eventually be compiled into a separate template for Grafana.
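For illustration only, here is a hedged sketch of such a custom-metric class, following the same exposition format as my.Metrics above; the route, the metric name, and the ^MyApp.Documents global are all invented for this example:

Class my.CustomMetrics Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/custom" Method="GET" Call="getCustomMetrics"/>
</Routes>
}

/// Hypothetical example: expose a document count as a gauge.
ClassMethod getCustomMetrics() As %Status
{
    set nl = $c(10)
    // ^MyApp.Documents is a made-up global; count its top-level nodes.
    set count = 0, key = $order(^MyApp.Documents(""))
    while (key '= "") {
        set count = count + 1
        set key = $order(^MyApp.Documents(key))
    }
    write "# HELP my_app_documents_total Number of documents of a particular type."_nl
    write "my_app_documents_total "_count_nl
    write nl
    quit $$$OK
}

}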
To be continued
If you are interested in learning more about this, I will write more on the subject. Here are my plans:
Preparing a Grafana template with metrics for the logging daemon. It would be nice to make some sort of a graphical equivalent of the ^mgstat tool – at least for some of its metrics.
Password protection for web applications is good, but it would be nice to verify the possibility of using certificates.
Use of Prometheus, Grafana and some exporters for Prometheus as Docker containers.
Use of discovery services for automatically adding new Caché instances to the Prometheus monitoring list. It's also where I'd like to demonstrate (in practice) how convenient Grafana and its templates are. This is something like dynamic panels, where metrics for a particular selected server are shown, all on the same Dashboard.
Prometheus Alert Manager.
Prometheus configuration settings related to the duration of data storage, as well as possible optimizations for systems with a large number of metrics and a short statistics collection interval.
Various subtleties and nuances that will come up along the way.
Links
During the preparation of this article, I visited a number of useful sites and watched a great number of videos:
Prometheus project website
Grafana project website
Blog of one of Prometheus developers called Brian Brazil
Tutorial on DigitalOcean
Some videos from Robust Perception
Many videos from a conference devoted to Prometheus
Thank you for reading all the way down to this line!

Hi Mikhail, very nice! And very useful. Have you done any work on templating, e.g. so one or more systems may be selected to be displayed on one dashboard?

Hi, Murray! Thanks for your comment! I'll try to describe templating soon; I should add it to my plan. For now, an article about ^mgstat is almost ready (in my head). So stay tuned for more.

Hi, Murray! You can read the continuation of the article here. Templates are used there.

Hi Mikhail, very nice! I am configuring the process here! Where are the metric values stored?
What kind of metrics can be added via SetSensor? Only counters, or also gauges and histograms?

Metrics in this approach are stored in Prometheus. As Prometheus is a time-series database, you can store any numeric metric there, either a counter or a gauge.

Hmm... OK, I understand that part, but if I extend %SYS.Monitor.SAM.Abstract (https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_rest) and the SetTimer method, do you know where it is stored?

I'm not sure, but I think that the SAM implementation is based on the System Monitor (https://docs.intersystems.com/iris20194/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_healthmon) in some fashion. Could you clarify your task? Do you want to know the exact name of the global where the samples are stored? For what purpose? The link you've shared describes how to expose a custom numeric metric that could be read by Prometheus, for instance, and then stored there.

💡 This article is considered an InterSystems Data Platform Best Practice.
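As a footnote to the SetSensor discussion above: on InterSystems IRIS, a minimal sketch of a custom sensor class might look like the following. This is based on the documentation linked in the thread; the class, method, and parameter names should be verified against your version, and the ^MyApp global is invented for the example.

Class my.SAMSample Extends %SYS.Monitor.SAM.Abstract
{

/// Prefix prepended to each sensor name in the monitoring output.
Parameter PRODUCT = "myapp";

/// Called on each scrape; each SetSensor call publishes one numeric
/// sample (effectively a gauge unless the value only ever increases).
Method GetSensors() As %Status
{
    // ^MyApp("queue") is a made-up global holding a queue depth.
    do ..SetSensor("QueueDepth", $get(^MyApp("queue"), 0))
    quit $$$OK
}

}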
Announcement
Anastasia Dyubaylo · Mar 23, 2018
Hi Community!

Please welcome the InterSystems Developer Community SubReddit!

Here you will find all the most interesting announcements from the InterSystems Developer Community, i.e. useful articles, discussions, announcements, and events.

A little about Reddit: it is a social news aggregation, web content rating, and discussion website.

See what the DC SubReddit looks like:

Subscribe to the DC SubReddit and maybe it will become one of your favorites!
Announcement
Anastasia Dyubaylo · Oct 16, 2017
Hi, Community!

We've launched a Facebook Page for the InterSystems Developer Community!

Follow it to stay in touch with what happens on the InterSystems Developer Community. Here you can find useful articles, answers, announcements, hot discussions, and best practices based on InterSystems IRIS, Caché, Ensemble, DeepSee, and iKnow.

The permalink to join the DC Facebook Page can also be found on the right of every DC page:

Follow us now to be in the know, and don't forget to like the page!
Announcement
Michelle Spisak · Oct 17, 2017
Learning Services Live Webinars are back!

At this year's Global Summit, InterSystems debuted InterSystems IRIS Data Platform™, a single, comprehensive product that provides capabilities spanning data management, interoperability, transaction processing, and analytics. InterSystems IRIS sets a new level of performance for the rapid development and deployment of data-rich and mission-critical applications.

Now is your chance to learn more! Joe Lichtenberg, Director of Product and Industry Marketing for InterSystems, presents "Introducing InterSystems IRIS Data Platform", a high-level description of the business drivers and capabilities behind InterSystems IRIS.

Webinar recording is available now!
Announcement
Anastasia Dyubaylo · Oct 26, 2017
Hi, Community!
Please find a new session recording from Global Summit 2017:
An InterSystems Guide to the Data Galaxy
This video introduces the concept of an open analytics platform for the enterprise.
@Benjamin.Deboe explains how InterSystems technology pairs up with industry standards and open-source technology to provide a solid platform for analytics.
You can also see the additional resources here.
Don't forget to subscribe to the InterSystems Developers YouTube Channel
Enjoy!
Article
David Loveluck · Nov 8, 2017
Application Performance Monitoring: Tools in InterSystems Technology

Back in August, in preparation for Global Summit, I published a brief explanation of Application Performance Management (APM). To follow up on that, I have written, and will be publishing over the coming weeks, a series of articles on APM.

One major element of APM is the construction of a historic record of application activity, performance, and resource usage. Crucially for APM, the measurement starts with the application and what users are doing with the application. By relating everything to business activity, you can focus on improving the level of service provided to users and the value to the line of business that is ultimately paying for the application.

Ideally, an application includes instrumentation that reports on activity in business terms, such as 'display the operating theater floor plan' or 'register student for course', and gives the count, the response time, and the resources used for each on an hourly or daily basis. However, many applications don't have this capability, and you have to make do with the closest approximation you can.

There are many tools (some expensive, some open source) available to monitor applications, ranging from JavaScript injection for monitoring user experience to middleware and network probes for measuring application communication. These articles will focus on the tools that are available within InterSystems products. I will describe how I have used these tools to manage the performance of applications and improve customer experiences.

The tools described include:

CSP Page Statistics
SQL Query Statistics
Ensemble Activity Monitor

Even if you do have good application instrumentation, additional system monitoring can provide valuable insights, so I will also include an explanation of how to configure and use:

Caché History Monitor

I will also expand on my earlier explanation of APM, the reasons for monitoring performance, and the different audiences you are trying to help with the information you gather.

Added a link to 'Using the Cache History Monitor'.
Announcement
Evgeny Shvarov · Dec 24, 2017
Hi, Community!

I'm pleased to announce that this December 2017 marks 2 years of the InterSystems Developer Community up and running! Together we did a lot this year, and a lot more is planned for the next year!

Our Community is growing: in November we had 3,700 registered members (2,200 last November), and 13,000 web users visited the site in November 2017 (7,000 last year).

Thank you for using it, thanks for making it useful, thanks for your knowledge, experience, and your passion!

And, may I ask you to share in the comments the article or question which was most helpful for you this year?

Happy Birthday, Developer Community!

And to start it: for me, the most helpful article this year was REST FORMS Queries – yes, I'm using REST FORMS a lot, thanks [@Eduard.Lebedyuk]! Another is Search InterSystems documentation using iKnow and iFind technologies. Two helpful questions were mine (of course ;): How to find duplicates in a large text field and Storage Schema in VCS: to Store Or Not to Store? and How to get the measure for the last day in a month in DeepSee.

There were many interesting articles and discussions this year. I'd like to thank all of you who participated and helped our community grow. @Murray.Oldfield's series on InterSystems Data Platforms Capacity Planning and Performance was a highly informative read.
Article
Developer Community Admin · Oct 21, 2015
InterSystems Caché 2015.1 soars from 6 million to more than 21 million end-user database accesses per second on the Intel® Xeon® processor E7 v2 family, compared to Caché 2013.1 on the Intel® Xeon® processor E5 family.

Overview

With data volumes soaring and the opportunities to derive value from data rising, database scalability has become a crucial challenge for a wide range of industries. In healthcare, the rising demand for healthcare services and significant changes in the regulatory and business climates can make the challenges particularly acute. How can organizations scale their databases in an efficient and cost-effective way?

The InterSystems Caché 2015.1 data platform offers a solution. Identified as a Leader in the Gartner Magic Quadrant for Operational Database Management Systems,1 Caché combines advanced data management, integration, and analytics. Caché 2015.1 is optimized to take advantage of modern multi-core architectures and represents a new generation of ultra-high-performance database technology. Running on the Intel® Xeon® processor E7 v2 family, Caché 2015.1 provides a robust, affordable solution for scalable, data-intensive computing.

To examine the scalability of Caché 2015.1, InterSystems worked with performance engineers at Epic, whose electronic medical records (EMRs) and other healthcare applications are deployed by some of the world's largest hospitals, delivery systems, and other healthcare organizations. The test team found that Caché 2015.1 with Enterprise Cache Protocol® (ECP®) technology on the Intel Xeon processor E7 v2 family achieved more than 21 million end-user database accesses per second (known in the Caché environment as Global References per Second, or GREFs) while maintaining excellent response times. This was more than triple the load level of 6 million GREFs achieved by Caché 2013.1 on the Intel® Xeon® processor E5 family.

"The scalability and performance improvements of Caché version 2015.1 are terrific. Almost doubling the scalability, this version provides a key strategic advantage for our user organizations who are pursuing large-scale medical informatics programs as well as aggressive growth strategies in preparation for the volume-to-value transformation in healthcare."
– Carl Dvorak, President, Epic
Article
Developer Community Admin · Oct 21, 2015
Providing a reliable infrastructure for rapid, unattended, automated failover

Technology Overview

Traditional availability and replication solutions often require substantial capital investments in infrastructure, deployment, configuration, software licensing, and planning. Caché Database Mirroring (Mirroring) is designed to provide an economical solution for rapid, reliable, robust, automatic failover between two Caché systems, making mirroring the ideal automatic failover high-availability solution for the enterprise.

In addition to providing an availability solution for unplanned downtime, mirroring offers the flexibility to incorporate certain planned downtimes on a particular Caché system while minimizing the overall SLAs for the organization. Combining InterSystems Enterprise Cache Protocol (ECP) application servers with mirroring provides an additional level of availability. Application servers allow processing to seamlessly continue on the new system once the failover is complete, thus greatly minimizing workflow and user disruption. Configuring the two mirror members in separate data centers offers additional redundancy and protection from catastrophic events.

Key Features and Benefits

Economical high-availability database solution with automatic failover
Redundant components minimize shared-resource related risks
Logical data replication minimizes risks of carry-forward physical corruption
Provides a solution for both planned and unplanned downtime
Provides business continuity benefits via a geographically dispersed disaster recovery configuration
Provides Business Intelligence and reporting benefits via a centralized Enterprise Data Warehouse configuration

Traditional availability solutions that rely on shared resources (such as shared disk) are often susceptible to a single point of failure with respect to that shared resource. Mirroring reduces that risk by maintaining independent components on the primary and backup mirror systems. Further, by utilizing logical data replication, mirroring reduces the potential risks associated with physical replication, such as out-of-order updates and carry-forward corruption, which are possible with other replication technologies such as SAN-based replication.

Finally, mirroring allows for a special async member, which can be configured to receive updates from multiple mirrors across the enterprise. This allows a single system to act as a comprehensive enterprise data store, enabling – through the use of InterSystems DeepSee – real-time business intelligence that uses enterprise-wide data. The async member can also be deployed in a Disaster Recovery model in which a single mirror can update up to six geographically dispersed async members; this model provides a robust framework for distributed data replication, thus ensuring business continuity benefits to the organization. The async member can also be configured as a traditional reporting system so that application reporting can be offloaded from the main production system.
Article
Developer Community Admin · Oct 21, 2015
Introduction

To overcome the performance limitations of traditional relational databases, applications - ranging from those running on a single machine to large, interconnected grids - often use in-memory databases to accelerate data access. While in-memory databases and caching products increase throughput, they suffer from a number of limitations, including lack of support for large data sets, excessive hardware requirements, and limits on scalability.

InterSystems Caché is a high-performance object database with a unique architecture that makes it suitable for applications that typically use in-memory databases. Caché's performance is comparable to that of in-memory databases, but Caché also provides:

Persistence - data is not lost when a machine is turned off or crashes
Rapid access to very large data sets
The ability to scale to hundreds of computers and tens of thousands of users
Simultaneous data access via SQL and objects: Java, C++, .NET, etc.

This paper explains why Caché is an attractive alternative to in-memory databases for companies that need high-speed access to large amounts of data.