Announcement
Elena E · Feb 20, 2023
Dear all,
We would like to invite you to an exciting online event related to the InterSystems Open Exchange app gallery. The purpose is to discuss the features and upgrades of the Open Exchange and gather feedback from the Community.
Date: March 1st, 2023. Time: 9:00 am Boston // 3:00 pm Berlin // 6:00 pm Dubai // 10:00 pm Beijing time. >> Registration here <<
Topics that we will cover:
There are many packages.
There is duplication of functionality across packages.
Most packages have only one contributor - encouraging teamwork.
Many packages haven't been updated in a long time.
App bundles
Demos for the apps
The format of the event will be an open table discussion, where we will present our ideas and the direction of development. Then we will ask for your feedback and encourage everyone to participate in the discussion.
Agenda:
5 min - A short overview of the app quality check feature (for moderators)
10 min - Questions and feedback
10 min - Our vision for addressing the issues mentioned above
20 min - Questions and feedback
25 min - Open talk: other ideas, wishes, and comments about Open Exchange
If you have any other ideas or topics that you would like to discuss, feel free to put them in the comments or announce them during the meeting.
We look forward to seeing you there and having an engaging discussion.
>> Register here <<

You are correct, module.xml indicates IPM/ZPM usage, and that's OK. But "packages" is much wider, since it also includes parts that are not installable code or data to load but (if well prepared) a bunch of additional information, like (hopefully) user guides, installation guides, a description of the purpose of the package, as well as screenshots, examples, and so on. All this is not part of the IPM module, for good reasons. I would consider "module" a downgrade of the excellent work the contributors provided to the community.

I think by "packages" @Elena.E6756 means any applications on OEX.

Fully agree: "package" has been used for this chapter of OEX from day zero! Instead of semantic discussions, it would be much more important to take care of quality, completeness, and easy-to-evaluate examples for the community. Every member of the community is a customer. To my understanding, this is a major point of being different from other code exchange and discussion platforms. I have checked almost all packages (except the commercial ones) and most contributors seem to share this understanding. Though I have to admit that there are also less service-minded contributors who just don't care about issues and PRs. A community of white sheep only is an illusion. I've seen too much to have such dreams.

Was this event recorded?
How did it go?

Hi @Adam.Coppola We haven't recorded the event, unfortunately. We got some useful feedback from Dmitry Maslennikov, as he was the most active participant. The feedback is now being processed, and as soon as we put anything on the roadmap I will let the community know. If you have any questions, requests or ideas, you may contact me anytime via direct messages.
Announcement
Bob Kuszewski · Feb 15, 2023
InterSystems Supported Platforms Update Feb-2023
Welcome to the very first Supported Platforms Update! We often get questions about recent and upcoming changes to the list of platforms and frameworks that are supported by the InterSystems IRIS data platform. This update aims to share recent changes as well as our best current knowledge on upcoming changes, but predicting the future is tricky business and this shouldn’t be considered a committed roadmap.
We’re planning to publish this kind of update approximately every 3 months and then re-evaluate in a year. If you find this update useful, let us know! We’d also appreciate suggestions for how to make it better.
With that said, on to the update…
IRIS Production Operating Systems and CPU Architectures
Red Hat Enterprise Linux
Recent Changes
IRIS 2022.1.2 adds support for RHEL 9.0. RHEL 9.0 is a major OS release that updates the Linux Kernel to 5.14, OpenSSL to 3.0, and Python to 3.9.
IRIS 2022.2.0 removes support for RHEL 7.x. RHEL 7.9 is still supported in earlier versions of IRIS.
Upcoming Changes
RHEL 9.1 was released in November, 2022. Red Hat is supporting this minor version only until RHEL 9.2 is released, so InterSystems will not be adding 9.1 as a supported platform.
RHEL 9.2 is planned to be released in late Q2, 2023 and Red Hat is planning to support 9.2 for 4 years. InterSystems is planning to do additional testing of IRIS on RHEL 9.2 through a new process we’re calling “Minor OS version certification” that is intended to provide additional assurance that a minor OS update didn’t break anything obvious.
RHEL 8.4 extended maintenance ends 5/31/2023, which means that IRIS will stop supporting this minor version at that time as well.
Further reading: RHEL Release Page
Ubuntu
Recent Changes
IRIS 2022.1.1 adds support for Ubuntu 22.04. Ubuntu 22.04 is a major OS release that updates the Linux Kernel to 5.15, OpenSSL to 3.0.2, and Python to 3.10.6.
IRIS 2022.2.0 removes support for Ubuntu 18.04. Ubuntu 18.04 is still supported in earlier versions of IRIS.
IRIS 2022.1.1 & up containers are based on Ubuntu 22.04.
Upcoming Changes
Ubuntu 20.04.05 LTS and 22.04.01 LTS have been recently released. InterSystems is planning to do additional testing of IRIS on 20.04.05 LTS and 22.04.01 LTS through a new process we’re calling “Minor OS version certification”. We’ll have more on these “Minor OS version certifications” in a future newsletter.
The next major update of Ubuntu is scheduled for April 2024.
Further Reading: Ubuntu Releases Page
SUSE Linux
Recent Changes
IRIS 2022.3.0 adds support for SUSE Linux Enterprise Server 15 SP4. 15 SP4 is a major OS release that updates the Linux Kernel to 5.14, OpenSSL to 3.0, and Python to 3.9.
Upcoming Changes
Based on their release rhythm, we expect SUSE to release 15 SP5 late Q2 or early Q3 with support added to IRIS after that.
Further Reading: SUSE lifecycle
Oracle Linux
Recent Changes
IRIS 2022.3.0 adds support for Oracle Linux 9. Oracle Linux 9 is a major OS release that tracks RHEL 9, so it, too, updates the Linux Kernel to 5.14, OpenSSL to 3.0, and Python to 3.9.
Upcoming Changes
Oracle Linux 9.1 was released in January, 2023.
Further Reading: Oracle Linux Support Policy
Microsoft Windows
Recent Changes
We haven’t made any changes to the list of supported Windows versions since Windows Server 2022 was added in IRIS 2022.1.
Upcoming Changes
Windows Server 2012 will reach its end of extended support in October, 2023. If you’re still running on the platform, now is the time to plan migration.
Further Reading: Microsoft Lifecycle
AIX
Recent Changes
We haven’t made any changes to the list of supported AIX versions since AIX 7.3 was added and 7.1 removed in IRIS 2022.1.
Upcoming Changes
InterSystems is working closely with IBM to add support for OpenSSL 3.0. This will not be included in IRIS 2023.1.0 as IBM will need to target the feature in a further TL release. The good news is that IBM is looking to release OpenSSL 3.0 for both AIX 7.2 & 7.3.
IBM released AIX 7.3 TL1 in December and certification is in progress.
The next TL releases are expected in April.
Further Reading: AIX Lifecycle
Containers
Recent Changes
We are now publishing multi-architecture manifests for IRIS containers. This means that pulling the IRIS container tagged 2022.3.0.606.0 will download the right container for your machine’s CPU architecture (Intel/AMD or ARM).
If you need to pull a container for a specific CPU architecture, tags are available for architecture-specific containers. For example, 2022.3.0.606.0-linux-amd64 will pull the Intel/AMD container and 2022.3.0.606.0-linux-arm64v8 will pull the ARM container.
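As a quick illustration, here is a minimal sketch of the two pull styles; only the tags come from the text above, while the registry path (containers.intersystems.com/intersystems/iris) and a prior ICR login are assumptions:

docker pull containers.intersystems.com/intersystems/iris:2022.3.0.606.0                # multi-architecture tag - Docker selects the image matching your CPU
docker pull containers.intersystems.com/intersystems/iris:2022.3.0.606.0-linux-amd64    # force the Intel/AMD build
docker pull containers.intersystems.com/intersystems/iris:2022.3.0.606.0-linux-arm64v8  # force the ARM build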
Upcoming Changes
We will phase out the ARM-specific image names, such as iris-arm64, in favor of the multi-architecture manifests in the second half of the year.
We will also start to tag the preview containers with “-preview” so that it’s clear which container is the most recent GA release.
IRIS Development Operating Systems and CPU Architectures
MacOS
Recent Changes
We haven’t made any changes to the list of supported MacOS versions since we moved to MacOS 11 in IRIS 2022.1.
Upcoming Changes
We’re planning to add support for MacOS 13 in 2023, perhaps as early as IRIS 2023.1.
CentOS
We are considering removing support for CentOS/CentOS Stream. See reasoning below.
Red Hat has been running a developer program for a few years now, which gives developers access to free licenses for non-production environments. Developers currently using CentOS are encouraged to switch to RHEL via this program.
CentOS Stream is now “upstream” of RHEL, meaning that it has bugs & features not yet included in RHEL. It also updates daily, which can cause problems for developers building on the platform (to say nothing of our own testing staff).
We haven’t made any changes to the list of supported CentOS versions since we added support for CentOS 8-Stream and removed support for CentOS 7.9 in IRIS 2022.1.
Caché & Ensemble Production Operating Systems and CPU Architectures
Recent Changes
Caché 2018.1.7 adds support for Windows 11.
InterSystems Supported Platforms Documentation
The InterSystems Supported Platforms documentation is the source for the definitive list of supported technologies.
IRIS 2020.1 Supported Server Platforms
IRIS 2021.1 Supported Server Platforms
IRIS 2022.1 Supported Server Platforms
IRIS 2022.3 Supported Server Platforms
Caché & Ensemble 2018.1.7 Supported Server Platforms
… and that’s all folks. Again, if there’s something more that you’d like to know about, please let us know.
Nice work on the containers side, @Robert.Kuszewski!
Announcement
Dmitry Maslennikov · Sep 16, 2022
Great news, the extension to Docker Desktop for InterSystems Container Registry is now publicly available for everyone.
It is already available in the marketplace in Docker Desktop. It was published today, and it requires restarting Docker Desktop to see it.
Feel free to post any feedback in the GitHub repository, here
Announcement
Jessica Simmons · Nov 1, 2022
VetsEZ is seeking a full-time remote InterSystems HealthShare / IRIS Architect to design and deliver integrated health solutions for our federal healthcare clients with a focus on automation and semantic interoperability. This role focuses on understanding complex technical and business requirements and translating them into highly scalable, robust integration architectures. The architect will act as a technical lead and mentor team members while maintaining a hands-on role.
The candidate must reside within the continental US.
Responsibilities:
Architecting and configuring the InterSystems HealthShare / IRIS platform with a focus on Automation and CI/CD
Utilizing Healthcare Interoperability Standards (HL7v2, FHIR, CCDA, IHE, X12) appropriately in solution designs
Understanding and translating business and technical requirements, particularly those that are unique to healthcare, into architectures and solution designs
Leading development teams in building solutions as an architect and automating processes
Developing and implementing APIs and Web Services (REST, SOAP) and designing API-based solutions
Utilizing CI/CD technologies and processes including but not limited to Git, Jenkins, Docker
Using AWS-based cloud architectures, VMs and containers, and deployment strategies to architect solutions
Implementing and using authorization frameworks, including OAuth 2.0, OIDC, and SAML 2.0
Utilizing Enterprise Integration Patterns in solution designs
Requirements:
Bachelor's degree in Information Technology, Computer Science, or other related fields
3+ years of experience in InterSystems ObjectScript, .NET (C#), Java, or other Object-Oriented Programming languages
Strong interpersonal skills with hands-on architect leadership experience to guide and mentor a team
Flexible and able to adapt to frequently changing standards and priorities
Proven experience developing and implementing highly scalable enterprise-level interoperability solutions
Passion for deploying scalable, robust solutions with an eye towards futureproofing
HealthShare Unified Care Record Technical Specialist
Additional Qualifications:
Must be able to obtain and maintain Public Trust Clearance
Background in the Department of Veterans Affairs and healthcare preferred
Certified Maximo Deployment Professional and Certified SAFe Agile Architect
Benefits:
Medical/Dental/Vision
401k with Employer Match
Corporate Laptop
PTO + Federal Holidays
Training opportunities
Remote work options
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, or protected veteran status.
Sorry, we are unable to offer sponsorship currently.
https://vetsez.breezy.hr/p/391438edae0f01-intersystems-healthshare-architect-remote-opportunity?state=published
Question
Vishnu G · Dec 14, 2022
Does InterSystems have CDS Hooks implementations?
If yes, where could I get the details?

What's CDS? I suppose this is it.

Hi Vishnu!
InterSystems has recently released a beta of the Healthcare Action Engine for limited customer testing. The Healthcare Action Engine is an extended healthcare decision support service; it includes workflows for implementing your own CDS Hooks services and for brokering connections between clients, data sources, and third-party CDS Hooks Services.
The Healthcare Action Engine is a stand-alone product, but it is designed to integrate seamlessly with InterSystems HealthShare product ecosystem. Because of this, documentation for the product is included as part of our HealthShare documentation, here. (Note that, for security reasons, you must authenticate as a registered HealthShare customer with your WRC credentials to access HealthShare documentation.)
If you'd like to learn more about the Healthcare Action Engine, I encourage you to contact your organization's InterSystems sales associate. As the technical writer responsible for documenting the Healthcare Action Engine as it continues to develop, I'm also happy to answer further questions.

Following up on Shawn's response, these resources might also be helpful in the meantime, and perhaps for others:
A Global Summit 2022 session titled Healthcare Action Engine and CDS Hooks: Sneak Peek (includes PDF slides and recording).
An online exercise titled Configuring Alerts for Clinicians with the Healthcare Action Engine.
"See how to use key features of the Healthcare Action Engine to set up real-time alerts for clinicians. In this exercise, you will build decision support, design a notification using a CDS Hooks card, and write a rule to deliver it."
[I believe the same comment Shawn mentioned about being required to be a HealthShare customer in order to access this content is relevant here as well.]

Thank you very much. I got it.
Announcement
Anastasia Dyubaylo · Feb 23, 2023
Hello Community,
After the last heated programming contest, we're happy to announce the next InterSystems technical article writing competition!
✍️ Tech Article Contest: InterSystems IRIS Tutorials ✍️
Write an article that can be considered a tutorial for InterSystems IRIS programmers of any level (beginner / middle / senior) from March 1st to March 31st.
🎁 Prizes for everyone: A special prize pack for each author who takes part in the competition!
🏆 Main Prizes: There are 6 prizes to choose from.
Prizes
1. Everyone is a winner in InterSystems Tech Article Contest! Any member who writes an article during the competition period will receive special prizes:
🎁 Branded Organic Canvas Tote Bag
🎁 Moleskine Lined Notebook
2. Expert Awards – articles will be judged by InterSystems experts:
🥇 1st place: Mars Pro Bluetooth speakers / AirPods Max
🥈 2nd place: Apple AirPods Pro with Wireless Charging Case / JBL Pulse 4 Light Show Speaker
🥉 3rd place: Magic Keyboard Folio for iPad / Bose Soundlink Micro Bluetooth Speaker
Alternatively, any winner can choose a prize from a lower prize tier than their own.
3. Developer Community Award – article with the most likes. The winner will have the option to choose one of the following prizes:
🎁 Magic Keyboard Folio for iPad
🎁 Bose Soundlink Micro Bluetooth Speaker
Note:
The author can only be awarded once per category (in total, an author can win 2 prizes: one Expert award and one Community award)
In the event of a tie, the number of votes of the experts for the tied articles will be considered as a tie-breaking criterion.
Who can participate?
Any Developer Community member, except for InterSystems employees. Create an account!
Contest period
📝 March 1st - March 31st: Publication of articles and voting time.
Publish an article(s) throughout this period. DC members can vote for published articles with Likes – votes in the Community award.
Note: The sooner you publish the article(s), the more time you will have to collect both Expert & Community votes.
What are the requirements?
❗️ Any article written during the contest period and satisfying the requirements below will automatically* enter the competition:
The article must be a tutorial** on an InterSystems IRIS topic. It can be aimed at beginner, middle, or senior developers.
The article must be in English (incl. inserting code, screenshots, etc.).
The article must be 100% new (it can be a continuation of an existing article).
The article cannot be a translation of an article already published in other communities.
The article should contain only correct and reliable information about InterSystems technology.
The article has to contain the Tutorial tag.
Article size: 400 words minimum (links and code are not counted towards the word count).
Max 3 entries from the same author are allowed.
Articles on the same topic but with dissimilar examples from different authors are allowed.
* Articles will be moderated by our experts. Only valid content will be eligible to enter the contest.
** Tutorials provide step-by-step instructions that a developer can follow to complete a specific task or set of tasks.
🎯 EXTRA BONUSES
This time we decided to add additional bonuses that will help you win a prize. Please welcome:
Bonus | Nominal | Details
Topic bonus | 5 | If your article is on a topic from the list of proposed topics (below), it will receive a bonus of 5 Expert votes (for comparison, 1st place selected by an Expert = 3 votes).
Video bonus | 3 | Besides publishing the article, make an explanatory video presenting its content.
Discussion bonus | 1 | Article with the most useful discussion, as decided by InterSystems experts. Only 1 article will get this bonus.
Translation bonus | 1 | Publish a translation of your article on any of the regional Communities. Learn more. Note: only 1 vote per article.
New member bonus | 3 | If you haven't participated in previous contests, your article(s) will get 3 Expert votes.
Proposed topics
Here's a list of proposed topics that will give your article extra bonuses:
✔️ Working with IRIS from C#
✔️ Working with IRIS from Java
✔️ Working with IRIS from Python
✔️ Using Embedded Python
✔️ Using Python API
✔️ Using Embedded SQL
✔️ Using ODBC/JDBC
✔️ Working with %Query/%SQLQuery
✔️ Using indexes
✔️ Using triggers
✔️ Using JSON
✔️ Using XML
✔️ Using REST
✔️ Using containers
✔️ Using Kubernetes
Note: Articles on the same topic from different authors are allowed.
➡️ Join InterSystems Discord to chat about the rules, topics & bonuses.
So,
It's time to show off your writing skills! Good luck ✨
Important note: Delivery of prizes varies by country and may not be possible for some of them. A list of countries with restrictions can be requested from @Liubka.Zelenskaia
Great, I love to write tutorials. Will the winner get a badge on GM too?

I'm uncovering my keyboard and stretching my fingers.

Is the article word count a minimum or maximum? Neither; all articles must be exactly 400 words. JK; that needs clarification.

400 words minimum – updated the announcement ;)

Thanks for the heads up, guys!

Hi Devs)
What great news! 🤩
5 new articles have appeared on the Contest page:
Tutorial - Streams in Pieces by @Robert.Cemper1003
Quick sample database tutorial by @Heloisa.Paiva
Tutorial - Working with %Query #1 by @Robert.Cemper1003
Tutorial - Working with %Query #2 by @Robert.Cemper1003
Tutorial - Working with %Query #3 by @Robert.Cemper1003
Waiting for other interesting articles) 😎

Of course! There will be special badges on Global Masters ;)
Example:

Robert Cemper is so powerful

Hey Devs!
8 new articles have been added to the contest!
SQLAlchemy - the easiest way to use Python and SQL with IRIS's databases by @Heloisa.Paiva
Creating an ODBC connection - Step to Step by @Heloisa.Paiva
Tutorial - Develop IRIS using SSH by @wang.zhe
Getting Started with InterSystems IRIS: A Beginner's Guide by @A.R.N.H.Hafeel
Creating a DataBase, namespace, inserting data and visualizing data in the front end. by @A.R.N.H.Hafeel
InterSystems Embedded Python in glance by @Muhammad.Waseem
Query as %Query or Query based on ObjectScript by @Irene.Mikhaylova
Setting up VS Code to work with InterSystems technologies by @Maria.Gladkova
Important note:
We do not prohibit authors from using AI when creating content; however, articles must contain only correct and reliable information about InterSystems technology. Before entering the contest, articles are moderated by our experts.
In this regard, we have updated the contest requirements.

Hi, Community!
Seven more tutorials have been added to the contest!
Tutorial: Improving code quality with the visual debug tool's color-coded logs by @Yone.Moreno
Kinds of properties in IRIS by @Irene.Mikhaylova
Backup and rebuilding procedure for the IRIS server by @Akio.Hashimoto1419
Stored Procedures the Swiss army knife of SQL by @Daniel.Aguilar
Tutorial how to analyze requests and responses received and processed in webgateway pods by @Oliver.Wilms
InterSystems's Embedded Python with Pandas by @Rizmaan.Marikar2583
Check them out!

Hey, Developers!
Two more articles have been added to the contest!
Tutorial - Creating a HL7 TCP Operation for Granular Error Handling by @Julian.Matthews7786
Tutorial for Middle/Senior Level Developer: General Query Solution by @姚.鑫
19 tutorials have already been uploaded! Almost one week is left until the end of the contest, and we are looking forward to more articles!
Developers!
A lot of tutorials have been added to the contest!🤩
And only four days are left until the end of publication and voting!
Hurry up and upload your articles! 🚀

Community!
Three more articles have been added to the contest!
Perceived gaps to GPT assisted COS development automatons by @Zhong.Li7025
SQL IRIS Editor and IRIS JAVA CONNECTION by @Jude.Mukkadayil
Set up an IRIS docker image on a Raspberry Pi 4 by @Roger.Merchberger
Devs, only one day is left until the end of the contest! Upload your tutorials and join the contest!
Our Tech Article contest is over! Thank you all for participating in this writing competition :)
As a result, we got 🔥 24 AMAZING ARTICLES 🔥
Now, let's add an element of intrigue...
The winners will be announced on Monday! Stay tuned for the updates.
Article
Anastasia Dyubaylo · Apr 7, 2023
Hi Community!
We know that sometimes you may need to find information or people on our Community! To make it easier, here is a post on how to use the different kinds of searches:
find something on the Community
find something in your own posts
find a member by name
➡️ Find some info on the Community
Use the search bar at the top of every page - a quick DC search. Enter a word or phrase, or a tag, or a DC member name, and you'll get the results in a drop-down list:
Also, you can press Enter or click on the magnifying glass and the search results page will open.
On this page, you can refine your results:
You can choose whether you want to search only in Questions, Articles, etc., or only in your own posts.
You can look for posts of a particular member.
You can look for posts with specific tags.
You can set up a time limit and sort them by date/relevance.
➡️ Find something in your own posts
Go to your profile, choose Posts on the left-hand side, and after the page refreshes just write what you want to find in the search bar:
➡️ Find a Community member
If you know their name or e-mail, open the Menu in the top left:
and click on Members:
This will open a table with all the members of this Community and at the top of it there is a Search box:
Hope you'll find these explanations useful.
Happy searching! ;)
Article
Mikhail Khomenko · May 15, 2017
Prometheus is a monitoring system built around collecting time series data.
Its installation and initial configuration are relatively easy. The system has a built-in graphic subsystem called PromDash for visualizing data, but the developers recommend using a free third-party product called Grafana. Prometheus can monitor a lot of things (hardware, containers, various DBMSs), but in this article I would like to take a look at monitoring a Caché instance (to be exact, it will be an Ensemble instance, but the metrics will come from Caché). If you are interested, read along.
In our extremely simple case, Prometheus and Caché will live on a single machine (Fedora Workstation 24 x86_64). Caché version:
%SYS>write $zv
Cache for UNIX (Red Hat Enterprise Linux for x86-64) 2016.1 (Build 656U) Fri Mar 11 2016 17:58:47 EST
Installation and configuration
Let's download a suitable Prometheus distribution package from the official site and save it to the /opt/prometheus folder.
Unpack the archive, modify the template config file according to our needs, and launch Prometheus. By default, Prometheus writes its logs to the console, which is why we will redirect its activity records to a log file.
Launching Prometheus
# pwd
/opt/prometheus
# ls
prometheus-1.4.1.linux-amd64.tar.gz
# tar -xzf prometheus-1.4.1.linux-amd64.tar.gz
# ls
prometheus-1.4.1.linux-amd64 prometheus-1.4.1.linux-amd64.tar.gz
# cd prometheus-1.4.1.linux-amd64/
# ls
console_libraries consoles LICENSE NOTICE prometheus prometheus.yml promtool
# cat prometheus.yml
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
scrape_configs:
  - job_name: 'isc_cache'
    metrics_path: '/metrics/cache'
    static_configs:
      - targets: ['localhost:57772']
# ./prometheus > /var/log/prometheus.log 2>&1 &
[1] 7117
# head /var/log/prometheus.log
time="2017-01-01T09:01:11+02:00" level=info msg="Starting prometheus (version=1.4.1, branch=master, revision=2a89e8733f240d3cd57a6520b52c36ac4744ce12)" source="main.go:77"
time="2017-01-01T09:01:11+02:00" level=info msg="Build context (go=go1.7.3, user=root@e685d23d8809, date=20161128-09:59:22)" source="main.go:78"
time="2017-01-01T09:01:11+02:00" level=info msg="Loading configuration file prometheus.yml" source="main.go:250"
time="2017-01-01T09:01:11+02:00" level=info msg="Loading series map and head chunks..." source="storage.go:354"
time="2017-01-01T09:01:11+02:00" level=info msg="23 series loaded." source="storage.go:359"
time="2017-01-01T09:01:11+02:00" level=info msg="Listening on :9090" source="web.go:248"
The prometheus.yml configuration is written in YAML, which doesn't accept tab characters, so you should use spaces only. We have already mentioned that metrics will be downloaded from http://localhost:57772 and that we'll be sending requests to /metrics/cache (the name of the application is arbitrary), i.e. the destination address for collecting metrics will be http://localhost:57772/metrics/cache. A "job=isc_cache" tag will be added to each metric. A tag, very roughly, is the equivalent of WHERE in SQL. In our case it won't be used, but it will do just fine for more than one server. For example, names of servers (and/or instances) can be stored in tags, and you can then use tags to parameterize requests for drawing graphs. Let's make sure that Prometheus is working (we can see the port it's listening on in the output above – 9090):
A web interface opens, which means that Prometheus is working. However, it doesn't see Caché metrics yet (let's check it by clicking Status → Targets):
Preparing metrics
Our task is to make metrics available to Prometheus in a suitable format at http://localhost:57772/metrics/cache. We'll be using the REST capabilities of Caché because of their simplicity. It should be noted that Prometheus only “understands” numeric metrics, so we will not export string metrics. To obtain the metrics, we will use the API of the SYS.Stats.Dashboard class. Caché itself uses these metrics for displaying the System toolbar:
Example of the same in the Terminal:
%SYS>set dashboard = ##class(SYS.Stats.Dashboard).Sample()
%SYS>zwrite dashboard
dashboard=<OBJECT REFERENCE>[2@SYS.Stats.Dashboard]
+----------------- general information ---------------
|      oref value: 2
|      class name: SYS.Stats.Dashboard
| reference count: 2
+----------------- attribute values ------------------
| ApplicationErrors = 0
| CSPSessions = 2
| CacheEfficiency = 2385.33
| DatabaseSpace = "Normal"
| DiskReads = 14942
| DiskWrites = 99278
| ECPAppServer = "OK"
| ECPAppSrvRate = 0
| ECPDataServer = "OK"
| ECPDataSrvRate = 0
| GloRefs = 272452605
| GloRefsPerSec = "70.00"
| GloSets = 42330792
| JournalEntries = 16399816
| JournalSpace = "Normal"
| JournalStatus = "Normal"
| LastBackup = "Mar 26 2017 09:58AM"
| LicenseCurrent = 3
| LicenseCurrentPct = 2
. . .
The USER namespace will be our sandbox. To begin with, let's create a REST application /metrics. To add some very basic security, let's protect our log-in with a password and associate the web application with a resource – let's call it PromResource. We need to disable public access to the resource, so let's do the following:
%SYS>write ##class(Security.Resources).Create("PromResource", "Resource for Metrics web page", "")
1
Our web app settings:
We will also need a user with access to this resource. The user should also be able to read from our database (USER in our case) and save data to it. Apart from this, the user will need read rights for the CACHESYS system database, since we will be switching to the %SYS namespace later in the code. We'll follow the standard scheme, i.e. create a PromRole role with these rights and then create a PromUser user assigned to this role. For the password, let's use "Secret":
%SYS>write ##class(Security.Roles).Create("PromRole","Role for PromResource","PromResource:U,%DB_USER:RW,%DB_CACHESYS:R")
1
%SYS>write ##class(Security.Users).Create("PromUser","PromRole","Secret")
1
It is this user, PromUser, that we will use for authentication in the Prometheus config. Once done, we'll re-read the config by sending a SIGHUP signal to the server process.
A safer config
# cat /opt/prometheus/prometheus-1.4.1.linux-amd64/prometheus.yml
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
scrape_configs:
  - job_name: 'isc_cache'
    metrics_path: '/metrics/cache'
    static_configs:
      - targets: ['localhost:57772']
    basic_auth:
      username: 'PromUser'
      password: 'Secret'
# kill -SIGHUP $(pgrep prometheus)  # or kill -1 $(pgrep prometheus)
Prometheus can now successfully pass authentication for using the web application with metrics.
Metrics will be provided by the my.Metrics request processing class. Here is the implementation:
Class my.Metrics Extends %CSP.REST
{
Parameter ISCPREFIX = "isc_cache";
Parameter DASHPREFIX = {..#ISCPREFIX_"_dashboard"};
XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/cache" Method="GET" Call="getMetrics"/>
</Routes>
}
/// Output should obey the Prometheus exposition formats. Docs:
/// https://prometheus.io/docs/instrumenting/exposition_formats/
///
/// The protocol is line-oriented. A line-feed character (\n) separates lines.
/// The last line must end with a line-feed character. Empty lines are ignored.
ClassMethod getMetrics() As %Status
{
set nl = $c(10)
do ..getDashboardSample(.dashboard)
do ..getClassProperties(dashboard.%ClassName(1), .propList, .descrList)
for i=1:1:$ll(propList) {
set descr = $lg(descrList,i)
set propertyName = $lg(propList,i)
set propertyValue = $property(dashboard, propertyName)
// Prometheus is a time series database,
// so if we get empty (for example, backup metrics) or non-numeric metrics
// we just omit them.
if ((propertyValue '= "") && ('$match(propertyValue, ".*[-A-Za-z ]+.*"))) {
set metricsName = ..#DASHPREFIX_..camelCase2Underscore(propertyName)
set metricsValue = propertyValue
// Write a description (help) for each metric.
// The format is what Prometheus requires.
// Multiline descriptions have to be joined into one string.
write "# HELP "_metricsName_" "_$replace(descr,nl," ")_nl
write metricsName_" "_metricsValue_nl
}
}
write nl
quit $$$OK
}
ClassMethod getDashboardSample(Output dashboard)
{
new $namespace
set $namespace = "%SYS"
set dashboard = ##class(SYS.Stats.Dashboard).Sample()
}
ClassMethod getClassProperties(className As %String, Output propList As %List, Output descrList As %List)
{
new $namespace
set $namespace = "%SYS"
set propList = "", descrList = ""
set properties = ##class(%Dictionary.ClassDefinition).%OpenId(className).Properties
for i=1:1:properties.Count() {
set property = properties.GetAt(i)
set propList = propList_$lb(property.Name)
set descrList = descrList_$lb(property.Description)
}
}
/// Converts a metric name in camel case to an underscore-separated lowercase name
/// Sample: input = WriteDaemon, output = _write_daemon
ClassMethod camelCase2Underscore(metrics As %String) As %String
{
set result = metrics
set regexp = "([A-Z])"
set matcher = ##class(%Regex.Matcher).%New(regexp, metrics)
while (matcher.Locate()) {
set result = matcher.ReplaceAll("_"_"$1")
}
// To lower case
set result = $zcvt(result, "l")
// _e_c_p (_c_s_p) to _ecp (_csp)
set result = $replace(result, "_e_c_p", "_ecp")
set result = $replace(result, "_c_s_p", "_csp")
quit result
}
}
Let's use the console to check that our efforts have not been in vain (we added the --silent flag so that curl doesn't clutter the output with its progress bar):
# curl --user PromUser:Secret --silent -XGET 'http://localhost:57772/metrics/cache' | head -20
# HELP isc_cache_dashboard_application_errors Number of application errors that have been logged.
isc_cache_dashboard_application_errors 0
# HELP isc_cache_dashboard_csp_sessions Most recent number of CSP sessions.
isc_cache_dashboard_csp_sessions 2
# HELP isc_cache_dashboard_cache_efficiency Most recently measured cache efficiency (Global references / (physical reads + writes))
isc_cache_dashboard_cache_efficiency 2378.11
# HELP isc_cache_dashboard_disk_reads Number of physical block read operations since system startup.
isc_cache_dashboard_disk_reads 15101
# HELP isc_cache_dashboard_disk_writes Number of physical block write operations since system startup
isc_cache_dashboard_disk_writes 106233
# HELP isc_cache_dashboard_ecp_app_srv_rate Most recently measured ECP application server traffic in bytes/second.
isc_cache_dashboard_ecp_app_srv_rate 0
# HELP isc_cache_dashboard_ecp_data_srv_rate Most recently measured ECP data server traffic in bytes/second.
isc_cache_dashboard_ecp_data_srv_rate 0
# HELP isc_cache_dashboard_glo_refs Number of Global references since system startup.
isc_cache_dashboard_glo_refs 288545263
# HELP isc_cache_dashboard_glo_refs_per_sec Most recently measured number of Global references per second.
isc_cache_dashboard_glo_refs_per_sec 273.00
# HELP isc_cache_dashboard_glo_sets Number of Global Sets and Kills since system startup.
isc_cache_dashboard_glo_sets 44584646
We can now check the same in the Prometheus interface:
And here is the list of our metrics:
We won’t focus on viewing them in Prometheus itself. You can select the necessary metric and click the “Execute” button. Select the “Graph” tab to see the graph (this one shows cache efficiency):
Visualization of metrics
For visualization purposes, let’s install Grafana. For this article, I chose installation from a tarball, though there are other installation options, from packages to a container. Let's perform the following steps after creating the /opt/grafana folder and switching to it.
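A minimal sketch of those steps, assuming a tarball install into /opt/grafana; the version number and download URL below are placeholders, so take the actual tarball link from the Grafana project website listed under Links:

# pwd
/opt/grafana
# wget https://grafana.example.com/grafana-X.Y.Z.linux-x64.tar.gz   # placeholder version and URL - copy the real link from the Grafana downloads page
# tar -xzf grafana-X.Y.Z.linux-x64.tar.gz --strip-components=1      # unpack into the current folder so that ./bin/grafana-server is available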
Let's leave the config unchanged for now. In the last step, we launch Grafana in background mode. We'll be saving Grafana's log to a file, just like we did with Prometheus:
# ./bin/grafana-server > /var/log/grafana.log 2>&1 &
By default, Grafana’s web interface is accessible via port 3000. Login/password: admin/admin.
Detailed instructions on making Prometheus work with Grafana are available here. In short, we need to add a new Data Source of the Prometheus type. Select your option for direct/proxy access:
Once done, we need to add a dashboard with the necessary panels. A test sample of a dashboard is publicly available, along with the code of the metrics collection class. A dashboard can simply be imported into Grafana (Dashboards → Import):
We'll get the following after import:
Save the dashboard:
Time range and update period can be selected in the top right corner:
Examples of monitoring types
Let's test the monitoring of calls to globals:
USER>for i=1:1:1000000 {set ^prometheus(i) = i}
USER>kill ^prometheus
We can see that the number of references to globals per second has increased, while cache efficiency has dropped (the ^prometheus global hasn’t been cached yet):
Let's check our license usage. To do this, let's create a primitive CSP page called PromTest.csp in the USER namespace:
<html><head><title>Prometheus Test Page</title></head><body>Monitoring works fine!</body></html>
And hit it a number of times (we assume that the /csp/user application is not password-protected):
# ab -n77 http://localhost:57772/csp/user/PromTest.csp
We'll see the following picture for license usage:
Conclusions
As we can see, implementing the monitoring functionality is not hard at all. Even after a few initial steps, we can get important information about the work of the system, such as license usage, the efficiency of global caching, and application errors. We used SYS.Stats.Dashboard for this tutorial, but other classes in the SYS, %SYSTEM, and %SYS packages also deserve attention. You can also write your own class that supplies custom metrics for your own application – for instance, the number of documents of a particular type. Some useful metrics will eventually be compiled into a separate template for Grafana.
To be continued
If you are interested in learning more about this, I will write more on the subject. Here are my plans:
Preparing a Grafana template with metrics for the logging daemon. It would be nice to make some sort of a graphical equivalent of the ^mgstat tool – at least for some of its metrics.
Password protection for web applications is good, but it would be nice to verify the possibility of using certificates.
Use of Prometheus, Grafana and some exporters for Prometheus as Docker containers.
Use of discovery services for automatically adding new Caché instances to the Prometheus monitoring list. It's also where I'd like to demonstrate (in practice) how convenient Grafana and its templates are. This is something like dynamic panels, where metrics for a particular selected server are shown, all on the same Dashboard.
Prometheus Alert Manager.
Prometheus configuration settings related to the duration of data storage, as well as possible optimizations for systems with a large number of metrics and a short statistics collection interval.
Various subtleties and nuances that will transpire along the way.
Links
During the preparation of this article, I visited a number of useful sites and watched a great many videos:
Prometheus project website
Grafana project website
Blog of Brian Brazil, one of the Prometheus developers
Tutorial on DigitalOcean
Some videos from Robust Perception
Many videos from a conference devoted to Prometheus
Thank you for reading all the way down to this line!

Hi Mikhail, very nice! And very useful. Have you done any work on templating, e.g. so one or more systems may be selected to be displayed on one dashboard?

Hi, Murray! Thanks for your comment! I'll try to describe templating soon; I should add it to my plan. For now, an article about ^mgstat is almost ready (in my head). So stay tuned for more.

Hi, Murray! You can read the continuation of the article here. Templates are used there.

Hi Mikhail, very nice! I am configuring the process here! Where are the metric values stored?
What kind of metrics can be added via SetSensor? Only counters, or also gauges and histograms?

Metrics in this approach are stored in Prometheus. As Prometheus is a time-series database, you can store any numeric metric there, either a counter or a gauge.

Hum... OK, I understand that part, but if I extend %SYS.Monitor.SAM.Abstract (https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_rest) and the SetTimer method, do you know where it is stored?

I'm not sure, but I think the SAM implementation is based on System Monitor (https://docs.intersystems.com/iris20194/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_healthmon) in some fashion. Could you clarify your task? Do you want to know the exact name of the Global where samples are stored? For what? The link you've shared describes how to expose your custom numeric metric so that it can be read by Prometheus, for instance, and then stored there.

💡 This article is considered an InterSystems Data Platform Best Practice.
Announcement
Anastasia Dyubaylo · Mar 23, 2018
Hi Community!

Please welcome the InterSystems Developer Community SubReddit! Here you will find all the most interesting announcements from the InterSystems Developer Community, i.e. useful articles, discussions, announcements and events.

A little about Reddit: it is a social news aggregation, web content rating, and discussion website.

See what the DC SubReddit looks like:

Subscribe to the DC SubReddit and maybe it will become one of your favorites!
Announcement
Anastasia Dyubaylo · Oct 16, 2017
Hi, Community!

We've launched a Facebook Page for the InterSystems Developer Community! Follow it to keep in touch with what happens on the InterSystems Developer Community. Here you can find useful articles, answers, announcements, hot discussions, and best practices based on InterSystems IRIS, Caché, Ensemble, DeepSee and iKnow.

The permalink to join the DC Facebook Page can also be found on the right of every DC page:

Follow us now to be in the know, and don't forget to like the page!
Announcement
Michelle Spisak · Oct 17, 2017
Learning Services Live Webinars are back! At this year’s Global Summit, InterSystems debuted InterSystems IRIS Data Platform™, a single, comprehensive product that provides capabilities spanning data management, interoperability, transaction processing, and analytics. InterSystems IRIS sets a new level of performance for the rapid development and deployment of data-rich and mission-critical applications. Now is your chance to learn more! Joe Lichtenberg, Director of Product and Industry Marketing for InterSystems, presents "Introducing InterSystems IRIS Data Platform", a high-level description of the business drivers and capabilities behind InterSystems IRIS.

Webinar recording is available now!
Announcement
Anastasia Dyubaylo · Oct 26, 2017
Hi, Community!
Please find a new session recording from Global Summit 2017:
An InterSystems Guide to the Data Galaxy
This video explains the concept of an open analytics platform for the enterprise.
@Benjamin.Deboe describes how InterSystems technology pairs up with industry standards and open-source technology to provide a solid platform for analytics.
You can also see the additional resources here.
Don't forget to subscribe to the InterSystems Developers YouTube Channel
Enjoy!
Article
David Loveluck · Nov 8, 2017
Application Performance Monitoring
Tools in InterSystems technology

Back in August, in preparation for Global Summit, I published a brief explanation of Application Performance Management (APM). To follow up on that, I have written, and will be publishing over the coming weeks, a series of articles on APM.

One major element of APM is the construction of a historic record of application activity, performance and resource usage. Crucially for APM, the measurement starts with the application and what users are doing with the application. By relating everything to business activity, you can focus on improving the level of service provided to users and the value to the line of business that is ultimately paying for the application.

Ideally, an application includes instrumentation that reports on activity in business terms, such as ‘display the operating theater floor plan’ or ‘register student for course’, and gives the count, the response time and the resources used for each on an hourly or daily basis. However, many applications don’t have this capability, and you have to make do with the closest approximation you can.

There are many tools (some expensive, some open source) available to monitor applications, ranging from JavaScript injection for monitoring user experience to middleware and network probes for measuring application communication. These articles will focus on the tools that are available within InterSystems products. I will describe how I have used these tools to manage the performance of applications and improve customer experiences.

The tools described include:

CSP Page Statistics
SQL Query Statistics
Ensemble Activity Monitor

Even if you do have good application instrumentation, additional system monitoring can provide valuable insights, so I will also include an explanation of how to configure and use:

Caché History Monitor

I will also expand my earlier explanation of APM, the reasons for monitoring performance, and the different audiences you are trying to help with the information you gather.

Added a link to 'Using the Cache History Monitor'
Announcement
Evgeny Shvarov · Dec 24, 2017
Hi, Community!

I'm pleased to announce that this December 2017 marks 2 years of the InterSystems Developer Community up and running! Together we did a lot this year, and a lot more is planned for the next year!

Our Community is growing: in November we had 3,700 registered members (2,200 last November), and 13,000 web users visited the site in November 2017 (7,000 last year).

Thank you for using it, thanks for making it useful, thanks for your knowledge, experience, and your passion!

And, may I ask you to share in the comments the article or question which was most helpful for you this year?

Happy Birthday, Developer Community!

And to start it: for me, the most helpful article this year was REST FORMS Queries - yes, I'm using REST FORMS a lot, thanks [@Eduard.Lebedyuk]! Another is Search InterSystems documentation using iKnow and iFind technologies. Two helpful questions were mine (of course ;): How to find duplicates in a large text field, Storage Schema in VCS: to Store Or Not to Store? and How to get the measure for the last day in a month in DeepSee.

There were many interesting articles and discussions this year. I'd like to thank all of you who participated and helped our community grow. @Murray.Oldfield's series on InterSystems Data Platforms Capacity Planning and Performance was a highly informative read.
Article
Developer Community Admin · Oct 21, 2015
InterSystems Caché 2015.1 soars from 6 million to more than 21 million end-user database accesses per second on the Intel® Xeon® processor E7 v2 family, compared to Caché 2013.1 on the Intel® Xeon® processor E5 family.

Overview

With data volumes soaring and the opportunities to derive value from data rising, database scalability has become a crucial challenge for a wide range of industries. In healthcare, the rising demand for healthcare services and significant changes in the regulatory and business climates can make the challenges particularly acute. How can organizations scale their databases in an efficient and cost-effective way?

The InterSystems Caché 2015.1 data platform offers a solution. Identified as a Leader in the Gartner Magic Quadrant for Operational Database Management Systems,1 Caché combines advanced data management, integration, and analytics. Caché 2015.1 is optimized to take advantage of modern multi-core architectures and represents a new generation of ultra-high-performance database technology. Running on the Intel® Xeon® processor E7 v2 family, Caché 2015.1 provides a robust, affordable solution for scalable, data-intensive computing.

To examine the scalability of Caché 2015.1, InterSystems worked with performance engineers at Epic, whose electronic medical records (EMRs) and other healthcare applications are deployed by some of the world’s largest hospitals, delivery systems, and other healthcare organizations. The test team found that Caché 2015.1 with Enterprise Cache Protocol® (ECP®) technology on the Intel Xeon processor E7 v2 family achieved more than 21 million end-user database accesses per second (known in the Caché environment as Global References per Second, or GREFs) while maintaining excellent response times. This was more than triple the load level of 6 million GREFs achieved by Caché 2013.1 on the Intel® Xeon® processor E5 family.

"The scalability and performance improvements of Caché version 2015.1 are terrific. Almost doubling the scalability, this version provides a key strategic advantage for our user organizations who are pursuing large-scale medical informatics programs as well as aggressive growth strategies in preparation for the volume-to-value transformation in healthcare."
– Carl Dvorak, President, Epic