Article
Murray Oldfield · Nov 14, 2019
Released with no formal announcement in [IRIS preview release 2019.4](https://community.intersystems.com/post/intersystems-iris-and-iris-health-20194-preview-published "IRIS preview release 2019.4") is the /api/monitor service, which exposes IRIS metrics in **_Prometheus_** format. This is big news for anyone wanting to use IRIS metrics as part of their monitoring and alerting solution. The API is a component of the new IRIS _System Alerting and Monitoring (SAM)_ solution that will be released in an upcoming version of IRIS.
>However, you do not have to wait for SAM to start planning and trialling this API to monitor your IRIS instances. In future posts, I will dig deeper into the metrics available and what they mean and provide example interactive dashboards. But first, let me start with some background and a few questions and answers.
IRIS (and Caché) is always collecting dozens of metrics about itself and the platform it is running on. There have always been [multiple ways to collect these metrics to monitor Caché and IRIS](https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GCM "multiple ways to collect these metrics to monitor Caché and IRIS"). I have found that few installations use IRIS and Caché built-in solutions. For example, History Monitor has been available for a long time as a historical database of performance and system usage metrics. However, there was no obvious way to surface these metrics and instrument systems in real-time.
IRIS platform solutions (along with the rest of the world) are moving from single monolithic applications running on a few on-premises instances to distributed solutions deployed 'anywhere'. For many use cases existing IRIS monitoring options do not fit these new paradigms. Rather than completely reinvent the wheel InterSystems looked to popular and proven current Open Source solutions for monitoring and alerting.
## Prometheus?
Prometheus is a well-known and widely deployed open source monitoring system based on proven technology. It has a wide variety of plugins. It is designed to work well within the cloud environment, but is just as useful on-premises. Plugins cover operating systems, web servers such as Apache, and many other applications. Prometheus is often used with a front-end client, for example, _Grafana_, which provides a great UI/UX experience that is extremely customisable.
## Grafana?
Grafana is also open source. As this series of posts progresses, I will provide sample templates of monitoring dashboards for common scenarios. You can use the samples as a base to design dashboards for what you care about. The real power comes when you combine IRIS metrics in context with metrics from your whole solution stack: the platform components, the operating system, IRIS, and especially instrumentation from your own applications.
## Haven't I seen this before?
Monitoring IRIS and Caché with Prometheus and Grafana is not new. I have been using these applications for several years to monitor my development and test systems. If you search the Developer Community for "Prometheus", you will find other posts ([for example, some excellent posts by Mikhail Khomenko](https://community.intersystems.com/post/making-prometheus-monitoring-intersystems-cach%C3%A9 "for example, some excellent posts by Mikhail Khomenko")) that show how to expose Caché metrics for use by Prometheus.
>The difference now is that the /api/monitor API is included and enabled by default. There is no requirement to code your own classes to expose metrics.
# Prometheus Primer
Here is a quick orientation to Prometheus and some terminology. The aim is to give you the high-level view, lay some groundwork, and open the door to thinking about how to visualise or consume the metrics provided by IRIS or other sources.
Prometheus works by _scraping_ or pulling time series data exposed from applications as HTTP endpoints (APIs such as IRIS /api/monitor). _Exporters_ and client libraries exist for many languages, frameworks, and open-source applications — for example, for web servers like Apache, operating systems, docker, Kubernetes, databases, and now IRIS.
Exporters are used to instrument applications and services and to expose relevant metrics on an endpoint for scraping. Standard components such as web servers, databases, and the like are supported by core exporters. Many other exporters are available open source from the Prometheus community.
## Prometheus Terminology
A few key terms are useful to know:
- **Targets** are the things you care about: a host, an application, or a service such as Apache, IRIS, or your own application.
- Prometheus **scrapes** targets over HTTP collecting metrics as time-series data.
- **Time-series data** is exposed by applications, for example, IRIS or via exporters.
- **Exporters** are available for things you don't control like Linux kernel metrics.
- The resulting time-series data is stored locally on the Prometheus server in a database.**
- The time-series database can be queried using an optimised **query language (PromQL)**. For example, to create alerts or by client applications such as Grafana, to display the metrics in a dashboard.
>** **Spoiler alert:** for security, scaling, high availability, and other operational efficiency reasons, the database used for Prometheus time-series data in the new SAM solution is IRIS! However, access to the Prometheus database on IRIS is transparent, and applications such as Grafana do not know or care.
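PromQL queries can be very simple. As an illustration (using the journal free-space metric that appears later in this article; the 1 GB threshold is an assumption for the example), an alerting expression might look like:

```promql
# Fire when WIJ journal free space drops below 1 GB (the metric value is in MB)
iris_jrn_free_space{id="WIJ"} < 1024
```

Grafana panels use the same query language to draw time-series graphs.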
### Prometheus Data Model
Metrics returned by the API are in Prometheus format. Prometheus uses a simple text-based exposition format with one metric per line; the format is:
metric_name{label1="value1",label2="value2"} value
Metrics can have labels as (key, value) pairs. Labels are a powerful way to filter metrics as dimensions. As an example, examine a single metric returned for IRIS /api/monitor. In this case journal free space:
iris_jrn_free_space{id="WIJ",dir="/fast/wij/"} 401562.83
The identifier tells you what the metric is and where it came from:
iris_jrn_free_space
Multiple labels can be used to decorate the metrics, and then used to filter and query. In this example, you can see the WIJ and the directory where the WIJ is stored:
id="WIJ",dir="/fast/wij/"
And a value: `401562.83` (MB).
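The exposition format above is easy to parse programmatically. As a minimal sketch (not an official client library, and ignoring HELP/TYPE comment lines and escaping inside label values), here is a Python function that splits a metric line into its name, labels, and value:

```python
import re

def parse_metric(line: str):
    """Parse one Prometheus exposition-format line into (name, labels, value)."""
    m = re.match(r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
                 r'(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$', line)
    if not m:
        raise ValueError(f"not a metric line: {line!r}")
    labels = {}
    if m.group('labels'):
        for pair in m.group('labels').split(','):
            key, val = pair.split('=', 1)
            labels[key.strip()] = val.strip().strip('"')
    return m.group('name'), labels, float(m.group('value'))

# The journal free-space example from above
name, labels, value = parse_metric(
    'iris_jrn_free_space{id="WIJ",dir="/fast/wij/"} 401562.83')
```

A real deployment would let Prometheus do this for you; the sketch just makes the name/labels/value structure concrete.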
## What IRIS metrics are available?
The [preview documentation](https://irisdocs.intersystems.com/iris20194/csp/docbook/Doc.View.cls?KEY=GCM_rest "Will be subject to changes") has a list of metrics. However, be aware there may be changes. You can also simply query the `/api/monitor/metrics` endpoint and see the list. I use [Postman](https://www.getpostman.com "Postman") which I will demonstrate in the next community post.
# What should I monitor?
Keep these points in mind as you think about how you will monitor your systems and applications.
- When you can, instrument key metrics that affect users.
  - Users don't care that one of your machines is short of CPU.
  - Users care if the service is slow or having errors.
  - For your primary dashboards, focus on high-level metrics that directly impact users.
- For your dashboards, avoid a wall of graphs.
  - Humans can't deal with too much data at once.
  - For example, have a dashboard per service.
- Think about services, not machines.
  - Once you have isolated a problem to one service, you can drill down and see whether one machine is the problem.
# References
**Documentation and downloads** for: [Prometheus](https://prometheus.io "Prometheus") and [Grafana](https://grafana.com "Grafana")
I presented a pre-release overview of SAM (including Prometheus and Grafana) at **InterSystems Global Summit 2019**; you can find [the link at InterSystems learning services](https://learning.intersystems.com/mod/page/view.php?id=5599 "Learning Services"). If the direct link does not work, go to the [InterSystems learning services web site](https://learning.intersystems.com "Learning Services") and search for: "System Alerting and Monitoring Made Easy"
Search here on the community for "Prometheus" and "Grafana".
Please include node_exporter setup. What gets put into isc_prometheus.yml? This is what the doc says to put in isc_prometheus.yml:
- job_name: NODE
  metrics_path: /metrics
  scheme: http
  static_configs:
    - labels:
        cluster: "2"
        group: node
      targets:
        - csc2cxn00020924.cloud.kp.org:9100
        - csc2cxn00021271.cloud.kp.org:9100
It does not work.
The node_exporter is installed and running.
From what I can see the values returned are updated very quickly - maybe every second? I'm unclear as to how to contextualize the metrics for a periodic collection. Specifically, if I call the API every minute I may get a value for global references that is very low or very high - but it may not be indicative of the value over time. Is there any information on how the metrics are calculated internally that might help? Single points in time may be very deceptive.
Article
Evgeny Shvarov · Nov 19, 2019
Hi developers!
I just want to share with you the knowledge aka experience which could save you a few hours someday.
If you are building a REST API with IRIS that contains more than one level of "/", e.g. '/patients/all', don't forget to add the parameter 'recurse=1' to your deployment script in %Installer; otherwise, all entries of the second and higher levels won't work, while all level-1 entries will.
/patients
- will work, but
/patients/all
- won't.
Here is an example of the CSPApplication section which fixes the issue and which you may want to use in your %Installer class:
<CSPApplication Url="${CSPAPP}"
Recurse="1"
Directory="${CSPAPPDIR}"
Grant="${RESOURCE},%SQL"
AuthenticationMethods="96"
/>
Article
Eduard Lebedyuk · Nov 22, 2019
This quick guide shows how to serve HTTPS requests with InterSystems API Management. The advantage here is that you keep your certificates on one separate server and don't need to configure each backend web server separately.
Here's how:
1. Buy the domain name.
2. Adjust DNS records from your domain to the IAM IP address.
3. Generate HTTPS certificate and private key. I use Let's Encrypt - it's free.
4. Start IAM if you didn't already.
5. Send this request to IAM:
POST http://host:8001/certificates/
{
"cert": "-----BEGIN CERTIFICATE-----...",
"key": "-----BEGIN PRIVATE KEY-----...",
"snis": [
"host"
]
}
Note: replace newlines in cert and key with \n.
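An easy way to build that JSON body is to let a JSON serializer do the newline escaping for you. Here's a minimal Python sketch (the PEM contents and host below are placeholders, not real keys):

```python
import json

def build_certificate_payload(cert_pem: str, key_pem: str, host: str) -> str:
    """Build the JSON body for the IAM /certificates/ request.
    json.dumps escapes the newlines in the PEM text as \\n for us."""
    return json.dumps({
        "cert": cert_pem,
        "key": key_pem,
        "snis": [host],
    })

# Placeholder PEM fragments for illustration only
cert = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
key = "-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----\n"
payload = build_certificate_payload(cert, key, "example.com")
```

You can then POST `payload` to the `/certificates/` endpoint with any HTTP client.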
You'll get a response, save id value from it.
6. Go to your IAM workspace, open SNIs, and create a new SNI with the name set to your host and ssl_certificate_id set to the id from the previous step.
7. Update your routes to use https protocol (leave only https to force secure connection, or specify http, https to allow both protocols)
8. Test HTTPS requests by sending them to https://host:8443/<your route> - that's where IAM listens for HTTPS connections by default.
Eduard, thank you for a very good webinar.
You mentioned that IAM can be helpful even if there is "service-mix": some services are IRIS based, others - not. How can IAM help with non-IRIS services? Can any Target Object be non-IRIS base?
Can any Target Object be non-IRIS base?
Absolutely. The services you offer via IAM can be sourced anywhere. Both from InterSystems IRIS and not.
How can IAM help with non-IRIS services?
All the benefits you get from using IAM (ease of administration, control, analytics) are available for both InterSystems IRIS-based and non-InterSystems IRIS-based services.
Announcement
Anastasia Dyubaylo · Sep 3, 2019
Hi Community!
We are super excited to announce the Boston FHIR @ InterSystems Meetup on 10th of September at the InterSystems meeting space!
There will be two talks with Q&A and networking.
Doors open at 5:30pm, we should start the first talk around 6pm. We will have a short break between talks for announcements, including job opportunities.
Please check the details below.
#1 We are in the middle of changes in healthcare technology that affect the strategies of companies and organizations across the globe, including many startups right here in Massachusetts. Micky Tripathi from the Massachusetts eHealth Collaborative is going to talk to us about the opportunities and consequences of API-based healthcare.
By Micky Tripathi - MAeHC
#2 FHIR Analytics
The establishment of FHIR as a new healthcare data format creates new opportunities and challenges. Health professionals would like to acquire patient data from Electronic Health Records (EHR) with FHIR, and use it for population health management and research. FHIR provides resources and foundations based on XML and JSON data structures. However, traditional analytic tools are difficult to use with these structures. We created a prototype application to ingest FHIR bundles and save the Patient and Observation resources as objects/tables in InterSystems IRIS for Health. Developers can then easily create derived "fact tables" that de-normalize these tables for exploration and analytics. We will demo this application and our analytics tools using the InterSystems IRIS for Health platform.
By Patrick Jamieson, M.D., Product Manager for InterSystems IRIS for Health, and Carmen Logue, Product Manager - Analytics and AI
So, remember!
Date and time: Tuesday, 10 September 2019 5:30 pm to 7:30 pm
Venue: 1 Memorial Dr, Cambridge, MA 02142, USA
Event webpage: Boston FHIR @ InterSystems Meetup
Article
Evgeny Shvarov · Sep 6, 2019
Hi Developers!
InterSystems Package Manager (ZPM) is a great thing, but it is even better if you don't need to install it and can use it immediately.
There are several ways to do this; here is one approach: building an IRIS container with ZPM using a dockerfile.
I've prepared a repository with a few lines in the dockerfile that download and install the latest version of ZPM.
Add these lines to your standard dockerfile for IRIS community edition and you will have ZPM installed and ready to use.
To download the latest ZPM client:
RUN mkdir -p /tmp/deps \
&& cd /tmp/deps \
&& wget -q https://pm.community.intersystems.com/packages/zpm/latest/installer -O zpm.xml
to install ZPM into IRIS:
" Do \$system.OBJ.Load(\"/tmp/deps/zpm.xml\", \"ck\")" \
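Putting the two steps together, a complete dockerfile might look like the sketch below. The base image tag and the exact `iris session` invocation are assumptions here; check them against the repository mentioned above:

```dockerfile
# Base image name/tag is an assumption - use your IRIS community edition image
FROM intersystems/iris-community:2019.4

USER root
RUN mkdir -p /tmp/deps \
 && cd /tmp/deps \
 && wget -q https://pm.community.intersystems.com/packages/zpm/latest/installer -O zpm.xml \
 && chown -R irisowner /tmp/deps
USER irisowner

# Start IRIS, load the ZPM installer package, and shut down cleanly
RUN iris start IRIS \
 && printf 'Do $system.OBJ.Load("/tmp/deps/zpm.xml","ck")\nHalt\n' \
    | iris session IRIS \
 && iris stop IRIS quietly
```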
Great!
To try ZPM with this repository do the following:
$ git clone https://github.com/intersystems-community/objectscript-zpm-template.git
Build and run the repo:
$ docker-compose up -d
Open IRIS terminal:
$ docker-compose exec iris iris session iris
USER>
Call ZPM:
USER>zpm
zpm: USER>
Install webterminal
zpm: USER>install webterminal
[webterminal] Reload START
[webterminal] Reload SUCCESS
[webterminal] Module object refreshed.
[webterminal] Validate START
[webterminal] Validate SUCCESS
[webterminal] Compile START
[webterminal] Compile SUCCESS
[webterminal] Activate START
[webterminal] Configure START
[webterminal] Configure SUCCESS
[webterminal] Activate SUCCESS
zpm: USER>
Use it!
And take a look at the whole process in this gif:
It turns out that we don't need a special repository to add ZPM to your docker container. You just need another dockerfile - like this one. And here is the related docker-compose to make a handy start. See how it works:
Article
Dmitrii Kuznetsov · Oct 7, 2019
How can you allow computers to trust one another in your absence while maintaining security and privacy?
“A Dry Martini”, he said. “One. In a deep champagne goblet.”
“Oui, monsieur.”
“Just a moment. Three measures of Gordon's, one of vodka, half a measure of Kina Lillet. Shake it very well until it’s ice-cold, then add a large thin slice of lemon peel. Got it?”
“Certainly, monsieur.” The barman seemed pleased with the idea.
— Casino Royale, Ian Fleming, 1953
OAuth helps to separate services with user credentials from “working” databases, both physically and geographically. It thereby strengthens the protection of identification data and, if necessary, helps you comply with the requirements of countries' data protection laws.
With OAuth, you can provide the user with the ability to work safely from multiple devices at once, while "exposing" personal data to various services and applications as little as possible. You can also avoid taking on "excess" data about users of your services (i.e. you can process data in a depersonalized form).
If you use InterSystems IRIS, you get a complete set of ready-made tools for testing and deploying OAuth and OIDC services, both autonomously and in cooperation with third-party software products.
OAuth 2.0 and OpenID Connect
OAuth and OpenID Connect — known as OIDC or simply OpenID — serve as a universal combination of open protocols for delegating access and identification — and in the 21st century, it seems to be a favorite. No one has come up with a better option for large-scale use. It's especially popular with frontenders because it sits on top of HTTP(S) protocols and uses a JWT (JSON Web Token) container.
OpenID works using OAuth — it is, in fact, a wrapper for OAuth. Using OpenID as an open standard for the authentication and creation of digital identification systems is nothing new for developers. As of 2019, it is in its 14th year (and its third version). It is popular in web and mobile development and in enterprise systems.
Its partner, the OAuth open standard for delegating access, is 12 years old, and it's been nine years since the relevant RFC 5849 standard appeared. For the purposes of this article, we will rely on the current version of the protocol, OAuth 2.0, and the current RFC 6749. (OAuth 2.0 is not compatible with its predecessor, OAuth 1.0.)
Strictly speaking, OAuth is not a protocol, but a set of rules (a scheme) for separating and transferring user identification operations to a separate trusted server when implementing an access-rights restriction architecture in software systems.
Be aware: OAuth can't say anything about a specific user! Who the user is, or where the user is, or even whether the user is currently at a computer or not. But OAuth makes it possible to interact with systems without user participation, using pre-issued access tokens. This is an important point (see "User Authentication with OAuth 2.0" on the OAuth site for more information).
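Since access tokens in OAuth 2.0/OIDC are commonly JWTs, you can inspect (but not trust!) a token's contents by base64url-decoding its segments. Here's a Python sketch with a synthetic token; it performs no signature verification and is for illustration only:

```python
import base64
import json

def decode_jwt_part(part: str) -> dict:
    """Decode one base64url-encoded JWT segment (header or payload).
    No signature check: this only inspects the token's contents."""
    padded = part + "=" * (-len(part) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

def b64url(obj: dict) -> str:
    """Encode a dict as an unpadded base64url JWT segment."""
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# A hypothetical token: header.payload.signature
token = b64url({"alg": "RS512", "typ": "JWT"}) + "." + \
        b64url({"sub": "user1", "scope": "scope1"}) + ".signature"

header = decode_jwt_part(token.split(".")[0])
payload = decode_jwt_part(token.split(".")[1])
```

In production, always validate the signature (as the `%SYS.OAuth2.Validation` class does later in this article) rather than trusting a decoded payload.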
The User-Managed Access (UMA) protocol is also based on OAuth. Using OAuth, OIDC and UMA together make it possible to implement a protected identity and access management (IdM, IAM) system in areas such as:
Using a patient's HEART (Health Relationship Trust) personal data profile in medicine.
Consumer Identity and Access Management (CIAM) platforms for manufacturing and trading companies.
Personalizing digital certificates for smart devices in IoT (Internet of Things) systems using the OAuth 2.0 Internet of Things (IoT) Client Credentials Grant.
A New Venn Of Access Control For The API Economy
Above all, do not store personal data in the same place as the rest of the system. Separate authentication and authorization physically. And ideally, give the identification and authentication to the individual person. Never store them yourself. Trust the owner's device.
Trust and Authentication
It is not a best practice to store users' personal data either in one’s own app or in a combined storage location along with a working database. In other words, we choose someone we trust to provide us with this service.
The OAuth scheme is made up of the following parts:
The user
The client app
The identification service
The resource server
The action takes place in a web browser on the user's computer. The user has an account with the identification service. The client app has a signed contract with the identification service and reciprocal interfaces. The resource server trusts the identification service to issue access keys to anyone it can identify.
The user runs the client web app, requesting a resource. The client app must present a key to that resource to gain access. If the user doesn’t have a key, then the client app connects with an identification service with which it has a contract for issuing keys to the resource server (passing the user on to the identification service).
The Identification Service asks what kind of keys are required.
The user provides a password to access the resource. At this point, the user has been authenticated and identification of the user has been confirmed, thus providing the key to the resource (passing the user back to the client app), and the resource is made available to the user.
Implementing an Authorization Service
On the Intersystems IRIS platform, you can assemble a service from different platforms as needed. For example:
Configure and launch an OAuth server with the demo client registered on it.
Configure a demo OAuth client by associating it with an OAuth server and web resources.
Develop client apps that can use OAuth. You can use Java, Python, C#, or NodeJS. Below is an example of the application code in ObjectScript.
There are multiple settings in OAuth, so checklists can be helpful. Let's walk through an example. Go to the IRIS management portal and select the section System Administration > Security > OAuth 2.0 > Server.
Each item will then contain the name of a settings line and a colon, followed by an example or explanation, if necessary. As an alternative, you can use the screenshot hints in Daniel Kutac's three-part article, InterSystems IRIS Open Authorization Framework (OAuth 2.0) implementation - part 1, part 2, and part 3.
Note that all of the following screenshots are meant to serve as examples. You’ll likely need to choose different options when creating your own applications.
On the General Settings tab, use these settings:
Description: provide a description of the configuration, such as "Authorization server".
The endpoint of the generator (hereinafter EPG) host name: DNS name of your server.
Supported permission types (select at least one):
Authorization code
Implicit
Account details: Resource, Owner, Password
Client account details
SSL/TLS configuration: oauthserver
On the Scopes tab:
Add supported scopes: scope1 in our example
On the Intervals tab:
Access Key Interval: 3600
Authorization Code Interval: 60
Update Key Interval: 86400
Session Interruption Interval: 86400
Validity period of the client key (client secret): 0
On the JWT Settings tab:
Entry algorithm: RS512
Key Management Algorithm: RSA-OAEP
Content Encryption Algorithm: A256CBC-HS512
On the Customization tab:
Identify Class: %OAuth2.Server.Authenticate
Check User Class: %OAuth2.Server.Validate
Session Service Class: OAuth2.Server.Session
Generate Key Class: %OAuth2.Server.JWT
Custom Namespace: %SYS
Customization Roles (select at least one): %DB_IRISSYS and %Manager
Now save the changes.
The next step is registering the client on the OAuth server. Click the Customer Description button, then click Create Customer Description.
On the General Settings tab, enter the following information:
Name: OAuthClient
Description: provide a brief description
Client Type: Confidential
Redirect URLs: the address of the point to return to the app after identification from oauthclient.
Supported grant types:
Authorization code: yes
Implicit
Account details: Resource, Owner, Password
Client account details
JWT authorization
Supported response types: Select all of the following:
code
id_token
id_token key
token
Authorization type: Simple
The Client Account Details tab should be auto-completed, but ensure the information here is correct for the client.
On the Client Information tab:
Authorization screen:
Client name
Logo URL
Client homepage URL
Policy URL
Terms of Service URL
Now configure the binding on the OAuth server client by going to System Administration > Security > OAuth 2.0 > Client.
Create a Server Description:
The endpoint of the generator: taken from the general parameters of the server (see above).
SSL/TLS configuration: choose from the preconfigured list.
Authorization server:
Authorization endpoint: EPG + /authorize
Key endpoint: EPG + /token
User endpoint: EPG + /userinfo
Key self-test endpoint: EPG + /revocation
Key termination endpoint: EPG + /introspection
JSON Web Token (JWT) settings:
Other source besides dynamic registration: choose JWKS from URL
URL: EPG + /jwks
From this list, for example, you can see (scopes_supported and claims_supported) that the server can provide the OAuth-client with different information about the user. And it's worth noting that when implementing your application, you should ask the user what data they are ready to share. In the example below, we will only ask for permission for scope1.
Now save the configuration.
If there is an error indicating the SSL configuration, then go to Settings > System Administration > Security > SSL/TLS Configurations and remove the configuration.
Now we're ready to set up an OAuth client: System Administration > Security > OAuth 2.0 > Client > Client configurations > Create Client configurations.
On the General tab, use these settings:
Application Name: OAuthClient
Client Name: OAuthClient
Description: enter a description
Enabled: Yes
Client Type: Confidential
SSL/TLS configuration: select oauthclient
Client Redirect URL: the DNS name of your server
Required Permission Types:
Authorization code: Yes
Implicit
Account details: Resource, Owner, Password
Client account details
JWT authorization
Authorization type: Simple
On Client Information tab:
Authorization screen:
Logo URL
Client homepage URL
Policy URL
Terms of Service URL
Default volume: taken from those specified earlier on the server (for example, scope1)
Contact email addresses: enter addresses, separated by commas
Default max age (in seconds): maximum authentication age or omit this option
On the JWT Settings tab:
JSON Web Token (JWT) settings
Creating JWT settings from X509 account details
IDToken Algorithms:
Signing: RS256
Encryption: A256CBC
Key: RSA-OAEP
Userinfo Algorithms
Access Token Algorithms
Query Algorithms
On the Client Credentials tab:
Client ID: as issued when the client registered on the server (see above).
Client ID Issued: isn't filled in
Client secret: as issued when the client registered on the server (see above).
Client Secret Expiry Period: isn't filled in
Client Registration URI: isn't filled in
Save the configuration.
Web app with OAuth authorization
OAuth relies on the communication channels between the participants (server, clients, web application, user's browser, resource server) being protected in some way. Most often this role is played by SSL/TLS, but OAuth will also work over unprotected channels. The Keycloak server, for example, uses the HTTP protocol by default and does without protection, which simplifies development and debugging; for real-world use of OAuth services, the Keycloak documentation states that channel protection is strictly mandatory. The InterSystems IRIS developers take a stricter approach to OAuth: the use of SSL/TLS is mandatory. The only simplification is that you can use self-signed certificates or take advantage of the built-in IRIS PKI service (System administration >> Security >> Public key system).
The user's authorization is verified by explicitly specifying two parameters: the name of your application as registered on the OAuth server and in the OAuth client, and the scope.
Parameter OAUTH2APPNAME = "OAuthClient";
set isAuthorized = ##class(%SYS.OAuth2.AccessToken).IsAuthorized(
..#OAUTH2APPNAME,
.sessionId,
"scope1",
.accessToken,
.idtoken,
.responseProperties,
.error)
If the user is not authorized, we prepare a link to request user identification and permission to work with our application. Here we need to specify not only the name of the application (as registered on the OAuth server and in the OAuth client) and the requested scope, but also the redirect URL - the point in the web application to which the user should be returned.
Parameter OAUTH2CLIENTREDIRECTURI = "https://52773b-76230063.labs.learning.intersystems.com/oauthclient/"
set url = ##class(%SYS.OAuth2.Authorization).GetAuthorizationCodeEndpoint(
..#OAUTH2APPNAME,
"scope1",
..#OAUTH2CLIENTREDIRECTURI,
.properties,
.isAuthorized,
.sc)
We use IRIS and register users on the IRIS OAuth server; for example, it is enough to give the user just a name and a password. When the user follows the received link, the server performs the user identification procedure, asks the user for permission to work with the account data in the web application, and stores the result in the global OAuth2.Server.Session in the %SYS namespace:
Next, display the data of an authorized user. If the procedures are successful, we have, for example, an access token. Let's get it:
set valid = ##class(%SYS.OAuth2.Validation).ValidateJWT(
..#OAUTH2APPNAME,
accessToken,
"scope1",
.aud,
.JWTJsonObject,
.securityParameters,
.sc
)
The full working code of the OAuth example:
Class OAuthClient.REST Extends %CSP.REST
{
Parameter OAUTH2APPNAME = "OAuthClient";
Parameter OAUTH2CLIENTREDIRECTURI = "https://52773b-76230063.labs.learning.intersystems.com/oauthclient/";
// to keep sessionId
Parameter UseSession As Integer = 1;
XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Method="GET" Url = "/" Call = "Do" />
</Routes>
}
ClassMethod Do() As %Status
{
// Check for accessToken
set isAuthorized = ##class(%SYS.OAuth2.AccessToken).IsAuthorized(
..#OAUTH2APPNAME,
.sessionId,
"scope1",
.accessToken,
.idtoken,
.responseProperties,
.error)
// to show accessToken
if isAuthorized {
set valid = ##class(%SYS.OAuth2.Validation).ValidateJWT(
..#OAUTH2APPNAME,
accessToken,
"scope1",
.aud,
.JWTJsonObject,
.securityParameters,
.sc
)
&html< Hello!<br> >
w "You access token = ", JWTJsonObject.%ToJSON()
&html< </html> >
quit $$$OK
}
// perform the process of user and client identification and get accessToken
set url = ##class(%SYS.OAuth2.Authorization).GetAuthorizationCodeEndpoint(
..#OAUTH2APPNAME,
"scope1",
..#OAUTH2CLIENTREDIRECTURI,
.properties,
.isAuthorized,
.sc)
if $$$ISERR(sc) {
w "error handling here"
quit $$$OK
}
// url magic correction: change slashes in the query parameter to its code
set urlBase = $PIECE(url, "?")
set urlQuery = $PIECE(url, "?", 2)
set urlQuery = $REPLACE(urlQuery, "/", "%2F")
set url = urlBase _ "?" _ urlQuery
&html<
<html>
<h1>Authorization in IRIS via OAuth2</h1>
<a href = "#(url)#">Authorization in <b>IRIS</b></a>
</html>
>
quit $$$OK
}
}
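For comparison, the "url magic correction" in the ObjectScript code above - percent-encoding the slashes inside the query string while leaving the path untouched - is equivalent to this Python sketch:

```python
from urllib.parse import urlsplit, urlunsplit

def escape_query_slashes(url: str) -> str:
    """Percent-encode slashes in the query string only,
    mirroring the $PIECE/$REPLACE logic in the ObjectScript above."""
    parts = urlsplit(url)
    return urlunsplit(parts._replace(query=parts.query.replace("/", "%2F")))

fixed = escape_query_slashes(
    "https://host/authorize?redirect_uri=https://host/oauthclient/")
```

This keeps the redirect_uri parameter intact when it is itself a URL containing slashes.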
You can also find a working copy of the code on the InterSystems GitHub repository: https://github.com/intersystems-community/iris-oauth-example.
If necessary, enable the advanced debug message mode on the OAuth server and OAuth client, which are written to the ISCLOG global in the %SYS area:
set ^%ISCLOG = 5
set ^%ISCLOG("Category", "OAuth2") = 5
set ^%ISCLOG("Category", "OAuth2Server") = 5
For more details, see the IRIS Using OAuth 2.0 and OpenID Connect documentation.
Conclusion
As you've seen, all OAuth features are easily accessible and completely ready to use. If necessary, you can replace the handler classes and user interfaces with your own. You can configure the OAuth server and the client settings from configuration files instead of using the management portal. Then that wonderful Ian Fleming intro gets reduced down to "vodka martini, shaken not stirred".
Announcement
Anastasia Dyubaylo · Oct 30, 2019
Hi Community,
Please join the upcoming InterSystems Israel Meetup in Herzelia which will be held on November 21st, 2019!
It will take place in the Spaces Herzliya Oxygen Ltd from 9:00 a.m. to 5:30 p.m.
The event will be focused on the InterSystems IRIS: it will be divided into IRIS for Healthcare and IRIS Data Platform. A joint lunch will be also included.
Please check the draft of the agenda below:
09:00 – 13:00 Non-Healthcare Sessions:
API Management
Showcase: InterSystems IRIS Directions
Adopting InterSystems IRIS
REST at Ease
IRIS Containers for Developers
13:00 – 14:00 Joint lunch for both morning and afternoon groups
14:00 – 17:30 Healthcare Sessions:
API Management
Build HL7 Interfaces in a Flash
FHIR Update
Showcase: InterSystems IRIS Directions
Note: The final agenda will be published closer to the event.
So, remember:
⏱ Time: November 21st, 2019, from 9:00 a.m. to 5:30 p.m.
📍Venue: Spaces Herzliya Oxygen Ltd, 63 Medinat HaYehudim st., Herzelia, Israel
✅ Registration: Just send an email to ronnie.greenfield@intersystems.com*
We look forward to seeing you!
---
*Space is limited, so register today to secure your place. Admission free, registration is mandatory for attendees. Please check out the final agenda of the event:
Non-Healthcare Sessions:
📌 09:00 – 09:30 Gathering
📌 09:30 – 10:15 IRIS Data Platform Overview
📌 10:15 – 11:00 Adopting InterSystems IRIS
In this session, we will introduce the InterSystems IRIS Adoption Guide and describe the process of moving from Caché and/or Ensemble to InterSystems IRIS. We will also briefly touch on the conversion process for existing installations of Caché/Ensemble-based applications.
Takeaway: InterSystems helps customers as they adopt InterSystems IRIS.
📌 11:00 – 11:45 API Management
This session will introduce the concept of API management and outline the InterSystems IRIS features that enable you to manage, monitor, and govern your APIs with full confidence.
Takeaway: InterSystems IRIS includes comprehensive capabilities for API management.
📌 11:45 – 12:15 Resources and Services for InterSystems Developers. ObjectScript Package Manager Introduction
Takeaway: Attendees will learn about the Developer Community, Open Exchange, and other resources and services available for developers on InterSystems data platforms, and about the InterSystems Package Manager and how it can help in InterSystems IRIS solution development.
📌 12:45 – 13:00 REST at Ease
This session provides an overview of how to build REST APIs. Topics will include: using the %JSON adapter to expose and consume JSON data for REST endpoints, code-first and spec-first approaches for REST development, and a brief discussion of proper API management.
Takeaway: Attendees will learn how to efficiently build, document, and manage REST APIs.
Healthcare Sessions:
📌 13:00 – 14:00 Welcome and Lunch
📌 14:00 – 14:45 InterSystems IRIS for Health Overview
📌 14:45 – 15:00 Showcase: InterSystems IRIS Directions
This session provides additional information about the new and future directions for InterSystems IRIS and InterSystems IRIS for Health.
Takeaway: InterSystems IRIS and IRIS for Health have a compelling roadmap, with real meat behind it.
📌 15:00 – 15:45 API Management
This session will introduce the concept of API management and outline the InterSystems IRIS features that enable you to manage, monitor, and govern your APIs with full confidence.
Takeaway: InterSystems IRIS includes comprehensive capabilities for API management.
📌 15:45 – 16:15 Resources and Services for InterSystems Developers. ObjectScript Package Manager Introduction
Takeaway: Attendees will learn about the Developer Community, Open Exchange, and other resources and services available for developers on InterSystems data platforms, and about the InterSystems Package Manager and how it can help in InterSystems IRIS solution development.
📌 16:15 – 17:00 Build HL7 Interfaces in a Flash
This session introduces our new HL7 productivity toolkit. We will give an overview and demonstrate some key features, such as the Production Generator and Message Analyzer. We will also discuss how you can cost-effectively move from another interface engine to InterSystems technology.
Takeaway: You can build HL7 interfaces more efficiently with the new productivity toolkit in InterSystems IRIS for Health.
The agenda is full of interesting stuff. Join the InterSystems Israel Meetup in Herzelia! 👍🏼 I'll participate in the meetup with the session:
📌 11:45 – 12:15 Resources and Services for InterSystems Developers. ObjectScript Package Manager Introduction
Takeaway: Attendees will learn about the Developer Community, Open Exchange, and other resources and services available for developers on InterSystems data platforms, and about the InterSystems Package Manager and how it can help in InterSystems IRIS solution development.
Come join InterSystems Developers Meetup in Israel!
Announcement
Anastasia Dyubaylo · Nov 1, 2019
Hi Everyone,
Please welcome the new Global Summit 2019 video on InterSystems Developers YouTube Channel:
⏯ InterSystems IRIS and Intel Optane Memory
Optane is a new class of memory from Intel that can accelerate the performance of hard drives. In this video, we will review the performance benefits and show high-level cost comparisons of using Intel Optane memory with InterSystems IRIS. We will also outline various use cases for Optane memory and storage.
Takeaway: Attendees will learn the benefits of using Intel's Optane technology with InterSystems IRIS.
Presenter: @Mark.Bolinsky, Senior Technology Architect, InterSystems
And...
Don't forget to subscribe to our InterSystems Developers YouTube Channel.
Enjoy and stay tuned!
Article
Eduard Lebedyuk · Oct 21, 2019
InterSystems API Management (IAM) - a new feature of the InterSystems IRIS Data Platform, enables you to monitor, control and govern traffic to and from web-based APIs within your IT infrastructure. In case you missed it, here is the link to the announcement. And here's an article explaining how to start working with IAM.
In this article, we will use InterSystems API Management to load balance an API.
In our case, we have two InterSystems IRIS instances with the /api/atelier REST API that we want to publish for our clients.
There are many different reasons why we might want to do that, such as:
Load balancing to spread the workload across servers
Blue-green deployment: we have two servers, one "prod", the other "dev", and we might want to switch between them
Canary deployment: we might publish the new version on only one server and move 1% of clients there
High availability configuration
etc.
Still, the steps we need to take are quite similar.
Prerequisites
2 InterSystems IRIS instances
InterSystems API Management instance
Let's go
Here's what we need to do:
1. Create an upstream.
Upstream represents a virtual hostname and can be used to load balance incoming requests over multiple services (targets). For example, an upstream named service.v1.xyz would receive requests for a Service whose host is service.v1.xyz. Requests for this Service would be proxied to the targets defined within the upstream.
An upstream also includes a health checker, which can enable and disable targets based on their ability or inability to serve requests.
To start:
Open IAM Administration Portal
Go to Workspaces
Choose your workspace
Open Upstreams
Click on "New Upstream" button
After clicking the "New Upstream" button, you will see a form where you can enter some basic information about the upstream (there are many more properties):
Enter a name: this is the virtual hostname our services will use. It is unrelated to DNS records; I recommend setting it to a non-existent value to avoid confusion. If you want to read about the rest of the properties, check the documentation. On the screenshot, you can see how I imaginatively named the new upstream myupstream.
2. Create targets.
Targets are the backend servers that will execute the requests and send results back to the client. Go to Upstreams and click on the name of the upstream you just created (and NOT on the Update button):
You will see all the existing targets (none so far) and the "New Target" button. Press it:
And in the new form define a target. Only two parameters are available:
target - host and port of the backend server
weight - the relative priority given to this server (more weight means more requests are sent to this target)
I have added two targets:
3. Create a service
Now that we have our upstream, we need to send requests to it. We use a Service for that. Service entities, as the name implies, are abstractions of each of your upstream services. Examples of Services would be a data transformation microservice, a billing API, etc.
Let's create a service targeting our IRIS instances. Go to Services and press the "New Service" button:
Set the following values:
name: myservice (the logical name of this service)
host: myupstream (the upstream name)
path: /api/atelier (the root path we want to serve)
protocol: http (the protocols we want to support)
Keep the default values for everything else (including port: 80).
After creating the service, you'll see it in the list of services. Copy the service ID somewhere; we're going to need it later.
4. Create a route
Routes define rules to match client requests. Each Route is associated with a Service, and a Service may have multiple Routes associated with it. Every request matching a given Route will be proxied to its associated Service.
The combination of Routes and Services (and the separation of concerns between them) offers a powerful routing mechanism with which it is possible to define fine-grained entry-points in IAM leading to different upstream services of your infrastructure.
Now let's create a route. Go to Routes and press the "New Route" button.
Set the values in the Route creation form:
path: /api/atelier (the root path we want to serve)
protocol: http (the protocols we want to support)
service.id: the service ID value (the GUID from the previous step)
And we're done!
Send a request to http://localhost:8000/api/atelier/ (note the slash at the end) and it will be served by one of our two backends.
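The four UI steps above can also be scripted. IAM's gateway is based on Kong, which exposes an Admin API for the same objects; this sketch assumes the Admin API listens on the default port 8001, and the backend host names irisA and irisB are placeholders for your own servers:

```shell
# Assumptions: IAM/Kong Admin API on localhost:8001 (default);
# irisA:52773 and irisB:52773 are illustrative backend addresses.

# 1. Create the upstream (virtual hostname)
curl -s -X POST http://localhost:8001/upstreams -d name=myupstream

# 2. Add two targets with equal weights
curl -s -X POST http://localhost:8001/upstreams/myupstream/targets \
     -d target=irisA:52773 -d weight=100
curl -s -X POST http://localhost:8001/upstreams/myupstream/targets \
     -d target=irisB:52773 -d weight=100

# 3. Create the service pointing at the upstream
curl -s -X POST http://localhost:8001/services \
     -d name=myservice -d host=myupstream \
     -d path=/api/atelier -d protocol=http

# 4. Attach a route to the service
curl -s -X POST http://localhost:8001/services/myservice/routes \
     -d 'paths[]=/api/atelier'
```

This is handy for repeatable deployments, where clicking through the portal on every environment is error-prone.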
Conclusion
IAM offers a highly customizable API Management infrastructure, allowing developers and administrators to take control of their APIs.
Links
Documentation
IAM Announcement
Working with IAM article
Question
What functionality do you want to see configured with IAM?
I have a question regarding productionized deployments. Can the internal IRIS web server be used, i.e. port 52773? Or should there still be a web gateway between IAM and the IRIS instance?
Regarding Kubernetes: I would think that IAM should be the ingress, is that correct?
Hi Stefan,
The short answer is you still need a web-gateway between IAM and IRIS.
The private web server (port 52773) is a minimal build of the Apache web server, supplied for running the Management Portal, not for production-level traffic.
I would think that IAM should be the ingress, is that correct?
Agreed. Calling @Luca.Ravazzolo.
Announcement
Jeff Fried · Nov 4, 2019
The 2019.3 versions of InterSystems IRIS, InterSystems IRIS for Health, and InterSystems IRIS Studio are now Generally Available!
These releases are available from the WRC Software Distribution site, with build number 2019.3.0.311.0.
InterSystems IRIS Data Platform 2019.3 has many new capabilities including:
Support for InterSystems API Manager (IAM)
Polyglot Extension (PeX) available for Java
Java and .NET Gateway Reentrancy
Node-level Architecture for Sharding and SQL Support
SQL and Performance Enhancements
Infrastructure and Cloud Deployment Improvements
Port Authority for Monitoring Port Usage in Interoperability Productions
X12 Element Validation in Interoperability Productions
These are detailed in the documentation:
InterSystems IRIS 2019.3 documentation and release notes
InterSystems IRIS for Health 2019.3 includes all of the enhancements of InterSystems IRIS. In addition, this release includes FHIR searching with chained parameters (including reverse chaining) and minor updates to FHIR and other health care protocols.
FHIR STU3 PATCH Support
New IHE Profiles XCA-I and IUA
These are detailed in the documentation:
InterSystems IRIS for Health 2019.3 documentation and release notes
InterSystems IRIS Studio 2019.3 is a standalone development image supported on Microsoft Windows. It works with InterSystems IRIS and InterSystems IRIS for Health version 2019.3 and below, as well as with Caché and Ensemble.
See the InterSystems IRIS Studio Documentation for details
2019.3 is a CD release, so InterSystems IRIS and InterSystems IRIS for Health 2019.3 are only available in OCI (Open Container Initiative), a.k.a. Docker, container format. The platforms on which this is supported for production and development are detailed in the Supported Platforms document.
Having gone through the pain of installing Docker for Windows and then installing the InterSystems IRIS for Health 2019.3 image, and having got hold of a copy of the 2019.3 Studio, I was pleased when I saw this announcement and excitedly went looking for my 2019.3......exe, only to find out there is none and a small note at the end of the announcement saying that 2019.3 InterSystems IRIS and InterSystems IRIS for Health will only be released in CD form.
Yours
Nigel Salm
Nigel, just want to be sure that you read CD as Containers Deployment - so it will be available on every delivery site (WRC, download, AWS, GCP, Azure, Dockerhub), but in container form.
InterSystems Docker Images: https://wrc.intersystems.com/wrc/coDistContainers.csp
Announcement
David Reche · Nov 13, 2018
Hi Everyone!
We are pleased to invite you to the InterSystems Iberia Summit 2018 on 27th of November in Madrid, Spain!
The New Challenges of Connected Health Matter
Date: November 27, 2018
Place: Hotel Meliá Serrano, Madrid
Please check the original agenda of the event.
REGISTER NOW and hope to see you at the Iberia Healthcare Summit 2018 in Madrid!
Announcement
Anastasia Dyubaylo · Nov 26, 2018
Hi Community!
We're pleased to welcome @Sean.Connelly as our new Moderator in the Developer Community Team! Let's greet Sean with big applause and take a closer look at his bio!
Sean about his experience:
— I help healthcare organisations solve complex integration problems using products such as Ensemble, HealthShare and Mirth.
With 20 years of experience, Sean has worked with over 20 NHS Trusts, Scottish Boards, NHS Digital, NHS Scotland and Primary Care system providers. This has included many large-scale integration solutions such as replacing or implementing brand new integration engines, PAS replacements, OrderComms, and electronic document handling.
Some words from Sean:
— I'm also an InterSystems product specialist with deep knowledge of Caché, Ensemble, HealthShare and IRIS. I'm a moderator and active contributor on the InterSystems official community site, where you can find many examples of my technical writing. I also actively write open source frameworks and tools for these products and regularly use them to accelerate development services. [CHECK OUT SEAN'S DC PROFILE]
— I also specialize in web application development. I've written dozens of SPA applications over the years, including large-scale solutions for single-record patient portals, document management, Read code submissions, a dental claim system across Scotland, and the modernization of a Trust's legacy green-screen PAS system.
Some facts about Sean's business:
— I currently run my own successful consultancy business called MemCog Ltd, which has been going for over 5 years. Some of my customers include the Manchester University Trust, where I have helped implement large-scale OrderComms solutions, merged hospitals and systems, and developed an electronic document solution that delivers millions of electronic letters between the Trust and GP practices every year.
Welcome aboard and thanks for your great contribution, Sean!
Announcement
Benjamin De Boe · Jan 8, 2019
Hi,
As we announced at our Global Summit in October, we are developing dedicated connectors for a number of third-party data visualization tools for InterSystems IRIS. With these connectors, we want to combine an excellent user experience with optimal performance when using those tools to visualize data managed on the InterSystems IRIS Data Platform. If you are already using Tableau or Power BI products to access our data platform through their respective generic ODBC connectors today, we're interested in learning more about your experiences thus far and would be very grateful if you could spend a few minutes on our survey:
survey for Tableau users
survey for Microsoft Power BI users
Thanks,
Benjamin @Benjamin.DeBoe
Have you made any progress on this?
We just started playing around with using Web Data Connectors in Tableau to call APIs into Caché.
Best,
Mike
@Mike.Davidovich I am working with Benjamin on an alpha version of the Tableau connector. I'm interested in your experience using an ODBC or JDBC connection to Tableau. Have you used that, and what else would you like to see in a Tableau connector?
Feel free to post here or email directly -- carmen.logue@intersystems.com
@Carmen.Logue Thanks, Carmen! I'm not sure I can add too much to your alpha development. I personally haven't been using ODBC to connect into Caché. What I do know is that some other groups have used ODBC to connect into Caché with SQL projections.
My team wants to avoid projections specifically (at this point at least) because we tend to only use them to get data from Caché to our data warehouse (Oracle). Other than that, we still traverse our database via globals and good old MUMPS programming. The project I'm working on is taking the many routines we have that traverse globals for reporting and transforming the data into JSON streams. A %CSP.REST API will call those routines and provide the data to a Tableau web data connector so we can get instant, live data without projecting our whole database.
I'm just getting started with Tableau and Caché, so I may have more input in the future.
Best,
Mike
Announcement
Anastasia Dyubaylo · Apr 17, 2019
Hi Community!
Please welcome a new video on the InterSystems Developers YouTube Channel:
Implementing vSAN for InterSystems IRIS
Specific examples of using VMware and vSAN will illustrate practical advice for deploying InterSystems IRIS, whether on premises or in the cloud.
Takeaway: I know how to deploy InterSystems IRIS using VMware and vSAN.
Presenter: @Murray.Oldfield
And...
Additional materials for the video can be found in this InterSystems Online Learning Course.
Don't forget to subscribe to our InterSystems Developers YouTube Channel. Enjoy and stay tuned!
Article
Sergey Kamenev · May 23, 2019
PHP, from its earliest days, has been renowned (and criticized) for supporting integration with a lot of libraries, as well as with almost every database on the market. However, for some mysterious reason, it did not support hierarchical databases built on globals.
Globals are structures for storing hierarchical information. They are somewhat similar to a key-value database, with the only difference being that the key can be multi-level:
Set ^inn("1234567890", "city") = "Moscow"
Set ^inn("1234567890", "city", "street") = "Req Square"
Set ^inn("1234567890", "city", "street", "house") = 1
Set ^inn("1234567890", "year") = 1970
Set ^inn("1234567890", "name", "first") = "Vladimir"
Set ^inn("1234567890", "name", "last") = "Ivanov"
In this example, multi-level information is saved in the global ^inn using the built-in ObjectScript language. The global ^inn is stored on the hard drive (indicated by the "^" sign at the beginning of its name).
In order to work with globals from PHP, we will need new functions that will be added by the PHP module, which will be discussed below.
Globals support many functions for working with hierarchies: traversing the tree at a fixed level or in depth, and deleting, copying, and pasting entire trees and individual nodes. They also support ACID transactions, as in any quality database. All this happens extremely quickly (roughly 10^5-10^6 inserts per second on a regular PC) for two reasons:
Globals are a lower level abstraction when compared to SQL,
Databases built on globals have been in production for decades, and during this time they were polished and their code was thoroughly optimized.
Learn more about globals in the series of articles titled "Globals Are Magic Swords For Managing Data.":
Part 1. Trees.
Part 2. Sparse arrays.
Part 3.
In this world, globals are primarily used in storage systems for unstructured and sparse information, such as medical records, personal data, banking data, etc.
I love PHP (and I use it in my development work), and I wanted to play around with globals. There was no PHP module for IRIS and Caché. I contacted InterSystems and asked them to create it. InterSystems sponsored the development as part of an educational grant and my graduate student and I created the module.
Generally speaking, InterSystems IRIS is a multi-model DBMS, and that's why from PHP you can work with it via ODBC using SQL, but I was interested in globals, and there was no such connector.
So, the module is available for PHP 7.x (was tested for 7.0-7.2). Currently it can only work with InterSystems IRIS and Caché installed on the same host.
Module page on OpenExchange (a directory of projects and add-ons for InterSystems IRIS and Caché developers).
There is a useful DISCUSS section where people share their related experiences.
Download here:
https://github.com/intersystems-community/php_ext_iris
Download the repository from the command line:
git clone https://github.com/intersystems-community/php_ext_iris
Installation instructions for the module in English and Russian.
Module Functions:
PHP function
Description
Working with data
iris_set($node, $value)
Setting a node value.
iris_set($global, $subscript1, ..., $subscriptN, $value); iris_set($global, $value);
Returns: true or false (in the case of an error). All parameters of this function are strings or numbers. The first one is the name of the global, then come the subscripts, and the last parameter is the value.
iris_set('^time',1);
iris_set('^time', 'tree', 1, 1, 'value');
ObjectScript equivalent:
Set ^time = 1
Set ^time("tree", 1, 1) = "value"
iris_set($arrayGlobal, $value);
There are just two parameters: the first one is the array in which the name of the global and all its subscripts are stored, and the second one is the value.
$node = ['^time', 'tree', 1, 1];
iris_set($node,'value');
iris_get($node)
Getting a node value.
Returns: a value (a number or a string), NULL (the value is not defined), or FALSE (in the event of an error).
iris_get($global, $subscript1, ..., $subscriptN); iris_get($global);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. The global may not have subscripts.
$res = iris_get('^time');
$res1 = iris_get('^time', 'tree', 1, 1);
iris_get($arrayGlobal);
The only parameter is the array in which the name of the global and all its subscripts are stored.
$node = ['^time', 'tree', 1, 1];
$res = iris_get($node);
iris_zkill($node)
Deleting a node value.
Returns: TRUE or FALSE (in the event of an error).
It is important to note that this function only deletes the value in the node and does not affect lower branches.
iris_zkill($global, $subscript1, ..., $subscriptN); iris_zkill($global);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. The global may not have subscripts.
$res = iris_zkill('^time'); // Lower branches are not deleted.
$res1 = iris_zkill('^time', 'tree', 1, 1);
iris_zkill($arrayGlobal);
The only parameter is the array in which the name of the global and all its subscripts are stored.
$a = ['^time', 'tree', 1, 1];
$res = iris_zkill($a);
iris_kill($node)
Deleting a node and all descendant branches.
Returns: TRUE or FALSE (in the case of an error).
iris_kill($global, $subscript1, ..., $subscriptN); iris_kill($global);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. The global may not have subscripts, in which case it is deleted in full.
$res1 = iris_kill('^example', 'subscript1', 'subscript2');
$res = iris_kill('^time'); // The global is deleted in full.
iris_kill($arrayGlobal);
The only parameter is the array in which the name of the global and all its subscripts are stored.
$a = ['^time', 'tree', 1, 1];
$res = iris_kill($a);
iris_order($node)
Traverses the branches of the global at a given level
Returns: the array in which the full name of the next node of the global on the same level is stored, or FALSE (in the case of an error).
iris_order($global, $subscript1, ..., $subscriptN);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. Form of usage in PHP and ObjectScript equivalent:
iris_order('^ccc','new2','res2'); // $Order(^ccc("new2", "res2"))
iris_order($arrayGlobal);
The only parameter is the array in which the name of the global and the subscripts of the initial node are stored.
$node = ['^inn', '1234567890', 'city'];
for (; $node !== NULL; $node = iris_order($node))
{
echo join(', ', $node).'='.iris_get($node)."\n";
}
Returns:
^inn, 1234567890, city=Moscow
^inn, 1234567890, year=1970
iris_order_rev($node)
Traverses the branches of the global at a given level in reverse order
Returns: the array in which the full name of the previous node of the global on the same level is stored or FALSE (in the case of an error).
iris_order_rev($global, $subscript1, ..., $subscriptN);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. Form of usage in PHP and ObjectScript equivalent:
iris_order_rev('^ccc','new2','res2'); // $Order(^ccc("new2", "res2"), -1)
iris_order_rev($arrayGlobal);
The only parameter is the array in which the name of the global and the subscripts of the initial node are stored.
$node = ['^inn', '1234567890', 'name', 'last'];
for (; $node !== NULL; $node = iris_order_rev($node))
{
echo join(', ', $node).'='.iris_get($node)."\n";
}
Returns:
^inn, 1234567890, name, last=Ivanov
^inn, 1234567890, name, first=Vladimir
iris_query($node)
Depth-first traversal of the global
Returns: the array containing the full name of the next node in depth-first order: the first child node (if available), or the next node of the global (if there are no child nodes).
iris_query($global, $subscript1, ..., $subscriptN);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. Form of usage in PHP and ObjectScript equivalent:
iris_query('^ccc', 'new2', 'res2'); // $Query(^ccc("new2", "res2"))
iris_query($arrayGlobal);
The only parameter is the array in which the name of the global and the subscripts of the initial node are stored.
$node = ['^inn', 'city'];
for (; $node !== NULL; $node = iris_query($node))
{
echo join(', ', $node).'='.iris_get($node)."\n";
}
Returns:
^inn, 1234567890, city=Moscow
^inn, 1234567890, city, street=Red Square
^inn, 1234567890, city, street, house=1
^inn, 1234567890, name, first=Vladimir
^inn, 1234567890, name, last=Ivanov
^inn, 1234567890, year=1970
The order differs from the order in which we inserted the values because the global keeps its nodes sorted in ascending subscript order during insertion.
Service functions
iris_set_dir($FullPath)
Sets the directory of the database
Returns: TRUE or FALSE (in the case of an error).
iris_set_dir('/InterSystems/Cache/mgr');
This must be performed before connecting to the database.
iris_exec($CmdLine)
Executes a database command
Returns: TRUE or FALSE (in the case of an error).
iris_exec('kill ^global(6)'); // The ObjectScript command for deleting a global
iris_connect($login, $pass)
Connect to database
iris_quit()
Close connection with DB
iris_errno()
Get error code
iris_error()
Get text description of error
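Putting the functions together, a minimal end-to-end session might look like this (a sketch: the database path and credentials are illustrative, and a local IRIS or Caché instance with the module installed is assumed):

```php
<?php
// Point the module at the database directory (must be done before connecting).
// The path and credentials below are illustrative - adjust to your installation.
iris_set_dir('/InterSystems/Cache/mgr');
if (!iris_connect('_SYSTEM', 'SYS')) {
    die('Connect failed: ' . iris_error() . ' (code ' . iris_errno() . ')');
}

// Build a small hierarchy
iris_set('^inn', '1234567890', 'city', 'Moscow');
iris_set('^inn', '1234567890', 'year', 1970);

// Walk all nodes under ^inn in depth-first order
for ($node = iris_query(['^inn']); $node !== NULL; $node = iris_query($node)) {
    echo join(', ', $node) . ' = ' . iris_get($node) . "\n";
}

iris_quit();
```

The same loop shape works with iris_order for a single-level walk, as shown in the table above.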
If you want to play around with the module, check out, for example, the Docker container implementation:
git clone https://github.com/intersystems-community/php_ext_iris
cd php_ext_iris/iris
docker-compose build
docker-compose up -d
Test the demo page at localhost:52080 in the browser.
The PHP files that can be edited and played with are in the php/demo folder, which is mounted inside the container.
To test IRIS use the admin login with the SYS password.
To get into the IRIS settings, use the following URL: http://localhost:52773/csp/sys/UtilHome.csp
To get into the IRIS console of this container, use the following command:
docker exec -it iris_iris_1 iris session IRIS
Especially for DC and those who want to try it out, we have set up a virtual machine with the Caché PHP module installed.
Demo page in English. Demo page in Russian. Login: habr_test Password: burmur#@8765
For self-installation of the module for InterSystems Caché
You need Linux. I tested on Ubuntu; the module should also compile and work under Windows, but I didn't test it.
Download the free version:
InterSystems Caché (registration required). On Linux, Red Hat and SUSE are supported out of the box, but you can also install it on other distributions.
Install the cach.so module in PHP according to the instructions.
Just out of interest, I ran two primitive tests to check the speed of inserting new values into the database in the docker container on my PC (AMD FX-9370@4700Mhz 32GB, LVM, SATA SSD).
Insertion of 1 million new nodes into the global took 1.81 seconds or 552K inserts per second.
Updating a value in the same global 1,000,000 times took 1.98 seconds or 505K updates per second. An interesting fact is that the insertion occurs faster than the update. Apparently this is a consequence of the initial optimization of the database aimed at quick insertion.
Obviously, these tests cannot be considered 100% accurate or useful, since they are primitive and are done in the container. On more powerful hardware with a disk system on a PCIe SSD, tens of millions of inserts per second can be achieved.
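The insert benchmark above can be reproduced with a loop of this shape (a hypothetical reconstruction, not the author's original script; path and credentials are illustrative):

```php
<?php
// Illustrative path and credentials - adjust to your installation
iris_set_dir('/InterSystems/Cache/mgr');
iris_connect('_SYSTEM', 'SYS');

$n = 1000000;
$start = microtime(true);
for ($i = 0; $i < $n; $i++) {
    iris_set('^bench', $i, $i);   // each iteration creates a new node
}
$elapsed = microtime(true) - $start;
printf("%d inserts in %.2f s (%.0fK inserts/s)\n", $n, $elapsed, $n / $elapsed / 1000);

iris_kill('^bench');              // clean up the test global
iris_quit();
```

The update benchmark is the same loop with a fixed subscript, so every iris_set overwrites the same node instead of creating a new one.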
What else can be completed and the current state
Useful functions for working with transactions can be added (you can still use them with iris_exec).
The function of returning the whole global structure at once is not implemented; it would avoid traversing the global node by node from PHP.
The function of saving a PHP array as a subtree is not implemented.
Access to local database variables is not implemented; they can only be reached using iris_exec, although iris_set would be better.
Global traversal in depth in the opposite direction is not implemented.
Access to the database via an object using methods (similar to current functions) is not implemented.
The current module is not quite yet ready for production: not tested for high loads and memory leaks. However, should someone need it, please feel free to contact me at any time (Sergey Kamenev sukamenev@gmail.com).
Bottom line
For a long time, the worlds of PHP and hierarchical databases on globals practically did not overlap, although globals provide strong and fast functionality for specific data types (medical, personal).
I hope that this module will motivate PHP programmers to experiment with globals, and ObjectScript programmers to build simple web interfaces in PHP.
P.S. Thank you for your time!
Nice! Just tried this with the Docker container on my local machine. And got 1 million insertions (1,000,000) in 1.45 sec on my Mac Pro. Cool!