Announcement
Anastasia Dyubaylo · Sep 3, 2019
Hi Community!
We are super excited to announce the Boston FHIR @ InterSystems Meetup on the 10th of September at the InterSystems meeting space!
There will be two talks with Q&A and networking.
Doors open at 5:30pm, we should start the first talk around 6pm. We will have a short break between talks for announcements, including job opportunities.
Please check the details below.
#1 We are in the middle of changes in healthcare technology that affect the strategies of companies and organizations across the globe, including many startups right here in Massachusetts. Micky Tripathi from the Massachusetts eHealth Collaborative is going to talk to us about the opportunities and consequences of API-based healthcare.
By Micky Tripathi - MAeHC

#2 FHIR Analytics
The establishment of FHIR as a new healthcare data format creates new opportunities and challenges. Health professionals would like to acquire patient data from Electronic Health Records (EHRs) via FHIR and use it for population health management and research. FHIR provides resources and foundations based on XML and JSON data structures. However, traditional analytic tools are difficult to use with these structures. We created a prototype application that ingests FHIR bundles and saves the Patient and Observation resources as objects/tables in InterSystems IRIS for Health. Developers can then easily create derived "fact tables" that de-normalize these tables for exploration and analytics. We will demo this application and our analytics tools using the InterSystems IRIS for Health platform.
By Patrick Jamieson, M.D., Product Manager for InterSystems IRIS for Health, and Carmen Logue, Product Manager - Analytics and AI
So, remember!
Date and time: Tuesday, 10 September 2019 5:30 pm to 7:30 pm
Venue: 1 Memorial Dr, Cambridge, MA 02142, USA
Event webpage: Boston FHIR @ InterSystems Meetup
Article
Evgeny Shvarov · Sep 6, 2019
Hi Developers!
InterSystems Package Manager (ZPM) is a great thing, but it is even better if you don't need to install it and can use it immediately.
There are several ways to do this, and here is one approach: building an IRIS container with ZPM using a dockerfile.
I've prepared a repository whose dockerfile contains a few lines that download and install the latest version of ZPM.
Add these lines to your standard dockerfile for IRIS community edition and you will have ZPM installed and ready to use.
To download the latest ZPM client:
RUN mkdir -p /tmp/deps \
&& cd /tmp/deps \
&& wget -q https://pm.community.intersystems.com/packages/zpm/latest/installer -O zpm.xml
And to install ZPM into IRIS, the downloaded XML is loaded with $system.OBJ.Load inside an IRIS session during the image build; the key fragment of that RUN instruction is:
" Do \$system.OBJ.Load(\"/tmp/deps/zpm.xml\", \"ck\")" \
Great!
To try ZPM with this repository, do the following:
$ git clone https://github.com/intersystems-community/objectscript-zpm-template.git
Build and run the repo:
$ docker-compose up -d
Open IRIS terminal:
$ docker-compose exec iris iris session iris
USER>
Call ZPM:
USER>zpm
zpm: USER>
Install webterminal
zpm: USER>install webterminal
[webterminal] Reload START
[webterminal] Reload SUCCESS
[webterminal] Module object refreshed.
[webterminal] Validate START
[webterminal] Validate SUCCESS
[webterminal] Compile START
[webterminal] Compile SUCCESS
[webterminal] Activate START
[webterminal] Configure START
[webterminal] Configure SUCCESS
[webterminal] Activate SUCCESS
zpm: USER>
Use it!
And take a look at the whole process in this gif:
It turns out we don't even need a special repository to add ZPM to your docker container. You just need another dockerfile - like this one - and the related docker-compose file for a handy start. See how it works:
Article
Dmitrii Kuznetsov · Oct 7, 2019
How can you allow computers to trust one another in your absence while maintaining security and privacy?
“A Dry Martini,” he said. “One. In a deep champagne goblet.”
“Oui, monsieur.”
“Just a moment. Three measures of Gordon’s, one of vodka, half a measure of Kina Lillet. Shake it very well until it’s ice-cold, then add a large thin slice of lemon peel. Got it?”
“Certainly, monsieur.” The barman seemed pleased with the idea.
— Casino Royale, Ian Fleming, 1953
OAuth helps to separate services with user credentials from “working” databases, both physically and geographically. It thereby strengthens the protection of identification data and, if necessary, helps you comply with the requirements of countries' data protection laws.
With OAuth, you can provide the user with the ability to work safely from multiple devices at once, while "exposing" personal data to various services and applications as little as possible. You can also avoid taking on "excess" data about users of your services (i.e. you can process data in a depersonalized form).
If you use InterSystems IRIS, you get a complete set of ready-made tools for testing and deploying OAuth and OIDC services, both autonomously and in cooperation with third-party software products.
OAuth 2.0 and OpenID Connect
OAuth and OpenID Connect — known as OIDC or simply OpenID — serve as a universal combination of open protocols for delegating access and identification — and in the 21st century, it seems to be a favorite. No one has come up with a better option for large-scale use. It's especially popular with frontenders because it sits on top of HTTP(S) protocols and uses a JWT (JSON Web Token) container.
OpenID works using OAuth — it is, in fact, a wrapper for OAuth. Using OpenID as an open standard for the authentication and creation of digital identification systems is nothing new for developers. As of 2019, it is in its 14th year (and its third version). It is popular in web and mobile development and in enterprise systems.
Its partner, the OAuth open standard for delegating access, is 12 years old, and it's been nine years since the relevant RFC 5849 standard appeared. For the purposes of this article, we will rely on the current version of the protocol, OAuth 2.0, and the current RFC 6749. (OAuth 2.0 is not compatible with its predecessor, OAuth 1.0.)
Strictly speaking, OAuth is not a protocol, but a set of rules (a scheme) for separating and transferring user identification operations to a separate trusted server when implementing an access-rights restriction architecture in software systems.
Be aware: OAuth can't say anything about a specific user! Who the user is, or where the user is, or even whether the user is currently at a computer or not. But OAuth makes it possible to interact with systems without user participation, using pre-issued access tokens. This is an important point (see "User Authentication with OAuth 2.0" on the OAuth site for more information).
The User-Managed Access (UMA) protocol is also based on OAuth. Using OAuth, OIDC, and UMA together makes it possible to implement a protected identity and access management (IdM, IAM) system in areas such as:
Using a patient's HEART (Health Relationship Trust) personal data profile in medicine.
Consumer Identity and Access Management (CIAM) platforms for manufacturing and trading companies.
Personalizing digital certificates for smart devices in IoT (Internet of Things) systems using the OAuth 2.0 Internet of Things (IoT) Client Credentials Grant.
A New Venn Of Access Control For The API Economy
Above all, do not store personal data in the same place as the rest of the system. Separate authentication and authorization physically. And ideally, give the identification and authentication to the individual person. Never store them yourself. Trust the owner's device.
Trust and Authentication
It is not a best practice to store users' personal data either in one's own app or in a combined storage location along with a working database. Instead, we choose someone we trust to provide identification as a service.
The OAuth scheme is made up of the following parts:
The user
The client app
The identification service
The resource server
The action takes place in a web browser on the user's computer. The user has an account with the identification service. The client app has a signed contract with the identification service and reciprocal interfaces. The resource server trusts the identification service to issue access keys to anyone it can identify.
The user runs the client web app, requesting a resource. The client app must present a key to that resource to gain access. If the user doesn't have a key, the client app contacts an identification service with which it has a contract for issuing keys to the resource server (passing the user on to the identification service).
The Identification Service asks what kind of keys are required.
The user provides a password to access the resource. At this point, the user has been authenticated and their identity confirmed; the key to the resource is issued (the user is passed back to the client app), and the resource is made available to the user.
Implementing an Authorization Service
On the InterSystems IRIS platform, you can assemble the service from the parts you need. For example:
Configure and launch an OAuth server with the demo client registered on it.
Configure a demo OAuth client by associating it with an OAuth server and web resources.
Develop client apps that can use OAuth. You can use Java, Python, C#, or NodeJS. Below is an example of the application code in ObjectScript.
There are multiple settings in OAuth, so checklists can be helpful. Let's walk through an example. Go to the IRIS management portal and select the section System Administration > Security > OAuth 2.0 > Server.
Each item will then contain the name of a settings line and a colon, followed by an example or explanation, if necessary. As an alternative, you can use the screenshot hints in Daniel Kutac's three-part article, InterSystems IRIS Open Authorization Framework (OAuth 2.0) implementation - part 1, part 2, and part 3.
Note that all of the following screenshots are meant to serve as examples. You’ll likely need to choose different options when creating your own applications.
On the General Settings tab, use these settings:
Description: provide a description of the configuration, such as "Authorization server".
Issuer endpoint (hereinafter EPG) host name: the DNS name of your server.
Supported grant types (select at least one):
Authorization code
Implicit
Resource owner password credentials
Client credentials
SSL/TLS configuration: oauthserver
On the Scopes tab:
Add supported scopes: scope1 in our example
On the Intervals tab:
Access token interval: 3600
Authorization code interval: 60
Refresh token interval: 86400
Session termination interval: 86400
Client secret expiration period: 0
On the JWT Settings tab:
Signing algorithm: RS512
Key Management Algorithm: RSA-OAEP
Content Encryption Algorithm: A256CBC-HS512
On the Customization tab:
Authenticate class: %OAuth2.Server.Authenticate
Validate user class: %OAuth2.Server.Validate
Session maintenance class: OAuth2.Server.Session
Generate token class: %OAuth2.Server.JWT
Customization namespace: %SYS
Customization roles (select at least one): %DB_IRISSYS and %Manager
Now save the changes.
The next step is registering the client on the OAuth server. Click the Client Descriptions button, then click Create Client Description.
On the General Settings tab, enter the following information:
Name: OAuthClient
Description: provide a brief description
Client Type: Confidential
Redirect URLs: the address of the point in the app to which the user returns after identification (for oauthclient).
Supported grant types:
Authorization code: yes
Implicit
Resource owner password credentials
Client credentials
JWT authorization
Supported response types: Select all of the following:
code
id_token
id_token token
token
Authorization type: Simple
The Client Credentials tab should be auto-completed, but ensure the information there is correct for the client.
On the Client Information tab:
Authorization screen:
Client name
Logo URL
Client homepage URL
Policy URL
Terms of Service URL
Now configure the binding on the OAuth server client by going to System Administration > Security > OAuth 2.0 > Client.
Create a Server Description:
Issuer endpoint: taken from the general settings of the server (see above).
SSL/TLS configuration: choose from the preconfigured list.
Authorization server:
Authorization endpoint: EPG + /authorize
Token endpoint: EPG + /token
Userinfo endpoint: EPG + /userinfo
Token revocation endpoint: EPG + /revocation
Token introspection endpoint: EPG + /introspection
JSON Web Token (JWT) settings:
Source other than dynamic registration: choose JWKS from URL
URL: EPG + /jwks
From this metadata you can see, for example (scopes_supported and claims_supported), what information about the user the server can provide to the OAuth client. It's worth noting that when implementing your application, you should ask the user which data they are willing to share. In the example below, we will only ask for permission for scope1.
Now save the configuration.
If there is an error indicating the SSL configuration, then go to Settings > System Administration > Security > SSL/TLS Configurations and remove the configuration.
Now we're ready to set up an OAuth client: System Administration > Security > OAuth 2.0 > Client > Client configurations > Create Client configurations.
On the General tab, use these settings:
Application Name: OAuthClient
Client Name: OAuthClient
Description: enter a description
Enabled: Yes
Client Type: Confidential
SSL/TLS configuration: select oauthclient
Client Redirect URL: the DNS name of your server
Required Permission Types:
Authorization code: Yes
Implicit
Account details: Resource, Owner, Password
Client account details
JWT authorization
Authorization type: Simple
On Client Information tab:
Authorization screen:
Logo URL
Client homepage URL
Policy URL
Terms of Service URL
Default scope: taken from those specified earlier on the server (for example, scope1)
Contact email addresses: enter addresses, separated by commas
Default max age (in seconds): maximum authentication age or omit this option
On the JWT Settings tab:
JSON Web Token (JWT) settings
Create JWT settings from X509 credentials
IDToken Algorithms:
Signing: RS256
Encryption: A256CBC
Key: RSA-OAEP
Userinfo Algorithms
Access Token Algorithms
Query Algorithms
On the Client Credentials tab:
Client ID: as issued when the client was registered on the server (see above).
Client ID Issued: leave blank
Client secret: as issued when the client was registered on the server (see above).
Client Secret Expiry Period: leave blank
Client Registration URI: leave blank
Save the configuration.
Web app with OAuth authorization
OAuth relies on the communication channels between the interaction participants (server, clients, web application, user's browser, resource server) being protected in some way; most often, this role is played by SSL/TLS. But OAuth will also work over unprotected channels. The Keycloak server, for example, uses HTTP by default and does without protection, which simplifies development and debugging. For real use of the services, the Keycloak documentation states that channel protection must be strictly enabled. The InterSystems IRIS developers adhere to a stricter approach for OAuth: the use of SSL/TLS is mandatory. The only simplification is that you can use self-signed certificates or take advantage of the built-in IRIS PKI service (System Administration > Security > Public Key Infrastructure).
The user's authorization is verified with an explicit indication of two parameters: the name of your application, as registered on the OAuth server and in the OAuth client, and the scope.
Parameter OAUTH2APPNAME = "OAuthClient";
set isAuthorized = ##class(%SYS.OAuth2.AccessToken).IsAuthorized(
..#OAUTH2APPNAME,
.sessionId,
"scope1",
.accessToken,
.idtoken,
.responseProperties,
.error)
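Once authorized, the access token can be attached to outgoing requests to the resource server. A minimal sketch (the server name and endpoint below are placeholders, and the exact AddAccessToken argument list may vary slightly between versions):
set httpRequest = ##class(%Net.HttpRequest).%New()
set httpRequest.Server = "resource.example.com"  // placeholder resource server
// attach the bearer token to the Authorization header of the request
set sc = ##class(%SYS.OAuth2.AccessToken).AddAccessToken(httpRequest, "header", "oauthclient", ..#OAUTH2APPNAME)
if $$$ISOK(sc) do httpRequest.Get("/some/protected/endpoint")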
If authorization hasn't happened yet, we prepare a link for requesting user identification and for obtaining permission to work with our application. Here we need to specify not only the name of the application registered on the OAuth server and in the OAuth client and the requested scope, but also the redirect URL - the point of the web application to which the user should be returned.
Parameter OAUTH2CLIENTREDIRECTURI = "https://52773b-76230063.labs.learning.intersystems.com/oauthclient/"
set url = ##class(%SYS.OAuth2.Authorization).GetAuthorizationCodeEndpoint(
..#OAUTH2APPNAME,
"scope1",
..#OAUTH2CLIENTREDIRECTURI,
.properties,
.isAuthorized,
.sc)
We use IRIS and register users on the IRIS OAuth server; it is enough, for example, to give the user just a name and a password. When the user follows the received link, the server performs the identification procedure, asks the user for permission to let the web application operate with the account data, and stores the result in the OAuth2.Server.Session global in the %SYS namespace:
Finally, we display the data of the authorized user. If the procedures were successful, we have, for example, an access token. Let's validate it:
set valid = ##class(%SYS.OAuth2.Validation).ValidateJWT(
..#OAUTH2APPNAME,
accessToken,
"scope1",
.aud,
.JWTJsonObject,
.securityParameters,
.sc
)
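If the token validates, JWTJsonObject holds the decoded JWT claims as a dynamic object, so individual claims can be read directly - a small sketch assuming the standard sub and iss claims are present:
if valid {
    write "subject = ", JWTJsonObject.%Get("sub"), !
    write "issuer = ", JWTJsonObject.%Get("iss"), !
}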
The full working code of the OAuth example:
Class OAuthClient.REST Extends %CSP.REST
{
Parameter OAUTH2APPNAME = "OAuthClient";
Parameter OAUTH2CLIENTREDIRECTURI = "https://52773b-76230063.labs.learning.intersystems.com/oauthclient/";
// to keep sessionId
Parameter UseSession As Integer = 1;
XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Method="GET" Url = "/" Call = "Do" />
</Routes>
}
ClassMethod Do() As %Status
{
// Check for accessToken
set isAuthorized = ##class(%SYS.OAuth2.AccessToken).IsAuthorized(
..#OAUTH2APPNAME,
.sessionId,
"scope1",
.accessToken,
.idtoken,
.responseProperties,
.error)
// to show accessToken
if isAuthorized {
set valid = ##class(%SYS.OAuth2.Validation).ValidateJWT(
..#OAUTH2APPNAME,
accessToken,
"scope1",
.aud,
.JWTJsonObject,
.securityParameters,
.sc
)
&html< Hello!<br> >
w "You access token = ", JWTJsonObject.%ToJSON()
&html< </html> >
quit $$$OK
}
// perform the process of user and client identification and get accessToken
set url = ##class(%SYS.OAuth2.Authorization).GetAuthorizationCodeEndpoint(
..#OAUTH2APPNAME,
"scope1",
..#OAUTH2CLIENTREDIRECTURI,
.properties,
.isAuthorized,
.sc)
if $$$ISERR(sc) {
w "error handling here"
quit $$$OK
}
// url magic correction: change slashes in the query parameter to its code
set urlBase = $PIECE(url, "?")
set urlQuery = $PIECE(url, "?", 2)
set urlQuery = $REPLACE(urlQuery, "/", "%2F")
set url = urlBase _ "?" _ urlQuery
&html<
<html>
<h1>Authorization in IRIS via OAuth2</h1>
<a href = "#(url)#">Authorization in <b>IRIS</b></a>
</html>
>
quit $$$OK
}
}
You can also find a working copy of the code on the InterSystems GitHub repository: https://github.com/intersystems-community/iris-oauth-example.
If necessary, enable the advanced debug message mode on the OAuth server and the OAuth client; the messages are written to the ISCLOG global in the %SYS namespace:
set ^%ISCLOG = 5
set ^%ISCLOG("Category", "OAuth2") = 5
set ^%ISCLOG("Category", "OAuth2Server") = 5
For more details, see the IRIS Using OAuth 2.0 and OpenID Connect documentation.
Conclusion
As you've seen, all OAuth features are easily accessible and completely ready to use. If necessary, you can replace the handler classes and user interfaces with your own. You can configure the OAuth server and client settings from configuration files instead of using the management portal. Then that wonderful Ian Fleming intro gets reduced down to "vodka martini, shaken not stirred".
Announcement
Anastasia Dyubaylo · Oct 30, 2019
Hi Community,
Please join the upcoming InterSystems Israel Meetup in Herzliya, which will be held on November 21st, 2019!
It will take place at Spaces Herzliya Oxygen Ltd from 9:00 a.m. to 5:30 p.m.
The event will be focused on InterSystems IRIS and will be divided into IRIS for Health and IRIS Data Platform tracks. A joint lunch will also be included.
Please check the draft of the agenda below:
09:00 – 13:00 Non-Healthcare Sessions:
API Management
Showcase: InterSystems IRIS Directions
Adopting InterSystems IRIS
REST at Ease
IRIS Containers for Developers
13:00 – 14:00 Joint lunch for both morning and afternoon groups
14:00 – 17:30 Healthcare Sessions:
API Management
Build HL7 Interfaces in a Flash
FHIR Update
Showcase: InterSystems IRIS Directions
Note: The final agenda will be published closer to the event.
So, remember:
⏱ Time: November 21st, 2019, from 9:30 a.m. to 5:30 p.m.
📍Venue: Spaces Herzliya Oxygen Ltd, 63 Medinat HaYehudim st., Herzliya, Israel
✅ Registration: Just send an email to ronnie.greenfield@intersystems.com*
We look forward to seeing you!
---
*Space is limited, so register today to secure your place. Admission is free, but registration is mandatory for attendees. Please check out the final agenda of the event:
Non-Healthcare Sessions:
📌 09:00 – 09:30 Gathering
📌 09:30 – 10:15 IRIS Data Platform Overview
📌 10:15 – 11:00 Adopting InterSystems IRIS
In this session, we will introduce the InterSystems IRIS Adoption Guide and describe the process of moving from Caché and/or Ensemble to InterSystems IRIS. We will also briefly touch on the conversion process for existing installations of Caché/Ensemble-based applications.
Takeaway: InterSystems helps customers as they adopt InterSystems IRIS.
📌 11:00 – 11:45 API Management
This session will introduce the concept of API management and outline the InterSystems IRIS features that enable you to manage, monitor, and govern your APIs with full confidence.
Takeaway: InterSystems IRIS includes comprehensive capabilities for API management.
📌 11:45 – 12:15 Resources and Services for InterSystems Developers. ObjectScript Package Manager Introduction
Takeaway: Attendees will learn about the Developer Community, Open Exchange, and other resources and services available for developers on InterSystems data platforms, and about the InterSystems Package Manager and how it can help in developing InterSystems IRIS solutions.
📌 12:45 – 13:00 REST at Ease
This session provides an overview of how to build REST APIs. Topics will include: using the %JSON adapter to expose and consume JSON data for REST endpoints, code-first and spec-first approaches for REST development, and a brief discussion of proper API management.
Takeaway: Attendees will learn how to efficiently build, document, and manage REST APIs.
Healthcare Sessions:
📌 13:00 – 14:00 Welcome and Lunch
📌 14:00 – 14:45 InterSystems IRIS for Health Overview
📌 14:45 – 15:00 Showcase: InterSystems IRIS Directions
This session provides additional information about the new and future directions for InterSystems IRIS and InterSystems IRIS for Health.
Takeaway: InterSystems IRIS and IRIS for Health have a compelling roadmap, with real meat behind it.
📌 15:00 – 15:45 API Management
This session will introduce the concept of API management and outline the InterSystems IRIS features that enable you to manage, monitor, and govern your APIs with full confidence.
Takeaway: InterSystems IRIS includes comprehensive capabilities for API management.
📌 15:45 – 16:15 Resources and Services for InterSystems Developers. ObjectScript Package Manager Introduction
Takeaway: Attendees will learn about the Developer Community, Open Exchange, and other resources and services available for developers on InterSystems data platforms, and about the InterSystems Package Manager and how it can help in developing InterSystems IRIS solutions.
📌 16:15 – 17:00 Build HL7 Interfaces in a Flash
This session introduces our new HL7 productivity toolkit. We will give an overview and demonstrate some key features, such as the Production Generator and Message Analyzer. We will also discuss how you can cost-effectively move from another interface engine to InterSystems technology.
Takeaway: You can build HL7 interfaces more efficiently with the new productivity toolkit in InterSystems IRIS for Health.
The agenda is full of interesting stuff. Join the InterSystems Israel Meetup in Herzliya! 👍🏼 I'll participate in the meetup with this session:
📌 11:45 – 12:15 Resources and Services for InterSystems Developers. ObjectScript Package Manager Introduction
Takeaway: Attendees will learn about the Developer Community, Open Exchange, and other resources and services available for developers on InterSystems data platforms, and about the InterSystems Package Manager and how it can help in developing InterSystems IRIS solutions.
Come join InterSystems Developers Meetup in Israel!
Announcement
Anastasia Dyubaylo · Nov 1, 2019
Hi Everyone,
Please welcome the new Global Summit 2019 video on InterSystems Developers YouTube Channel:
⏯ InterSystems IRIS and Intel Optane Memory
Optane is a new class of memory from Intel that can accelerate the performance of hard drives. In this video, we will review the performance benefits and show high-level cost comparisons of using Intel Optane memory with InterSystems IRIS. We will also outline various use cases for Optane memory and storage.
Takeaway: Attendees will learn the benefits of using Intel's Optane technology with InterSystems IRIS.
Presenter: @Mark.Bolinsky, Senior Technology Architect, InterSystems
And...
Don't forget to subscribe to our InterSystems Developers YouTube Channel.
Enjoy and stay tuned!
Article
Eduard Lebedyuk · Oct 21, 2019
InterSystems API Management (IAM), a new feature of the InterSystems IRIS Data Platform, enables you to monitor, control and govern traffic to and from web-based APIs within your IT infrastructure. In case you missed it, here is the link to the announcement. And here's an article explaining how to start working with IAM.
In this article, we will use InterSystems API Management to load balance an API.
In our case, we have two InterSystems IRIS instances with the /api/atelier REST API that we want to publish to our clients.
There are many different reasons why we might want to do that, such as:
Load balancing to spread the workload across servers
Blue-green deployment: we have two servers, one "prod", the other "dev", and we might want to switch between them
Canary deployment: we might publish the new version only on one server and move 1% of clients there
High availability configuration
etc.
Still, the steps we need to take are quite similar.
Prerequisites
2 InterSystems IRIS instances
InterSystems API Management instance
Let's go
Here's what we need to do:
1. Create an upstream.
An upstream represents a virtual hostname and can be used to load balance incoming requests over multiple services (targets). For example, an upstream named service.v1.xyz would receive requests for a Service whose host is service.v1.xyz. Requests for this Service would be proxied to the targets defined within the upstream.
An upstream also includes a health checker, which can enable and disable targets based on their ability or inability to serve requests.
To start:
Open IAM Administration Portal
Go to Workspaces
Choose your workspace
Open Upstreams
Click on "New Upstream" button
After clicking the "New Upstream" button you would see a form where you can enter some basic information about the upstream (there are a lot more properties):
Enter name - it's a virtual hostname our services would use. It's unrelated to DNS records. I recommend setting it to a non-existing value to avoid confusion. If you want to read about the rest of the properties, check the documentation. On the screenshot, you can see how I imaginatively named the new upstream as myupstream.
2. Create targets.
Targets are backend servers that execute the requests and send results back to the client. Go to Upstreams and click on the upstream name you just created (and NOT on the update button):
You will see all the existing targets (none so far) and the "New Target" button. Press it:
And in the new form define a target. Only two parameters are available:
target - host and port of the backend server
weight - relative priority given to this server (more weight - more requests are sent to this target)
I have added two targets.
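The same two steps can also be scripted against IAM's Admin API (IAM is built on Kong). Here's a rough ObjectScript sketch; it assumes the Admin API listens on the default port 8001, and the target address iris1:52773 is a placeholder for one of your backends:
// create the upstream, equivalent to the UI steps above
set req = ##class(%Net.HttpRequest).%New()
set req.Server = "localhost", req.Port = 8001
set req.ContentType = "application/json"
do req.EntityBody.Write({"name":"myupstream"}.%ToJSON())
do req.Post("/upstreams")
write "upstream: HTTP ", req.HttpResponse.StatusCode, !
// register one backend target with its weight
set req2 = ##class(%Net.HttpRequest).%New()
set req2.Server = "localhost", req2.Port = 8001
set req2.ContentType = "application/json"
do req2.EntityBody.Write({"target":"iris1:52773","weight":100}.%ToJSON())
do req2.Post("/upstreams/myupstream/targets")
write "target: HTTP ", req2.HttpResponse.StatusCode, !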
3. Create a service
Now that we have our upstream, we need to send requests to it. We use a Service for that. Service entities, as the name implies, are abstractions of each of your upstream services. Examples of Services would be a data transformation microservice, a billing API, etc.
Let's create a service targeting our IRIS instances: go to Services and press the "New Service" button:
Set the following values:
field | value | description
name | myservice | the logical name of this service
host | myupstream | the upstream name
path | /api/atelier | the root path we want to serve
protocol | http | the protocols we want to support
Keep the default values for everything else (including port: 80).
After creating the service, you'll see it in the list of services. Copy the service ID somewhere; we're going to need it later.
4. Create a route
Routes define rules to match client requests. Each Route is associated with a Service, and a Service may have multiple Routes associated with it. Every request matching a given Route will be proxied to its associated Service.
The combination of Routes and Services (and the separation of concerns between them) offers a powerful routing mechanism with which it is possible to define fine-grained entry points in IAM leading to different upstream services of your infrastructure.
Now let's create a route. Go to Routes and press the "New Route" button.
Set the values in the Route creation form:
field | value | description
path | /api/atelier | the root path we want to serve
protocol | http | the protocols we want to support
service.id | guid from step 3 | the service ID value (the guid from the previous step)
And we're done!
Send a request to http://localhost:8000/api/atelier/ (note the slash at the end) and it will be served by one of our two backends.
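To watch the load balancing from ObjectScript, you can fire a burst of requests at the proxied endpoint - a sketch (port 8000 is IAM's default proxy port used above; authentication headers are omitted):
set req = ##class(%Net.HttpRequest).%New()
set req.Server = "localhost", req.Port = 8000
// issue ten requests; IAM spreads them across the two targets
for i=1:1:10 {
    do req.Get("/api/atelier/")
    write i, ": HTTP ", req.HttpResponse.StatusCode, !
}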
Conclusion
IAM offers a highly customizable API Management infrastructure, allowing developers and administrators to take control of their APIs.
Links
Documentation
IAM Announcement
Working with IAM article
Question
What functionality do you want to see configured with IAM?
I have a question regarding productionized deployments. Can the internal IRIS web server be used, i.e. port 52773? Or should there still be a web gateway between IAM and the IRIS instance?
Regarding Kubernetes: I would think that IAM should be the ingress, is that correct?
Hi Stefan,
The short answer is you still need a web-gateway between IAM and IRIS.
The private web server (port 52773) is a minimal build of the Apache web server, supplied for the purpose of running the Management Portal, not for production-level traffic.
I would think that IAM should be the ingress, is that correct?
Agreed. Calling @Luca.Ravazzolo.
Announcement
Jeff Fried · Nov 4, 2019
The 2019.3 versions of InterSystems IRIS, InterSystems IRIS for Health, and InterSystems IRIS Studio are now Generally Available!
These releases are available from the WRC Software Distribution site, with build number 2019.3.0.311.0.
InterSystems IRIS Data Platform 2019.3 has many new capabilities including:
Support for InterSystems API Manager (IAM)
Polyglot Extension (PeX) available for Java
Java and .NET Gateway Reentrancy
Node-level Architecture for Sharding and SQL Support
SQL and Performance Enhancements
Infrastructure and Cloud Deployment Improvements
Port Authority for Monitoring Port Usage in Interoperability Productions
X12 Element Validation in Interoperability Productions
These are detailed in the documentation:
InterSystems IRIS 2019.3 documentation and release notes
InterSystems IRIS for Health 2019.3 includes all of the enhancements of InterSystems IRIS. In addition, this release includes FHIR searching with chained parameters (including reverse chaining) and minor updates to FHIR and other health care protocols.
FHIR STU3 PATCH Support
New IHE Profiles XCA-I and IUA
These are detailed in the documentation:
InterSystems IRIS for Health 2019.3 documentation and release notes
InterSystems IRIS Studio 2019.3 is a standalone development image supported on Microsoft Windows. It works with InterSystems IRIS and InterSystems IRIS for Health version 2019.3 and below, as well as with Caché and Ensemble.
See the InterSystems IRIS Studio Documentation for details
2019.3 is a CD release, so InterSystems IRIS and InterSystems IRIS for Health 2019.3 are only available in OCI (Open Container Initiative), a.k.a. Docker container, format. The platforms on which this is supported for production and development are detailed in the Supported Platforms document.
Having gone through the pain of installing Docker for Windows, then installing the InterSystems IRIS for Health 2019.3 image, and having got hold of a copy of the 2019.3 Studio, I was pleased when I saw this announcement and excitedly went looking for my 2019.3......exe, only to find out there is none - just a small note at the end of the announcement saying that 2019.3 InterSystems IRIS and InterSystems IRIS for Health will only be released in CD form.
Yours
Nigel Salm
Nigel, just want to be sure that you read CD as Containers Deployment - so it will be available on every delivery site (WRC, download, AWS, GCP, Azure, DockerHub), but in container form.
InterSystems Docker Images: https://wrc.intersystems.com/wrc/coDistContainers.csp
Announcement
David Reche · Nov 13, 2018
Hi Everyone!
We are pleased to invite you to the InterSystems Iberia Summit 2018 on the 27th of November in Madrid, Spain!
The New Challenges of Connected Health Matter
Date: November 27, 2018
Place: Hotel Meliá Serrano, Madrid
Please check the original agenda of the event.
REGISTER NOW and hope to see you at the Iberia Healthcare Summit 2018 in Madrid!
Announcement
Anastasia Dyubaylo · Nov 26, 2018
Hi Community!
We're pleased to welcome @Sean.Connelly as a new Moderator on the Developer Community Team! Let's greet Sean with big applause and take a closer look at his bio!
Sean about his experience:
— I help healthcare organisations solve complex integration problems using products such as Ensemble, HealthShare and Mirth.
With 20 years of experience, Sean has worked with over 20 NHS Trusts, Scottish Boards, NHS Digital, NHS Scotland and Primary Care system providers. This has included many large-scale integration solutions such as replacing or implementing brand new integration engines, PAS replacements, OrderComms, and electronic document handling.
Some words from Sean:
— I'm also an InterSystems product specialist with deep knowledge of Caché, Ensemble, HealthShare and IRIS. I'm a moderator and active contributor on the InterSystems official community site, where you can find many examples of my technical writing. I also actively write open-source frameworks and tools for these products and regularly use them to accelerate development services. [CHECK OUT SEAN'S DC PROFILE]
— I also specialize in web application development. I've written dozens of SPA applications over the years, including large-scale solutions for single-record patient portals, document management, read code submissions, a dental claim system across Scotland, and the modernization of a Trust's legacy green-screen PAS system.
Some facts about Sean's business:
— I currently run my own successful consultancy business called MemCog Ltd, which has been going for over 5 years. Some of my customers include the Manchester University Trust, where I have helped implement large-scale OrderComms solutions, merged hospitals and systems, and developed an electronic document solution that has delivered millions of electronic letters between the Trust and GP practices every year.
Welcome aboard, and thanks for your great contribution, Sean!
Announcement
Benjamin De Boe · Jan 8, 2019
Hi,
As we announced at our Global Summit in October, we are developing dedicated connectors for a number of third-party data visualization tools for InterSystems IRIS. With these connectors, we want to combine an excellent user experience with optimal performance when using those tools to visualize data managed on the InterSystems IRIS Data Platform. If you are already using Tableau or Power BI products to access our data platform through their respective generic ODBC connectors today, we're interested in learning more about your experiences thus far and would be very grateful if you could spend a few minutes on our survey:
Survey for Tableau users
Survey for Microsoft Power BI users
Thanks,
Benjamin
@Benjamin.DeBoe Have you made any progress on this?
We just started playing around with using Web Data Connectors in Tableau to call APIs into Caché.
Best,
Mike
@Mike.Davidovich I am working with Benjamin on an alpha version of the Tableau connector. I'm interested in your experience with using an ODBC or JDBC connection to Tableau. Have you used that, and what else would you like to see in a Tableau connector?
Feel free to post here or email directly -- carmen.logue@intersystems.com
@Carmen.Logue Thanks, Carmen! I'm not sure I can add too much to your alpha development. I personally haven't been using ODBC to connect into Caché. What I do know is that some other groups have used ODBC to connect into Caché with SQL projections.
My team wants to avoid projections specifically (at this point at least) because we tend to only use them to get data from Caché to our data warehouse (Oracle). Other than that, we still traverse our database via globals and good old MUMPS programming. The project I'm working on is taking the many, many routines we have that traverse globals for reporting and transforming the data to JSON streams. A %CSP.REST API will call those routines and provide the data to a Tableau web data connector, so we can get instant, live data without projecting our whole database.
I'm just getting started with Tableau and Caché, so I may have some more input in the future.
Best,
Mike
Announcement
Anastasia Dyubaylo · Apr 17, 2019
Hi Community!
Please welcome a new video on the InterSystems Developers YouTube Channel: Implementing vSAN for InterSystems IRIS.
Specific examples of using VMware and vSAN will illustrate practical advice for deploying InterSystems IRIS, whether on premises or in the cloud.
Takeaway: I know how to deploy InterSystems IRIS using VMware and vSAN.
Presenter: @Murray.Oldfield
And...
Additional materials for the video can be found in this InterSystems Online Learning Course.
Don't forget to subscribe to our InterSystems Developers YouTube Channel.
Enjoy and stay tuned!
Article
Sergey Kamenev · May 23, 2019
PHP, from the beginning of its time, has been renowned (and criticized) for supporting integration with a lot of libraries, as well as with almost all the databases on the market. However, for some mysterious reason, it did not support hierarchical databases based on globals.
Globals are structures for storing hierarchical information. They are somewhat similar to a key-value database, with the only difference being that the key can be multi-level:
Set ^inn("1234567890", "city") = "Moscow"
Set ^inn("1234567890", "city", "street") = "Req Square"
Set ^inn("1234567890", "city", "street", "house") = 1
Set ^inn("1234567890", "year") = 1970
Set ^inn("1234567890", "name", "first") = "Vladimir"
Set ^inn("1234567890", "name", "last") = "Ivanov"
In this example, multi-level information is saved in the global ^inn using the built-in ObjectScript language. The global ^inn is stored on the hard drive (indicated by the “^” sign at the beginning of the name).
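For comparison, here is how these values can be read back in ObjectScript itself - a small sketch that walks the second-level subscripts of ^inn with $Order:
set key = ""
for {
    set key = $Order(^inn("1234567890", key))
    quit:key=""
    // $Get returns "" for nodes that have children but no value of their own
    write key, " = ", $Get(^inn("1234567890", key)), !
}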
In order to work with globals from PHP, we will need new functions that will be added by the PHP module, which will be discussed below.
Globals support many operations for working with hierarchies: tree traversal on a fixed level and in depth; deleting, copying, and pasting entire trees and individual nodes; and also ACID transactions, as in any quality database (a transaction sketch follows the list below). All this happens extremely quickly (roughly 10^5-10^6 inserts per second on a regular PC) for two reasons:
Globals are a lower level abstraction when compared to SQL,
Databases built on globals have been in production for decades, and during this time they have been polished and their code thoroughly optimized.
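As for the transactions mentioned above, in ObjectScript they take just a few commands; a minimal sketch (someProblem stands in for your own validation logic):
TSTART
set ^inn("1234567890", "year") = 1971
if someProblem {
    TROLLBACK  // undo every change made since TSTART
} else {
    TCOMMIT  // make the changes permanent
}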
Learn more about globals in the series of articles titled "Globals Are Magic Swords For Managing Data.":
Part 1. Trees. Part 2. Sparse arrays. Part 3.
In this world, globals are primarily used in storage systems for unstructured and sparse information, such as medical, personal, and banking data.
I love PHP (and I use it in my development work), and I wanted to play around with globals. There was no PHP module for IRIS and Caché. I contacted InterSystems and asked them to create it. InterSystems sponsored the development as part of an educational grant and my graduate student and I created the module.
Generally speaking, InterSystems IRIS is a multi-model DBMS, and that's why from PHP you can work with it via ODBC using SQL, but I was interested in globals, and there was no such connector.
So, the module is available for PHP 7.x (it was tested on 7.0-7.2). Currently it can only work with InterSystems IRIS or Caché installed on the same host.
Module page on OpenExchange (a directory of projects and add-ons for developers at InterSystems IRIS and Caché).
There is a useful DISCUSS section where people share their related experiences.
Download here:
https://github.com/intersystems-community/php_ext_iris
Download the repository from the command line:
git clone https://github.com/intersystems-community/php_ext_iris
Installation instructions for the module in English and Russian.
Module Functions:
PHP function
Description
Working with data
iris_set($node, value)
Setting a node value.
iris_set($global, $subscript1, ..., $subscriptN, $value); iris_set($global, $value);
Returns: true or false (in the case of an error). All parameters of this function are strings or numbers. The first one is the name of the global, then there are the indexes, and the last parameter is the value.
iris_set('^time',1);
iris_set('^time', 'tree', 1, 1, 'value');
ObjectScript equivalent:
Set ^time = 1
Set ^time("tree", 1, 1) = "value"
iris_set($arrayGlobal, $value);
There are just two parameters: the first one is the array in which the name of the global and all its indexes are stored, and the second one is the value.
$node = ['^time', 'tree', 1, 1];
iris_set($node,'value');
iris_get($node)
Getting a node value.
Returns: a value (a number or a string), NULL (the value is not defined), or FALSE (in the event of an error).
iris_get($global, $subscript1, ..., $subscriptN); iris_get($global);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. The global may not have subscripts.
$res = iris_get('^time');
$res1 = iris_get('^time', 'tree', 1, 1);
iris_get($arrayGlobal);
The only parameter is the array in which the name of the global and all its subscripts are stored.
$node = ['^time', 'tree', 1, 1];
$res = iris_get($node);
iris_zkill($node)
Deleting a node value.
Returns: TRUE or FALSE (in the event of an error).
It is important to note that this function only deletes the value in the node and does not affect lower branches.
iris_zkill($global, $subscript1, ..., $subscriptN); iris_zkill($global);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. The global may not have subscripts.
$res = iris_zkill('^time'); // Lower branches are not deleted.
$res1 = iris_zkill('^time', 'tree', 1, 1);
iris_zkill($arrayGlobal);
The only parameter is the array in which the name of the global and all its subscripts are stored.
$a = ['^time', 'tree', 1, 1];
$res = iris_zkill($a);
iris_kill($node)
Deleting a node and all descendant branches.
Returns: TRUE or FALSE (in the case of an error).
iris_kill($global, $subscript1, ..., $subscriptN); iris_kill($global);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. The global may not have subscripts, in which case it is deleted in full.
$res1 = iris_kill('^example', 'subscript1', 'subscript2');
$res = iris_kill('^time'); // The global is deleted in full.
iris_kill($arrayGlobal);
The only parameter is the array in which the name of the global and all its subscripts are stored.
$a = ['^time', 'tree', 1, 1];
$res = iris_kill($a);
iris_order($node)
Traverses the branches of the global on a given level.
Returns: an array containing the full name of the next node of the global on the same level, NULL (no more nodes on this level), or FALSE (in the case of an error).
iris_order($global, $subscript1, ..., $subscriptN);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. Form of usage in PHP and ObjectScript equivalent:
iris_order('^ccc','new2','res2'); // $Order(^ccc("new2", "res2"))
iris_order($arrayGlobal);
The only parameter is the array in which the name of the global and the subscripts of the initial node are stored.
$node = ['^inn', '1234567890', 'city'];
for (; $node !== NULL; $node = iris_order($node))
{
echo join(', ', $node).'='.iris_get($node)."\n";
}
Returns:
^inn, 1234567890, city=Moscow
^inn, 1234567890, year=1970
iris_order_rev($node)
Traverses the branches of the global on a given level in reverse order.
Returns: an array containing the full name of the previous node of the global on the same level, NULL (no more nodes on this level), or FALSE (in the case of an error).
iris_order_rev($global, $subscript1, ..., $subscriptN);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. Form of usage in PHP and ObjectScript equivalent:
iris_order_rev('^ccc','new2','res2'); // $Order(^ccc("new2", "res2"), -1)
iris_order_rev($arrayGlobal);
The only parameter is the array in which the name of the global and the subscripts of the initial node are stored.
$node = ['^inn', '1234567890', 'name', 'last'];
for (; $node !== NULL; $node = iris_order_rev($node))
{
echo join(', ', $node).'='.iris_get($node)."\n";
}
Returns:
^inn, 1234567890, name, last=Ivanov
^inn, 1234567890, name, first=Vladimir
iris_query($node)
Traverses the global in depth.
Returns: an array containing the full name of the child node (if available) or of the next node of the global (if there is no embedded node), NULL (the end of the global has been reached), or FALSE (in the case of an error).
iris_query($global, $subscript1, ..., $subscriptN);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. Form of usage in PHP and ObjectScript equivalent:
iris_query('^ccc', 'new2', 'res2'); // $Query(^ccc("new2", "res2"))
iris_query($arrayGlobal);
The only parameter is the array in which the name of the global and the subscripts of the initial node are stored.
$node = ['^inn', 'city'];
for (; $node !== NULL; $node = iris_query($node))
{
echo join(', ', $node).'='.iris_get($node)."\n";
}
Returns:
^inn, 1234567890, city=Moscow
^inn, 1234567890, city, street=Req Square
^inn, 1234567890, city, street, house=1
^inn, 1234567890, name, first=Vladimir
^inn, 1234567890, name, last=Ivanov
^inn, 1234567890, year=1970
The order differs from the order in which we established it because everything is automatically sorted in ascending order in the global during insertion.
Service functions
iris_set_dir($FullPath)
Setting up a directory with a database
Returns: TRUE or FALSE (in the case of an error).
iris_set_dir('/InterSystems/Cache/mgr');
This must be performed before connecting to the database.
iris_exec($CmdLine)
Execute database command
Returns: TRUE or FALSE (in the case of an error).
iris_exec('kill ^global(6)'); // The ObjectScript command for deleting a global
iris_connect($login, $pass)
Connect to database
iris_quit()
Close connection with DB
iris_errno()
Get error code
iris_error()
Get text description of error
If you want to play around with the module, check out, for example, the docker container implementation:
git clone https://github.com/intersystems-community/php_ext_iris
cd php_ext_iris/iris
docker-compose build
docker-compose up -d
Test the demo page at localhost:52080 in the browser. PHP files that can be edited and played with are in the php/demo folder, which is mounted inside the container.
To test IRIS use the admin login with the SYS password.
To get into the IRIS settings, use the following URL:http://localhost:52773/csp/sys/UtilHome.csp
To get into the IRIS console of this container, use the following command:
docker exec -it iris_iris_1 iris session IRIS
Especially for the Developer Community and those who want to try it, we have set up a virtual machine with the Caché PHP module.
Demo page in English. Demo page in Russian. Login: habr_test Password: burmur#@8765
To install the module for InterSystems Caché yourself:
You'll need Linux. I tested on Ubuntu; the module should also compile and work under Windows, but I haven't tested that.
Download the free version:
InterSystems Caché (registration required). On Linux, Red Hat and SUSE are supported out of the box, but you can also install it on other distributions.
Install the cach.so module in PHP according to the instructions.
Just out of interest, I ran two primitive tests to check the speed of inserting new values into the database in the docker container on my PC (AMD FX-9370 @ 4700 MHz, 32 GB RAM, LVM, SATA SSD).
Insertion of 1 million new nodes into the global took 1.81 seconds or 552K inserts per second.
Updating a value in the same global 1,000,000 times took 1.98 seconds or 505K updates per second. An interesting fact is that the insertion occurs faster than the update. Apparently this is a consequence of the initial optimization of the database aimed at quick insertion.
Obviously, these tests cannot be considered 100% accurate or useful, since they are primitive and are done in the container. On more powerful hardware with a disk system on a PCIe SSD, tens of millions of inserts per second can be achieved.
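For reference, the same kind of measurement can be done natively in ObjectScript; a rough sketch of the insert test (the global name ^bench is arbitrary, and numbers will differ from machine to machine):
set start = $ZHOROLOG
for i=1:1:1000000 {
    set ^bench("insert", i) = i
}
write "inserts/sec: ", $NUMBER(1000000 / ($ZHOROLOG - start), 0), !
kill ^bench("insert")  // clean up the test data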
What could still be done, and the current state
Useful functions for working with transactions could be added (for now, you can use transactions via iris_exec).
A function returning the whole structure of a global is not implemented (it would save traversing the global from PHP).
A function saving a PHP array as a subtree is not implemented.
Access to local database variables is not implemented, except via iris_exec, although it would be better via iris_set.
Global traversal in depth in the opposite direction is not implemented.
Access to the database via an object with methods (similar to the current functions) is not implemented.
The current module is not quite ready for production: it has not been tested under high loads or for memory leaks. However, should someone need it, please feel free to contact me at any time (Sergey Kamenev, sukamenev@gmail.com).
Bottom line
For a long time, the worlds of PHP and hierarchical databases on globals practically did not overlap, although globals provide strong and fast functionality for specific data types (medical, personal).
I hope that this module will motivate PHP programmers to experiment with globals, and ObjectScript programmers to develop simple web interfaces in PHP.
P.S. Thank you for your time!
Nice! Just tried this with the docker container on my local machine. And got 1 million insertions (1,000,000) in 1.45 sec on my Mac Pro. Cool!
Announcement
Anastasia Dyubaylo · Aug 26, 2019
Hi Developers!
InterSystems Developers Community today unites more than 7,000 developers from all over the world. Since 2016, our community has been growing and improving for you, our dear developers!
Together we've done a lot over these years, and much more is planned for the future!
So, who makes our community better every day? Who tries for all of us and improves the space for developers?
Let's warmly greet our team:
@Evgeny.Shvarov – founder of InterSystems Developers community, Startups and Community manager at InterSystems.
@David.Reche – founder & manager of Spanish Developers Community & Senior Sales Engineer at InterSystems.
@Anastasia.Dyubaylo – our Community & Social Media Manager at InterSystems. She leads Global Masters Advocacy Hub and all InterSystems Developers social networks. Anastasia also reviews many of InterSystems' events on the community and on social media.
@Olga.Zavrazhnova2637 – our Global Masters Advocacy Hub Manager at InterSystems. She has been managing Global Masters since its launch in 2016. Now Olga creates engagement campaigns and explores new reward ideas for Global Masters.
@Julia.Fedoseeva – Educational and Logistics Manager at InterSystems and also our Global Masters Advocacy Hub Manager. She organizes the delivery of GM rewards around the world.
Gamification and community management - that's what these guys do. They're supporting you on your way with the InterSystems Global Masters Advocacy Hub!
And...
Of course, our remarkable Moderators in Developers Community team.
Please welcome:
@Eduard.Lebedyuk – Sales Engineer at InterSystems in Moscow, Russia.
@Dmitry.Maslennikov – Developer Advocate, co-founder of CaretDev corp.
@Sean.Connelly – Managing Director / Software Engineer at MemCog LTD.
@John.Murray – Senior Product Engineer at George James Software.
@Scott.Roth – Senior Applications Development Analyst at the Ohio State University Wexner Medical Center.
@Jeffrey.Drumm – Vice President and Chief Operating Officer at Healthcare Integration Consulting Group (HICG).
@Henrique – System Management Specialist, Database Administrator at Sao Paulo Federal Court.
And our Moderators of the Spanish Community Team:
@Francisco.López1549 – Project Manager & Head of Interoperability at Salutic Soluciones, S.L.
@Nancy.Martinez – Solution Consultant at Ready Computing.
So!
Now you know all InterSystems Developer Community heroes!
Stay tuned with us! 🚀
Awesome!!! I've always got superb guidance and direction for all my questions. Thank you, guys!! Nice to put faces to the people :)
Great job, guys!!
Thanks, @Eric.David!
Thank you! Happy to work with such people! Great team, perfect Community!
A nice overview -- I like those "cheat sheets"
Applause for you all! Keep up the good work, all! Like the community very much!
Thanks to all of you, guys, for your effort and help!! Great job!! (And remember, with great power comes great responsibility)
Thanks, Udo!
Another version of KYC - Know Your Community )
Thanks, Marco! See you at GS2019!
Discussion
Nikita Savchenko · Dec 12, 2019
Hello, InterSystems community!
Lately, you have probably heard of the new InterSystems Package Manager - ZPM. If you're familiar with it or with such package managers as NPM, Dep, pip/PyPI, etc., or just know what it is all about -- this question is for you! The question I want to raise is actually a system design question or, in other words, "how should ZPM implement it".
In short, ZPM (the new package manager) allows you to install packages/software into your InterSystems product in a very convenient, manageable way. Just open up the terminal, run the ZPM routine, and type install samples-objectscript: you will have the new package/software installed and ready to use! In the same way, you can easily delete and update packages.
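For instance, in an IRIS terminal session the whole installation is just this (a sketch; the package's install output is omitted):
USER>zpm
zpm: USER>install samples-objectscript
zpm: USER>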
From the developer's point of view, much as in other package managers, ZPM requires the package/software to have a package description, represented as a module.xml file. Here's an example of it. This file has a description of what to install, which CSP applications to create, which routines to run once installed, and so on.
Now, straight to the point. You've also probably heard of InterSystems WebTerminal - one of my projects, which is quite widely used (over 500 installs over the last couple of months). We are trying to bring WebTerminal to ZPM.
So far, anyone could install WebTerminal just by importing an XML file with its code - no further actions were needed. During class compilation, WebTerminal runs the projection and does all the required setup on its own (web application, globals, etc. - see here). In addition to this, WebTerminal has its own self-update mechanism, which allows it to update itself when a new version comes out - again implemented with projections. Apart from that, I have two more projects (ClassExplorer, Visual Editor) that use the same convenient import-and-install mechanism.
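For context, a projection is a language-level hook that runs class methods when a class is compiled or removed - this is what makes import-and-install possible. A minimal sketch of the pattern (the class below is illustrative, not WebTerminal's actual Installer.cls, and the exact callback signatures can vary between product versions):
Class MyApp.InstallerSketch Extends %Projection.AbstractProjection
{

ClassMethod CreateProjection(cls As %String, ByRef params) As %Status
{
    // called right after the host class compiles, e.g. upon XML import:
    // a good place to create a web application, initialize globals, etc.
    write "Installing...", !
    quit $$$OK
}

ClassMethod RemoveProjection(cls As %String, ByRef params, recompile As %Boolean) As %Status
{
    // called when the host class is removed, i.e. on uninstall
    write "Uninstalling...", !
    quit $$$OK
}

}
A class that declares Projection Reference As MyApp.InstallerSketch; then triggers these callbacks during its own compilation - which is how importing a plain XML of classes can perform a full installation.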
But it was decided that ZPM won't accept projections as a paradigm and that everything should be described in the module.xml file. Hence, to publish WebTerminal for ZPM, the team tried to remove the Installer.cls class (the WebTerminal class which did all the install-update magic with the use of projections) and manually replaced it with module.xml metadata. This turned out to be quite enough for WebTerminal to work, but staying 100% compatible with ZPM this way potentially leads to unexpected incompatibilities (see below). Thus, source code changes are needed.
So the question is: should ZPM really reject all projection-enabled classes in its packages? The decision to avoid projections might be changed via the open discussion here. It's not a question of whether I can rewrite WebTerminal's code, but rather of why not just accept the original software code even if it uses projections.
My opinion is quite strongly against avoiding projection-enabled classes in ZPM modules, for multiple reasons. First of all, because projections are part of how the programming language works, and I see no constructive reasoning against using them for whatever the software/package is designed for. Avoiding them and cutting the Installer.cls class from the release is exactly the same as patching a working module. I agree that packages which ship specifically for ZPM should try to use all the installation features which module.xml provides; however, WebTerminal is also shipped outside of ZPM, and maintaining 2 versions of WebTerminal (at the very least because of the self-update feature) makes me think that something is wrong here.
I see the following pros of keeping all projection-enabled classes in ZPM:
The package/software will still be compatible with both ZPM and a regular installation done for years (via XML/classes import)
No original package/software source code changes needed to bring it to ZPM
All designed functions work as expected and don't cause problems (for instance, WebTerminal's self-update: upon update, it loads the XML file with the new version and imports it, including the projection-enabled Installer.cls file anyway)
Cons of keeping all projection-enabled classes in ZPM:
Side effects performed during installation/uninstallation by projection-enabled classes won't be statically described in the module.xml file, hence they are "less auditable". There is an opinion that any side effect must be described in the module.xml file.
Please indicate any other pros/cons if this isn't the full list. What do you think? Thank you!

Exactly - not for installing purposes, you're right, I agree. But what do you think about the WebTerminal case in particular?
1. It's already developed and bound to projections: installation, its own update mechanism, etc.
2. It's also shipped outside of ZPM.
3. It would work as usual if only ZPM supported projections.
I see you're pointing to "It might need to support Projections eventually because as you said it's a part of language" - that's mostly what my point is about. Why not just allow them? Thanks!

Exactly - I completely agree about simplicity, transparency, and an installation standard. But see my reply to Sergey's answer: what to do with WebTerminal in particular?
1. Why would I need to rewrite the update mechanism I developed years ago (for example)?
2. Why would I need to maintain 2 code bases for ZPM & regular installations (or automate it in some rather crazy way, or just drop the self-update feature when ZPM is detected)?
3. Why are all these changes to the source code needed, after all, if it "just works" normally without ZPM complications (which is how ObjectScript works)?
I think this leads to either a "make a package ZPM-compatible" or a "make ZPM ObjectScript-compatible" discussion, doesn't it?

The answer to all this could be "To make the world a better place" ).
Because if you do all 3 you get:
the same wonderful WebTerminal, but with a simple, transparent, and standard installation mechanism, and yet another channel for distribution, since ZPM seems to be a very handy and popular way to install/try the stuff.
Maybe yet another channel of clear and handy app distribution is argument enough to change something in the application too?
True points. For sure, developers can customize it. I can do another version of WebTerminal specifically for ZPM, but it will involve additional coding and support:
1. A need to change how the self-update mechanism works, or to shut it down completely. Right now, the user gets a message in the UI suggesting an update of WebTerminal to the latest version. There is quite a lot happening under the hood.
2. Thus, a need to create an additional pipeline (or split the codebase) for 2 WebTerminal versions: ZPM's one and the regular one, with all the tests and so on.
I am wondering whether it is worth doing from WebTerminal's perspective, or whether it is better to make WebTerminal a kind of exception for ZPM. Because, still, inserting a couple of if (isZPMInstalled) { ... } else { ... } conditions into WebTerminal (even on the front-end side) looks like an anti-pattern to me. Thanks!

Considering the points others mention, I agree that projections should not be the way to install things but rather the acceptable exception, as for WebTerminal and other complex packages. Another option, rather than having two versions of the whole codebase, could be a wrapper module around webterminal (i.e., another module that depends on webterminal) with hooks in webterminal that allow that wrapper to turn off projection-based installation features.

I completely agree, and to get to
standard installing mechanism
for USERS, we need to zpm-enable as many existing projects as possible. To enable these projects we need to simplify zpm-enabling, leveraging existing code if possible (or at least not preventing developers from leveraging the existing code). I think allowing developers to use already existing installers (whatever form they may take) would help with this goal.

This is very wise, thanks Ed!
For zpm-enabling we plan to add a boilerplate module.xml generator for the repo - stay tuned!

Hi Nikita,
> A need to change how the self-update mechanism works or shut it down completely.

If a package is distributed via a package manager, its self-update should be completely removed. It should be the responsibility of the package manager to alert the user that a new version of the package is available and to install it.
> Thus, create an additional pipeline (or split the codebase) for 2 WebTerminal versions: ZPM's one and a regular one with all the tests and so on.

Some package managers allow patches to be applied to software before packaging it, but I don't think that's the case for ZPM at the moment. I believe you will need to do a separate build for the ZPM/non-ZPM versions of your software. You can either apply some patches during the build, or refactor the software so that it can run without the auto-updater when that isn't installed.

Hi Nikita!
Do you want the ZPM exception for WebTerminal only or for all your InterSystems solutions? )

The whole purpose of a package manager is to get rid of individual installer/updater scripts written by individual developers and replace them with a package management utility, so that you have a standard way of installing, removing, and updating your packages. So I don't quite understand why this question is raised in this context -- of course a package manager shouldn't support custom installers and updaters. It might need to support Projections eventually because, as you said, they're a part of the language, but definitely not for installing purposes.

I completely support the inclusion of projections.
ObjectScript Language allows execution of arbitrary code at compile time through three different mechanisms:
Projections
Code generators
Macros
All these instruments are entirely unlimited in their scope, so I don't see why we should prohibit one way of executing code at compilation.
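As a tiny illustration of the second mechanism, a method code generator runs arbitrary code at compile time to produce the method body - a sketch with illustrative names:

Class Demo.Generated Extends %RegisteredObject
{

/// The body below executes at COMPILE time and writes the runtime method body.
ClassMethod CompiledAt() As %String [ CodeMode = objectgenerator ]
{
    // Bake the compilation timestamp into the generated method.
    Do %code.WriteLine(" Quit """_$ZDATETIME($HOROLOG, 3)_"""")
    Quit $$$OK
}

}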
Furthermore, ZPM itself uses Projections to install itself, so closing this avenue to other projects seems strange.

Hi Nikita!
Thanks for the good question!
The answer to why module.xml vs Installer.cls on projections is quite obvious IMHO.
Compare a module.xml and an Installer.cls which do the same thing.
Examining module.xml, you can clearly say what the installation does and easily maintain/support it.
In this case, the package installs:
1. classes from WebTerminal package:
<Resource Name="WebTerminal.PKG" />
2. creates one REST Web app:
<CSPApplication
Url="/terminal"
Path="/build/client"
Directory="{$cspdir}/terminal"
DispatchClass="WebTerminal.Router"
ServeFiles="1"
Recurse="1"
PasswordAuthEnabled="1"
UnauthenticatedEnabled="0"
CookiePath="/"
/>
3. creates another REST Web app:
<CSPApplication
Url="/terminalsocket"
Path="/terminal"
Directory="{$cspdir}/terminalsocket"
ServeFiles="0"
UnauthenticatedEnabled="1"
MatchRoles=":%DB_CACHESYS:%DB_IRISSYS:{$dbrole}"
Recurse="1"
CookiePath="/"
/>
I cannot say the same for Installer.cls on projections - what does it do to my system?
Simplicity, transparency, and an installation standard with the ZPM module.xml approach - versus what?
From the pros/cons, it seems the objectives are:
Maintain compatibility with normal installation (without ZPM)
Make side effects from installation/uninstallation auditable by putting them in module.xml
I'd suggest as one approach to accomplish both objectives:
Suppress the projection side effects when running in a package manager installation/uninstallation context, either by checking $STACK or by using some trickier under-the-hood things with singletons from the package manager - regardless, be sure to unit test this behavior! (See the sketch after this list.)
Add "Resource Processor" classes (specified in module.xml with Preload="true" and not included in normal WebTerminal XML exports used for non-ZPM installation) - that is, classes extending %ZPM.PackageManager.Developer.Processor.Abstract and overriding the appropriate methods - to handle your custom installation things. You can then use these in your module manifest, provided that such inversion of control still works without bootstrapping issues following changes made in https://github.com/intersystems-community/zpm.
Generally-useful things like creating a %All namespace should probably be pushed back to zpm itself.
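A minimal sketch of that first suggestion - the stack-based detection heuristic and all names here are illustrative, not a confirmed ZPM API:

ClassMethod CreateProjection(className As %String, ByRef parameters) As %Status
{
    // When the package manager drives the compile, let module.xml describe
    // the side effects instead of performing them here.
    If ..IsZPMContext() Quit $$$OK
    // ... classic projection-based setup (web apps, globals) goes here ...
    Quit $$$OK
}

ClassMethod IsZPMContext() As %Boolean
{
    // Heuristic: walk the call stack looking for a package-manager frame.
    Set found = 0
    For i=1:1:$STACK(-1) {
        If $STACK(i, "PLACE") [ "%ZPM.PackageManager" Set found = 1 Quit
    }
    Quit found
}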
Question
Alex Van Schoyck · Jan 30, 2019
Problem

I'm working on exporting data from an InterSystems Cache database through the Cache ODBC Driver. There is a particular table that is giving me an error: the ODBC driver crashes and reports an error from the Cache system. I think I was able to trace down where the error is coming from, but I do not know how to debug or fix it. The table I am trying to extract is called SEDMIHP. Here's the error:
[Cache Error: <<UNDEFINED>%0AmBd16^%sqlcq.PRD.3284 ^SEDMIHP(4,77)>]
[Location: <ServerLoop - Query Fetch>]
Research/Trial & Error
I was able to open up Cache Management Studio and find the class that matches the table name. I should mention that this is my very first time working with InterSystems Cache, so I apologize if I sound dumb or inexperienced here.
Within the SQLMap, I found this code:
<Data name="DESCRIP_2">
<RetrievalCode> S {DESCRIP_2}=$P($G(^PHPROP({L1},"DESC_CODES")),"\",2) S {DESCRIP_2}=$S($L({DESCRIP_2}):^SEDMIHP($P({DESCRIP_2},","),$P({DESCRIP_2},",",2)),1:{DESCRIP_2})
S {DESCRIP_2}=$E({DESCRIP_2},1,80)
</RetrievalCode>
</Data>
I'm thinking that the code here is causing the issue. With my very limited understanding of ObjectScript, I think this code is manipulating the text/string, and maybe if there's an undefined or bad value in the data, it's causing those functions to throw an error?
I have limited access to the Cache Management Portal, and I am able to find the table in the SQL Schema and run a query on it. About 300 rows of data are loaded before the same Error as above shows up, and it stops loading any more rows. This is why I'm thinking there is bad data.
I tried using ISNULL() and IFNULL() in the SELECT statement to try and skip any bad data, but had the same error in the same spot every time.
Questions
Is there an easy solution from the SQL side that can avoid this error?
Is there anything I can do with the class code in Studio to debug or get more info about this error?
Any and all help is greatly appreciated!
Additional Info
Cache Version: Cache for OpenVMS/IA64 V8.4 (Itanium) 2012.1.5 (Build 956 + Adhoc 12486) 17-APR-2013 19:49:58.07

Dmitry, thank you so much! That worked perfectly and solved the issue! I had been coming up empty-handed for hours trying to figure this out. I really appreciate your help!

If you can edit this code, you can try changing it to this:
<Data name="DESCRIP_2">
<RetrievalCode> S {DESCRIP_2}=$P($G(^PHPROP({L1},"DESC_CODES")),"\",2) S {DESCRIP_2}=$S($L({DESCRIP_2}):$Get(^SEDMIHP($P({DESCRIP_2},","),$P({DESCRIP_2},",",2))),1:{DESCRIP_2})
S {DESCRIP_2}=$E({DESCRIP_2},1,80)
</RetrievalCode>
</Data>
But I'm not sure if this is correct.
What I did there is wrap the read of the global ^SEDMIHP in the $Get() function, so an undefined node yields an empty value instead of an <UNDEFINED> error.
Or this way, with a default value:
<Data name="DESCRIP_2">
<RetrievalCode> S {DESCRIP_2}=$P($G(^PHPROP({L1},"DESC_CODES")),"\",2) S {DESCRIP_2}=$S($L({DESCRIP_2}):$Get(^SEDMIHP($P({DESCRIP_2},","),$P({DESCRIP_2},",",2)),{DESCRIP_2}),1:{DESCRIP_2})
S {DESCRIP_2}=$E({DESCRIP_2},1,80)
</RetrievalCode>
</Data>
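For illustration, the difference between the two variants comes down to $Get()'s optional second argument (using the node from the error above):

Write $GET(^SEDMIHP(4,77))              // "" when the node is undefined
Write $GET(^SEDMIHP(4,77), "fallback")  // the default ("fallback") when the node is undefined
Write ^SEDMIHP(4,77)                    // <UNDEFINED> error when the node is undefined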