Announcement
Ben Spead · Jul 7, 2021

Try out InterSystems FHIR Transformation Service - we want your feedback!

Hello Developers! Have you ever had to convert HL7v2 messages to FHIR (Fast Healthcare Interoperability Resources) and found the process complicated and confusing? InterSystems is rolling out a new cloud-based SaaS offering called InterSystems FHIR Transformation Service, which makes the process easy. We are excited to announce an Early Access Preview Program for our new offering, and we would love to have you kick the tires and let us know what you think! All you need is a free AWS account, with an S3 bucket to drop in your HL7v2 messages and another S3 bucket to receive your FHIR output.

TL;DR: A simple step-by-step guide on how you can sign up for a free AWS account and a free InterSystems Cloud Portal account and exercise the powerful functionality of the transformation service can be found on the InterSystems Learning Site. Full documentation is available within InterSystems Documentation. We will formally launch this offering later in July, and when the preview is over you can still take advantage of your first one million transformations for free!

More details on this new offering from InterSystems: Introducing InterSystems FHIR Transformation Service. The health information industry has embraced FHIR®, or Fast Healthcare Interoperability Resources, as its newest data standard for exchanging healthcare data. The on-demand InterSystems FHIR Transformation Service enables healthcare providers, payers, and pharmaceutical companies to convert their existing data formats to FHIR standards and extract the most value from their data. InterSystems is a leader in healthcare interoperability, implementing not only the latest FHIR standard but all major healthcare standards, including HL7v2, X12, CDA, C-CDA, and DICOM.
InterSystems FHIR Transformation Service was designed to make converting messages from these earlier standards into FHIR R4 simple, with the initial release supporting the transformation of HL7v2 messages to FHIR R4. The FHIR messages can then be sent to an AWS S3 bucket or Amazon HealthLake (Preview), with other FHIR repository options being added in the future. HealthShare Message Transformation Services makes transforming HL7v2 messages into FHIR simple. You don't need to worry about transformation logic, so you can shift your focus to building great healthcare applications, leaving the complexities of message transformations to InterSystems.

The service provides:
- Easy provisioning and launching on AWS
- Checking your inbound S3 bucket for HL7v2 messages
- Validation of HL7 contents
- Message conversion to FHIR R4
- Routing converted messages to your outbound S3 bucket, the InterSystems FHIR Server (Preview) service, or an Amazon HealthLake (Preview) repository
- Monitoring the status and statistics of your transformation pipelines

Additionally, the service is built on AWS infrastructure, so it is ISO 27001:2013 and HITRUST certified to support HIPAA. InterSystems manages the operations, monitoring, and backups of the service. Best of all, once this offering is launched commercially, you will get the first one million transformations for free, and after that you will pay only for what you use, with a very low cost per transformed message. There will be no long-term contracts for using this service - cancel any time. We are very interested in your feedback. Please leave comments on this article, or reach out to the team directly at HMTS@InterSystems.com.
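To make the HL7v2-to-FHIR idea concrete, here is a minimal sketch of the kind of mapping the service automates: a PID segment from a sample ADT message turned into a FHIR R4 Patient resource. The message content and field choices are illustrative only, not the service's actual transformation logic.

```python
# Illustrative only: a hand-rolled HL7v2 PID -> FHIR R4 Patient mapping,
# showing the kind of work the transformation service does for you.

hl7 = (
    "MSH|^~\\&|SENDER|FAC|RECEIVER|FAC|202107070800||ADT^A01|MSG0001|P|2.5\r"
    "PID|1||12345^^^HOSP^MR||Doe^Jane||19800101|F"
)

# Index segments by their three-letter identifier
segments = {line.split("|")[0]: line.split("|") for line in hl7.split("\r")}
pid = segments["PID"]

# PID-3 identifier, PID-5 name (family^given), PID-7 DOB, PID-8 sex
family, given = pid[5].split("^")[:2]
patient = {
    "resourceType": "Patient",
    "identifier": [{"value": pid[3].split("^")[0]}],
    "name": [{"family": family, "given": [given]}],
    "birthDate": f"{pid[7][:4]}-{pid[7][4:6]}-{pid[7][6:8]}",
    "gender": {"F": "female", "M": "male"}.get(pid[8], "unknown"),
}
```

Real-world messages bring repeating fields, encoding characters, Z-segments, and dozens of event types, which is exactly why delegating this mapping to a managed service is attractive.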
UPDATE: July 15, 2021 - InterSystems is very pleased to announce that as of July 15th, HealthShare Message Transformation Services is now available as a commercial offering in the AWS Marketplace: https://aws.amazon.com/marketplace/pp/prodview-q7ryewpz75cq2 You can subscribe from your AWS account, and new subscriptions will receive the 1 million free transformations (try before you buy). Preview accounts created under the Early Access Preview Program will continue to work for a month after they have been created. We are still VERY interested in seeing people try this service and provide us with specific feedback on how it might bring value to them and their organization. Please feel free to start up discussions on this thread or reach out to HMTS@InterSystems.com.

HL7® and FHIR® are the registered trademarks of Health Level Seven International and the use of these trademarks does not constitute an endorsement by HL7. Use of the FHIR trademark does not constitute endorsement of InterSystems FHIR Transformation Service by HL7.

Thanks @Ben Spead. Converting HL7v2 to HL7 FHIR is always a challenging task. Using HealthShare Message Transformation Services as a cloud-based SaaS, I found I can achieve it easily. Once again, thanks for the step-by-step guide. Regards

Highly interesting; definitely warrants a more in-depth look for future applications.

Thanks @Raymond.Rebbeck - we would love for you to give it a try and let us know what you think! We're excited to announce that the offering went live on AWS Marketplace on July 15th: https://aws.amazon.com/marketplace/pp/prodview-q7ryewpz75cq2 I have updated the article to reflect this. We are still looking for more feedback, so please create your account and give it a try for free when you have time!

Hi Ben, I would be very happy to participate.
I have a number of Patient HL7 interfaces and a lot of LabTrak interfaces as well, so I will need a little bit of hands-on help to get my IRIS for Health 2021 Python Foundation completed and configured, and then I will add an operation into any of the interfaces to send a steady stream of HL7 messages into the FHIR Server.
Question
reach.gr G · Jul 11, 2021

InterSystems Track Relationship between Menus, Workflows, Workflow Items, Questionnaires, etc.

Hi InterSystems Architects, Please may I know the relationship between:
1. Menus, Workflows, Workflow Items, Components, Code Tables, etc. - detailed relationship diagrams showing how each element in each of the above is related to another.
2. When I extracted the data, I see ID numbers, but they aren't unique or related. I wanted to build the relationships between them myself in the absence of documents, but I am missing the critical unique primary key of each and how it is referenced in other tables.
Your help is appreciated; let me know if you need any specific (non-confidential) information.

This is my understanding:
1. From objects, Classes (components) are designed - for example, web.PAPerson is the class. I wanted to search for it in the Element Reference Map, but I could not find it, so I need more details and understanding here.
2. Based on design, Elements are grouped to form Components.
3. Based on design, Components are grouped into Workflow Items (again, I need an explanation here), and they may also be derived from code tables.
4. Based on design, Work is set up deriving from Components?
5. Based on user flow, a series of Workflow Items are grouped to form Workflow Managers.
6. Based on design, Workflow Managers are linked to Menu Managers, Menu Headers, etc.
Please can you help me fill in the above gaps and corrections, and point me to relevant documents where I can trace from the fundamental, non-divisible element up to a user form that can be configured. Many Thanks

Hi InterSystems, I am still waiting for the answer and knowledge so that I can understand your application better. This shouldn't take this long; I was expecting an answer within 24 hours so that I can continue my learning.

You should understand that while InterSystems employees are on the community, this is really a public forum and not an "official" support path. Questions are answered by the community at large when they can be. For people in the forum to help, more information is needed.
You indicated you are working with HealthShare; however, this is really a family of solutions. Which specific product are you referring to? What part of that product are you trying to understand better? The more specific you can be, the easier it is for the community to help. If you have an immediate need, I would suggest that you contact the Worldwide Response Center (referred to as the WRC) for immediate support. Here is the contact information:
Phone: +1-617-621-0700, +44 (0) 844 854 2917, 0800 615 658 (NZ Toll Free), 1800 628 181 (Aus Toll Free)
Email: support@intersystems.com
Finally, learning services (learning.intersystems.com) and documentation (docs.intersystems.com) can be of great help. For HealthShare-specific areas you do need to be a registered HealthShare user. If you are not, work with your organization and the WRC to get that updated.
Announcement
Benjamin De Boe · Apr 8, 2021

InterSystems IRIS and IRIS for Health 2020.4 are now Generally Available (GA)

GA releases are now available for the 2020.4 version of InterSystems IRIS, InterSystems IRIS for Health, and InterSystems IRIS Studio. InterSystems IRIS Data Platform 2020.4 makes it even easier to develop, deploy, and manage augmented applications and business processes that bridge data and application silos. It has many new capabilities, including:

Enhancements for application and interface developers:
- Support for Java SE 11 LTS, both when using Oracle OpenJDK and AdoptOpenJDK
- Support for connection pooling for JDBC
- A new "foreach" action in routing rules for segmented virtual documents

Enhancements for database and system administrators:
- ICM now supports deploying System Alerting and Monitoring (SAM) and InterSystems API Manager (IAM)
- Extensions to our SQL syntax for common administrative tasks
- Simplified deployment for InterSystems Reports

InterSystems IRIS for Health 2020.4 includes all of the enhancements in InterSystems IRIS 2020.4. In addition, this release includes:
- Enhanced FHIR support, including support for FHIR profiles
- Support for the RMD IHE profile
- DataGate support in the HL7 Migration Tooling

More details on these features can be found in the product documentation:
- InterSystems IRIS 2020.4 documentation and release notes
- InterSystems IRIS for Health 2020.4 documentation and release notes

As this is a Continuous Delivery (CD) release, it is only available in OCI (Open Container Initiative), a.k.a. Docker, container format. Container images are available for OCI-compliant runtime engines for Linux x86-64 and Linux ARM64, as detailed in the Supported Platforms document.
Container images for the Enterprise Edition and all corresponding components are available from the InterSystems Container Registry using the following commands:

docker pull containers.intersystems.com/intersystems/iris:2020.4.0.547.0
docker pull containers.intersystems.com/intersystems/irishealth:2020.4.0.547.0

For a full list of the available images, please refer to the ICR documentation. Container images for the Community Edition can also be pulled from the Docker store using the following commands:

docker pull store/intersystems/iris-community:2020.4.0.547.0
docker pull store/intersystems/iris-community-arm64:2020.4.0.547.0
docker pull store/intersystems/irishealth-community:2020.4.0.547.0
docker pull store/intersystems/irishealth-community-arm64:2020.4.0.547.0

Alternatively, tarball versions of all container images are available via the WRC's CD download page. Our corresponding listings on the main cloud marketplaces will be updated in the next few days. InterSystems IRIS Studio 2020.4 is a standalone IDE for use with Microsoft Windows and can be downloaded via the WRC's components download page. It works with InterSystems IRIS and InterSystems IRIS for Health version 2020.4 and below. InterSystems also supports the VSCode-ObjectScript plugin for developing applications for InterSystems IRIS with Visual Studio Code, which is available for Microsoft Windows, Linux, and macOS. Other standalone InterSystems IRIS 2020.4 components, such as the ODBC driver and Web Gateway, are also available from the WRC's components download page.
And we updated the images with ZPM 0.2.14 too:

intersystemsdc/iris-community:2020.3.0.221.0-zpm
intersystemsdc/iris-community:2020.4.0.547.0-zpm
intersystemsdc/iris-ml-community:2020.3.0.302.0-zpm
intersystemsdc/irishealth-community:2020.3.0.221.0-zpm
intersystemsdc/irishealth-community:2020.4.0.547.0-zpm

And to launch IRIS do:

docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-community:2020.3.0.221.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-community:2020.4.0.547.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-ml-community:2020.3.0.302.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-community:2020.3.0.221.0-zpm
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-community:2020.4.0.547.0-zpm

And for a terminal do:

docker exec -it my-iris iris session IRIS

and to start the control panel:

http://localhost:9092/csp/sys/UtilHome.csp

To stop and destroy the container do:

docker stop my-iris
Announcement
Anastasia Dyubaylo · Mar 29, 2023

[Kick-off Webinar] InterSystems IRIS Cloud SQL and IntegratedML Contest

Hey Community, We are glad to invite you to the upcoming kick-off webinar for the InterSystems IRIS Cloud SQL and IntegratedML Contest. In this webinar, we'll present the arena for the next contest: our new cloud offerings InterSystems IRIS Cloud SQL and InterSystems IRIS Cloud IntegratedML. We'll describe what these SaaS offerings are all about and walk you through setting up your account and logging on to the web portal, which includes an intuitive user interface to load data, create and train machine learning models, and then evaluate those models on test data. For the contest environment, we have included sample datasets so you can go from account signup to your first SQL query and IntegratedML model in a matter of seconds.

Date & Time: Monday, April 3 - 11 am EDT | 5 pm CEST

Speakers:
🗣 @Steven.LeBlanc, InterSystems Product Specialist, Cloud Operations
🗣 @Thomas.Dyar, InterSystems Product Specialist, Machine Learning
🗣 @Benjamin.DeBoe, InterSystems Product Manager
🗣 @Dean.Andrews2971, InterSystems Head of Developer Relations
🗣 @Evgeny.Shvarov, InterSystems Developer Ecosystem Manager

>> Register here <<

Hi Devs! Please join the "[Kick-off Webinar] InterSystems IRIS Cloud SQL and IntegratedML Contest" in 20 minutes!
➡️ Follow this link - https://us02web.zoom.us/j/9822194974?pwd=bnZBdFhCckZ6c0xOcW5GT1lLdnAvUT09
Or join our
➡️ YouTube stream: https://youtube.com/live/ZzlK9vc_jO0?feature=share
Announcement
Anastasia Dyubaylo · Nov 18, 2022

[Video] How InterSystems Supports the CMS & ONC Regulations as well as Prior Authorization

Hey Developers, Watch this video to learn how InterSystems has been building out capabilities to support current and future regulations in the US market that can have a significant impact on payers and providers: ⏯ How InterSystems Supports the CMS & ONC Regulations as well as Prior Authorization @ Global Summit 2022 🗣 Presenter: Lynda Rowe, Senior Advisor, Value-Based Markets, InterSystems Visit our InterSystems Developers YouTube channel and subscribe to receive updates!
Announcement
Anastasia Dyubaylo · Apr 25, 2023

Online Meetup with the winners of InterSystems IRIS Cloud SQL and IntegratedML Contest

Hi Community, Let's meet together at the online meetup with the winners of the InterSystems IRIS Cloud SQL and IntegratedML Contest – a great opportunity to have a discussion with the InterSystems Experts team as well as our contestants. Winners' demo included! Date & Time: Thursday, April 27, 12 pm EDT | 6 pm CEST Join us to learn more about winners' applications and to have a talk with our experts. ➡️ REGISTER TODAY See you all at our virtual meetup!
Announcement
Anastasia Dyubaylo · Sep 13, 2022

data2day and InterSystems look forward to welcoming you in Karlsruhe, Germany

Hi Developers, We have another great opportunity for you to join us in Germany for a conference for data scientists, data engineers, and data teams that will be held in Karlsruhe!

⏱ Date: 20 - 21 September 2022
📍 Venue: IHK Karlsruhe, Lammstr. 13-17, 76133 Karlsruhe, Germany

Our speaker Markus Mechnich will be talking about (Smart) Data Fabrics during the session titled "Putting an end to data silos: making better decisions, pain-free". He will discuss the challenge of accessing and merging data from different systems - data that is rarely homogeneous or synchronized enough for a comprehensive evaluation - and using it to make sound business decisions.

✅ REGISTER HERE

We look forward to welcoming you to Karlsruhe!
Announcement
Larry Finlayson · Nov 16, 2022

Managing InterSystems Servers December 5-9, 2022 - Registration space available

Managing InterSystems Servers
December 5-9, 2022, 9:00am-5:00pm US-Eastern Time (EST)

This five-day course teaches system and database administrators how to install, configure, and secure InterSystems server software, configure for high availability and disaster recovery, and monitor the system. Students also learn troubleshooting techniques. This course is applicable to both InterSystems IRIS and Caché. Although the course is mostly platform-independent, students can complete the exercises using either Windows or Ubuntu. Self Register Here
Announcement
Fabiano Sanches · May 10, 2023

Starting developer preview #1 for InterSystems IRIS & IRIS for Health 2023.2

InterSystems announces its first preview, as part of the developer preview program for the 2023.2 release. This release will include InterSystems IRIS and InterSystems IRIS for Health.

Highlights

Many updates and enhancements have been added in 2023.2, and there are also brand-new capabilities, such as Time-Aware Modeling, enhancements to Foreign Tables, and the ability to use Read-Only Federated Tables. Some of these features or improvements may not be available in this current developer preview.

Another important topic is the removal of the Private Web Server (PWS) from the installers. This change has been announced since last year; the PWS will be removed from InterSystems installers, but it is still included in this first preview. See this note in the documentation. If you are interested in trying the installers without the PWS, please enroll in its EAP using this form, selecting the option "NoPWS". Additional information related to this EAP can be found here.

Future preview releases are expected to be updated biweekly, and we will add features as they are ready. Please share your feedback through the Developer Community so we can build a better product together. Initial documentation can be found at the links below. They will be updated over the next few weeks until launch is officially announced (General Availability - GA):
- InterSystems IRIS
- InterSystems IRIS for Health

Availability and Package Information

As usual, Continuous Delivery (CD) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms document. Installation packages and preview keys are available from the WRC's preview download site or through the evaluation services website (use the flag "Show Preview Software" to get access to the 2023.2 versions).
Container images for both Enterprise and Community Editions of InterSystems IRIS and IRIS for Health and all corresponding components are available from the new InterSystems Container Registry web interface. For additional information about docker commands, please see this post: Announcing the InterSystems Container Registry web user interface. The build number for this developer preview of InterSystems IRIS is 2023.2.0.198.0. For InterSystems IRIS for Health, the build number is 2023.2.0.200.0. For a full list of the available images, please refer to the ICR documentation. Alternatively, tarball versions of all container images are available via the WRC's preview download site. InterSystems IRIS for Health kits are published with build number 2023.2.0.200.0; they are available from the WRC's preview download site and from the InterSystems Container Registry.

Time-Aware Modeling, enhancements to Foreign Tables, Read-Only Federated Tables - is there any documentation on these features?

Hi Herman - Documentation will be released as features become available. As usual, we're adding things every two weeks to these previews. So, stay tuned!
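Going by the build numbers above and the registry paths used for the GA releases earlier in this thread, pulling the preview images would presumably look like the following (the image names are assumed from the GA naming convention, so verify them against the ICR listing):

```
docker pull containers.intersystems.com/intersystems/iris:2023.2.0.198.0
docker pull containers.intersystems.com/intersystems/irishealth:2023.2.0.200.0
```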
Announcement
John Murray · May 22, 2023

InterSystems Testing Manager - a new VS Code extension for the %UnitTest framework

If you have already built unit tests using the %UnitTest framework, or are thinking about doing so, please take a look at InterSystems Testing Manager. Without leaving VS Code you can now browse your unit tests, run or debug them, and view previous run results. InterSystems Testing Manager works with both of the source code location paradigms supported by the ObjectScript extension. Your unit test classes can either be mastered in VS Code's local filesystem (the 'client-side editing' paradigm) or in a server namespace ('server-side editing'). In both cases the actual test runs occur in a server namespace. Feedback welcome.

Don't see it in Open Exchange :) Could you please publish?

@Evgeny.Shvarov It's available on the VS Code Marketplace, which is where people would expect to find it.

Right, I'm not suggesting removing it from there. But if you publish it on Open Exchange it will help developers from the InterSystems ecosystem notice it. E.g. on Open Exchange developers can find the dev tools and libs that could form their development environment.

This is awesome! I've really missed this from working in other programming languages. According to the ReadMe (v0.2.0), this extension is currently considered a "Preview". Is there a product roadmap/timeline of what needs to be done to promote it out of "preview" status?

Thanks for the positive response. IMO the main thing needed before the Preview tag gets dropped is feedback from %UnitTest users. One motivation for publishing the Preview is the upcoming Global Summit, which I hope will be a good opportunity for in-person discussion about this extension and others. Find me at the George James Software booth in the Partner Pavilion.

Now at https://openexchange.intersystems.com/package/InterSystems-Testing-Manager-for-VS-Code Also welcome are reviews, which can be posted on Marketplace and on Open Exchange.

Tried it with the project where I have ObjectScript unit tests.
I call them manually with IPM and automatically with a GitHub workflow. Testing Manager is installed, and I'm on a class with a unit test. Not sure though how it works?

@Evgeny.Shvarov I believe the Marketplace link @John.Murray provided has details of how to run the tests (including some GIFs etc.): https://marketplace.visualstudio.com/items?itemName=intersystems-community.testingmanager

Since you are working client-side, I think you need to set an intersystems.testingManager.client.relativeTestRoot setting to contain the workspace-relative path to the tests folder that's showing at the beginning of the breadcrumb in your screenshot. Please see point 2 of https://github.com/intersystems-community/intersystems-testingmanager#workspace-preparations

@Evgeny.Shvarov Did my suggestion help you get started with this extension?

Thank you! Hi @John.Murray! Not sure if I put it in the right place: it says it shouldn't be here.

Move it outside the `objectscript.conn` object please.

Did it: now it likes it, but the testing tool still doesn't see any tests. Should it be an absolute path or relative to the root of the repo? Would you mind sending a PR with a working setting to the repo?

Please change this to a relative path (no leading slash). Also, there was a bug in how it handled a "docker-compose" type of configuration. Please try this dev build by downloading the zip, extracting the VSIX, and dropping it into VS Code's Extensions view. If it works for you I will publish a new version to Marketplace.

Now it sees them: but they ask for ^UnitTest to be set up, instructions, etc. Could it work similarly to how it works in IPM, as @Timothy.Leavitt demonstrated? Because with IPM I can run all tests, or standalone ones, without any settings at all - it just works. Could IPM be leveraged if it is present in the repo/namespace? BTW, this setting worked:

"intersystems.testingManager.client.relativeTestRoot": "/tests",

Thanks for confirming the fix. I'm publishing 0.2.1 now.
Please post enhancement ideas on the repo, for better tracking.
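For readers new to the framework: Testing Manager discovers classes extending %UnitTest.TestCase, whose test methods start with "Test" and use the $$$Assert macros. A minimal sketch (class and method names are illustrative) looks like:

```
Class MyApp.Tests.TestMath Extends %UnitTest.TestCase
{

/// Test methods must begin with "Test" to be picked up by the test runner
Method TestAddition()
{
    Do $$$AssertEquals(1 + 1, 2, "1 + 1 equals 2")
    Do $$$AssertTrue(2 > 1, "2 is greater than 1")
}

}
```

With client-side editing, placing such classes under the folder named in intersystems.testingManager.client.relativeTestRoot makes them appear in the extension's test tree.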
Article
Evgeny Shvarov · Feb 15, 2020

Setting Up Your Own InterSystems ObjectScript Package Manager Registry

Hi Developers! As you know, the concept of the ObjectScript Package Manager consists of the ZPM client - a client application for IRIS which helps you to install packages from the registry. And the code which works "on the other side" is the ZPM Registry - a server which hosts packages and exposes an API to submit, list, and install them. Out of the box, the ZPM client installs packages from the community package registry, which is hosted on pm.community.intersystems.com. But what if you want your own registry? E.g. you produce different software packages for your clients and you want to distribute them via a private registry? Also, you may want to use your own registry to deploy solutions with different combinations of packages. Is it possible? The answer is YES! You can have it if you deploy the ZPM registry on your server with InterSystems IRIS. To make it happen you need to set up your own registry server. How to do that? The ZPM Registry can be installed as a package, zpm-registry. So you can install the zpm-client first (article, video), or take a docker image with the zpm-client inside, and install zpm-registry as a package:

USER>zpm
zpm: USER>install zpm-registry

When zpm-registry is installed it introduces a REST endpoint on server:port/registry with a set of REST API routes. Let's examine them, using GET requests on the community registry as an example:

/ - root entry; shows the version of the registry software.
/_ping - returns {"message":"ping"} - entry to check the working status.
/_spec - Swagger spec entry.
/packages/-/all - displays all the available packages.
/packages/:package - the set of GET entries for package deployment. The ZPM client uses these when we ask it to install a particular package.
/package - POST entry to publish a new package from a GitHub repository.

These and all the other API routes you can examine, e.g., in the full documentation generated online with the help of the /_spec entry. OK! Let's install and test the ZPM registry on a local machine. 1.
Run a docker container with IRIS 2019.4 and a preinstalled ZPM client (from the Developer Community repo on Docker Hub):

% docker run --name my-iris -d --publish 52773:52773 intersystemsdc/iris-community:2019.4.0.383.0-zpm
f40a06bd81b98097b6cc146cd61a6f8487d2536da1ffaf0dd344c615fe5d2844
% docker exec -it my-iris iris session IRIS
Node: f40a06bd81b9, Instance: IRIS
USER>zn "%SYS"
%SYS>Do ##class(Security.Users).UnExpireUserPasswords("*")
%SYS>zn "USER"
USER>zpm
zpm: USER>install zpm-registry
[zpm-registry] Reload START
[zpm-registry] Reload SUCCESS
[zpm-registry] Module object refreshed.
[zpm-registry] Validate START
[zpm-registry] Validate SUCCESS
[zpm-registry] Compile START
[zpm-registry] Compile SUCCESS
[zpm-registry] Activate START
[zpm-registry] Configure START
[zpm-registry] Configure SUCCESS
[zpm-registry] Activate SUCCESS
zpm: USER>

2. Let's publish a package in our new privately-launched zpm-registry. To publish a package, make a POST request to the registry/package endpoint and supply the URL of a repository which contains module.xml in its root. E.g. let's take the repo of the objectscript-math application on Open Exchange by @Peter.Steiwer: https://github.com/psteiwer/ObjectScript-Math

$ curl -i -X POST -H "Content-Type:application/json" -u user:password -d '{"repository":"https://github.com/psteiwer/ObjectScript-Math"}' 'http://localhost:52773/registry/package'

Make sure to change the user and password to your credentials.

HTTP/1.1 200 OK
Date: Sat, 15 Feb 2020 20:48:13 GMT
Server: Apache
CACHE-CONTROL: no-cache
EXPIRES: Thu, 29 Oct 1998 17:04:19 GMT
PRAGMA: no-cache
CONTENT-LENGTH: 0
Content-Type: application/json; charset=utf-8

As we see, the request returns 200, which means the call was successful and the new package is available in the private registry for installation via ZPM clients.
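The module.xml the registry reads from the repository root follows the ZPM module manifest format. A minimal sketch, with all names and resources illustrative rather than copied from the objectscript-math repo, might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Export generator="Cache" version="25">
  <Document name="my-package.ZPM">
    <Module>
      <Name>my-package</Name>
      <Version>0.0.1</Version>
      <Packaging>module</Packaging>
      <!-- Resources list the classes/packages this module ships -->
      <Resource Name="MyPackage.PKG"/>
    </Module>
  </Document>
</Export>
```

Check an existing published package (such as the ObjectScript-Math repo above) for a complete, authoritative example.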
Let's check the list of packages again. Open http://localhost:52773/registry/packages/-/all in a browser:

[{"name":"objectscript-math","versions":["0.0.4"]}]

Now it shows us that there is one package available. That's perfect! The next question is how to install packages via the ZPM client from an alternative registry. By default, when the ZPM client is installed, it's configured to work with the public registry pm.community.intersystems.com. This command shows the current registry setup:

zpm: USER>repo -list
registry
Source: https://pm.community.intersystems.com
Enabled? Yes
Available? Yes
Use for Snapshots? Yes
Use for Prereleases? Yes
zpm: USER>

But this can be altered. The following command changes the registry to yours:

zpm: USER>repo -n registry -r -url http://localhost:52773/registry/ -user username -pass password

Change username and password here to what is set up on your server for the /registry REST API. Let's check that the alternative registry is available:

zpm: USER>repo -list
registry
Source: http://localhost:52773/registry/
Enabled? Yes
Available? Yes
Use for Snapshots? Yes
Use for Prereleases? Yes
Username: _SYSTEM
Password: <set>
zpm: USER>

So the ZPM client is ready to work with another ZPM registry. The ZPM registry lets you build your own private registries, which can be filled with any collection of packages, either public or your own. And the ZPM client gives you the option to switch between registries and install from the public registry or from any private one. Also check the article by @Mikhail.Khomenko which describes how to deploy an InterSystems IRIS docker container with a ZPM registry in a Kubernetes cloud provided by Google Kubernetes Engine. Happy coding and stay tuned!

Is there any way to have multiple registries enabled at the same time in ZPM, with a priority so that it looks at them in order to find packages?
That way you can use public packages from the community repo without having to have them imported/duplicated into your local repo. Or is the only way to keep switching between repos if you want to source from different places?
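Since the /packages/-/all response shown earlier is plain JSON, client tooling can consume it directly; for example (using the exact response body from above):

```python
import json

# Response body as returned by /packages/-/all in the example above
body = '[{"name":"objectscript-math","versions":["0.0.4"]}]'

packages = json.loads(body)
for pkg in packages:
    # The latest version is the last entry in the versions list
    print(f'{pkg["name"]}: latest {pkg["versions"][-1]}')
```

The same approach works against a private registry once it holds your own packages; only the base URL changes.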
Question
Evgeny Shvarov · Mar 1, 2019

How to Setup InterSystems IRIS Container to Use OS-Level Authentication Programmatically

Hi Community! When you run an IRIS container out of the box and connect to it via terminal, e.g. with:

docker-compose exec iris bash

you see something like:

root@7b19f545187b:/opt/app# irissession IRIS
Node: 7b19f545187b, Instance: IRIS
Username: ***
Password: ***
USER>

And you enter login and password every time. How do you programmatically set up the docker-compose file to have an IRIS container with OS authentication enabled? And have the following when entering the terminal:

root@7b19f545187b:/opt/app# irissession IRIS
Node: 7b19f545187b, Instance: IRIS
USER>

One substitution in your code: use $zboolean to account for cases where it had already been enabled (in which case your code would disable it). Instead of:

Set p("AutheEnabled")=p("AutheEnabled")+16

use:

Set p("AutheEnabled")=$zb(p("AutheEnabled"),16,7)

Documentation for $ZBOOLEAN

Check out my series of articles, Continuous Delivery of your InterSystems solution using GitLab - it talks about many features related to automating these kinds of tasks. In particular, Part VII (CD using containers) talks about programmatic enabling of OS-level authentication. To activate OS authentication in your docker image, you can run this code in the %SYS namespace:

Do ##class(Security.System).Get(,.p)
Set p("AutheEnabled")=p("AutheEnabled")+16
Do ##class(Security.System).Modify(,.p)

If you work with the Community Edition, you can use my image, where you can also easily define a user and password for external use. Running the server:

$ docker run -d --rm --name iris \
  -p 52773:52773 \
  -e IRIS_USER=test \
  -e IRIS_PASSWORD=test \
  daimor/intersystems-iris:2019.1.0S.111.0-community

Terminal connect:

$ docker exec -it iris iris session iris
Node: 413a4da758e7, Instance: IRIS
USER>write $username
root
USER>write $roles
%All

Or with docker-compose, something like this:

iris:
  image: daimor/intersystems-iris:2019.1.0S.111.0-community
  ports:
    - 52773:52773
  environment:
    IRIS_USER: ${IRIS_USER:-test}
    IRIS_PASSWORD: ${IRIS_PASSWORD:-test}
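The $zboolean suggestion matters because AutheEnabled is a bit mask: plain addition corrupts the mask when the target bit is already set, while a bitwise OR (logical operation 7 in $ZBOOLEAN) is idempotent. A small Python sketch of the same arithmetic (the constant name is illustrative):

```python
AUTHE_OS = 16  # illustrative name for the OS-authentication bit in the mask

def enable_os_auth_add(flags: int) -> int:
    return flags + AUTHE_OS   # wrong if the bit is already set: carries into the next bit

def enable_os_auth_or(flags: int) -> int:
    return flags | AUTHE_OS   # idempotent, like $zb(flags,16,7)

flags = 48  # OS auth already on (16) plus another mechanism (32)
print(enable_os_auth_add(flags))  # 64 -- OS auth bit accidentally cleared
print(enable_os_auth_or(flags))   # 48 -- unchanged, still enabled
```

This is why re-running a provisioning script that uses `+16` can silently disable OS authentication, while the OR form is safe to run any number of times.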
Announcement
Anastasia Dyubaylo · Apr 17, 2019

[May 15-17, 2019] Upcoming Event: InterSystems at the "J on the Beach" conference

Hi Community! Good news! One more upcoming event is nearby. We're pleased to invite you to join "J on the Beach" – an international rendezvous for developers and DevOps around Big Data technologies. It's a fun conference to learn and share the latest experiences, tips and tricks related to Big Data technologies and, the most important part, it's On The Beach! We're more than happy to invite you and your colleagues to our InterSystems booth for a personal conversation. InterSystems is also a silver sponsor of the JOTB. In addition, this year we have a special Global Masters Meeting Point at the conference. You're very welcome to come to our booth to ask questions, share your ideas and, of course, pick up some samples of rewards and GM badges. Looking forward to seeing you soon! So, remember!

Date: May 15-17, 2019
Place: Palacio de Congresos y Exposiciones Adolfo Suarez, Marbella, Spain

You can find more information here: jonthebeach.com

Please feel free to ask any questions in the comments of this post. Buy your ticket and save your seat today!
Article
David E Nelson · Apr 26, 2019

A Containerized Machine Learning Playground with InterSystems IRIS Community Edition, Spark, and Zeppelin

The last time that I created a playground for experimenting with machine learning using Apache Spark and an InterSystems data platform (see Machine Learning with Spark and Caché), I installed and configured everything directly on my laptop: Caché, Python, Apache Spark, Java, some Hadoop libraries, to name a few. It required some effort, but eventually it worked. Paradise. But, I worried. Would I ever be able to reproduce all those steps? Maybe. Would it be possible for a random Windows or Java update to wreck the whole thing in an instant? Almost certainly.

Now, thanks to the increasingly widespread availability of containers and the increasingly usable Docker for Windows, I have my choice of pre-configured machine learning and data science environments. See, for example, Jupyter Docker Stacks and Zeppelin on Docker Hub. With InterSystems making the community edition of the IRIS Data Platform available via container (InterSystems IRIS now Available on the Docker Store), I have easy access to a data platform supporting both machine learning and analytics, among a host of other features. By using containers, I do not need to worry about any automatic updates wrecking my playground. If my office floods and my laptop is destroyed, I can easily recreate the playground with a single text file, which I have of course placed in source control ;-)

In the following, I will share a Docker compose file that I used to create a container-based machine learning and data science playground. The playground involves two containers: one with a Zeppelin and Spark environment, the other with the InterSystems IRIS Data Platform community edition. Both use images available on Docker Hub. I'll then show how to configure the InterSystems Spark Connector to connect the two. I will end by loading some data into InterSystems IRIS and using Spark to do some data exploration, visualization, and some very basic machine learning.
Of course, my example will barely scratch the surface of the capabilities of both Spark and InterSystems IRIS. However, I hope the article will be useful to help others get started doing more complex and useful work.

Note: I created and tested everything that follows on my Windows 10 laptop, using Docker for Windows. For information on configuring Docker for Windows for use with InterSystems IRIS, please see the following articles. The second of the two also discusses the basics of using compose files to configure Docker containers.

- Using InterSystems IRIS Containers with Docker for Windows
- Docker for Windows and the InterSystems IRIS Data Platform

Compose File for the Two-Container Playground

Hopefully, the comments in the following compose file do a reasonably adequate job of explaining the environment, but in case they do not, here are the highlights. The compose file defines:

- Two containers: one containing the InterSystems IRIS Community Edition and the other containing both the Zeppelin notebook environment and Apache Spark. Both containers are based on images pulled from the Docker store.
- A network for communication between the two containers. With this technique, we can use the container names as host names when setting up communication between the containers.
- Local directories mounted in each container. We can use these directories to make jar files available to the Spark environment and some data files available to the IRIS environment.
- A named volume for the durable %SYS feature needed by InterSystems IRIS. Named volumes are necessary for InterSystems IRIS when running in containers on Docker for Windows. For more about this, see the community articles linked above.
- A mapping of some networking ports inside the containers to ports available outside the containers, to provide easy access.

```
version: '3.2'
services:
  #container 1 with InterSystems IRIS
  iris:
    # iris community edition image to pull from docker store.
    image: store/intersystems/iris:2019.1.0.510.0-community
    container_name: iris-community
    ports:
      # 51773 is the superserver default port
      - "51773:51773"
      # 52773 is the webserver/management portal default port
      - "52773:52773"
    volumes:
      # Sets up a named volume durable that will keep the durable %SYS data
      - durable:/durable
      # Maps a /local directory into the container to allow for easily passing files and test scripts
      - ./local/samples:/samples/local
    environment:
      # Set the variable ISC_DATA_DIRECTORY to the durable volume that we defined above to use durable %SYS
      - ISC_DATA_DIRECTORY=/durable/irissys
    # Adds the IRIS container to the network defined below.
    networks:
      - mynet
  #container 2 with Zeppelin and Spark
  zeppelin:
    # zeppelin notebook with spark image to pull from docker store.
    image: apache/zeppelin:0.8.1
    container_name: spark-zeppelin
    #Ports for accessing Zeppelin environment
    ports:
      #Port for Zeppelin notebook
      - "8080:8080"
      #Port for Spark jobs page
      - "4040:4040"
    #Maps /local directories for saving notebooks and accessing jar files.
    volumes:
      - ./local/notebooks:/zeppelin/notebook
      - ./local/jars:/home/zeppelin/jars
    #Adds the Spark and Zeppelin container to the network defined below.
    networks:
      - mynet
#Declares the named volume for the IRIS durable %SYS
volumes:
  durable:
# Defines a network for communication between the two containers.
networks:
  mynet:
    ipam:
      config:
        - subnet: 172.179.0.0/16
```

Launching the Containers

Place the compose file in a directory on your system. Note that the directory name becomes the Docker project name. You will need to create sub-directories matching those mentioned in the compose file, so my directory structure looks like this:

```
iris_spark_zeppelin
  local
    jars
    notebooks
    samples
  docker-compose.yml
```

To launch the containers, execute the following Docker command from inside your project directory:

```
C:\iris_spark_zeppelin>docker-compose up -d
```

Note that the -d flag causes the containers to run in detached mode.
Because of that, you will not see them logging any information to the command line. You can inspect the log files for the containers using the docker logs command. For example, to see the log file for the iris-community container, execute the following:

```
C:\>docker logs iris-community
```

To inspect the status of the containers, execute the following command:

```
C:\>docker container ls
```

When the iris-community container is ready, you can access the IRIS Management Portal with this url: http://localhost:52773/csp/sys/UtilHome.csp

Note: The first time you log in to IRIS, use the username/password SuperUser/SYS. You will be re-directed to a password change page.

You can access the Zeppelin notebook with this url: http://localhost:8080

Copying Some Jar Files

In order to use the InterSystems Spark Connector, the Spark environment needs access to two jar files:

1. intersystems-jdbc-3.0.0.jar
2. intersystems-spark-1.0.0.jar

Currently, these jar files ship with IRIS inside the iris-community container. We need to copy them out into the locally mapped directory so that the spark-zeppelin container can access them. To do this, we can use the docker cp command to copy all the JDK 1.8 version files from inside the iris-community container into one of the local directories visible to the spark-zeppelin container. Open a CMD prompt in the project directory and execute the following command:

```
C:\iris_spark_zeppelin>docker cp iris-community:/usr/irissys/dev/java/lib/JDK18 local/jars
```

This will add a JDK18 directory containing the above jar files, along with a few others, to <project-directory>/local/jars.

Adding Some Data

No data, no machine learning. We can use the local directories mounted by the iris-community container to add some data to the data platform. I used the Iris data set (no relation to the InterSystems IRIS Data Platform). The Iris data set contains data about flowers. It has long served as the "hello world" example for machine learning (Iris flower data set).
You can download or pull an InterSystems class definition for generating the data, along with code for several related examples, from GitHub (Samples-Data-Mining). We are interested in only one file from this set: DataMining.IrisDataset.cls. Copy DataMining.IrisDataset.cls into your <project-directory>/local/samples directory.

Next, open a bash shell inside the iris-community container by executing the following from a command prompt on your local system:

```
C:\>docker exec -it iris-community bash
```

From the bash shell, launch an IRIS terminal session:

```
/# iris session iris
```

IRIS asks for a username/password. If this is the first time that you are logging into IRIS in this container, use SuperUser/SYS. You will then be asked to change the password. If you have logged in before, for example through the Management Portal, then you have already changed the password; use your updated password now.

Execute the following command to load the file into IRIS:

```
USER>Do $System.OBJ.Load("/samples/local/IrisDataset.cls","ck")
```

You should see output about the above class file compiling and loading successfully. Once this code is loaded, execute the following commands to generate the data for the Iris dataset:

```
USER>Set status = ##class(DataMining.IrisDataset).load()
USER>Write status
```

The output from the second command should be 1. The database now contains data for 150 examples of Iris flowers.

Launching Zeppelin and Configuring Our Notebook

First, download the Zeppelin notebook note available here: https://github.com/denelson/DevCommunity. The name of the note is "Machine Learning Hello World". You can open the Zeppelin notebook in your web browser using the following url: http://localhost:8080

It looks something like this. Click the "Import note" link and import "Machine Learning Hello World.json".

The first code paragraph contains code that will load the InterSystems JDBC driver and Spark Connector. By default, Zeppelin notebooks provide the z variable for accessing the Zeppelin context.
See Zeppelin Context in the Zeppelin documentation.

```
%spark.dep
//z supplies Zeppelin context
z.reset()
z.load("/home/zeppelin/jars/JDK18/intersystems-jdbc-3.0.0.jar")
z.load("/home/zeppelin/jars/JDK18/intersystems-spark-1.0.0.jar")
```

Before running the paragraph, click the down arrow next to the word "anonymous" and then select "Interpreter". On the Interpreters page, search for spark, click the restart button on the right-hand side, and then click OK on the ensuing pop-up. Now return to the Machine Learning Hello World notebook and run the paragraph by clicking the little arrow all the way at the right. You should see output similar to that in the following screen capture:

Connecting to IRIS and Exploring the Data

Everything is now configured. We can connect code running in the spark-zeppelin container to InterSystems IRIS, running in our iris-community container, and begin exploring the data we added earlier. The following Python code connects to InterSystems IRIS, reads the table of data that we loaded in an earlier step (DataMining.IrisDataset), and then displays the first ten rows. Here are a couple of notes about it:

- We need to supply a username and password to IRIS. Use the password that you provided in an earlier step when you logged into IRIS and were forced to change your password. I used SuperUser/SYS1.
- "iris" in the spark.read.format("iris") snippet is an alias for the com.intersystems.spark class, the Spark connector.
- The connection url, including "IRIS" at the start, specifies the location of the InterSystems IRIS default Spark master server.
- The spark variable points to the Spark session supplied by the Zeppelin Spark interpreter.

```
%pyspark
uname = "SuperUser"
pwd = "SYS1"
#spark session available by default through spark variable.
#URL uses the name of the container, iris-community, as the host name.
iris = spark.read.format("iris").option("url","IRIS://iris-community:51773/USER").option("dbtable","DataMining.IrisDataset").option("user",uname).option("password",pwd).load()
iris.show(10)
```

Note: For more information on configuring the Spark connection to InterSystems IRIS, see Using the InterSystems Spark Connector in the InterSystems IRIS documentation. For more information on the spark session and other context variables provided by Zeppelin, see SparkContext, SQLContext, SparkSession, ZeppelinContext in the Zeppelin documentation.

Running the above paragraph results in the following output: Each row represents an individual flower and records its petal length and width, its sepal length and width, and the Iris species it belongs to.

Here is some SQL-esque code for further exploration:

```
%pyspark
iris.groupBy("Species").count().show()
```

Running the paragraph produces the following output: So there are three different Iris species represented in the data, and the data represents each species equally.

Using Python's matplotlib library, we can even draw some graphs. Here is code to plot Petal Length vs. Petal Width:

```
%pyspark
%matplotlib inline
import matplotlib.pyplot as plt
#Retrieve an array of row objects from the DataFrame
items = iris.collect()
petal_length = []
petal_width = []
for item in items:
    petal_length.append(item['PetalLength'])
    petal_width.append(item['PetalWidth'])
plt.scatter(petal_width,petal_length)
plt.xlabel("Petal Width")
plt.ylabel("Petal Length")
plt.show()
```

Running the paragraph creates the following scatter plot: Even to the untrained eye, it looks like there is a pretty strong correlation between Petal Width and Petal Length. We should be able to reliably predict petal length based on petal width.

A Little Machine Learning

Note: I copied the following code from my earlier playground article, cited above. In order to predict petal length based on petal width, we need a model of the relationship between the two.
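The idea behind such a model can be sketched without Spark at all: the Pearson correlation coefficient quantifies the "strong correlation" visible in the scatter plot, and the same sums give the closed-form least-squares slope and intercept. The data points below are small made-up stand-ins in the spirit of the Iris measurements, not the actual dataset:

```python
# Closed-form Pearson correlation and least-squares line in plain Python.
# The data points are illustrative stand-ins, not the real Iris rows.
petal_width  = [0.2, 0.4, 1.3, 1.5, 2.1, 2.3]
petal_length = [1.4, 1.5, 4.0, 4.5, 5.7, 6.0]

n = len(petal_width)
mean_w = sum(petal_width) / n
mean_l = sum(petal_length) / n

# Sums of squared/cross deviations (the 1/n factors cancel in the ratios below).
cov   = sum((w - mean_w) * (l - mean_l) for w, l in zip(petal_width, petal_length))
var_w = sum((w - mean_w) ** 2 for w in petal_width)
var_l = sum((l - mean_l) ** 2 for l in petal_length)

r = cov / (var_w * var_l) ** 0.5     # Pearson correlation coefficient
slope = cov / var_w                  # least-squares slope
intercept = mean_l - slope * mean_w  # least-squares y-intercept

print("r =", round(r, 3))
print("petal_length = %.3f * petal_width + %.3f" % (slope, intercept))
```

For the real Iris data the petal length/width correlation is famously close to 1, which is why a single-feature linear regression works so well on it.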
We can create such a model very easily using Spark. Here is some code that uses Spark's linear regression API to train a regression model. The code does the following:

- Creates a new Spark DataFrame containing the petal length and petal width columns. The petal width column represents the "features" and the petal length column represents the "labels". We use the features to predict the labels.
- Randomly divides the data into training (70%) and test (30%) sets.
- Uses the training data to fit the linear regression model.
- Runs the test data through the model and then displays the petal length, petal width, features, and predictions.

```
%pyspark
from pyspark.ml.regression import LinearRegression
from pyspark.ml.feature import VectorAssembler

# Transform the "Features" column(s) into the correct vector format
df = iris.select('PetalLength','PetalWidth')
vectorAssembler = VectorAssembler(inputCols=["PetalWidth"], outputCol="features")
data = vectorAssembler.transform(df)

# Split the data into training and test sets.
trainingData,testData = data.randomSplit([0.7, 0.3], 0.0)

# Configure the model.
lr = LinearRegression().setFeaturesCol("features").setLabelCol("PetalLength").setMaxIter(10)

# Train the model using the training data.
lrm = lr.fit(trainingData)

# Run the test data through the model and display its predictions for PetalLength.
predictions = lrm.transform(testData)
predictions.show(10)
```

Running the paragraph results in the following output:

The Regression Line

The "model" is really just a regression line through the data. It would be nice to have the slope and y-intercept of that line, and to be able to visualize that line superimposed on our scatter plot. The following code retrieves the slope and y-intercept from the trained model and then uses them to add a regression line to the scatter plot of the petal length and width data.
```
%pyspark
%matplotlib inline
import matplotlib.pyplot as plt

# retrieve the slope and y-intercept of the regression line from the model.
slope = lrm.coefficients[0]
intercept = lrm.intercept
print("slope of regression line: %s" % str(slope))
print("y-intercept of regression line: %s" % str(intercept))

items = iris.collect()
petal_length = []
petal_width = []
for item in items:
    petal_length.append(item['PetalLength'])
    petal_width.append(item['PetalWidth'])

fig, ax = plt.subplots()
ax.scatter(petal_width,petal_length)
plt.xlabel("Petal Width")
plt.ylabel("Petal Length")
y = [slope*x+intercept for x in petal_width]
ax.plot(petal_width, y, color='red')
plt.show()
```

Running the paragraph results in the following output:

What's Next?

There is much, much more we can do. Obviously, we can load much larger and much more interesting datasets into IRIS. See, for example, the Kaggle datasets (https://www.kaggle.com/datasets). With a fully licensed IRIS we could configure sharding and see how Spark, running through the InterSystems Spark Connector, takes advantage of the parallelism sharding offers. Spark, of course, provides many more machine learning and data analysis algorithms. It supports several different languages, including Scala and R.

The article is considered an InterSystems Data Platform Best Practice.

Hi, your article helps me a lot. One more question: how can we get a fully licensed IRIS? It seems that there is no download page on the official site.

Hi! You can request a fully licensed IRIS on this page. If you want to try or use IRIS features with IRIS Community Edition:
- Try IRIS online
- Use IRIS Community from DockerHub on your laptop as is, or with different samples from Open Exchange. Check how to use the IRIS Docker image on the InterSystems Developers video channel.
- Or run Community Edition on Express IRIS images on AWS, GCP or Azure.
HTH
Article
Gevorg Arutiunian · Apr 27, 2019

How to Create New Database, Namespace and Web Application for InterSystems IRIS programmatically

Here is an ObjectScript snippet which lets you create a database, a namespace, and a web application for InterSystems IRIS:

```
set currentNS = $namespace
zn "%SYS"

write "Create DB ...",!
set dbName="testDB"
set dbProperties("Directory") = "/InterSystems/IRIS/mgr/testDB"
set status=##Class(Config.Databases).Create(dbName,.dbProperties)
write:'status $system.Status.DisplayError(status)
write "DB """_dbName_""" was created!",!!

write "Create namespace ...",!
set nsName="testNS"
//DB for globals
set nsProperties("Globals") = dbName
//DB for routines
set nsProperties("Routines") = dbName
set status=##Class(Config.Namespaces).Create(nsName,.nsProperties)
write:'status $system.Status.DisplayError(status)
write "Namespace """_nsName_""" was created!",!!

write "Create web application ...",!
set webName = "/csp/testApplication"
set webProperties("NameSpace") = nsName
set webProperties("Enabled") = $$$YES
set webProperties("IsNameSpaceDefault") = $$$YES
set webProperties("CSPZENEnabled") = $$$YES
set webProperties("DeepSeeEnabled") = $$$YES
set webProperties("AutheEnabled") = $$$AutheCache
set status = ##class(Security.Applications).Create(webName, .webProperties)
write:'status $system.Status.DisplayError(status)
write "Web application """_webName_""" was created!",!

zn currentNS
```

Also check these manuals:

- [Creating Database](https://irisdocs.intersystems.com/iris20181/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=Config.Databases)
- [Namespace](https://irisdocs.intersystems.com/iris20181/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=Config.Namespaces)
- [CSP Application](https://irisdocs.intersystems.com/iris20181/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=Security.Applications)

For web applications, see the answer at https://community.intersystems.com/post/how-do-i-programmatically-create-web-application-definition

Thank you! It's great having all 3 examples in one place :)

Just a quick note.
I found that when creating a new database it was best to initially use SYS.Database, so you can specify max size etc.:

```
s db=##class(SYS.Database).%New()
s db.Directory=directory
s db.Size=initialSize
s db.MaxSize=maxSize
s db.GlobalJournalState=3
s Status=db.%Save()
```

Then finalise with Config.Databases:

```
s Properties("Directory")=directory
s Status=##Class(Config.Databases).Create(name,.Properties)
s Obj=##Class(Config.Databases).Open(name)
s Obj.MountRequired=1
s Status=Obj.%Save()
```

This might not be the best way to do it; I'm open to improvements.