Announcement
Anastasia Dyubaylo · Sep 13, 2022
Hi Developers,
We have another great opportunity for you to join us in Germany for a Conference for Data Scientists, Data Engineers and Data Teams that will be held in Karlsruhe!
⏱ Date: 20 – 21 September 2022
📍Venue: IHK Karlsruhe, Lammstr. 13-17, 76133 Karlsruhe, Germany
Our speaker Markus Mechnich will be talking about (Smart) Data Fabrics in the session titled “Putting an end to data silos: making better decisions, pain-free”. He will discuss the challenge of accessing and merging data from different systems - data that is rarely homogeneous and/or synchronized enough for a comprehensive evaluation - and using it to make sound business decisions.
✅ REGISTER HERE
We look forward to welcoming you to Karlsruhe!
Announcement
Larry Finlayson · Nov 16, 2022
Managing InterSystems Servers December 5-9, 2022 9:00am-5:00pm US-Eastern Time (EST)
This five-day course teaches system and database administrators how to install, configure and secure InterSystems server software, configure for high availability and disaster recovery, and monitor the system. Students also learn troubleshooting techniques.
This course is applicable to both InterSystems IRIS and Caché. Although the course is mostly platform independent, students can complete the exercises using either Windows or Ubuntu.
Self Register Here
Announcement
Fabiano Sanches · May 10, 2023
InterSystems announces its first preview, as part of the developer preview program for the 2023.2 release. This release will include InterSystems IRIS and InterSystems IRIS for Health.
Highlights
Many updates and enhancements have been added in 2023.2, and there are also brand-new capabilities, such as Time-Aware Modeling, enhancements to Foreign Tables, and the ability to use Read-Only Federated Tables. Some of these features or improvements may not yet be available in this developer preview.
Another important topic is the removal of the Private Web Server (PWS) from the installers. This removal was announced last year; the PWS is still included in this first preview but will be removed from InterSystems installers. See this note in the documentation.
--> If you are interested in trying the installers without the PWS, please enroll in the EAP using this form and select the option "NoPWS". Additional information about this EAP can be found here.
Future preview releases are expected to be updated biweekly and we will add features as they are ready. Please share your feedback through the Developer Community so we can build a better product together.
Initial documentation can be found at these links below. They will be updated over the next few weeks until launch is officially announced (General Availability - GA):
InterSystems IRIS
InterSystems IRIS for Health
Availability and Package Information
As usual, Continuous Delivery (CD) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms document.
Installation packages and preview keys are available from the WRC's preview download site or through the evaluation services website (use the flag "Show Preview Software" to get access to 2023.2).
Container images for both Enterprise and Community Editions of InterSystems IRIS and IRIS for Health and all corresponding components are available from the new InterSystems Container Registry web interface. For additional information about docker commands, please see this post: Announcing the InterSystems Container Registry web user interface.
The build number for this developer preview of InterSystems IRIS is 2023.2.0.198.0; for InterSystems IRIS for Health it is 2023.2.0.200.0.
For a full list of the available images, please refer to the ICR documentation. Alternatively, tarball versions of all container images are available via the WRC's preview download site. InterSystems IRIS for Health kits are also published, available from the WRC's preview download site and from the InterSystems Container Registry.
Time-Aware Modeling, enhancements to Foreign Tables, Read-Only Federated Tables - is there any documentation on these features?
Hi Herman - Documentation will be released as features become available. As usual, we're adding things every two weeks to these previews. So, stay tuned!
Announcement
John Murray · May 22, 2023
If you have already built unit tests using the %UnitTest framework, or are thinking about doing so, please take a look at InterSystems Testing Manager.
Without leaving VS Code you can now browse your unit tests, run or debug them, and view previous run results.
InterSystems Testing Manager works with both of the source code location paradigms supported by the ObjectScript extension. Your unit test classes can either be mastered in VS Code's local filesystem (the 'client-side editing' paradigm) or in a server namespace ('server-side editing'). In both cases the actual test runs occur in a server namespace.
Feedback welcome.
Don't see it in Open Exchange :) Could you please publish?
@Evgeny.Shvarov It's available on the VS Code Marketplace, which is where people would expect to find it.
Right, I'm not suggesting removing it from there. But publishing on Open Exchange will help developers from the InterSystems ecosystem notice it. On Open Exchange, developers can find the dev tools and libraries that could form their development environment.
This is awesome! I've really missed this from working in other programming languages.
According to the ReadMe (v0.2.0), this extension is currently considered a "Preview". Is there a product roadmap/timeline of what needs to be done to promote it out of "Preview" status?
Thanks for the positive response. IMO the main thing needed before the Preview tag gets dropped is feedback from %UnitTest users.
One motivation for publishing the Preview is the upcoming Global Summit, which I hope will be a good opportunity for in-person discussion about this extension and others. Find me at the George James Software booth in the Partner Pavilion.
Now at https://openexchange.intersystems.com/package/InterSystems-Testing-Manager-for-VS-Code
Also welcome are reviews, which can be posted on Marketplace and on Open Exchange.
Tried it with a project where I have ObjectScript unit tests. I call them manually with IPM and automatically with a GitHub workflow.
The Testing Manager is installed, and I'm on a class with a unit test.
Not sure, though, how it works?
@Evgeny.Shvarov I believe the Marketplace link @John.Murray provided has details of how to run the tests (including some GIFs etc.): https://marketplace.visualstudio.com/items?itemName=intersystems-community.testingmanager
Since you are working client-side, I think you need to set the intersystems.testingManager.client.relativeTestRoot setting to contain the workspace-relative path to the tests folder that's showing at the beginning of the breadcrumb in your screenshot. Please see point 2 of https://github.com/intersystems-community/intersystems-testingmanager#workspace-preparations
@Evgeny.Shvarov Did my suggestion help you get started with this extension?
Thank you! Hi @John.Murray !
Not sure if I put it in the right place:
Says it shouldn't be here.
Move it outside the `objectscript.conn` object please.
Did it:
Now it likes it, but the testing tool still doesn't see any tests. Should the path be absolute or relative to the root of the repo? Would you mind sending a PR with a working setting to the repo?
Please change this to a relative path (no leading slash).
Also, there was a bug in how it handled a "docker-compose" type of configuration.
Please try this dev build by downloading the zip, extracting the VSIX and dropping it into VS Code's Extensions view. If it works for you I will publish a new version to Marketplace. Now it sees them:
But they ask for ^UnitTest to be set up, instructions, etc.
Could it work similarly to the way it works in IPM, as @Timothy.Leavitt demonstrated? Because with IPM I can run tests, all or standalone, without any settings at all - it just works. Could IPM be leveraged if it is present in the repo/namespace? BTW,
this setting worked:
"intersystems.testingManager.client.relativeTestRoot": "/tests"
Thanks for confirming the fix. I'm publishing 0.2.1 now.
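Pulling the thread together, the working configuration is a top-level entry in the workspace settings.json, placed as a sibling of (not inside) the objectscript.conn object. The connection details and the path value below are illustrative:

```json
{
  "objectscript.conn": {
    "server": "iris",
    "ns": "USER",
    "active": true
  },
  "intersystems.testingManager.client.relativeTestRoot": "tests"
}
```

The value should be the workspace-relative path to the folder containing your unit test classes.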
Please post enhancement ideas on the repo, for better tracking.
Article
Evgeny Shvarov · Feb 15, 2020
Hi Developers!
As you know, the concept of ObjectScript Package Manager consists of the ZPM client - a client application for IRIS which helps you install packages from the registry - and the code which works "on the other side": the ZPM Registry, a server which hosts packages and exposes an API to submit, list, and install them. Now, when you install the ZPM client, it installs packages from the community package registry, which is hosted on pm.community.intersystems.com.
But what if you want your own registry? E.g. you produce different software packages for your clients and want to distribute them via a private registry. Also, you may want to use your own registry to deploy solutions with different combinations of packages.
Is it possible? The answer is YES! You can have it if you deploy ZPM registry on your server with InterSystems IRIS.
To make it happen you would need to set up your own registry server.
How to do that?
ZPM Registry can be installed as a package zpm-registry. So you can install zpm-client first (article, video), or take a docker-image with zpm-client inside and install zpm-registry as a package:
USER>zpm
zpm: USER>install zpm-registry
When zpm-registry is installed, it introduces a REST endpoint at server:port/registry with a set of REST API routes. Let's examine them, using GET requests against the community registry as an example.
/
- root entry shows the version of the registry software.
/_ping
{"message":"ping"}
- entry to check the working status.
/_spec
- swagger spec entry
/packages/-/all
- displays all the available packages.
/packages/:package
- the set of GET entries for package deployment. ZPM client is using it when we ask it to install a particular package.
/package
- POST entry to publish a new package from the Github repository.
This and all other API you can examine e.g. in full documentation online generated with the help of /_spec entry:
OK! Let's install and test ZPM-registry on a local machine.
1. Run docker container with IRIS 2019.4 and preinstalled ZPM-client (from developer community repo on docker hub).
% docker run --name my-iris -d --publish 52773:52773 intersystemsdc/iris-community:2019.4.0.383.0-zpm
f40a06bd81b98097b6cc146cd61a6f8487d2536da1ffaf0dd344c615fe5d2844
% docker exec -it my-iris iris session IRIS
Node: f40a06bd81b9, Instance: IRIS
USER>zn "%SYS"
%SYS>Do ##class(Security.Users).UnExpireUserPasswords("*")
%SYS>zn "USER"
USER>zpm
zpm: USER>install zpm-registry
[zpm-registry] Reload START
[zpm-registry] Reload SUCCESS
[zpm-registry] Module object refreshed.
[zpm-registry] Validate START
[zpm-registry] Validate SUCCESS
[zpm-registry] Compile START
[zpm-registry] Compile SUCCESS
[zpm-registry] Activate START
[zpm-registry] Configure START
[zpm-registry] Configure SUCCESS
[zpm-registry] Activate SUCCESS
zpm: USER>
2. Let’s publish a package in our new privately-launched zpm-registry.
To publish a package, you can make a POST request to the registry/package endpoint and supply the URL of a repository which contains module.xml in its root. E.g. let's take the repo of the objectscript-math application on Open Exchange by @Peter.Steiwer : https://github.com/psteiwer/ObjectScript-Math
$ curl -i -X POST -H "Content-Type:application/json" -u user:password -d '{"repository":"https://github.com/psteiwer/ObjectScript-Math"}' 'http://localhost:52773/registry/package'
Make sure to change the user and password to your credentials.
HTTP/1.1 200 OK
Date: Sat, 15 Feb 2020 20:48:13 GMT
Server: Apache
CACHE-CONTROL: no-cache
EXPIRES: Thu, 29 Oct 1998 17:04:19 GMT
PRAGMA: no-cache
CONTENT-LENGTH: 0
Content-Type: application/json; charset=utf-8
As we see, the request returns 200, which means the call succeeded and the new package is available in the private registry for installation via ZPM clients. Let's check the list of packages again:
Open in browser http://localhost:52773/registry/packages/-/all:
[{"name":"objectscript-math","versions":["0.0.4"]}]
Now it shows us that there is one package available.
That’s perfect!
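If you want to script these two registry calls, here is a minimal Python sketch. It does not require a running server: it only builds the publish request and parses the sample package-list response shown above. The localhost URL and the credentials are placeholders for your own registry.

```python
import base64
import json
import urllib.request

REGISTRY = "http://localhost:52773/registry"  # placeholder for your registry URL

def build_publish_request(repo_url, user, password):
    """Build the POST /registry/package request that publishes a repository."""
    body = json.dumps({"repository": repo_url}).encode("utf-8")
    req = urllib.request.Request(REGISTRY + "/package", data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req  # send with urllib.request.urlopen(req) once the registry is up

def parse_package_list(payload):
    """Parse a GET /registry/packages/-/all response into {name: versions}."""
    return {pkg["name"]: pkg["versions"] for pkg in json.loads(payload)}

req = build_publish_request("https://github.com/psteiwer/ObjectScript-Math",
                            "user", "password")
print(req.get_method(), req.full_url)
# sample response taken from the article's output above
print(parse_package_list('[{"name":"objectscript-math","versions":["0.0.4"]}]'))
```

This is the same pair of calls the curl command and the browser check perform, just packaged for automation.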
Next question is how to install packages via ZPM client from an alternative repository.
By default, when the ZPM client is installed it's configured to work with the public registry pm.community.intersystems.com.
This command shows the currently configured registry:
zpm: USER>repo -list
registry
Source: https://pm.community.intersystems.com
Enabled? Yes
Available? Yes
Use for Snapshots? Yes
Use for Prereleases? Yes
zpm: USER>
But this could be altered. The following command changes the registry to your one:
zpm: USER>repo -n registry -r -url http://localhost:52773/registry/ -user username -pass password
Change username and password here to the credentials set up on your server for the /registry REST API.
Let’s check that alternative registry is available:
zpm: USER>repo -list
registry
Source: http://localhost:52773/registry/
Enabled? Yes
Available? Yes
Use for Snapshots? Yes
Use for Prereleases? Yes
Username: _SYSTEM
Password: <set>
zpm: USER>
So the ZPM client is ready to work with another ZPM registry.
The ZPM registry lets you build your own private registries, which can be filled with any collection of packages, either public or your own.
And the ZPM client gives you the option to switch between registries and install from the public registry or from any private one.
Also check the article by @Mikhail.Khomenko which describes how to deploy InterSystems IRIS docker-container with ZPM registry in Kubernetes cloud provided by Google Kubernetes Engine.
Happy coding and stay tuned!
Is there any way to have multiple registries enabled at the same time in ZPM, with a priority, so that it looks at them in order to find packages? That way you could use public packages from the community repo without having to import/duplicate them into your local repo. Or is the only way to keep switching between repos if you want to source from different places?
Question
Evgeny Shvarov · Mar 1, 2019
Hi Community!
When you run IRIS container out-of-the-box and connect to it via terminal e.g. with:
docker-compose exec iris bash
You see something like:
root@7b19f545187b:/opt/app# irissession IRIS
Node: 7b19f545187b, Instance: IRIS
Username: ***
Password: ***
USER>
And you enter login and password every time.
How do you programmatically set up the docker-compose file to have an IRIS container with OS authentication enabled, so that entering the terminal gives the following:
root@7b19f545187b:/opt/app# irissession IRIS
Node: 7b19f545187b, Instance: IRIS
USER>
One substitution in your code: use $ZBOOLEAN to account for cases where OS authentication had already been enabled (in which case your code would disable it). Instead of:
Set p("AutheEnabled")=p("AutheEnabled")+16
Use
Set p("AutheEnabled")=$zb(p("AutheEnabled"),16,7)
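The difference is easy to see with plain integers: $ZB(x,16,7) performs a bitwise OR, which is idempotent, while adding 16 flips the bit off if it was already set. A quick Python sketch of the same arithmetic (the flag values are illustrative; 16 is the OS-authentication bit as in the code above):

```python
# AutheEnabled is a bit mask; 16 (0b10000) is the OS-authentication flag.
OS_AUTH = 16

def enable_with_add(flags):
    # naive approach: adding 16 clears the bit if it was already set
    return flags + OS_AUTH

def enable_with_or(flags):
    # bitwise OR (what $ZBOOLEAN with operator 7 does): idempotent
    return flags | OS_AUTH

flags = 48  # suppose OS auth (16) plus another flag (32) are already enabled
print(enable_with_add(flags))  # 64 -> the OS-auth bit has been cleared!
print(enable_with_or(flags))   # 48 -> unchanged, still enabled
```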
Documentation for $ZBOOLEAN
Check out my series of articles, Continuous Delivery of your InterSystems solution using GitLab - it talks about many features related to automating these kinds of tasks. In particular, Part VII (CD using containers) covers programmatic enabling of OS-level authentication.
To activate OS authentication in your docker image, you can run this code in the %SYS namespace:
Do ##class(Security.System).Get(,.p)
Set p("AutheEnabled")=p("AutheEnabled")+16
Do ##class(Security.System).Modify(,.p)
If you work with community edition, you can use my image, where you can easily define also user and password for external use.
Running server
$ docker run -d --rm --name iris \
-p 52773:52773 \
-e IRIS_USER=test \
-e IRIS_PASSWORD=test \
daimor/intersystems-iris:2019.1.0S.111.0-community
Terminal connect
$ docker exec -it iris iris session iris
Node: 413a4da758e7, Instance: IRIS
USER>write $username
root
USER>write $roles
%All
Or with docker-compose, something like this
iris:
image: daimor/intersystems-iris:2019.1.0S.111.0-community
ports:
- 52773:52773
environment:
IRIS_USER: ${IRIS_USER:-test}
IRIS_PASSWORD: ${IRIS_PASSWORD:-test}
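The `${IRIS_PASSWORD:-test}` syntax used in the compose file is standard shell parameter expansion: use the environment variable if it is set and non-empty, otherwise fall back to the default. A quick shell illustration:

```shell
#!/bin/sh
# ${VAR:-default} expands to $VAR if set and non-empty, else to "default"
unset IRIS_PASSWORD
echo "password: ${IRIS_PASSWORD:-test}"   # falls back to the default

IRIS_PASSWORD=secret
echo "password: ${IRIS_PASSWORD:-test}"   # uses the assigned value
```

So `docker-compose up` works out of the box with test/test, and exporting IRIS_USER and IRIS_PASSWORD before running it overrides the defaults.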
Announcement
Anastasia Dyubaylo · Apr 17, 2019
Hi Community!
Good news! One more upcoming event is nearby.
We're pleased to invite you to join "J on the Beach" - an international rendezvous for developers and DevOps working with Big Data technologies. A fun conference to learn and share the latest experiences, tips and tricks related to Big Data technologies and, the most important part, it's On The Beach!
We're more than happy to invite you and your colleagues to our InterSystems booth for a personal conversation. InterSystems is also a silver sponsor of the JOTB.
In addition, this year we have a special Global Masters Meeting Point at the conference. You're very welcome to come to our booth to ask questions, share your ideas and, of course, pick up some samples of rewards and GM badges. Looking forward to seeing you soon!
So, remember!
Date: May 15-17, 2019
Place: Palacio de Congresos y Exposiciones Adolfo Suarez, Marbella, Spain
You can find more information here: jonthebeach.com
Please feel free to ask any questions in the comments of this post.
Buy your ticket and save your seat today!
Article
David E Nelson · Apr 26, 2019
The last time that I created a playground for experimenting with machine learning using Apache Spark and an InterSystems data platform, see Machine Learning with Spark and Caché, I installed and configured everything directly on my laptop: Caché, Python, Apache Spark, Java, some Hadoop libraries, to name a few. It required some effort, but eventually it worked. Paradise. But, I worried. Would I ever be able to reproduce all those steps? Maybe. Would it be possible for a random Windows or Java update to wreck the whole thing in an instant? Almost certainly.
Now, thanks to the increasingly widespread availability of containers and the increasingly usable Docker for Windows, I have my choice of pre-configured machine learning and data science environments. See, for example, Jupyter Docker Stacks and Zeppelin on Docker Hub. With InterSystems making the community edition of the IRIS Data Platform available via container (InterSystems IRIS now Available on the Docker Store), I have easy access to a data platform supporting both machine learning and analytics among a host of other features. By using containers, I do not need to worry about any automatic updates wrecking my playground. If my office floods and my laptop is destroyed, I can easily recreate the playground with a single text file, which I have of course placed in source control ;-)
In the following, I will share a Docker compose file that I used to create a container-based machine learning and data science playground. The playground involves two containers: one with a Zeppelin and Spark environment, the other with the InterSystems IRIS Data Platform community edition. Both use images available on Docker Hub. I’ll then show how to configure the InterSystems Spark Connector to connect the two. I will end by loading some data into InterSystems IRIS and using Spark to do some data exploration, visualization, and some very basic machine learning. Of course, my example will barely scratch the surface of the capabilities of both Spark and InterSystems IRIS. However, I hope the article will be useful to help others get started doing more complex and useful work.
Note: I created and tested everything that follows on my Windows 10 laptop, using Docker for Windows. For information on configuring Docker for Windows for use with InterSystems IRIS please see the following. The second of the two articles also discusses the basics of using compose files to configure Docker containers.
Using InterSystems IRIS Containers with Docker for Windows
Docker for Windows and the InterSystems IRIS Data Platform
Compose File for the Two-Container Playground
Hopefully, the comments in the following compose file do a reasonably adequate job of explaining the environment, but in case they do not, here are the highlights. The compose file defines:
Two containers: One containing the InterSystems IRIS Community Edition and the other containing both the Zeppelin notebook environment and Apache Spark. Both containers are based on images pulled from the Docker store.
A network for communication between the two containers. With this technique, we can use the container names as host names when setting up communication between the containers.
Local directories mounted in each container. We can use these directories to make jar files available to the Spark environment and some data files available to the IRIS environment.
A named volume for the durable %SYS feature needed by InterSystems IRIS. Named volumes are necessary for InterSystems IRIS when running in containers on Docker for Windows. For more about this see below for links to other community articles.
Mappings of some networking ports inside the containers to ports available outside the containers, to provide easy access.
version: '3.2'
services:
#container 1 with InterSystems IRIS
iris:
# iris community edition image to pull from docker store.
image: store/intersystems/iris:2019.1.0.510.0-community
container_name: iris-community
ports:
# 51773 is the superserver default port
- "51773:51773"
# 52773 is the webserver/management portal default port
- "52773:52773"
volumes:
# Sets up a named volume durable_data that will keep the durable %SYS data
- durable:/durable
# Maps a /local directory into the container to allow for easily passing files and test scripts
- ./local/samples:/samples/local
environment:
# Set the variable ISC_DATA_DIRECTORY to the durable_data volume that we defined above to use durable %SYS
- ISC_DATA_DIRECTORY=/durable/irissys
# Adds the IRIS container to the network defined below.
networks:
- mynet
#container 2 with Zeppelin and Spark
zeppelin:
# zeppelin notebook with spark image to pull from docker store.
image: apache/zeppelin:0.8.1
container_name: spark-zeppelin
#Ports for accessing Zeppelin environment
ports:
#Port for Zeppelin notebook
- "8080:8080"
#Port for Spark jobs page
- "4040:4040"
#Maps /local directories for saving notebooks and accessing jar files.
volumes:
- ./local/notebooks:/zeppelin/notebook
- ./local/jars:/home/zeppelin/jars
#Adds the Spark and Zeppelin container to the network defined below.
networks:
- mynet
#Declares the named volume for the IRIS durable %SYS
volumes:
durable:
# Defines a network for communication between the two containers.
networks:
mynet:
ipam:
config:
- subnet: 172.179.0.0/16
Launching the Containers
Place the compose file in a directory on your system. Note that the directory name becomes the Docker project name. You will need to create sub-directories matching those mentioned in the compose file. So, my directory structure looks like this
iris_spark_zeppelin
local
jars
notebooks
samples
docker-compose.yml
To launch the containers, execute the following Docker command from inside your project directory:
C:\iris_spark_zeppelin>docker-compose up -d
Note that the -d flag runs the containers in detached mode; you will not see them logging any information to the command line.
You can inspect the log files for the containers using the docker logs command. For example, to see the log file for the iris-community container, execute the following:
C:\>docker logs iris-community
To inspect the status of the containers, execute the following command:
C:\>docker container ls
When the iris-community container is ready, you can access the IRIS Management Portal with this url:
http://localhost:52773/csp/sys/UtilHome.csp
Note: The first time you log in to IRIS, use the username/password SuperUser/SYS. You will be redirected to a password change page.
You can access the Zeppelin notebook with this url:
http://localhost:8080
Copying Some Jar Files
In order to use the InterSystems Spark Connector, the Spark environment needs access to two jar files:
1. intersystems-jdbc-3.0.0.jar
2. intersystems-spark-1.0.0.jar
Currently, these jar files reside with IRIS inside the iris-community container. We need to copy them out into the locally mapped directory so that the spark-zeppelin container can access them.
To do this, we can use the Docker cp command to copy all the JDK 1.8 version files from inside the iris-community container into one of the local directories visible to the spark-zeppelin container. Open a CMD prompt in the project directory and execute the following command:
C:\iris_spark_zeppelin>docker cp iris-community:/usr/irissys/dev/java/lib/JDK18 local/jars
This will add a JDK18 directory containing the above jar files along with a few others to <project-directory>/local/jars.
Adding Some Data
No data, no machine learning. We can use the local directories mounted by the iris-community container to add some data to the data platform. I used the Iris data set (no relation to InterSystems IRIS Data Platform). The Iris data set contains data about flowers. It has long served as the “hello world” example for machine learning (Iris flower data set). You can download or pull an InterSystems class definition for generating the data, along with code for several related examples, from GitHub (Samples-Data-Mining). We are interested in only one file from this set: DataMining.IrisDataset.cls.
Copy DataMining.IrisDataset.cls into your <project-directory>/local/samples directory. Next, open a bash shell inside the iris-community container by executing the following from a command prompt on your local system:
C:\>docker exec –it iris-community bash
From the bash shell, launch an IRIS terminal session:
/# iris session iris
IRIS asks for a username/password. If this is the first time that you are logging into IRIS in this container, use SuperUser/SYS. You will then be asked to change the password. If you have logged in before, for example through the Management Portal, then you changed the password already. Use your updated password now.
Execute the following command to load the file into IRIS:
USER>Do $System.OBJ.Load("/samples/local/DataMining.IrisDataset.cls","ck")
You should see output about the above class file compiling and loading successfully. Once this code is loaded, execute the following commands to generate the data for the Iris dataset:
USER>Set status = ##class(DataMining.IrisDataset).load()
USER>Write status
The output from the second command should be 1. The database now contains data for 150 examples of Iris flowers.
Launching Zeppelin and Configuring Our Notebook
First, download the Zeppelin notebook note available here: https://github.com/denelson/DevCommunity. The name of the note is “Machine Learning Hello World”.
You can open the Zeppelin notebook in your web browser using the following url:
http://localhost:8080
It looks something like this.
Click the “Import note” link and import “Machine Learning Hello World.json”.
The first code paragraph contains code that will load the InterSystems JDBC driver and Spark Connector. By default, Zeppelin notebooks provide the z variable for accessing Zeppelin context. See Zeppelin Context in the Zeppelin documentation.
%spark.dep
//z supplies Zeppelin context
z.reset()
z.load("/home/zeppelin/jars/JDK18/intersystems-jdbc-3.0.0.jar")
z.load("/home/zeppelin/jars/JDK18/intersystems-spark-1.0.0.jar")
Before running the paragraph, click the down arrow next to the word “anonymous” and then select “Interpreter”.
On the Interpreters page, search for spark, then click the restart button on the right-hand-side and then ok on the ensuing pop-up.
Now return to the Machine Learning Hello World notebook and run the paragraph by clicking the little arrow all the way at the right. You should see output similar to that in the following screen capture:
Connecting to IRIS and Exploring the Data
Everything is all configured. Now we can connect code running in the spark-zeppelin container to InterSystems IRIS, running in our iris-community container, and begin exploring the data we added earlier. The following Python code connects to InterSystems IRIS and reads the table of data that we loaded in an earlier step (DataMining.IrisDataset) and then displays the first ten rows.
Here are a couple of notes about the following code:
We need to supply a username and password to IRIS. Use the password that you provided in an earlier step when you logged into IRIS and were forced to change your password. I used SuperUser/SYS1.
“iris” in the spark.read.format(“iris”) snippet is an alias for the com.intersystems.spark class, the Spark Connector.
The connection URL, including “IRIS” at the start, specifies the location of the InterSystems IRIS default Spark master server.
The spark variable points to the Spark session supplied by the Zeppelin Spark interpreter.
%pyspark
uname = "SuperUser"
pwd = "SYS1"
#spark session available by default through spark variable.
#URL uses the name of the container, iris-community, as the host name.
iris = spark.read.format("iris") \
    .option("url", "IRIS://iris-community:51773/USER") \
    .option("dbtable", "DataMining.IrisDataset") \
    .option("user", uname) \
    .option("password", pwd) \
    .load()
iris.show(10)
Note: For more information on configuring the Spark connection to InterSystems IRIS, see Using the InterSystems Spark Connector in the InterSystems IRIS documentation. For more information on the spark session and other context variables provided by Zeppelin, see SparkContext, SQLContext, SparkSession, ZeppelinContext in the Zeppelin documentation.
Running the above paragraph results in the following output:
Each row represents an individual flower and records its petal length and width, its sepal length and width, and the Iris species it belongs to.
Here is some SQL-esque code for further exploration:
%pyspark
iris.groupBy("Species").count().show()
Running the paragraph produces the following output:
So there are three different Iris species represented in the data. The data represents each species equally.
Using Python’s matplotlib library, we can even draw some graphs. Here is code to plot Petal Length vs. Petal Width:
%pyspark
%matplotlib inline
import matplotlib.pyplot as plt
#Retrieve an array of row objects from the DataFrame
items = iris.collect()
petal_length = []
petal_width = []
for item in items:
petal_length.append(item['PetalLength'])
petal_width.append(item['PetalWidth'])
plt.scatter(petal_width,petal_length)
plt.xlabel("Petal Width")
plt.ylabel("Petal Length")
plt.show()
Running the paragraph creates the following scatter plot:
Even to the untrained eye, it looks like there is a pretty strong correlation between Petal Width and Petal Length. We should be able to reliably predict petal length based on petal width.
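One way to quantify that visual impression is the Pearson correlation coefficient. The sketch below computes it in pure Python over a few illustrative (petal width, petal length) pairs - not the actual collected DataFrame rows, just numbers roughly shaped like the Iris measurements:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# illustrative values roughly shaped like the Iris petal measurements
petal_width = [0.2, 0.4, 1.3, 1.5, 2.1, 2.3]
petal_length = [1.4, 1.7, 4.0, 4.5, 5.7, 6.0]
print(round(pearson(petal_width, petal_length), 3))  # close to 1.0
```

A coefficient near 1.0 confirms a strong positive linear relationship, which is exactly what the regression model in the next section exploits.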
A Little Machine Learning
Note: I copied the following code from my earlier playground article, cited above.
In order to predict petal length based on petal width, we need a model of the relationship between the two. We can create such a model very easily using Spark. Here is some code that uses Spark's linear regression API to train a regression model. The code does the following:
Creates a new Spark DataFrame containing the petal length and petal width columns. The petal width column represents the "features" and the petal length column represents the "labels". We use the features to predict the labels.
Randomly divides the data into training (70%) and test (30%) sets.
Uses the training data to fit the linear regression model.
Runs the test data through the model and then displays the petal length, petal width, features, and predictions.
%pyspark
from pyspark.ml.regression import LinearRegression
from pyspark.ml.feature import VectorAssembler
# Transform the "Features" column(s) into the correct vector format
df = iris.select('PetalLength','PetalWidth')
vectorAssembler = VectorAssembler(inputCols=["PetalWidth"],
                                  outputCol="features")
data=vectorAssembler.transform(df)
# Split the data into training and test sets.
trainingData,testData = data.randomSplit([0.7, 0.3], 0.0)
# Configure the model.
lr = LinearRegression().setFeaturesCol("features").setLabelCol("PetalLength").setMaxIter(10)
# Train the model using the training data.
lrm = lr.fit(trainingData)
# Run the test data through the model and display its predictions for PetalLength.
predictions = lrm.transform(testData)
predictions.show(10)
Running the paragraph results in the following output:
The Regression Line
The “model” is really just a regression line through the data. It would be nice to have the slope and y-intercept of that line. It would also be nice to be able to visualize that line superimposed on our scatter plot. The following code retrieves the slope and y-intercept from the trained model and then uses them to add a regression line to the scatter plot of the petal length and width data.
%pyspark
%matplotlib inline
import matplotlib.pyplot as plt
# retrieve the slope and y-intercepts of the regression line from the model.
slope = lrm.coefficients[0]
intercept = lrm.intercept
print("slope of regression line: %s" % str(slope))
print("y-intercept of regression line: %s" % str(intercept))
items = iris.collect()
petal_length = []
petal_width = []
petal_features = []
for item in items:
    petal_length.append(item['PetalLength'])
    petal_width.append(item['PetalWidth'])
fig, ax = plt.subplots()
ax.scatter(petal_width,petal_length)
plt.xlabel("Petal Width")
plt.ylabel("Petal Length")
y = [slope*x+intercept for x in petal_width]
ax.plot(petal_width, y, color='red')
plt.show()
Running the paragraph results in the following output:
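As a cross-check outside Spark, an ordinary least-squares line can be fit with NumPy's polyfit. A sketch on made-up points that lie exactly on a line (illustrative values, not the trained model's coefficients):

```python
import numpy as np

# Points generated from y = 2.2x + 1.0; the fit should recover those coefficients
x = np.array([0.1, 0.5, 1.0, 1.5, 2.0, 2.5])
y = 2.2 * x + 1.0

# A degree-1 polynomial fit is an ordinary least-squares regression line
slope, intercept = np.polyfit(x, y, 1)
print("slope:", slope)
print("intercept:", intercept)
```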
What’s Next?
There is much, much more we can do. Obviously, we can load much larger and much more interesting datasets into IRIS. See, for example, the Kaggle datasets (https://www.kaggle.com/datasets). With a fully licensed IRIS we could configure sharding and see how Spark, running through the InterSystems Spark Connector, takes advantage of the parallelism sharding offers. Spark, of course, provides many more machine learning and data analysis algorithms, and it supports several different languages, including Scala and R.

💡 The article is considered as InterSystems Data Platform Best Practice.

Hi, your article helps me a lot. One more question: how can we get a fully licensed IRIS? It seems that there is no download page on the official site.

Hi! You can request a fully licensed IRIS on this page.
If you want to try or use IRIS features with IRIS Community Edition:
Try IRIS online
Use IRIS Community from DockerHub on your laptop as is, or with different samples from Open Exchange. Check how to use the IRIS Docker image on the InterSystems Developers video channel.
Or run Community on Express IRIS images on AWS, GCP or Azure.
HTH
Article
Gevorg Arutiunian · Apr 27, 2019
Here is an ObjectScript snippet which lets you create a database, namespace, and web application for InterSystems IRIS:
```
set currentNS = $namespace
zn "%SYS"
write "Create DB ...",!
set dbName="testDB"
set dbProperties("Directory") = "/InterSystems/IRIS/mgr/testDB"
set status=##Class(Config.Databases).Create(dbName,.dbProperties)
write:'status $system.Status.DisplayError(status)
write "DB """_dbName_""" was created!",!!
write "Create namespace ...",!
set nsName="testNS"
//DB for globals
set nsProperties("Globals") = dbName
//DB for routines
set nsProperties("Routines") = dbName
set status=##Class(Config.Namespaces).Create(nsName,.nsProperties)
write:'status $system.Status.DisplayError(status)
write "Namespace """_nsName_""" was created!",!!
write "Create web application ...",!
set webName = "/csp/testApplication"
set webProperties("NameSpace") = nsName
set webProperties("Enabled") = $$$YES
set webProperties("IsNameSpaceDefault") = $$$YES
set webProperties("CSPZENEnabled") = $$$YES
set webProperties("DeepSeeEnabled") = $$$YES
set webProperties("AutheEnabled") = $$$AutheCache
set status = ##class(Security.Applications).Create(webName, .webProperties)
write:'status $system.Status.DisplayError(status)
write "Web application """webName""" was created!",!
zn currentNS
```
Also check these manuals:
- [Creating Database](https://irisdocs.intersystems.com/iris20181/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=Config.Databases)
- [Namespace](https://irisdocs.intersystems.com/iris20181/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=Config.Namespaces)
- [CSP Application](https://irisdocs.intersystems.com/iris20181/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=Security.Applications)
For web applications see the answer https://community.intersystems.com/post/how-do-i-programmatically-create-web-application-definition

Thank you! It's great having all 3 examples in one place :)

Just a quick note.
I found that when creating a new database it was best to initially use SYS.Database so you can specify max size etc.:
s db=##class(SYS.Database).%New()
s db.Directory=directory
s db.Size=initialSize
s db.MaxSize=maxSize
s db.GlobalJournalState=3
s Status=db.%Save()
Then finalise with Config.Databases:
s Properties("Directory")=directory
s Status=##Class(Config.Databases).Create(name,.Properties)
s Obj=##Class(Config.Databases).Open(name)
s Obj.MountRequired=1
s Status=Obj.%Save()
This might not be the best way to do it, I'm open to improvements.
Article
Evgeny Shvarov · Feb 24, 2020
Hi Developers!
Many of you publish your InterSystems ObjectScript libraries on Open Exchange and Github.
But what do you do to make it easy for developers to use and collaborate on your project?
In this article, I want to show an easy way to launch and contribute to any ObjectScript project just by copying a standard set of files to your repository.
Let's go!
TLDR - copy these files from the repository into your repository:
Dockerfile
docker-compose.yml
Installer.cls
iris.script
settings.json
.dockerignore
.gitattributes
.gitignore
And you get the standard way to launch and collaborate on your project. Below is the long article on how and why this works.
NB: In this article, we will consider projects which are runnable on InterSystems IRIS 2019.1 and newer.
Choosing the launch environment for InterSystems IRIS projects
Usually, we want a developer to try the project/library and be sure that this will be a fast and safe exercise.
IMHO the ideal approach to launching anything new quickly and safely is a Docker container, which guarantees a developer that anything they launch, import, compile, and calculate is safe for the host machine: no system or code will be destroyed or spoiled. If something goes wrong, you just stop and remove the container. If the application takes an enormous amount of disk space, you wipe it out along with the container and your space is back. If an application spoils the database configuration, you just delete the container with the spoiled configuration. Simple and safe as that.
Docker container gives you safety and standardization.
The simplest way to run vanilla InterSystems IRIS Docker container is to run an IRIS Community Edition image:
1. Install Docker desktop
2. Run in OS terminal the following:
docker run --rm -p 52773:52773 --init --name my-iris store/intersystems/iris-community:2020.1.0.199.0
3. Then open Management portal in your host browser on:
http://localhost:52773/csp/sys/UtilHome.csp
4. Or open a terminal to IRIS:
docker exec -it my-iris iris session IRIS
5. Stop IRIS container when you don't need it:
docker stop my-iris
OK! We run IRIS in a docker container. But you want a developer to install your code into IRIS and maybe make some settings. This is what we will discuss below.
Importing ObjectScript files
The simplest InterSystems ObjectScript project can contain a set of ObjectScript files such as classes, routines, macros, and globals. Check the article on the naming convention and proposed folder structure.
The question is how to import all this code into an IRIS container?
This is where a Dockerfile helps us: we can use it to take the vanilla IRIS container, import all the code from the repository into IRIS, and adjust some IRIS settings if needed. We need to add a Dockerfile to the repo.
Let's examine the Dockerfile from ObjectScript template repo:
ARG IMAGE=store/intersystems/irishealth:2019.3.0.308.0-community
ARG IMAGE=store/intersystems/iris-community:2019.3.0.309.0
ARG IMAGE=store/intersystems/iris-community:2019.4.0.379.0
ARG IMAGE=store/intersystems/iris-community:2020.1.0.199.0
FROM $IMAGE
USER root
WORKDIR /opt/irisapp
RUN chown ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/irisapp
USER irisowner
COPY Installer.cls .
COPY src src
COPY iris.script /tmp/iris.script

# run iris and execute the initial script
RUN iris start IRIS \
&& iris session IRIS < /tmp/iris.script
The ARG lines set the $IMAGE variable, which we then use in FROM. This makes it convenient to test/run the code against different IRIS versions: switching them is just a matter of which ARG line comes last before FROM to set the $IMAGE variable.
Here we have:
ARG IMAGE=store/intersystems/iris-community:2020.1.0.199.0
FROM $IMAGE
This means that we are taking IRIS 2020 Community Edition build 199.
We want to import the code from the repository - that means we need to copy the files from a repository into a docker container. The lines below help to do that:
USER root
WORKDIR /opt/irisapp
RUN chown ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/irisapp
USER irisowner
COPY Installer.cls .
COPY src src
USER root - here we switch to the root user to create a folder and copy files in Docker.
WORKDIR /opt/irisapp - in this line we set up the workdir into which we will copy files.
RUN chown ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/irisapp - here we give rights to the irisowner user and group, which run IRIS.
USER irisowner - switching the user from root to irisowner.
COPY Installer.cls . - copying Installer.cls to the root of the workdir. Don't miss the dot here.
COPY src src - copying source files from the src folder in the repo to the src folder in the workdir in Docker.
In the next block we run the initial script, where we call installer and ObjectScript code:
COPY iris.script /tmp/iris.script

# run iris and execute the initial script
RUN iris start IRIS \
&& iris session IRIS < /tmp/iris.script
COPY iris.script /tmp/iris.script - we copy iris.script into the /tmp directory. It contains the ObjectScript we want to run to set up the container.
RUN iris start IRIS \ - start IRIS.
&& iris session IRIS < /tmp/iris.script - open an IRIS terminal session and feed the initial ObjectScript into it.
Fine! We have the Dockerfile, which imports files into Docker. But we encountered two other files: Installer.cls and iris.script. Let's examine them.
Installer.cls
Class App.Installer
{
XData setup
{
<Manifest>
<Default Name="SourceDir" Value="#{$system.Process.CurrentDirectory()}src"/>
<Default Name="Namespace" Value="IRISAPP"/>
<Default Name="app" Value="irisapp" />
<Namespace Name="${Namespace}" Code="${Namespace}" Data="${Namespace}" Create="yes" Ensemble="no">
<Configuration>
<Database Name="${Namespace}" Dir="/opt/${app}/data" Create="yes" Resource="%DB_${Namespace}"/>
<Import File="${SourceDir}" Flags="ck" Recurse="1"/>
</Configuration>
<CSPApplication Url="/csp/${app}" Directory="${cspdir}${app}" ServeFiles="1" Recurse="1" MatchRoles=":%DB_${Namespace}" AuthenticationMethods="32"
/>
</Namespace>
</Manifest>
}
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
#; Let XGL document generate code for this method.
Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "setup")
}
}
Frankly, we do not need Installer.cls just to import files; that could be done with one line. But often, besides importing code, we need to set up a CSP app, introduce security settings, and create databases and namespaces.
In this Installer.cls we create a new database and namespace with the name IRISAPP and create the default /csp/irisapp application for this namespace.
All this we perform in <Namespace> element:
<Namespace Name="${Namespace}" Code="${Namespace}" Data="${Namespace}" Create="yes" Ensemble="no">
<Configuration>
<Database Name="${Namespace}" Dir="/opt/${app}/data" Create="yes" Resource="%DB_${Namespace}"/>
<Import File="${SourceDir}" Flags="ck" Recurse="1"/>
</Configuration>
<CSPApplication Url="/csp/${app}" Directory="${cspdir}${app}" ServeFiles="1" Recurse="1" MatchRoles=":%DB_${Namespace}" AuthenticationMethods="32"
/>
</Namespace>
And we import all the files from SourceDir with the Import tag:
<Import File="${SourceDir}" Flags="ck" Recurse="1"/>
SourceDir here is a variable, which is set to the current directory/src folder:
<Default Name="SourceDir" Value="#{$system.Process.CurrentDirectory()}src"/>
Installer.cls with these settings gives us confidence that we create a clean new database IRISAPP into which we import arbitrary ObjectScript code from the src folder.
iris.script
Here you are welcome to provide any initial ObjectScript setup code you want to start your IRIS container.
E.g., here we load and run Installer.cls, and then we unexpire user passwords to avoid the initial request to change the password, since we don't need that prompt for development.
; run installer to create namespace
do $SYSTEM.OBJ.Load("/opt/irisapp/Installer.cls", "ck")
set sc = ##class(App.Installer).setup() zn "%SYS"
Do ##class(Security.Users).UnExpireUserPasswords("*") ; call your initial methods here
halt
docker-compose.yml
Why do we need docker-compose.yml - couldn't we just build and run the image with the Dockerfile alone? Yes, we could. But docker-compose.yml simplifies life.
Usually, docker-compose.yml is used to launch several docker images connected to one network.
docker-compose.yml can also make launching a single docker image easier when we deal with a lot of parameters. You can use it to pass parameters to docker, such as port mappings, volumes, and VSCode connection parameters.
version: '3.6'
services:
iris:
build:
context: .
dockerfile: Dockerfile
restart: always
ports:
- 51773
- 52773
- 53773
volumes:
- ~/iris.key:/usr/irissys/mgr/iris.key
- ./:/irisdev/app
Here we declare the service iris, which is built from Dockerfile and exposes the following IRIS ports: 51773, 52773, and 53773. The service also maps two volumes: iris.key from the home directory of the host machine into the IRIS folder where it is expected, and the root folder of the source code into the /irisdev/app folder.
Docker-compose gives us a shorter, unified command to build and run the image, whatever parameters you set up in docker-compose.
In any case, the command to build and launch the image is:
$ docker-compose up -d
and to open IRIS terminal:
$ docker-compose exec iris iris session iris
Node: 05a09e256d6b, Instance: IRIS
USER>
Also, docker-compose.yml helps to set up the connection for VSCode ObjectScript plugin.
.vscode/settings.json
The part, which relates to ObjectScript addon connection settings is this:
{
"objectscript.conn" :{
"ns": "IRISAPP",
"active": true,
"docker-compose": {
"service": "iris",
"internalPort": 52773
}
}
}
Here we see the settings that differ from the default settings of the VSCode ObjectScript plugin.
Here we say that we want to connect to the IRISAPP namespace (which we create with Installer.cls):
"ns": "IRISAPP",
and there is a docker-compose setting which tells VSCode that, for the service "iris" in the docker-compose file, it should connect to whichever host port internal port 52773 is mapped to:
"docker-compose": {
"service": "iris",
"internalPort": 52773
}
If we check what we have for 52773, we see that no host port is explicitly mapped to it:
ports:
- 51773
- 52773
- 53773
This means that a random available port on the host machine will be taken, and VSCode will connect to this IRIS in Docker via that random port automatically.
This is a very handy feature, because it lets you run any number of docker images with IRIS on random ports and have VSCode connect to them automatically.
What about other files?
We also have:
.dockerignore - a file you can use to filter out host machine files that you don't want copied into the docker image you build. Usually .git and .DS_Store are mandatory lines.
.gitattributes - attributes for git which unify line endings for ObjectScript files in the sources. This is very useful if the repo is worked on by both Windows and Mac/Ubuntu users.
.gitignore - files whose change history you don't want git to track. Typically some hidden OS-level files, like .DS_Store.
Fine!
How to make your repository docker-runnable and collaboration friendly?
1. Clone this repository.
2. Copy all these files:
Dockerfile
docker-compose.yml
Installer.cls
iris.script
settings.json
.dockerignore
.gitattributes
.gitignore
to your repository.
Change this line in Dockerfile to match the directory with ObjectScript in the repo you want to import into IRIS (or don't change if you have it in /src folder).
That's it. And everyone (and you too) will have your code imported into IRIS in a new IRISAPP namespace.
How will people launch your project
The algorithm to execute any ObjectScript project in IRIS would be:
1. Git clone the project locally
2. Run the project:
$ docker-compose up -d
$ docker-compose exec iris iris session iris
Node: 05a09e256d6b, Instance: IRIS
USER>zn "IRISAPP"
How would any developer contribute to your project
1. Fork the repository and git clone the forked repo locally
2. Open the folder in VSCode (they also need the Docker and ObjectScript extensions installed in VSCode)
3. Right-click on docker-compose.yml->Restart - VSCode ObjectScript will automatically connect and be ready to edit/compile/debug
4. Commit, Push and Pull request changes to your repository
Here is the short gif on how this works:
That's it! Happy coding!
The link to the Management portal was corrected from: http://localhost:52773/csp/UtilHome.csp
The right link is: http://localhost:52773/csp/sys/UtilHome.csp
Thanks @Artem.Reva

Could you please explain the command
docker-compose exec iris iris session iris
I understand exec, but what is "iris iris session iris"?

The first iris after exec is the service name from docker-compose.yml; then comes the command that has to be executed.
The iris command is a replacement for ccontrol from Caché/Ensemble.
session - subcommand for iris tool
And the last iris is the instance name of IRIS inside the container.

Is putting all this in the main directory of the repository necessary?
I believe the two git files (.gitignore and .gitattributes) need to be there. But perhaps all files related to docker can be put in a "Docker" directory to avoid adding so many files to the main directory.
My main fear is people seeing all these files and not knowing where to start. Hi Peter!
Thanks for the question.
In addition to .gitignore and .gitattributes
.vscode/settings.json should be in the root too ( @Dmitry.Maslennikov please correct me if I'm wrong).
All the rest:
Dockerfile
docker-compose.yml
Installer.cls
irissession.sh
Could live in a dedicated folder.
BUT! We use the Dockerfile to COPY Installer.cls and source files from the repo into the image we build, and the Dockerfile sees only the files that sit in the same folder or in subfolders. Specialists, please correct me here if I'm wrong.
So Dockerfile could possibly live inside the source folder - not sure this is what you want to achieve.
there are some possible issues to have docker related files in a dedicated folder.
When you would like to start an environment with docker-compose, you can do it with a command like this.
docker-compose up -d
but it will work only if the docker-compose.yml file name has not changed and it lies right in the current folder.
If you change its location or name, you will have to specify the new place:
docker-compose -f ./docker/docker-compose.yml up -d
Not so simple anymore, right?
Ok, next about Dockerfile.
When you build a docker image, you have to specify the context. So, the command below just uses the file Dockerfile in the current folder, and uses the current folder as the build context.
docker build .
To build a docker image with a Dockerfile placed somewhere else, you should specify it - suppose you still would like to have the current folder as the context:
docker build -f ./docker/Dockerfile .
Any other files in the root, such as Installer.cls, irissession.sh, or any other files which should be used during docker build, have to be available from the specified context folder. And you can't specify more than one context. So, any of those files should have at least some parent folder - and why not the root of the project?
With docker-compose.yml we can forget about the docker build command, but we still have to care about docker-compose.
When I try this on my laptop I am getting the following error: The terminal process terminated with exit code: 1
Looking at the terminal output everything seems normal up to that point. Have you encountered this issue before?
David
Hi David!
Could you please commit it and push to the repo on GitHub and I'll try on my laptop?
If not possible - join the Discord.

Please provide more information from the log.

Hi David, could you check it with the latest beta version? It happens.
If you want to open IRIS terminal you can try the following:
Evgeny;
This did work so thanks for the tip!
David

Hi All!
Updated the article to introduce the iris.script approach - the Dockerfile is smaller, the ObjectScript is clearer.

💡 This article is considered as InterSystems Data Platform Best Practice.

I have followed these steps both from the GitHub readme file and this more detailed article and I'm having an issue.
To start, however, know that I was able to follow the instructions on the DockerHub page for IRIS, and I got my container running and VS Code connected and working. Since I got that working, I decided to better understand starting with this template.
I downloaded the objectscript-docker-template from GitHub to my local drive K:\objectscript-docker-template. I ran the commands to install and run the IRIS container in Git Bash and I see it running in Docker Desktop. I can go in the container's command line and type 'iris TERMINAL IRIS' and I'm brought to the USER namespace.
Back in the project location, I run 'docker-compose up -d' and I get a series of errors:
$ docker-compose up -d
Creating objectscript-docker-template_iris_1 ... error
ERROR: for objectscript-docker-template_iris_1 Cannot create container for service iris: status code not OK but 500: {"Message":"Unhandled exception: Filesharing has been cancelled","StackTrace":" at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src . . .
It goes on and on so I won't paste the whole thing.
If I stop the Docker container it's also deleted.
Has anyone had a similar issue or can see right off the bat what I might be doing wrong?
Thanks,
Mike

Hi Mike! It looks like your Docker disk space has run out.
Call
docker system prune -f
and see if this fixes the problem.
Caution! This will delete unused containers on your laptop. Make sure you don't have sensitive data or uncommitted code in them. Usually you don't.

You may be facing an access issue: check in the Docker settings that you correctly provided the list of resources that can be shared.
@Evgeny Shvarov - "Check the article on the naming and proposed folder structure."
Is this supposed to link to a specific article? In a recent example I posted, I had the need to extend the naming and proposed folder structure. It was obvious if you were reading the downloaded repository. The related article was an advertisement and a "heads up" that it just wasn't the default structure as usual.

Hi Ben!
Thanks for the heads up. We have several so far:
the proposed folder structure
the naming convention for packages
the naming convention for "community" packages.
Thanks! I see you updated the article with links. Very helpful :)
Announcement
Anastasia Dyubaylo · Feb 14, 2020
Dear Community,
In advance of the upcoming release of InterSystems IRIS 2020.1 and InterSystems IRIS for Health 2020.1, we're pleased to invite you to the “Office Hours: InterSystems IRIS 2020.1 Preview” webinars. These Q&A sessions will provide a forum for you to ask questions and learn about the latest product features in the upcoming releases.
The webinar will feature members of InterSystems product management team who will answer questions and provide additional information about the new features and enhancements.
We will be hosting two sessions on Thursday, February 20, 2020. Reserve your spot by clicking one of the buttons below for the appropriate time slot.
➡️ 8AM (EST) WEBINAR
➡️ 1PM (EST) WEBINAR
In preparation for the webinar, we encourage you to try InterSystems IRIS or IRIS for Health 2020.1 Preview kits, which are available to download from WRC Distribution*. Also, Release Notes outlining key new features are available to view online:
InterSystems IRIS 2020.1 Release Notes
InterSystems IRIS for Health 2020.1 Release Notes
* WRC login credentials required
We are waiting for you at our webinar! Register now! 👍🏼

Looks really good.
Are there any good videos for
InterSystems API Management
InterSystems Cloud Manager
Machine Learning & NLP?

New video content!
Now this webinar recording is available on InterSystems Developers YouTube Channel:
Enjoy watching the video! 👍🏼
Announcement
Anastasia Dyubaylo · Feb 22, 2023
Hey Developers,
Enjoy watching the new video on InterSystems Developers YouTube:
⏯ Real-Life Experiences Migrating a Caché Application Portfolio to InterSystems IRIS @ Global Summit 2022
Learn about AppServices' experiences migrating our collection of applications to InterSystems IRIS. We'll cover migration-related topics at the OS, platform, and application levels.
Presenters:
🗣 @Matthew.Giesmann, Developer, Application Services, InterSystems🗣 @Wangyi.Huang, Technical Specialist, Application Services, InterSystems
Enjoy and stay tuned! 👍
Article
Kate Lau · Mar 15, 2023
In this article, I walk through my steps for deploying IAM on my EC2 instance (Ubuntu).
What is IAM?
IAM is the InterSystems API Manager. You may refer to the link below to learn more about IAM:
https://docs.intersystems.com/components/csp/docbook/Doc.View.cls?KEY=PAGE_apimgr
Before deploying IAM
Check the license of your API host
Enable the IAM user
Deploy IAM
Reference
https://community.intersystems.com/post/introducing-intersystems-api-manager
Download the image from the following link
https://wrc.intersystems.com/wrc/coDistGen.csp
I downloaded the following version to my pc
Upload the image to my EC2
I used the scp command to upload the image to my cloud instance
Make sure docker and docker compose are installed
If not, please refer to the following link:
https://docs.docker.com/engine/install/ubuntu/
Untar the image file
tar zpxvf IAM-3.0.2.0-4.tar.gz
Load the image into docker
sudo docker load -i iam_image.tar
Run the iam-setup.sh
source ./iam-setup.sh
Edit the file : docker-compose.yml
In order to let us visit the IAM UI from outside the EC2 instance, replace localhost with the EC2 public address in the parameters KONG_PORTAL_GUI_HOST and KONG_ADMIN_GUI_URL.
vi docker-compose.yml
Start the container
sudo docker compose up -d
Check the IAM UI
You can access the IAM UI at the following link:
http://yourEC2publicAddress:8002/overview
In fact, I encountered an error: the variables "ISC_IRIS_URL", "ISC_IAM_IMAGE", and "ISC_CA_CERT" could not be passed through.
I suspect that iam-setup.sh is not working very well.
As a workaround, I hard-coded the variables in docker-compose.yml and ran
sudo docker compose up -d
again.
Great article, Kate! Could you also help put this on CN DC? Thanks a lot! Michael
Announcement
Larry Finlayson · Oct 5, 2022
Managing InterSystems Servers October 24-28, 2022 9:00am-5:00 US-Eastern Time (EDT)
This five-day virtual course teaches system and database administrators how to install, configure and secure InterSystems server software, configure for high availability and disaster recovery, and monitor the system. Students also learn troubleshooting techniques.
This course is applicable to both InterSystems IRIS and Caché. Although the course is mostly platform independent, students can complete the exercises using either Windows or Ubuntu.
Self Register Here
Announcement
Evgeny Shvarov · Sep 15, 2022
Hi developers!
Here is the score of technical bonuses for participants' applications in the InterSystems Interoperability Contest: Building Sustainable Solutions 2022!
Nominal bonus values:

| Bonus | Nominal |
| --- | --- |
| Sustainability | 5 |
| Sustainability Dataset | 3 |
| Interoperability Production | 3 |
| Custom Interoperability Adapter | 2 |
| PEX | 3 |
| Embedded Python | 3 |
| Docker | 2 |
| ZPM | 2 |
| Online Demo | 2 |
| Code Quality | 1 |
| First Article on DC | 2 |
| Second Article on DC | 1 |
| Video on YouTube | 3 |
| Total Bonus | 32 |

Bonuses awarded per project:

| Project | Bonuses awarded | Total Bonus |
| --- | --- | --- |
| appmsw-banks-ru | 3, 2, 2, 2 | 9 |
| n8n-nodes-iris | 2, 2, 3 | 7 |
| iris-energy-isodata | 5, 3, 3, 3, 2, 2, 1, 2, 3 | 24 |
| interoperability-soap | 3, 3, 2, 2, 2 | 12 |
| samba-iris-adapter | 3, 2, 3, 2, 2, 1, 2, 3 | 18 |
| interoperability-test | 3, 2, 2, 2, 1, 2, 1, 3 | 16 |
| iris-megazord | 5, 3, 2, 2, 3 | 15 |
| production-monitor | 3, 2, 2, 1 | 8 |
| Sustainable Machine Learning | 5, 3, 3, 3, 2, 2, 2 | 20 |
| Recycler | 5, 2, 2 | 9 |
Bonuses are subject to change as applications are updated.
Please claim here in the comments below or in the Discord chat.
I claim YouTube Video bonus for interoperability-test. I have added the link in GitHub ReadMe. Does that automatically update on Open Exchange or do I need to make a new release?
https://youtu.be/LqyRVxpmxGY
I wonder how some apps have a link on Open Exchange to vote in the contest?

I claim the Embedded Python bonus; the SambaService class uses Python: https://raw.githubusercontent.com/yurimarx/samba-iris-adapter/master/src/dc/samba/SambaService.cls

Hi @Evgeny.Shvarov! Thanks for sharing the technical bonuses. Please note that the iris-energy-isodata app has an online demo. Thanks!

Hi @Muhammad.Waseem! Could you please add the link of the online demo in the OEX app here: https://openexchange.intersystems.com/package/iris-energy-isodata

Hi @Evgeny.Shvarov! The link is already there.

Hi @Evgeny.Shvarov! Please note that the iris-energy-isodata app has a YouTube video as well. Thanks!

I just published an article about interoperability-soap here:
https://community.intersystems.com/post/background-story-around-interoperability-soap
I was not able to edit the app on Open Exchange to put the Article link there :-(