Article
Evgeny Shvarov · Apr 14, 2019
Hi guys!
(Image: Portrait of Madame X, Gustave Caillebotte.)
One of the features I like in InterSystems ObjectScript is how you can process array transformations in a specific method or function. Usually when we say "process an array" we assume a very straightforward algorithm that loops through an array and does something with its entries according to a certain rule. The trick is how you pass the array you want to work with into a function. One of the nice approaches is to pass the information about an array using $Name and the indirection operator. Below you can find a very simple example which illustrates the idea.
Suppose we have a method that accepts an array and multiplies by two all the values for all the keys on level one. The array passed could be of any type: global, local, or PPG.
Here is the code of the method:
ClassMethod ArraySample(list) As %Status
{
    set key=$Order(@list@("")) // get the initial key
    while key'="" {
        set @list@(key)=@list@(key)*2 // double the value stored under every level-one key
        set key=$Order(@list@(key))
    }
    return $$$OK
}
How can this be used?
E.g., we have a global ^A and we want all the entries on the first level of ^A to be multiplied by 2. Here is the code:
ClassMethod ArraySampleTest()
{
    set arr=$Name(^A)
    kill @arr
    for i=1:1:10 {
        set @arr@(i)=i
    }
    w "Initial array:",!
    zw @arr
    do ..ArraySample(arr)
    w !,"After multiplication:",!
    zw @arr
}
If you run this, it produces the following output:
USER>d ##class(Ideal.ObjectScript).ArraySampleTest()
Initial array:
^A(1)=1
^A(2)=2
^A(3)=3
^A(4)=4
^A(5)=5
^A(6)=6
^A(7)=7
^A(8)=8
^A(9)=9
^A(10)=10
After multiplication:
^A(1)=2
^A(2)=4
^A(3)=6
^A(4)=8
^A(5)=10
^A(6)=12
^A(7)=14
^A(8)=16
^A(9)=18
^A(10)=20
Notice the @arr@(index) construct. It is called double indirection, or subscript indirection.
If arr contains the name of an array, subscript indirection lets you refer to the subscripts of that array. This can be useful in many cases, e.g. when you want to shortcut a long global path or when you pass an array to a method, as in this case.
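For example, a quick illustration of the equivalence (using the ^A global created above):
set arr="^A"
write @arr@(1)        // same as: write ^A(1)
set @arr@(2,"x")=42   // same as: set ^A(2,"x")=42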
Notice the $Name function:
set arr=$Name(^A)
In this case $Name puts into arr the name of the array, which lets you work with the subscripts of ^A using the @arr@(subscript1,subscript2,...) construction.
This is similar to
set arr="^A"
But $Name can be used not only for the global name itself but also to get a path to any subscript level of the global. E.g. it's very handy when there are strings on a level, which saves us from doubling quotes:
set arr=$Name(^A("level one",3,"level three"))
is much handier than:
set arr ="^A(""level one"",3,""level three"")"
So, $Name helps you get a fully qualified name for any global, local, or PPG path when you want to process its subscripts in a function/procedure/method and you don't want to transfer the data, only the name of the array. Using $Name plus double indirection @list@() does the work.
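For instance, here is a small sketch of pointing the ArraySample method above at a nested level of a global without copying any data (the subscript names are just illustrative):
set arr=$Name(^A("level one",3,"level three"))
do ..ArraySample(arr)  // doubles the values stored one level below that path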
This ObjectScript flexibility can be really helpful if you build APIs or libraries that work with large amounts of persistent data.
I placed the code above into this simple class, which you can also find on Open Exchange.
So!
I'm not telling you "Don't try this at home!". On the contrary: "Try it on your laptop!"
The fastest and coolest way is to check out the repo; since the repo is dockerized, you can run the example with IRIS Community Edition on your laptop using the following three commands in the repo directory:
build container:
$ docker-compose build
run container:
$ docker-compose up -d
start IRIS terminal and call the sample method:
$ docker-compose exec iris iris session iris
USER>zn "OSCRIPT"
OSCRIPT> do ##class(Ideal.ObjectScript).ArraySampleTest()
VSCode settings were added with the last commit, so you are able to code and compile immediately after you open the project in VSCode. The VSCode settings that give this "instant-coding" effect are:
{
"objectscript.conn.version": 3,
"objectscript.conn.ns": "OSCRIPT",
"objectscript.conn.port": 52773,
"objectscript.conn.active": true
}
You can see the settings file in the repo.
Article
David E Nelson · Apr 26, 2019
The last time that I created a playground for experimenting with machine learning using Apache Spark and an InterSystems data platform, see Machine Learning with Spark and Caché, I installed and configured everything directly on my laptop: Caché, Python, Apache Spark, Java, some Hadoop libraries, to name a few. It required some effort, but eventually it worked. Paradise. But, I worried. Would I ever be able to reproduce all those steps? Maybe. Would it be possible for a random Windows or Java update to wreck the whole thing in an instant? Almost certainly.
Now, thanks to the increasingly widespread availability of containers and the increasingly usable Docker for Windows, I have my choice of pre-configured machine learning and data science environments . See, for example, Jupyter Docker Stacks and Zeppelin on Docker Hub. With InterSystems making the community edition of the IRIS Data Platform available via container (InterSystems IRIS now Available on the Docker Store), I have easy access to a data platform supporting both machine learning and analytics among a host of other features. By using containers, I do not need to worry about any automatic updates wrecking my playground. If my office floods and my laptop is destroyed, I can easily recreate the playground with a single text file, which I have of course placed in source control ;-)
In the following, I will share a Docker compose file that I used to create a container-based machine learning and data science playground. The playground involves two containers: one with a Zeppelin and Spark environment, the other with the InterSystems IRIS Data Platform community edition. Both use images available on Docker hub. I’ll then show how to configure the InterSystems Spark Connector to connect the two. I will end by loading some data into InterSystems IRIS and using Spark to do some data exploration, visualization, and some very basic machine learning. Of course, my example will barely scratch the surface of the capabilities of both Spark and InterSystems IRIS. However, I hope the article will be useful to help others get started doing more complex and useful work.
Note: I created and tested everything that follows on my Windows 10 laptop, using Docker for Windows. For information on configuring Docker for Windows for use with InterSystems IRIS please see the following. The second of the two articles also discusses the basics of using compose files to configure Docker containers.
Using InterSystems IRIS Containers with Docker for Windows
Docker for Windows and the InterSystems IRIS Data Platform
Compose File for the Two-Container Playground
Hopefully, the comments in the following compose file do a reasonably adequate job of explaining the environment, but in case they do not, here are the highlights. The compose file defines:
Two containers: One containing the InterSystems IRIS Community Edition and the other containing both the Zeppelin notebook environment and Apache Spark. Both containers are based on images pulled from the Docker store.
A network for communication between the two containers. With this technique, we can use the container names as host names when setting up communication between the containers.
Local directories mounted in each container. We can use these directories to make jar files available to the Spark environment and some data files available to the IRIS environment.
A named volume for the durable %SYS feature needed by InterSystems IRIS. Named volumes are necessary for InterSystems IRIS when running in containers on Docker for Windows. For more about this see below for links to other community articles.
Mappings of some networking ports inside the containers to ports available outside the containers, to provide easy access.
version: '3.2'
services:
  #container 1 with InterSystems IRIS
  iris:
    # iris community edition image to pull from docker store.
    image: store/intersystems/iris:2019.1.0.510.0-community
    container_name: iris-community
    ports:
      # 51773 is the superserver default port
      - "51773:51773"
      # 52773 is the webserver/management portal default port
      - "52773:52773"
    volumes:
      # Sets up a named volume durable that will keep the durable %SYS data
      - durable:/durable
      # Maps a /local directory into the container to allow for easily passing files and test scripts
      - ./local/samples:/samples/local
    environment:
      # Set the variable ISC_DATA_DIRECTORY to the durable volume that we defined above to use durable %SYS
      - ISC_DATA_DIRECTORY=/durable/irissys
    # Adds the IRIS container to the network defined below.
    networks:
      - mynet
  #container 2 with Zeppelin and Spark
  zeppelin:
    # zeppelin notebook with spark image to pull from docker store.
    image: apache/zeppelin:0.8.1
    container_name: spark-zeppelin
    #Ports for accessing Zeppelin environment
    ports:
      #Port for Zeppelin notebook
      - "8080:8080"
      #Port for Spark jobs page
      - "4040:4040"
    #Maps /local directories for saving notebooks and accessing jar files.
    volumes:
      - ./local/notebooks:/zeppelin/notebook
      - ./local/jars:/home/zeppelin/jars
    #Adds the Spark and Zeppelin container to the network defined below.
    networks:
      - mynet

#Declares the named volume for the IRIS durable %SYS
volumes:
  durable:

# Defines a network for communication between the two containers.
networks:
  mynet:
    ipam:
      config:
        - subnet: 172.179.0.0/16
Launching the Containers
Place the compose file in a directory on your system. Note that the directory name becomes the Docker project name. You will need to create sub-directories matching those mentioned in the compose file. So, my directory structure looks like this:
iris_spark_zeppelin
  local
    jars
    notebooks
    samples
  docker-compose.yml
To launch the containers, execute the following Docker command from inside your project directory:
C:\iris_spark_zeppelin>docker-compose up -d
Note that the -d flag starts the containers in detached mode: they will not log any information to the command line.
You can inspect the log files for the containers using the docker logs command. For example, to see the log file for the iris-community container, execute the following:
C:\>docker logs iris-community
To inspect the status of the containers, execute the following command:
C:\>docker container ls
When the iris-community container is ready, you can access the IRIS Management Portal with this url:
http://localhost:52773/csp/sys/UtilHome.csp
Note: The first time you log in to IRIS, use the username/password SuperUser/SYS. You will be redirected to a password change page.
You can access the Zeppelin notebook with this url:
http://localhost:8080
Copying Some Jar Files
In order to use the InterSystems Spark Connector, the Spark environment needs access to two jar files:
1. intersystems-jdbc-3.0.0.jar
2. intersystems-spark-1.0.0.jar
Currently, these jar files reside with IRIS inside the iris-community container. We need to copy them out into the locally mapped directory so that the spark-zeppelin container can access them.
To do this, we can use the Docker cp command to copy all the JDK 1.8 version files from inside the iris-community container into one of the local directories visible to the spark-zeppelin container. Open a CMD prompt in the project directory and execute the following command:
C:\iris_spark_zeppelin>docker cp iris-community:/usr/irissys/dev/java/lib/JDK18 local/jars
This will add a JDK18 directory containing the above jar files along with a few others to <project-directory>/local/jars.
Adding Some Data
No data, no machine learning. We can use the local directories mounted by the iris-community container to add some data to the data platform. I used the Iris data set (no relation to InterSystems IRIS Data Platform). The Iris data set contains data about flowers. It has long served as the “hello world” example for machine learning (Iris flower data set). You can download or pull an InterSystems class definition for generating the data, along with code for several related examples, from GitHub (Samples-Data-Mining). We are interested in only one file from this set: DataMining.IrisDataset.cls.
Copy DataMining.IrisDataset.cls into your <project-directory>/local/samples directory. Next, open a bash shell inside the iris-community container by executing the following from a command prompt on your local system:
C:\>docker exec -it iris-community bash
From the bash shell, launch an IRIS terminal session:
/# iris session iris
IRIS asks for a username/password. If this is the first time that you are logging into IRIS in this container, use SuperUser/SYS. You will then be asked to change the password. If you have logged in before, for example through the Management Portal, then you changed the password already. Use your updated password now.
Execute the following command to load the file into IRIS:
USER>Do $System.OBJ.Load("/samples/local/DataMining.IrisDataset.cls","ck")
You should see output about the above class file compiling and loading successfully. Once this code is loaded, execute the following commands to generate the data for the Iris dataset
USER>Set status = ##class(DataMining.IrisDataset).load()
USER>Write status
The output from the second command should be 1. The database now contains data for 150 examples of Iris flowers.
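As a quick optional check from the same terminal session, you can count the loaded rows through the class's SQL projection (a small sketch; it assumes the default table name DataMining.IrisDataset used above):
USER>Set rs = ##class(%SQL.Statement).%ExecDirect(,"SELECT COUNT(*) AS Total FROM DataMining.IrisDataset")
USER>Do rs.%Display()
The Total column should report 150.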
Launching Zeppelin and Configuring Our Notebook
First, download the Zeppelin notebook note available here: https://github.com/denelson/DevCommunity. The name of the note is “Machine Learning Hello World”.
You can open the Zeppelin notebook in your web browser using the following url:
http://localhost:8080
It looks something like this.
Click the “Import note” link and import “Machine Learning Hello World.json”.
The first code paragraph contains code that will load the InterSystems JDBC driver and Spark Connector. By default, Zeppelin notebooks provide the z variable for accessing Zeppelin context. See Zeppelin Context in the Zeppelin documentation.
%spark.dep
//z supplies Zeppelin context
z.reset()
z.load("/home/zeppelin/jars/JDK18/intersystems-jdbc-3.0.0.jar")
z.load("/home/zeppelin/jars/JDK18/intersystems-spark-1.0.0.jar")
Before running the paragraph, click the down arrow next to the word “anonymous” and then select “Interpreter”.
On the Interpreters page, search for spark, then click the restart button on the right-hand side and then OK on the ensuing pop-up.
Now return to the Machine Learning Hello World notebook and run the paragraph by clicking the little arrow all the way at the right. You should see output similar to that in the following screen capture:
Connecting to IRIS and Exploring the Data
Everything is all configured. Now we can connect code running in the spark-zeppelin container to InterSystems IRIS, running in our iris-community container, and begin exploring the data we added earlier. The following Python code connects to InterSystems IRIS and reads the table of data that we loaded in an earlier step (DataMining.IrisDataset) and then displays the first ten rows.
Here are a couple of notes about the following code:
We need to supply a username and password to IRIS. Use the password that you provided in an earlier step when you logged into IRIS and were forced to change your password. I used SuperUser/SYS1.
"iris" in the spark.read.format("iris") snippet is an alias for the com.intersystems.spark class, the Spark Connector.
The connection URL, including "IRIS" at the start, specifies the location of the InterSystems IRIS default Spark master server.
The spark variable points to the Spark session supplied by the Zeppelin Spark interpreter.
%pyspark
uname = "SuperUser"
pwd = "SYS1"
#spark session available by default through spark variable.
#URL uses the name of the container, iris-community, as the host name.
iris = spark.read.format("iris").option("url","IRIS://iris-community:51773/USER").option("dbtable","DataMining.IrisDataset").option("user",uname).option("password",pwd).load()
iris.show(10)
Note: For more information on configuring the Spark connection to InterSystems IRIS, see Using the InterSystems Spark Connector in the InterSystems IRIS documentation. For more information on the spark session and other context variables provided by Zeppelin, see SparkContext, SQLContext, SparkSession, ZeppelinContext in the Zeppelin documentation.
Running the above paragraph results in the following output:
Each row represents an individual flower and records its petal length and width, its sepal length and width, and the Iris species it belongs to.
Here is some SQL-esque code for further exploration:
%pyspark
iris.groupBy("Species").count().show()
Running the paragraph produces the following output:
So there are three different Iris species represented in the data. The data represents each species equally.
Using Python’s matplotlib library, we can even draw some graphs. Here is code to plot Petal Length vs. Petal Width:
%pyspark
%matplotlib inline
import matplotlib.pyplot as plt
#Retrieve an array of row objects from the DataFrame
items = iris.collect()
petal_length = []
petal_width = []
for item in items:
petal_length.append(item['PetalLength'])
petal_width.append(item['PetalWidth'])
plt.scatter(petal_width,petal_length)
plt.xlabel("Petal Width")
plt.ylabel("Petal Length")
plt.show()
Running the paragraph creates the following scatter plot:
Even to the untrained eye, it looks like there is a pretty strong correlation between Petal Width and Petal Length. We should be able to reliably predict petal length based on petal width.
A Little Machine Learning
Note: I copied the following code from my earlier playground article, cited above.
In order to predict petal length based on petal width, we need a model of the relationship between the two. We can create such a model very easily using Spark. Here is some code that uses Spark's linear regression API to train a regression model. The code does the following:
Creates a new Spark DataFrame containing the petal length and petal width columns. The petal width column represents the "features" and the petal length column represents the "labels". We use the features to predict the labels.
Randomly divides the data into training (70%) and test (30%) sets.
Uses the training data to fit the linear regression model.
Runs the test data through the model and then displays the petal length, petal width, features, and predictions.
%pyspark
from pyspark.ml.regression import LinearRegression
from pyspark.ml.feature import VectorAssembler
# Transform the "Features" column(s) into the correct vector format
df = iris.select('PetalLength','PetalWidth')
vectorAssembler = VectorAssembler(inputCols=["PetalWidth"],
outputCol="features")
data=vectorAssembler.transform(df)
# Split the data into training and test sets.
trainingData,testData = data.randomSplit([0.7, 0.3], 0.0)
# Configure the model.
lr = LinearRegression().setFeaturesCol("features").setLabelCol("PetalLength").setMaxIter(10)
# Train the model using the training data.
lrm = lr.fit(trainingData)
# Run the test data through the model and display its predictions for PetalLength.
predictions = lrm.transform(testData)
predictions.show(10)
Running the paragraph results in the following output:
The Regression Line
The “model” is really just a regression line through the data. It would be nice to have the slope and y-intercept of that line. It would also be nice to be able to visualize that line superimposed on our scatter plot. The following code retrieves the slope and y-intercept from the trained model and then uses them to add a regression line to the scatter plot of the petal length and width data.
%pyspark
%matplotlib inline
import matplotlib.pyplot as plt
# retrieve the slope and y-intercepts of the regression line from the model.
slope = lrm.coefficients[0]
intercept = lrm.intercept
print("slope of regression line: %s" % str(slope))
print("y-intercept of regression line: %s" % str(intercept))
items = iris.collect()
petal_length = []
petal_width = []
petal_features = []
for item in items:
petal_length.append(item['PetalLength'])
petal_width.append(item['PetalWidth'])
fig, ax = plt.subplots()
ax.scatter(petal_width,petal_length)
plt.xlabel("Petal Width")
plt.ylabel("Petal Length")
y = [slope*x+intercept for x in petal_width]
ax.plot(petal_width, y, color='red')
plt.show()
Running the paragraph results in the following output:
What’s Next?
There is much, much more we can do. Obviously, we can load much larger and much more interesting datasets into IRIS. See, for example, the Kaggle datasets (https://www.kaggle.com/datasets). With a fully licensed IRIS we could configure sharding and see how Spark, running through the InterSystems Spark Connector, takes advantage of the parallelism sharding offers. Spark, of course, provides many more machine learning and data analysis algorithms. It supports several different languages, including Scala and R.
The article is considered an InterSystems Data Platform Best Practice.
Hi, your article helps me a lot. One more question: how can we get a fully licensed IRIS? It seems that there is no download page on the official site.
Hi! You can request a fully licensed IRIS on this page. If you want to try or use IRIS features with IRIS Community Edition:
Try IRIS online.
Use IRIS Community from DockerHub on your laptop as is, or with different samples from Open Exchange. Check how to use the IRIS Docker image on the InterSystems Developers video channel.
Or run Community Edition on Express IRIS images on AWS, GCP, or Azure.
HTH
Article
Gevorg Arutiunian · Apr 27, 2019
Here is an ObjectScript snippet which lets you create a database, namespace, and web application for InterSystems IRIS:
```
set currentNS = $namespace
zn "%SYS"
write "Create DB ...",!
set dbName="testDB"
set dbProperties("Directory") = "/InterSystems/IRIS/mgr/testDB"
set status=##Class(Config.Databases).Create(dbName,.dbProperties)
write:'status $system.Status.DisplayError(status)
write "DB """_dbName_""" was created!",!!
write "Create namespace ...",!
set nsName="testNS"
//DB for globals
set nsProperties("Globals") = dbName
//DB for routines
set nsProperties("Routines") = dbName
set status=##Class(Config.Namespaces).Create(nsName,.nsProperties)
write:'status $system.Status.DisplayError(status)
write "Namespace """_nsName_""" was created!",!!
write "Create web application ...",!
set webName = "/csp/testApplication"
set webProperties("NameSpace") = nsName
set webProperties("Enabled") = $$$YES
set webProperties("IsNameSpaceDefault") = $$$YES
set webProperties("CSPZENEnabled") = $$$YES
set webProperties("DeepSeeEnabled") = $$$YES
set webProperties("AutheEnabled") = $$$AutheCache
set status = ##class(Security.Applications).Create(webName, .webProperties)
write:'status $system.Status.DisplayError(status)
write "Web application """webName""" was created!",!
zn currentNS
```
Also check these manuals:
- [Creating Database](https://irisdocs.intersystems.com/iris20181/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=Config.Databases)
- [Namespace](https://irisdocs.intersystems.com/iris20181/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=Config.Namespaces)
- [CSP Application](https://irisdocs.intersystems.com/iris20181/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=Security.Applications)
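If you ever need to remove these test objects again, the same configuration classes provide Delete methods (a minimal sketch, run in %SYS; note that deleting the database configuration does not remove the IRIS.DAT file from disk):

```
zn "%SYS"
do ##class(Security.Applications).Delete("/csp/testApplication")
do ##class(Config.Namespaces).Delete("testNS")
do ##class(Config.Databases).Delete("testDB")
```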
For web applications, see the answer at https://community.intersystems.com/post/how-do-i-programmatically-create-web-application-definition

Thank you! It's great having all 3 examples in one place :)

Just a quick note.
I found that when creating a new database it was best to initially use SYS.Database so you can specify max size etc.:
```
s db=##class(SYS.Database).%New()
s db.Directory=directory
s db.Size=initialSize
s db.MaxSize=maxSize
s db.GlobalJournalState=3
s Status=db.%Save()
```
Then finalise with Config.Databases:
s Properties("Directory")=directory
s Status=##Class(Config.Databases).Create(name,.Properties)
s Obj=##Class(Config.Databases).Open(name)
s Obj.MountRequired=1
s Status=Obj.%Save()
This might not be the best way to do it, I'm open to improvements.
Article
Julio Esquerdo · Feb 12
Using Flask, REST API, and IAM with InterSystems IRIS
Part 3 – IAM
InterSystems API Manager (IAM) is a component that allows you to monitor, control, and manage traffic from HTTP-based APIs. It also acts as an API gateway between InterSystems IRIS applications and servers.
The document published in https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=PAGE_apimgr provides information about the product.
The document https://docs.intersystems.com/components/csp/docbook/DocBook.UI.Page.cls?KEY=CIAM3.0_install provides all the information for the installation and configuration of IAM.
After installing IAM, we will activate it and configure a service, a route, and a plugin to apply rate limiting to our REST API.
First, after installing IAM we will activate it as defined in the documentation:
Once IAM is activated, we will open the administration interface to create the necessary service, route and plugin. For this in the browser we access the server on port 8002:
On this screen we see the service, route and plugin maintenance options, which are the tasks we are going to perform. First we will create the service to serve our REST API. Click on the side menu on Services and then on New Service:
Enter the name (client-service) and in the Add using URL option, enter the API path in URL (http://192.168.0.13/iris/rest/servico/cliente in our example). Click Create and the service is created:
Now let's create the route. In the side menu, click on Routes, then on New Route.
Select the service we created, enter a name for the route (client-route, for example), specify the protocols that can be used (http and https), enter the host (192.168.0.13) and the methods (GET, PUT, POST, DELETE). Click the Add Path link and enter the path for this route (/api/client). Click Create and the route is created:
Now let's create the rate limiting plugin. This plugin limits the number of requests a user can make in a given period of time. To do this, go back to the side menu, click on Plugins and then on New Plugin. In the search box, type "Rate" and see the plugins listed:
Select the Rate Limiting Advanced plugin. The plugin configuration screen will be presented:
On the setup screen, change the option to Scoped. Select the service we created. Enter the number of accesses in Config.Limit (5 for example) and the interval time in Config.Window.Size (60 for example). Change Config.Strategy to Local, and then click Create. Ready. Our plugin is created and already working:
Now we'll need to make a change to our Python code so it consumes our API through IAM. To do this, let's change the API URL to the address we created in the IAM route:
API_URL = "http://192.168.0.13/iris/rest/servico/cliente"  # original code
API_URL = "http://192.168.0.13:8000/api/cliente"  # new URL going through IAM
Restart the Flask server and return to the application page. This time, refresh (F5) several times in order to call the API repeatedly. Note that on the 6th API call we get a failure:
Our application is not prepared to handle the failed HTTP status. Let's make some adjustments. First let's create an error page in the templates folder called erro.html:
<!DOCTYPE html>
<html lang="en-us">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Error</title>
</head>
<body>
<h1>An error has occurred!</h1>
<p>Error code: {{ status_code }}</p>
<p>Error message: {{ error_message }}</p>
</body>
</html>
Let's go back to the Python code and make an adjustment to catch the error. In the section where we render the index page, we'll handle any HTTP code other than 200.
Where we have the following code in the "/" route:
date = response.json()
list = date["customers"]
return render_template("index.html", records=list)
We changed to:
if response.status_code == 200:
    date = response.json()
    list = date["customers"]
    return render_template("index.html", records=list)
else:
    return render_template('erro.html', status_code=response.status_code, error_message=response.json().get('message', 'Unknown Error'))
Let's restart the Flask application and once again we'll call the API several times. This time we will get a handled error screen:
We can then see that IAM is already working, receiving the requests and applying the configured plugins. According to our configuration, IAM restricted access to the API by returning an HTTP status of 429 as configured in the plugin.
Going back to the administration screen in IAM we can already see some information about the consumption of the service:
We can see the number of requests that have been made to the service by HTTP Status for example. We can also change the way the graph is displayed:
IAM is a powerful tool that adds several advantages to IRIS and allows a series of actions with its plugins. IAM can be a powerful ally for publishing services.
See you next time!
Announcement
Anastasia Dyubaylo · Dec 4, 2018
Hi Everyone!
As you know, Advent of Code 2018 is in full swing! Until the 25th of December 2018, each day brings 2 programming problems to sharpen your programming skills!
And now we're ready to present great prizes for Members of InterSystems Global Masters Advocate Hub!
Win Conditions: To win our prize you should be at the top of the ObjectScript Leaderboard, upload all the solutions to a public repository, and present the code in InterSystems ObjectScript in UDL form. Here is an example of such a repo.
Prizes:
→ 1st Place: 10 000 points on Global Masters and FREE registration & hotel accommodation for the next InterSystems Global Summit 2019 [!!!]
→ 2nd Place: 5 000 points on Global Masters
→ 3rd Place: 3 000 points on Global Masters
Note: Don't forget to use our own InterSystems Leaderboard, with the code 130669-ab1f69bf or the direct link.
Good luck to all of you!

How to join Global Masters? Please leave a comment to this post and we'll send you an invite!

Yet another great example of a repo for Advent of Code in ObjectScript.

Hi Anastasia. I'd like to receive an invite to join Global Masters. Thank you.

Hi Angelo! You're invited. Please check your mail!

What I have seen happen a few times already is that people from our leaderboard just copy the solution from another person in another language and submit the results without having done any coding at all. For example, day 7: the last solution to fill the top 100 took 00:30:52; the fastest solution for part 1 took 00:09:27; the fastest solution for part 2 (after doing part 1) took 00:03:34. Someone on our leaderboard got this: solution part 1: 00:39:16, solution part 2: 00:00:13. He did the second part in 13 seconds, while the fastest person on the leaderboard took three and a half minutes to do it. The second part took quite a bit of programming, and the 100th player to finish star 2 needed almost 10 minutes. Someone on our leaderboard: 13 seconds. This means he just copied a solution from another player, put in answer 1, and after that was accepted he inserted answer 2 and submitted again. Clearly, because a big prize is involved, people resort to playing unfairly. I only see 2 ways to fix this:
- Remove players who have been caught cheating
- Let all people post their public repository on here and, after each star, require them to push their solution to that repository.
This doesn't stop people from copying from other players in our competition but at least forces people to use Caché. Blatant copying will still be spotted because the code will resemble another competitor's code. Also, this will make it so that a copying person can't win the day anymore, because he will have to wait for at least one other Caché player before he can start copying. What are your thoughts on this? My public repo: https://bitbucket.org/bertsarens/aoc2018/ Maybe we should even start the competition from the day where we are working in the new way.

Hi Bert! I think you are right twice. First, no one can stop cheating in the world, and second, no one can win this contest with cheating. And of course no one can stop copy-n-paste in coding :) On the other hand, there is a chance to learn something new even with copy-pasting. As a community manager, I would love to see more open repositories with ObjectScript, and I like what we had last year - the open discussion of Advent of Code problems solved with ObjectScript. Thanks for sharing your repo! Speaking about the prize, it's clear that it can be interesting only to people who develop with InterSystems Data Platforms, and I believe this is the key motivation to win. As we see it now, the competition is active and healthy; we don't plan to change any rules but encourage fair play.

Submitted code is not public by default. Players get different inputs for puzzles. All timings are calculated from the moment the puzzle is published, not the moment you start working on it. Assuming the fastest coder would publish the code, finding it and rewriting it is going to take time. Moreover, the fastest solutions usually use, let's say, advanced language-specific concepts, so starting from scratch could often be even faster. My repo is the same as for previous years: https://github.com/daimor/AdventOfCode

Changing the rules during a competition is a very bad idea. Participating in a game includes accepting the published and known rules. Accept it or run a different game. Even my grandchildren, starting from 5, understand this.

Bert,
I'd be curious where you get the data points from, I haven't been able to see other people's times. Also keep in mind, sometimes the way you do a solution is a maker/breaker on how quickly you get the second part. Day10 was another great example for that.
I don't think you're accusing me of this since it took me more than 16 minutes to get from part1 to part2 on day7. But just in case you're doubting, here's my code: https://github.com/kazamatzuri/AoC/tree/master/AoC/User
And yes, I did day 2/3 in python ... And for the record, I'm quite certain I'm not eligible for the price anyways ;) I would also be very cautious of trying to make judgements based on the way people implemented things. For some of the puzzles there are very clear 'best solutions' and one would expect people with the necessary training in data structures/algorithms will simple go with a similar implementation because 'that's the way you solve a problem like that'. I'm sure once we get to harder problems that might change, but for these beginning problems there isn't a good way to fix this. I am not accusing you of anything Fabian, on the contrary your results seem to show you are creating solutions yourself.The data points can be found here: https://adventofcode.com/2018/leaderboard/private/view/130669.jsonYou can view the results for yourself. Nice visualiser for the results: https://mcpower.github.io/aoc-leaderboard-viewer/ ah! I hadn't found the json api, that's neat. Thanks for pointing me towards that! I decided to participate too!Here is my repo.And here is a short "Coding talks" Video on how to code Advent of Code and use Github and VSCode. Wow day 15 took time but was a lot of fun:I allmost made the leadeboard today:
15 02:27:08 159 0 02:39:24 132 0
This is the current leaderboard:

Though the leaders are far ahead, the nature of the contest lets anyone who joins the game today win it! So, join and win with InterSystems ObjectScript )
I gave up for the night and finished late today.
~~~
15 19:11:21 1809 0 19:23:16 1624 0
~~~
worst time for me so far :)

You still have a lot of coding to do ;)
Congratz all, it was a fun race.

Cool battle! Congrats to all participants and @Fabian.Haupt as the winner! @Bert.Sarens, @Fabian.Haupt - you both have more than 1,000 points on the ObjectScript leaderboard: why aren't you listed on the general leaderboard?

It was! Good Game everybody & Merry Xmas!
COS has some nice features, but most of the time it is lacking basic functionality that would make us competitive in these types of contests.
On top of that, there are no libraries, so you always have to code everything from scratch. Which has a certain appeal to it, but will not get us close to the leaderboard. The closest I got was place 137 on day 22, part 2.
Given that MUMPS/COS only supports the most basic constructs and lacks any modern influence of programming tools, there isn't all that much we can do about that ;)

Interesting thoughts, thank you, Fab! @Bert.Sarens, @Timothy.Leavitt, @Dmitry.Maslennikov, @Michael.Breen, @Sean.Connelly, @ivo.VerEecke4987 and others - what are your top places by day? And your comments about ObjectScript and how we could perform better, e.g. next year, are very welcome too!

Congratulations Fabian and Bert for completing all of the challenges. It's 5am when the challenges open up here, so it's hard to compete, but I think I would still be far behind some of the times that you have been achieving. Plus I get a little side-tracked with the challenges. I've been learning Python this year, so it's been interesting looking at Python solutions after I've done it in ObjectScript; the code is always much cleaner and more compact. It got me thinking on day 6 about building helper libraries specific to the competition. I ended up with a solution that looks like this...
ClassMethod Day6()
{
    set file=$$$file("C:\aoc\day6.txt",.sc) $$$QuitOnError(sc)
    set matrix=file.ToMatrix(", ")
    do matrix.Min(.minx,.miny)
    do matrix.Max(.maxx,.maxy)
    for x1=minx:1:maxx {
        for y1=miny:1:maxy {
            set min=""
            while matrix.ForEach(.key,.x2,.y2) {
                set dist=$zabs(x1-x2)+$zabs(y1-y2)
                if dist=min set nearest="."
                if (min="")||(dist<min) set min=dist,nearest=key
            }
            set count(nearest)=$get(count(nearest))+1
            if (x1=minx)||(x1=maxx)||(y1=miny)||(y1=maxy) set infinite(nearest)=""
        }
    }
    set most=0
    set key=$order(count(""))
    while key'="" {
        if '$data(infinite(key)),count(key)>most set most=count(key),hasMost=key
        set key=$order(count(key))
    }
    return count(hasMost)
}
I then got very side-tracked on day 7 and started looking at transcompiling Python to ObjectScript. I managed to use Python's own AST library to parse Python and push it into a global MUMPS tree so I could tinker around with converting Python to ObjectScript. It turns out the two languages are very compatible, unlike JavaScript, which I gave up on a few years back. The interesting thing about Python is its incredible growth in popularity over the past few years; there is a really interesting article on it here...
https://stackoverflow.blog/2017/09/06/incredible-growth-python/
But then I would just settle for a "for x in y" in ObjectScript...

Thanks Sean, very interesting! Looking forward to seeing your Python-to-ObjectScript transcompiler.

Here is the final leaderboard for the contest:
1st place - @Fabian.Haupt
2nd place - @Bert.Sarens
3rd place - @Dmitry.Maslennikov
Congrats to the winners and to all the participants! We'll provide all the prizes shortly and would appreciate it if you share your repos (a lot of participants already did). It would be great to see how people solve the same tasks with ObjectScript independently. Thanks in advance!

It was great fun :)
https://github.com/kazamatzuri/AoC

My congrats to the winners and Season's Greetings to everybody! I'm just curious: why are some surnames emphasized in green on the leaderboard?

Hi, Alexey! The greens are clickable )

Hi Evgeny! Good joke - haha. All of them are clickable, as all of them are drawn on the same jpeg file, aren't they?

Some users are configured with a link to their GitHub account, and they are shown in green, so you can click them; other users do not have links. If you registered there and joined our leaderboard, you would be able to check it by yourself.

But we don't see this stuff on the community site. It's always funny (at least for me) when some stuff is discussed as commonly known while it really is not. Happy New Year to everyone!

Sorry, Alexey ) They are clickable for real, but on the leaderboard pages. Here is the ObjectScript leaderboard, which you can join if you register on the Advent of Code site with email - then you'll have a normal "white" name, and a "green clickable" one if you join via GitHub or any other OpenID provider applicable for the Advent site. E.g. see the overall "world-wide" leaderboard, which is available without authentication.

My top place by day was 242 on day 8 (that was the only day that I actually did the puzzle at midnight though - I'm not at all a night owl). My three wishes for ObjectScript would be:
- A greater variety of high-performance data structures with low-level support (rather than shoehorning everything into a local array/PPG/$ListBuild list to keep it wicked fast, or building more expensive objects that do exactly what I want)
- Libraries for working with those (and existing) data structures (or, better yet, an OO approach to everything, similar to %Library.DynamicArray/%Library.DynamicObject)
- Functional programming capabilities
Question
Evgeny Shvarov · Mar 1, 2019
Hi Community!
When you run an IRIS container out of the box and connect to it via terminal, e.g. with:
docker-compose exec iris bash
You see something like:
root@7b19f545187b:/opt/app# irissession IRIS
Node: 7b19f545187b, Instance: IRIS
Username: ***
Password: ***
USER>
And you enter login and password every time.
How do you programmatically set up the docker-compose file to get an IRIS container with OS authentication enabled, so that entering the terminal looks like the following:
root@7b19f545187b:/opt/app# irissession IRIS
Node: 7b19f545187b, Instance: IRIS
USER>
One substitution in your code: use $zboolean to account for cases where it had already been enabled (in which case your code would disable it). Instead of:
Set p("AutheEnabled")=p("AutheEnabled")+16
Use
Set p("AutheEnabled")=$zb(p("AutheEnabled"),16,7)
Documentation for $ZBOOLEAN

Check out my series of articles, Continuous Delivery of your InterSystems solution using GitLab; it talks about many features related to automating these kinds of tasks. In particular, Part VII (CD using containers) talks about programmatically enabling OS-level authentication.

To activate OS authentication in your docker image, you can run this code in the %SYS namespace:
Do ##class(Security.System).Get(,.p)
Set p("AutheEnabled")=p("AutheEnabled")+16
Do ##class(Security.System).Modify(,.p)
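Putting the two snippets above together, here is a minimal sketch of an idempotent helper that could be called once while building the image (the class and method names are hypothetical):
ClassMethod EnableOSAuth() As %Status
{
    new $Namespace
    set $Namespace = "%SYS"
    do ##class(Security.System).Get(,.p)
    // $zb(x,16,7) is a bitwise OR, so the flag is added only if it is not already set
    set p("AutheEnabled") = $zb(+p("AutheEnabled"),16,7)
    return ##class(Security.System).Modify(,.p)
}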
If you work with the Community Edition, you can use my image, where you can also easily define a user and password for external use.
Running server
$ docker run -d --rm --name iris \
-p 52773:52773 \
-e IRIS_USER=test \
-e IRIS_PASSWORD=test \
daimor/intersystems-iris:2019.1.0S.111.0-community
Terminal connect
$ docker exec -it iris iris session iris
Node: 413a4da758e7, Instance: IRIS
USER>write $username
root
USER>write $roles
%All
Or with docker-compose, something like this
iris:
  image: daimor/intersystems-iris:2019.1.0S.111.0-community
  ports:
    - 52773:52773
  environment:
    IRIS_USER: ${IRIS_USER:-test}
    IRIS_PASSWORD: ${IRIS_PASSWORD:-test}
Article
Andrey Shcheglov · Dec 13, 2018
Astronomers’ tools
5 years ago, on December 19, 2013, the ESA launched an orbital telescope called Gaia. Learn more about the Gaia mission on the official website of the European Space Agency or in the article by Vitaly Egorov (Billion pixels for a billion stars).
However, few people know what technology the agency chose for storing and processing the data collected by Gaia. Two years before the launch, in 2011, the developers were considering a number of candidates (see “Astrostatistics and Data Mining” by Luis Manuel Sarro, Laurent Eyer, William O’Mullane, Joris De Ridder, pp. 111-112):
IBM DB2,
PostgreSQL,
Apache Hadoop,
Apache Cassandra and
InterSystems Caché (to be more precise, the Caché eXTreme Event Persistence technology).
Comparing the technologies side-by-side produced the following results (source):
| Technology | Time |
| --- | --- |
| DB2 | 13min55s |
| PostgreSQL 8 | 14min50s |
| PostgreSQL 9 | 6min50s |
| Hadoop | 3min37s |
| Cassandra | 3min37s |
| Caché | 2min25s |
The first four will probably sound familiar even to schoolchildren. But what is Caché XEP?
Java technologies in Caché
If you look at the Java API stack provided by InterSystems, you will see the following:
The Caché Object Binding technology that transparently projects data in Java. In Caché terms, the generated Java proxy classes are called exactly like that - projections. This approach is the simplest, since it saves the “natural” relations between classes in the object model, but doesn’t guarantee great performance: a lot of service metadata describing the object model is transferred “over the wires”.
JDBC and various add-ons (Hibernate, JPA). I guess I won’t tell you anything new here apart from the fact that Caché supports two types of transaction isolation: READ_UNCOMMITTED and READ_COMMITTED – and works in the READ_UNCOMMITTED mode by default.
The Caché eXTreme family (also available in .NET and Node.js editions). This approach is characterized by the direct access to the low-level data representation (so-called “globals” – quanta of data in the Caché world) ensuring high performance. The Caché XEP library simultaneously provides object and quasi-relational access to data.
Object – the API client no longer needs to care about object-relational representation: following the Java object model (even in cases of complex multi-layer inheritance), the system automatically creates an object model on the Caché class level (or a DB schema if we want to use the terms of the relational representation).
Quasi-relational – in the sense that you can run SQL queries against multiple “events” stored in a database (to be exact, requests using the SQL subset) directly from the context of an eXTreme-connection. Indices and transactions are fully supported as well. Of course, all loaded data become immediately accessible via JDBC and a relational representation (supporting all the powerful features of ANSI SQL and SQL extensions specific to the Caché dialect), but the access speed will be completely different.
Summing up, here’s what we have:
“schema” import (Caché classes are created automatically), including
import of the Java class hierarchy;
instant relational access to data – you can work with Caché classes the way you work with tables;
support of indices and transactions via Caché eXTreme;
support of simple SQL queries via Caché eXTreme;
support of arbitrary SQL queries via the underlying JDBC over TCP connection (Caché uses the standard Type 4 (Direct-to-Database Pure Java) driver).
This approach offers some advantages in comparison with comparable relational (higher access speed) and various NoSQL solutions (instant access to data in the relational style).
The “nuance” of configuring Caché eXTreme prior to connecting is the environment set-up:
the GLOBALS_HOME variable has to point to the Caché installation folder and
LD_LIBRARY_PATH (DYLD_LIBRARY_PATH for Mac OS X or PATH for Windows) has to contain ${GLOBALS_HOME}/bin.
Additionally, you may need to increase the stack and heap size of the JVM (-Xss2m -Xmx768m).
Some practice
The authors were interested in how Caché eXTreme would behave while writing an uninterrupted stream of data in comparison with other data processing technologies. We used historical stock price data in the CSV format from the website of the “Finam” holding. Sample data file:
<TICKER>,<PER>,<DATE>,<TIME>,<LAST>,<VOL>
NASDAQ100,0,20130802,09:31:07,3 125.300000000,0
NASDAQ100,0,20130802,09:32:08,3 122.860000000,806 906
NASDAQ100,0,20130802,09:33:09,3 123.920000000,637 360
NASDAQ100,0,20130802,09:34:10,3 124.090000000,421 928
NASDAQ100,0,20130802,09:35:11,3 125.180000000,681 585
The code of the Caché class modeling the above structure might look like this:
Class com.intersystems.persistence.objbinding.Event Extends %Persistent [ ClassType = persistent, DdlAllowed, Final, SqlTableName = Event ]
{
Property Ticker As %String(MAXLEN = 32);
Property Per As %Integer(MAXVAL = 2147483647, MINVAL = -2147483648);
Property TimeStamp As %TimeStamp;
Property Last As %Double;
Property Vol As %Integer(MAXVAL = 9223372036854775807, MINVAL = -9223372036854775808);
}
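Once events are persisted, they are also immediately visible from ObjectScript and SQL. Here is a quick illustration from a Caché terminal (a sketch; it assumes the default SQL schema derived from the package name, with dots replaced by underscores):
USER>set rs = ##class(%SQL.Statement).%ExecDirect(,"SELECT TOP 5 * FROM com_intersystems_persistence_objbinding.Event")
USER>do rs.%Display()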
We also wrote some basic and naive test code. This "naive" approach can be justified by the fact that we are not really measuring the speed of the code generated by the JIT, but the speed at which code that is completely unrelated to the JVM (with the exception of Apache Derby) can write to the disk. Here's what the test program window looks like:
Our contenders:
Apache Derby 10.14.2.0
Oracle 10.2.0.3.0
InterSystems Caché 2018.1 (JDBC)
InterSystems Caché 2018.1 (eXTreme)
Note that since tests are somewhat approximate, we saw no practical purpose in providing exact numbers: the margin of error is fairly high, while the goal of the article is to demonstrate the general tendency. For the same reasons, we are not specifying the exact version of JDK and the settings of the garbage collector: the server-side JVM 8u191 with -Xmx2048m -Xss128m reached a very similar level of performance on Linux and Mac OS X. One million events were saved in each test; several warm-up runs (up to 10) were performed before each test of a particular database. As for Caché settings, the routine cache was increased to 256 MB and the 8kb database cache was expanded to 1024 MB.
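For reference, those two cache settings can also be adjusted programmatically rather than through the Management Portal; a rough sketch (run in %SYS; it assumes the Config.config properties routines and globals8kb, both expressed in MB, and the instance may need a restart for the new values to take effect):
zn "%SYS"
do ##class(Config.config).Get(.p)
set p("routines")=256      // routine cache, MB
set p("globals8kb")=1024   // 8 KB database cache, MB
do ##class(Config.config).Modify(.p)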
Our testing yielded the following results (the write speed values are expressed in events per second (eps)):
| Technology | Time, s (less is better) | Write speed, eps (more is better) |
| --- | --- | --- |
| Apache Derby | 140±30 | 7100±1300 |
| Oracle | 780±50 | 1290±80 |
| Caché JDBC | 61±8 | 17000±2000 |
| Caché eXTreme | 6.7±0.8 | 152000±17000 |
| Caché eXTreme, transaction journaling disabled | 6.3±0.6 | 162000±14000 |
Derby offers speeds varying from 6200 to 8000 eps.
Oracle turned out to be as fast as 1290 eps.
Caché in the JDBC mode gives you a higher speed (from 15000 to 18000 eps), but there is a trade-off: the default transaction isolation level, as mentioned above, is READ_UNCOMMITTED.
The next option, Caché eXTreme, gives us 127000 to 167000 eps.
Finally, we took some risk and disabled the transaction log (for a given client process), and managed to achieve the write speed of 172000 eps on a test system.
Those who are interested in more accurate numbers can view the source code. You will need the following to build and run:
JDK 1.8+,
Git,
Maven
Oracle JDBC driver (available from Oracle Maven Repository)
Maven Install Plugin for creating local Caché JDBC and Caché eXTreme artifacts:
$ mvn install:install-file -Dfile=cache-db-2.0.0.jar
$ mvn install:install-file -Dfile=cache-extreme-2.0.0.jar
$ mvn install:install-file -Dfile=cache-gateway-2.0.0.jar
$ mvn install:install-file -Dfile=cache-jdbc-2.0.0.jar
and, finally,
Caché 2018.1+.
Great post! I have a follow-up question: how did you load the Finam data? Was it "over the wire" all at once? Or were you reading from one or more CSV files?

Hi Joel! Finam.ru is a trade broker for MOEX, the Russian stock exchange. Previously they had an API to download quotes for any securities. What I see now is that they still have a UI to download quotes, e.g. in CSV; here is the link to download GAZPROM quotes.

Evgeny, if I understand you, you're saying that they downloaded the quotes to a CSV file and then read in the CSV file. I'm now wondering if/how they "chunked" the quotes. Did they read in and store each quote one at a time, or did they read 1000 quotes into a Java array and then store those, read in another 1000 and store those, etc.?

Joel, the code just reads one line at a time, immediately storing its parsed value into the database (within the same execution thread). Strictly speaking, this is a mix of a CPU-bound and an I/O-bound task within the same thread, and we should have 3 threads instead:
- one reading from the local disk (I/O);
- a second one parsing the CSV line, and
- a last one storing the parsed value into the (potentially remote) database.
The 3-thread approach should improve the timings (for all the databases being tested), but the relative values are unlikely to reveal any substantial changes. Additionally, I have to remark that the auto-commit flag is off for each session, with a single commit issued after all the data are saved. In a real-life application, one should additionally issue intermediate commits (say, after each 1000 or 10000 records).

Can you tell me if all the figures stated are those that InterSystems created/monitored/collated from their own installations of the alternative databases, or are the figures collated and verified by an external 3rd-party company? Forgive me, but it's easy to slant figures based upon marketing requirements. Kevin

Kevin, all the benchmarks have been run by me at one of my workstations (Intel Core i7-4790). The same Seagate ST1000DM010-2EP102 7200 rpm HDD was dedicated to hosting the data files over all experiments, with no other read or write operations interrupting it during each session. So, summing it up, there has been no relation to ISC infrastructure. I can share the detailed hardware specs if you need them. Or you can try and run the benchmark on your own hardware.

Hi Kevin! I examined the article. It has two sources of figures: from ESA and from Andrey's personal research. Both have no relation to official InterSystems benchmarking. So, the answer is no - nothing InterSystems-official here, only 3rd-party research.

Andrey, Evgeny: while it was 3rd-party research, there are some signs of Caché performance tuning: the global cache and routine cache were manually enlarged, and journaling was switched off at the process level. Were similar tuning steps performed for the other DBMSs used in the research? AFAIK, Oracle would not allow transaction logging (=journaling) to be stopped at all. Only a few people have appropriate knowledge of all the DBMS products listed in the article (maybe Andrey is one of them), and nobody would touch hidden or semi-hidden performance parameters without it. That's why I doubt that an Oracle engineer (or even DBA) would approve such "derby" results. Shouldn't we agree with the author of this sentence?

"...a single view of Oracle has lead to vastly different results, like the difference between a circle and a square. Always remember that there is no substitute for a real-world test that simulates the actual behavior of your production systems, using your own production SQL" (http://www.dba-oracle.com/t_benchmark_testing.htm)

Have they migrated to IRIS? How is IRIS performance against the others?
Announcement
Anastasia Dyubaylo · Mar 26, 2019
Hey Developers!

Do you want to reap the benefits of the advances in the fields of artificial intelligence and machine learning? With InterSystems IRIS and the Machine Learning (ML) Toolkit it's easier than ever.

Join InterSystems Sales Engineers, @Sergey.Lukyanchikov and @Eduard.Lebedyuk, for the Machine Learning Toolkit for InterSystems IRIS webinar on Tuesday, April 23rd at 11 a.m. EDT to find out how InterSystems IRIS can be used as both a standalone development platform and an orchestration tool for predictive modelling that helps stitch together Python and other external tools.

InterSystems IRIS also makes it easy to:
- Leverage the model in a live environment.
- Integrate seamlessly with transactional real-time business processes.
- Develop intelligent business processes with the business process UI designer.

After we get our models running, they generate many artifacts during their whole lifecycle: tables, images, data structures, etc. IRIS Analytics can help with visualizing and exploring these artifacts. ML Toolkit use cases will also be presented to show how you can best utilize them in your solution.

Date: Tuesday, April 23rd at 11 a.m. EDT
Recommended Audience: Developers, Solution Architects, Data Scientists, and Data Engineers.
Note: The language of the webinar is English.

Register for FREE today!

It is tomorrow! Don't miss it! Register here!
Announcement
Anastasia Dyubaylo · Apr 17, 2019
Hi Community!
Good news! One more upcoming event is nearby.
We're pleased to invite you to join "J on the Beach" – an international rendezvous for developers and DevOps around Big Data technologies. It's a fun conference to learn and share the latest experiences, tips, and tricks related to Big Data technologies and, the most important part, it's On The Beach!
We're more than happy to invite you and your colleagues to our InterSystems booth for a personal conversation. InterSystems is also a silver sponsor of the JOTB.
In addition, this year we have a special Global Masters Meeting Point at the conference. You're very welcome to come to our booth to ask questions, share your ideas and, of course, pick up some samples of rewards and GM badges. Looking forward to seeing you soon!
So, remember!
Date: May 15-17, 2019Place: Palacio de Congresos y Exposiciones Adolfo Suarez, Marbella, Spain
You can find more information here: jonthebeach.com
Please feel free to ask any questions in the comments of this post.
Buy your ticket and save your seat today!
Article
Evgeny Shvarov · Nov 3, 2017
There are several options for delivering a user interface (UI) for DeepSee BI solutions. The most common approaches are:
use native DeepSee Dashboards, get web UI in Zen and deliver it in your web apps.
use DeepSee REST API, get and build your own UI widgets and dashboards.
The 1st approach is good because of the possibility to build BI dashboards without coding relatively fast, but you are limited with preset widgets library which is expandable but with a lot of development efforts.
The 2nd provides you the way to use any comprehensive js framework (D3, Highcharts, etc) to visualize your DeepSee data, but you need to code widgets and dashboards on your own.
Today I want to tell you about yet another approach which combines both listed above and provides Angular based web UI for DeepSee Dashboards - DeepSee Web library.
What is it?
DeepSee Web (DSW) is an Angular.js web app which renders the DeepSee Dashboards available to a user in a given namespace, using Highcharts.js, OpenStreetMap and some self-written JS widgets.
How it works
DSW requests the dashboard and widget metadata available in a namespace using the MDX2JSON library. To visualize a dashboard, DSW reads the list of widgets and their types and uses JS-widget analogs which have been implemented for almost every DeepSee widget. It instantiates the JS widgets listed in a dashboard, requests the data according to the widgets' datasources using the DeepSee REST API and MDX2JSON, retrieves the data in JSON format and performs the visualization.
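For a feel of what such data requests look like under the hood, here is a minimal ObjectScript sketch of posting an MDX query to MDX2JSON with %Net.HttpRequest. The /MDX2JSON/MDX route, the request body shape and the credentials are assumptions based on the MDX2JSON project, not code taken from DSW itself:

ClassMethod RunMDX(pMDX As %String) As %DynamicObject
{
    set req = ##class(%Net.HttpRequest).%New()
    set req.Server = "localhost"
    set req.Port = 57772
    set req.Username = "_SYSTEM"            // hypothetical credentials
    set req.Password = "SYS"
    set req.ContentType = "application/json"
    // request body: {"MDX":"<your MDX query>"}
    do req.EntityBody.Write({"MDX": (pMDX)}.%ToJSON())
    set sc = req.Post("/MDX2JSON/MDX")      // assumed MDX2JSON route
    quit:$$$ISERR(sc) ""
    // the response is JSON with the data a widget would render
    quit {}.%FromJSON(req.HttpResponse.Data)
}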
Why is it cool?
DSW is great because:
You can develop DeepSee Dashboards using the standard editor and then get an Angular UI without a single line of coding.
You can expand the library of widgets easily with any js library or self-written widget.
You can introduce smart-tiles on DSW covers.
You can display the data on a map.
What about mobile devices?
DSW works pretty well in mobile browsers (Safari, Chrome), and there is a standalone DeepSight iOS app; here are some screenshots:
How much does it cost?
It is free. You are very welcome to download the latest release, open issues, make forks and send pull requests with fixes and new features.
Is it suitable for production usage?
Yes. DSW was started in 2014 and has had about 60 releases to date. Dozens of companies use DSW successfully in their DeepSee solutions.
For example, we are using DSW for the developer community:
But be aware that according to the DSW license (MIT license) you are using it at your own risk.
Is it supported?
It is supported by the community. You are very welcome to open issues, make forks and send pull requests with fixes and new features.
Key contributors are [@Anton.Gnibeda], [@Nikita.Savchenko] and [@Eduard.Lebedyuk].
How to install?
It's easy. First, install MDX2JSON: download the latest installer from the releases, import/compile installer.xml in the USER namespace, and run:
USER> D ##class(MDX2JSON.Installer).setup()
It will download the latest release from GitHub, create the MDX2JSON database and namespace, map it to %All and create the /MDX2JSON web app.
To check that everything is installed correctly, open localhost:57772/MDX2JSON/Test. If everything is fine it will return something like this:
{
"DefaultApp":"\/dsw",
"Mappings": {
"Mapped":["%SYS","MDX2JSON","SAMPLES","USER"
],
"Unmapped":["DOCBOOK"
]
},
"Parent":"18872dc518d8009bdc105425f541953d2c9d461e",
"ParentTS":"2017-05-26 16:20:17.307",
"Status":"OK",
"User":"",
"Version":2.2
}
Then install DSW. Download the latest release XML.
Import/compile it in the USER namespace and run:
USER> Do ##class(DSW.Installer).setup()
It will create the /dsw app and install all the JS files in the /csp/dsw folder.
Open localhost:57772/dsw/index.html#!/?ns=SAMPLES to see how it works.
Known issues:
On some systems you may need to enable UTF-8 support for non-CSP files in the CSP Gateway. Run the following in the terminal to turn it on:
USER> set ^%SYS("CSP","DefaultFileCharset")="utf-8"
Any demos?
Sure!
Online demo for Samples, login=dswdemo, password=dswdemopass!
Online demo for Maps, login=dswdemo, password=dswdemopass!
When should I install it?
Now! )
To be continued, stay tuned!
Evgeny, are there any plans to port it to Angular 2+? E.g. connecting to ngx-charts?

Hi, Dan! What are the benefits of using ngx-charts? Better performance? A wider chart library?

Also available on Open Exchange. The new version has an independent mobile UI for DeepSee dashboards.

Hello, I have some trouble setting up DSW with IIS, do you have a guide? Everything works right with port 57772 (Apache private server), but I need to use port 80 (IIS live server). Thanks in advance. Best

Hi Jose! It's not a matter of DSW, it's a matter of setting up IRIS or Caché with an external web server (which is recommended, of course). E.g. check this discussion or the documentation.

Thanks for your help Evgeny, now it works right!!

Looks like the demo login is not working anymore.

The demo login was fixed. Thank you for the feedback! login=dswdemo, password=dswdemopass

I have a very important question:
Can this extension be used in a production/commercial environment?
You wrote:
But be aware that according to the DSW license (MIT license) you are using it at your own risk.
But the extension uses HighchartsJS and this is only free for "Non-commercial".
Unfortunately, no statement was made, and it may be a dangerous license trap if my guess is correct.

Hi T!
Thanks for this, you are right.
Highcharts.js has changed its license terms over time. I haven't found the license agreement applicable to highcharts.js 3.0, which is delivered with the current DSW release. I remember it was something like "you pay for the support".
The license for the current version, highcharts.js 7.2, is the following and can be purchased for commercial usage.
Images are broken in this article and Part 2.

Which ones? I don't see any broken images. Could you please point to the URLs?

None of the images load when privacy protection is enabled in Firefox (which is enabled by default).

Copy them to the community, maybe?

Where is the demo?

Hi Gethsemani!
It looks like we have shut down this demo server.
But you can test it on your laptop easily.
Testing with Docker:
0. Install Docker desktop
0.1. Install VSCode, open it, and install the Docker and ObjectScript plugins.
1. Download this archive.
2. Unpack and open the folder in VSCode.
3. Right-click on docker-compose.yml and choose Compose Restart. Wait until Docker says "done" in the terminal.
4. Connect VSCode - click on the "disconnected" status bar and then choose "Refresh Connection".
5. You will see the IRIS terminal. Open ZPM (the package manager) and install the samples-bi module.
Node: 04319ab688f6, Instance: IRIS
IRISAPP>zpm
zpm: IRISAPP>install samples-bi
6. Then install DeepSee Web
zpm: IRISAPP>install dsw
7. Click on the VSCode status bar again and open the Management Portal. The port will be different in your case.
Open the DSW web app in the IRISAPP namespace with the default credentials:
Then you will see DSW working in action:
HTH
Hi Gethsemani!
Recently I crafted a small dashboard on DeepSeeWeb and it is available online:
You can drill down to China and the USA.
Article
Evgeny Shvarov · Feb 15, 2020
Hi Developers!
As you know, the concept of the ObjectScript Package Manager consists of the ZPM client - a client application for IRIS which helps you install packages from a registry - and the code which works "on the other side", the ZPM Registry - a server which hosts packages and exposes an API to submit, list and install them. When you install the ZPM client, it installs packages from the community package registry, which is hosted on pm.community.intersystems.com.
But what if you want your own registry? E.g. you produce different software packages for your clients and want to distribute them via a private registry? Also, you may want to use your own registry to deploy solutions with different combinations of packages.
Is it possible? The answer is YES! You can have one if you deploy the ZPM registry on a server with InterSystems IRIS.
To make it happen you would need to set up your own registry server.
How to do that?
The ZPM Registry can be installed as the zpm-registry package. So you can install the ZPM client first (article, video), or take a docker image with the ZPM client inside, and then install zpm-registry as a package:
USER>zpm
zpm: USER>install zpm-registry
When zpm-registry is installed, it introduces a REST endpoint on server:port/registry with a set of REST API routes. Let's examine them, using GET requests against the community registry as an example.
/
- root entry shows the version of the registry software.
/_ping
{"message":"ping"}
- entry to check the working status.
/_spec
- swagger spec entry
/packages/-/all
- displays all the available packages.
/packages/:package
- the set of GET entries for package deployment. ZPM client is using it when we ask it to install a particular package.
/package
- POST entry to publish a new package from the Github repository.
You can examine this and all the other API routes in the full online documentation, generated with the help of the /_spec entry:
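If you prefer to query the registry from ObjectScript rather than a browser, here is a minimal sketch using %Net.HttpRequest against the /packages/-/all route listed above. The host, port and absence of authentication are assumptions for a local test instance:

ClassMethod ListPackages() As %Status
{
    set req = ##class(%Net.HttpRequest).%New()
    set req.Server = "localhost"
    set req.Port = 52773
    // GET /registry/packages/-/all returns all published packages
    set sc = req.Get("/registry/packages/-/all")
    quit:$$$ISERR(sc) sc
    // e.g. [{"name":"objectscript-math","versions":["0.0.4"]}]
    write req.HttpResponse.Data.Read(32000),!
    quit $$$OK
}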
OK! Let's install and test ZPM-registry on a local machine.
1. Run docker container with IRIS 2019.4 and preinstalled ZPM-client (from developer community repo on docker hub).
% docker run --name my-iris -d --publish 52773:52773 intersystemsdc/iris-community:2019.4.0.383.0-zpm
f40a06bd81b98097b6cc146cd61a6f8487d2536da1ffaf0dd344c615fe5d2844
% docker exec -it my-iris iris session IRIS
Node: f40a06bd81b9, Instance: IRIS
USER>zn "%SYS"
%SYS>Do ##class(Security.Users).UnExpireUserPasswords("*")
%SYS>zn "USER"
USER>zpm
zpm: USER>install zpm-registry
[zpm-registry] Reload START
[zpm-registry] Reload SUCCESS
[zpm-registry] Module object refreshed.
[zpm-registry] Validate START
[zpm-registry] Validate SUCCESS
[zpm-registry] Compile START
[zpm-registry] Compile SUCCESS
[zpm-registry] Activate START
[zpm-registry] Configure START
[zpm-registry] Configure SUCCESS
[zpm-registry] Activate SUCCESS
zpm: USER>
2. Let’s publish a package in our new privately-launched zpm-registry.
To publish a package you can make a POST request to the registry/package endpoint and supply the URL of a repository which contains module.xml in its root. E.g. let's take the repo of the objectscript-math application on Open Exchange by @Peter.Steiwer: https://github.com/psteiwer/ObjectScript-Math
$ curl -i -X POST -H "Content-Type:application/json" -u user:password -d '{"repository":"https://github.com/psteiwer/ObjectScript-Math"}' 'http://localhost:52773/registry/package'
Make sure to change the user and password to your credentials.
HTTP/1.1 200 OK
Date: Sat, 15 Feb 2020 20:48:13 GMT
Server: Apache
CACHE-CONTROL: no-cache
EXPIRES: Thu, 29 Oct 1998 17:04:19 GMT
PRAGMA: no-cache
CONTENT-LENGTH: 0
Content-Type: application/json; charset=utf-8
As we see, the request returns 200, which means the call was successful and the new package is available in the private registry for installation via ZPM clients. Let's check the list of packages:
Open in browser http://localhost:52773/registry/packages/-/all:
[{"name":"objectscript-math","versions":["0.0.4"]}]
Now it shows us that there is one package available.
That’s perfect!
The next question is how to install packages via the ZPM client from an alternative registry.
By default, when the ZPM client is installed, it's configured to work with the public registry pm.community.intersystems.com.
This command shows which registry is currently set up:
zpm: USER>repo -list
registry
Source: https://pm.community.intersystems.com
Enabled? Yes
Available? Yes
Use for Snapshots? Yes
Use for Prereleases? Yes
zpm: USER>
But this can be altered. The following command switches the registry to your own:
zpm: USER>repo -n registry -r -url http://localhost:52773/registry/ -user username -pass password
Change username and password here to whatever is set up on your server for the /registry REST API.
Let’s check that alternative registry is available:
zpm: USER>repo -list
registry
Source: http://localhost:52773/registry/
Enabled? Yes
Available? Yes
Use for Snapshots? Yes
Use for Prereleases? Yes
Username: _SYSTEM
Password: <set>
zpm: USER>
So the ZPM client is ready to work with another ZPM registry.
To sum up, the ZPM registry lets you build your own private registries, which can be filled with any collection of packages - either public ones or your own.
And the ZPM client gives you the option to switch between registries and install from the public registry or from any private one.
Also check the article by @Mikhail.Khomenko, which describes how to deploy an InterSystems IRIS docker container with the ZPM registry in a Kubernetes cloud provided by Google Kubernetes Engine.
Happy coding and stay tuned!
Is there any way to have multiple registries enabled at the same time in ZPM, with a priority, so that it looks through them in order to find packages? That way you could use public packages from the community repo without having to import/duplicate them into your local repo. Or is the only way to keep switching between repos if you want to source from different places?

Hi @Mark.Charlton1608! Yes, you can make other packages visible from your private registry, so that packages from public registries become available through the private one via white/black lists. Apologies for the late answer :)
Announcement
Anastasia Dyubaylo · Feb 14, 2020
Dear Community,
In advance of the upcoming release of InterSystems IRIS 2020.1 and InterSystems IRIS for Health 2020.1, we're pleased to invite you to the “Office Hours: InterSystems IRIS 2020.1 Preview” webinars. These Q&A sessions will provide a forum for you to ask questions and learn about the latest product features in the upcoming releases.
The webinar will feature members of InterSystems product management team who will answer questions and provide additional information about the new features and enhancements.
We will be hosting two sessions on Thursday, February 20, 2020. Reserve your spot by clicking one of the buttons below for the appropriate time slot.
➡️ 8AM (EST) WEBINAR
➡️ 1PM (EST) WEBINAR
In preparation for the webinar, we encourage you to try InterSystems IRIS or IRIS for Health 2020.1 Preview kits, which are available to download from WRC Distribution*. Also, Release Notes outlining key new features are available to view online:
InterSystems IRIS 2020.1 Release Notes
InterSystems IRIS for Health 2020.1 Release Notes
* WRC login credentials required
We are waiting for you at our webinar! Register now! 👍🏼

Looks really good.
Are there any good videos for
InterSystems API Mgmt
InterSystems Cloud Manager
Machine Learning & NLP?

New video content!
Now this webinar recording is available on InterSystems Developers YouTube Channel:
Enjoy watching the video! 👍🏼
Announcement
Anastasia Dyubaylo · Apr 16, 2020
Hi Community,
We're pleased to invite you to join the upcoming InterSystems IRIS 2020.1 Tech Talk: Data Science, ML & Analytics on April 21st at 10:00 AM EDT!
In this first installment of InterSystems IRIS 2020.1 Tech Talks, we put the spotlight on data science, machine learning (ML), and analytics. InterSystems IntegratedML™ brings automated machine learning to SQL developers. We'll show you how this technology supports feature engineering and chooses the most appropriate ML model for your data, all from the comfort of a SQL interface. We'll also talk about what's new in our open analytics offerings. Finally, we'll share some big news about InterSystems Reports, our "pixel-perfect" reporting option. See how you can now generate beautiful reports and export to PDF, Excel, or HTML.
Speakers:
🗣 @Benjamin.DeBoe, Product Manager
🗣 @tomd, Product Specialist, Machine Learning
🗣 @Carmen.Logue, Product Manager, Data Platforms
Date: Tuesday, April 21, 2020
Time: 10:00 AM EDT
➡️ JOIN THE TECH TALK!

The webinar begins right now. Don't miss it - you can still register!
➡️ JOIN THE TECH TALK!

Hi Developers,
Please find the webinar recording here.
Enjoy!
Announcement
Andreas Dieckow · Jan 15, 2020
Beginning with InterSystems IRIS 2020.1, the minimum required version for AIX is AIX 7.1 TL4.
The InterSystems IRIS installer will detect whether the required IBM XL C filesets are installed before continuing with the installation.

Does it need to be the 16.1 (or greater) XL C fileset (we have a lesser one installed)? And just the runtime, or the compilation fileset as well? Verifying for our systems guys.

16.1 or greater, and just the runtime.
Article
Evgeny Shvarov · Feb 24, 2020
Hi Developers!
Many of you publish your InterSystems ObjectScript libraries on Open Exchange and Github.
But what do you do to make it easier for developers to use and collaborate on your project?
In this article, I want to show an easy way to launch and contribute to any ObjectScript project just by copying a standard set of files to your repository.
Let's go!
TLDR - copy these files from the repository into your repository:
Dockerfile
docker-compose.yml
Installer.cls
iris.script
settings.json
.dockerignore
.gitattributes
.gitignore
And you get a standard way to launch and collaborate on your project. Below is the long explanation of how and why this works.
NB: In this article, we will consider projects which are runnable on InterSystems IRIS 2019.1 and newer.
Choosing the launch environment for InterSystems IRIS projects
Usually, we want a developer to try the project/library and be sure that this will be a fast and safe exercise.
IMHO, the ideal approach to launching anything new fast and safely is a Docker container, which guarantees the developer that anything they launch, import, compile and calculate is safe for the host machine: no system or code will be destroyed or spoiled. If something goes wrong, you just stop and remove the container. If the application takes an enormous amount of disk space, you wipe it out together with the container and your space is back. If an application spoils the database configuration, you just delete the container with the spoiled configuration. Simple and safe as that.
Docker container gives you safety and standardization.
The simplest way to run vanilla InterSystems IRIS Docker container is to run an IRIS Community Edition image:
1. Install Docker desktop
2. Run in OS terminal the following:
docker run --rm -p 52773:52773 --init --name my-iris store/intersystems/iris-community:2020.1.0.199.0
3. Then open Management portal in your host browser on:
http://localhost:52773/csp/sys/UtilHome.csp
4. Or open a terminal to IRIS:
docker exec -it my-iris iris session IRIS
5. Stop IRIS container when you don't need it:
docker stop my-iris
OK! We run IRIS in a docker container. But you want a developer to install your code into IRIS and maybe make some settings. This is what we will discuss below.
Importing ObjectScript files
The simplest InterSystems ObjectScript project can contain a set of ObjectScript files like classes, routines, macros, and globals. Check the article on the naming convention and proposed folder structure.
The question is how to import all this code into an IRIS container?
Here is where the Dockerfile helps us: we can use it to take the vanilla IRIS container, import all the code from the repository into IRIS, and apply some settings to IRIS if we need to. We need to add a Dockerfile to the repo.
Let's examine the Dockerfile from ObjectScript template repo:
ARG IMAGE=store/intersystems/irishealth:2019.3.0.308.0-community
ARG IMAGE=store/intersystems/iris-community:2019.3.0.309.0
ARG IMAGE=store/intersystems/iris-community:2019.4.0.379.0
ARG IMAGE=store/intersystems/iris-community:2020.1.0.199.0
FROM $IMAGE
USER root
WORKDIR /opt/irisapp
RUN chown ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/irisapp
USER irisowner
COPY Installer.cls .
COPY src src
COPY iris.script /tmp/iris.script

# run iris and the initial script
RUN iris start IRIS \
&& iris session IRIS < /tmp/iris.script
The first ARG lines set the $IMAGE variable, which we then use in FROM. This is handy for testing/running the code on different IRIS versions: you switch between them just by changing which ARG line comes last before FROM and thus sets the final value of the $IMAGE variable.
Here we have:
ARG IMAGE=store/intersystems/iris-community:2020.1.0.199.0
FROM $IMAGE
This means that we are taking IRIS 2020 Community Edition build 199.
We want to import the code from the repository - that means we need to copy the files from a repository into a docker container. The lines below help to do that:
USER root
WORKDIR /opt/irisapp
RUN chown ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/irisapp
USER irisowner
COPY Installer.cls .
COPY src src
USER root - here we switch the user to root in order to create a folder and copy files inside docker.
WORKDIR /opt/irisapp - in this line we set up the workdir into which we will copy files.
RUN chown ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/irisapp - here we give the rights to the irisowner user and group, which run IRIS.
USER irisowner - switching the user from root back to irisowner.
COPY Installer.cls . - copying Installer.cls to the root of the workdir. Don't miss the dot here.
COPY src src - copying the source files from the src folder in the repo to the src folder of the workdir in docker.
In the next block we run the initial script, in which we call the installer and some ObjectScript code:
COPY iris.script /tmp/iris.script

# run iris and the initial script
RUN iris start IRIS \
&& iris session IRIS < /tmp/iris.script
COPY iris.script /tmp/iris.script - we copy iris.script into the /tmp directory. It contains the ObjectScript we want to call to set up the container.
RUN iris start IRIS \ - start IRIS.
&& iris session IRIS < /tmp/iris.script - start an IRIS terminal session and feed the initial ObjectScript into it.
Fine! We have the Dockerfile, which imports the files into docker. But we came across two other files: Installer.cls and iris.script. Let's examine them.
Installer.cls
Class App.Installer
{
XData setup
{
<Manifest>
<Default Name="SourceDir" Value="#{$system.Process.CurrentDirectory()}src"/>
<Default Name="Namespace" Value="IRISAPP"/>
<Default Name="app" Value="irisapp" />
<Namespace Name="${Namespace}" Code="${Namespace}" Data="${Namespace}" Create="yes" Ensemble="no">
<Configuration>
<Database Name="${Namespace}" Dir="/opt/${app}/data" Create="yes" Resource="%DB_${Namespace}"/>
<Import File="${SourceDir}" Flags="ck" Recurse="1"/>
</Configuration>
<CSPApplication Url="/csp/${app}" Directory="${cspdir}${app}" ServeFiles="1" Recurse="1" MatchRoles=":%DB_${Namespace}" AuthenticationMethods="32"
/>
</Namespace>
</Manifest>
}
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
#; Let XGL document generate code for this method.
Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "setup")
}
}
Frankly, we do not need Installer.cls just to import files - that could be done with one line. But often, besides importing code, we need to set up a CSP app, introduce security settings, and create databases and namespaces.
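For reference, here is a minimal sketch of what that "one line" import could look like, assuming the sources were copied into /opt/irisapp/src as in the Dockerfile above (the file mask and flags are illustrative):

// import and compile everything under /opt/irisapp/src, recursively
do $System.OBJ.ImportDir("/opt/irisapp/src", "*.cls;*.mac;*.inc", "ck", .errors, 1)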
In this Installer.cls we create a new database and namespace with the name IRISAPP and create the default /csp/irisapp application for this namespace.
All this we perform in <Namespace> element:
<Namespace Name="${Namespace}" Code="${Namespace}" Data="${Namespace}" Create="yes" Ensemble="no">
<Configuration>
<Database Name="${Namespace}" Dir="/opt/${app}/data" Create="yes" Resource="%DB_${Namespace}"/>
<Import File="${SourceDir}" Flags="ck" Recurse="1"/>
</Configuration>
<CSPApplication Url="/csp/${app}" Directory="${cspdir}${app}" ServeFiles="1" Recurse="1" MatchRoles=":%DB_${Namespace}" AuthenticationMethods="32"
/>
</Namespace>
And we import all the files from SourceDir with the Import tag:
<Import File="${SourceDir}" Flags="ck" Recurse="1"/>
SourceDir here is a variable, which is set to the current directory/src folder:
<Default Name="SourceDir" Value="#{$system.Process.CurrentDirectory()}src"/>
Installer.cls with these settings gives us confidence that we create a clean new database, IRISAPP, into which we import arbitrary ObjectScript code from the src folder.
iris.script
Here you are welcome to provide any initial ObjectScript setup code you want to run when your IRIS container starts.
E.g. here we load and run Installer.cls and then unexpire user passwords, just to avoid the initial prompt to change the password, since we don't need that prompt for development.
; run installer to create namespace
do $SYSTEM.OBJ.Load("/opt/irisapp/Installer.cls", "ck")
set sc = ##class(App.Installer).setup() zn "%SYS"
Do ##class(Security.Users).UnExpireUserPasswords("*") ; call your initial methods here
halt
docker-compose.yml
Why do we need docker-compose.yml - couldn't we just build and run the image with the Dockerfile alone? Yes, we could. But docker-compose.yml simplifies life.
Usually, docker-compose.yml is used to launch several docker images connected to one network.
docker-compose.yml can also be used to make launching a single docker image easier when we deal with a lot of parameters. You can use it to pass parameters to docker, such as port mappings, volumes, and VSCode connection parameters.
version: '3.6'
services:
iris:
build:
context: .
dockerfile: Dockerfile
restart: always
ports:
- 51773
- 52773
- 53773
volumes:
- ~/iris.key:/usr/irissys/mgr/iris.key
- ./:/irisdev/app
Here we declare the service iris, which is built from the Dockerfile and exposes the following IRIS ports: 51773, 52773, 53773. The service also maps two volumes: iris.key from the home directory of the host machine to the IRIS folder where it is expected, and the root folder of the source code to the /irisdev/app folder.
Docker-compose gives us a shorter, unified command to build and run the image, whatever parameters you set up in docker-compose.
In any case, the command to build and launch the image is:
$ docker-compose up -d
and to open IRIS terminal:
$ docker-compose exec iris iris session iris
Node: 05a09e256d6b, Instance: IRIS
USER>
Also, docker-compose.yml helps to set up the connection for VSCode ObjectScript plugin.
.vscode/settings.json
The part which relates to the ObjectScript add-on connection settings is this:
{
"objectscript.conn" :{
"ns": "IRISAPP",
"active": true,
"docker-compose": {
"service": "iris",
"internalPort": 52773
}
}
}
Here we see the settings which differ from the default settings of the VSCode ObjectScript plugin.
Here we say that we want to connect to the IRISAPP namespace (which we create with Installer.cls):
"ns": "IRISAPP",
and there is a docker-compose setting which tells VSCode that, inside the docker-compose file, for the service "iris" it should connect to whatever host port the internal port 52773 is mapped to:
"docker-compose": {
"service": "iris",
"internalPort": 52773
}
If we check what we have for 52773, we see that no host port is defined for it:
ports:
- 51773
- 52773
- 53773
This means that a random available port on the host machine will be taken, and VSCode will connect to this IRIS in docker via that random port automatically.
This is a very handy feature, because it gives you the option to run any number of docker images with IRIS on random ports and have VSCode connect to them automatically.
What about other files?
We also have:
.dockerignore - a file which you can use to filter out host machine files you don't want copied into the docker image you build. Usually .git and .DS_Store are mandatory lines.
.gitattributes - attributes for git which unify line endings for ObjectScript files in the sources. This is very useful if the repo is worked on by both Windows and Mac/Ubuntu contributors.
.gitignore - files for which you don't want git to track the change history. Typically some hidden OS-level files, like .DS_Store.
Fine!
How to make your repository docker-runnable and collaboration friendly?
1. Clone this repository.
2. Copy all these files:
Dockerfile
docker-compose.yml
Installer.cls
iris.script
settings.json
.dockerignore
.gitattributes
.gitignore
to your repository.
Change the COPY src src line in the Dockerfile to match the directory with the ObjectScript in the repo you want to import into IRIS (or don't change it if you keep the code in the /src folder).
That's it. And everyone (and you too) will have your code imported into IRIS in a new IRISAPP namespace.
How will people launch your project
The algorithm to run any ObjectScript project in IRIS would be:
1. Git clone the project locally
2. Run the project:
$ docker-compose up -d
$ docker-compose exec iris iris session iris
Node: 05a09e256d6b, Instance: IRIS
USER>zn "IRISAPP"
How would any developer contribute to your project?
1. Fork the repository and git clone the forked repo locally
2. Open the folder in VSCode (they also need the Docker and ObjectScript extensions installed in VSCode).
3. Right-click on docker-compose.yml->Restart - VSCode ObjectScript will automatically connect and be ready to edit/compile/debug
4. Commit, Push and Pull request changes to your repository
Here is the short gif on how this works:
That's it! Happy coding!
The link to the Management Portal should be corrected: http://localhost:52773/csp/UtilHome.csp. The right link is: http://localhost:52773/csp/sys/UtilHome.csp

Thanks, @Artem.Reva!

Could you please explain the command
docker-compose exec iris iris session iris
I understand exec, but what is "iris iris session iris"?

The first iris after exec is the service name from docker-compose.yml; then comes the command which has to be executed.
The iris command is a replacement for ccontrol from Caché/Ensemble.
session is a subcommand of the iris tool.
And the last iris is the instance name for IRIS inside the container.

Is putting all this in the main directory of the repository necessary?
I believe the two git files (.gitignore and .gitattributes) need to be there. But perhaps all files related to docker can be put in a "Docker" directory to avoid adding so many files to the main directory.
My main fear is people seeing all these files and not knowing where to start.

Hi Peter!
Thanks for the question.
In addition to .gitignore and .gitattributes
.vscode/settings.json should be in the root too ( @Dmitry.Maslennikov please correct me if I'm wrong).
All the rest:
Dockerfile
docker-compose.yml
Installer.cls
irissession.sh
Could live in a dedicated folder.
BUT! We use the Dockerfile to COPY Installer.cls and the source files from the repo into the image we build, and the Dockerfile only sees files which sit in the same folder or in subfolders. Specialists, please correct me here if I'm wrong.
So the Dockerfile could possibly live inside the source folder - but I'm not sure this is what you want to achieve.
There are some possible issues with having docker-related files in a dedicated folder.
When you want to start an environment with docker-compose, you can do it with a command like this:
docker-compose up -d
but it will work only if the docker-compose.yml file name has not been changed and it lies right in the current folder.
If you change its location or name, you will have to specify the new place:
docker-compose -f ./docker/docker-compose.yml up -d
Not so simple anymore, right?
OK, next, about the Dockerfile.
When you build a docker image, you have to specify the context. So, the command below just uses the file Dockerfile in the current folder and uses the current folder as the build context.
docker build .
To build a docker image with a Dockerfile placed somewhere else, you have to specify it, supposing you still want the current folder as the context:
docker build -f ./docker/Dockerfile .
Any other files, such as Installer.cls, irissession.sh or anything else which should be used during the docker build, have to be available from the specified context folder. And you can't specify more than one context. So any of those files should have at least some parent folder - and why not the root of the project?
With docker-compose.yml we forget about the docker build command, but we still have to care about docker-compose.
When I try this on my laptop I am getting the following error: The terminal process terminated with exit code: 1
Looking at the terminal output everything seems normal up to that point. Have you encountered this issue before?
David
Hi David!
Could you please commit it and push to the repo on GitHub and I'll try on my laptop?
If that's not possible - join the Discord.

Please provide more information from the log.

Hi David, could you check it with the latest beta version?

It happens. If you want to open the IRIS terminal, you can try the following:
Evgeny;
This did work so thanks for the tip!
David

Hi All!
I've updated the article to introduce the iris.script approach - the Dockerfile is smaller and the ObjectScript is clearer.

💡 This article is considered an InterSystems Data Platform Best Practice.

I have followed these steps both from the GitHub readme file and this more detailed article, and I'm having an issue.
To start, however, know that I was able to follow the instructions on the DockerHub page for IRIS, and I got my container running and VS Code connected and working. Since I got that working, I decided to better understand starting with this template.
I downloaded the objectscript-docker-template from GitHub to my local drive K:\objectscript-docker-template. I ran the commands to install and run the IRIS container in Git Bash and I see it running in Docker Desktop. I can go in the container's command line and type 'iris TERMINAL IRIS' and I'm brought to the USER namespace.
Back in the project location, I run 'docker-compose up -d' and I get a series of errors:
$ docker-compose up -d
Creating objectscript-docker-template_iris_1 ... error
ERROR: for objectscript-docker-template_iris_1 Cannot create container for service iris: status code not OK but 500: {"Message":"Unhandled exception: Filesharing has been cancelled","StackTrace":" at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src . . .
It goes on and on so I won't paste the whole thing.
If I stop the Docker container it's also deleted.
Has anyone had a similar issue or can see right off the bat what I might be doing wrong?
Thanks,
Mike

Hi Mike! It looks like your docker disk space has run out.
Call
docker system prune -f
and see if this fixes the problem.
Caution! This will delete unused containers on your laptop. Make sure you don't have sensitive data or uncommitted code in them. Usually you don't.

You may be facing an access issue: check in the Docker settings that you have correctly provided the list of resources that can be shared.
@Evgeny Shvarov - "Check the article on the naming and proposed folder structure."
Is this supposed to link to a specific article?

In a recent example I posted, I had the need to extend the naming and proposed folder structure. It was obvious if you were reading the downloaded repository. The related article was an advertisement and a "heads up" that it just wasn't the default structure as usual.

Hi Ben!
Thanks for the heads up. We have several so far:
the proposed folder structure
the naming convention for packages
the naming convention for "community" packages.
Thanks! I see you updated the article with links. Very helpful :)