Article
David E Nelson · Apr 26, 2019

A Containerized Machine Learning Playground with InterSystems IRIS Community Edition, Spark, and Zeppelin

The last time I created a playground for experimenting with machine learning using Apache Spark and an InterSystems data platform (see Machine Learning with Spark and Caché), I installed and configured everything directly on my laptop: Caché, Python, Apache Spark, Java, some Hadoop libraries, to name a few. It required some effort, but eventually it worked. Paradise. But I worried. Would I ever be able to reproduce all those steps? Maybe. Would it be possible for a random Windows or Java update to wreck the whole thing in an instant? Almost certainly.

Now, thanks to the increasingly widespread availability of containers and the increasingly usable Docker for Windows, I have my choice of pre-configured machine learning and data science environments. See, for example, Jupyter Docker Stacks and Zeppelin on Docker Hub. With InterSystems making the community edition of the IRIS Data Platform available via container (InterSystems IRIS now Available on the Docker Store), I have easy access to a data platform supporting both machine learning and analytics among a host of other features. By using containers, I do not need to worry about any automatic updates wrecking my playground. If my office floods and my laptop is destroyed, I can easily recreate the playground from a single text file, which I have of course placed in source control ;-)

In the following, I will share a Docker compose file that I used to create a container-based machine learning and data science playground. The playground involves two containers: one with a Zeppelin and Spark environment, the other with the InterSystems IRIS Data Platform community edition. Both use images available on Docker Hub. I'll then show how to configure the InterSystems Spark Connector to connect the two. I will end by loading some data into InterSystems IRIS and using Spark to do some data exploration, visualization, and some very basic machine learning. Of course, my example will barely scratch the surface of the capabilities of both Spark and InterSystems IRIS, but I hope the article will help others get started doing more complex and useful work.

Note: I created and tested everything that follows on my Windows 10 laptop, using Docker for Windows. For information on configuring Docker for Windows for use with InterSystems IRIS, please see the following articles. The second of the two also discusses the basics of using compose files to configure Docker containers.

- Using InterSystems IRIS Containers with Docker for Windows
- Docker for Windows and the InterSystems IRIS Data Platform

Compose File for the Two-Container Playground

Hopefully, the comments in the following compose file do a reasonably adequate job of explaining the environment, but in case they do not, here are the highlights. The compose file defines:

- Two containers: one containing the InterSystems IRIS Community Edition and the other containing both the Zeppelin notebook environment and Apache Spark. Both containers are based on images pulled from the Docker store.
- A network for communication between the two containers. With this technique, we can use the container names as host names when setting up communication between the containers.
- Local directories mounted in each container. We can use these directories to make jar files available to the Spark environment and some data files available to the IRIS environment.
- A named volume for the durable %SYS feature needed by InterSystems IRIS.
  Named volumes are necessary for InterSystems IRIS when running in containers on Docker for Windows; for more about this, see the community articles linked above.
- Mappings of some networking ports inside the containers to ports available outside the containers, to provide easy access.

```yaml
version: '3.2'
services:
  #container 1 with InterSystems IRIS
  iris:
    # iris community edition image to pull from docker store.
    image: store/intersystems/iris:2019.1.0.510.0-community
    container_name: iris-community
    ports:
      # 51773 is the superserver default port
      - "51773:51773"
      # 52773 is the webserver/management portal default port
      - "52773:52773"
    volumes:
      # Sets up a named volume durable_data that will keep the durable %SYS data
      - durable:/durable
      # Maps a /local directory into the container to allow for easily passing files and test scripts
      - ./local/samples:/samples/local
    environment:
      # Set the variable ISC_DATA_DIRECTORY to the durable_data volume that we defined above to use durable %SYS
      - ISC_DATA_DIRECTORY=/durable/irissys
    # Adds the IRIS container to the network defined below.
    networks:
      - mynet
  #container 2 with Zeppelin and Spark
  zeppelin:
    # zeppelin notebook with spark image to pull from docker store.
    image: apache/zeppelin:0.8.1
    container_name: spark-zeppelin
    #Ports for accessing Zeppelin environment
    ports:
      #Port for Zeppelin notebook
      - "8080:8080"
      #Port for Spark jobs page
      - "4040:4040"
    #Maps /local directories for saving notebooks and accessing jar files.
    volumes:
      - ./local/notebooks:/zeppelin/notebook
      - ./local/jars:/home/zeppelin/jars
    #Adds the Spark and Zeppelin container to the network defined below.
    networks:
      - mynet
#Declares the named volume for the IRIS durable %SYS
volumes:
  durable:
# Defines a network for communication between the two containers.
networks:
  mynet:
    ipam:
      config:
        - subnet: 172.179.0.0/16
```

Launching the Containers

Place the compose file in a directory on your system. Note that the directory name becomes the Docker project name. You will also need to create sub-directories matching those mentioned in the compose file. So, my directory structure looks like this:

```
iris_spark_zeppelin
  local
    jars
    notebooks
    samples
  docker-compose.yml
```

To launch the containers, execute the following Docker command from inside your project directory:

```
C:\iris_spark_zeppelin>docker-compose up -d
```

Note that the -d flag causes the containers to run in detached mode; you will not see them logging any information to the command line. You can inspect the log files for the containers using the docker logs command. For example, to see the log file for the iris-community container, execute the following:

```
C:\>docker logs iris-community
```

To inspect the status of the containers, execute the following command:

```
C:\>docker container ls
```

When the iris-community container is ready, you can access the IRIS Management Portal with this URL:

http://localhost:52773/csp/sys/UtilHome.csp

Note: The first time you log in to IRIS, use the username/password SuperUser/SYS. You will be redirected to a password change page.

You can access the Zeppelin notebook with this URL:

http://localhost:8080
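The article does not cover tearing the playground down, but the standard Compose workflow applies. As a hedged sketch: stop the containers with the first command when you want to pause, or stop and remove the containers and network with the second (the durable named volume survives unless you also pass the -v flag). Run these from the project directory:

```
C:\iris_spark_zeppelin>docker-compose stop
C:\iris_spark_zeppelin>docker-compose down
```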
Copying Some Jar Files

In order to use the InterSystems Spark Connector, the Spark environment needs access to two jar files:

1. intersystems-jdbc-3.0.0.jar
2. intersystems-spark-1.0.0.jar

Currently, these jar files live with IRIS inside the iris-community container. We need to copy them out into the locally mapped directory so that the spark-zeppelin container can access them. To do this, we can use the docker cp command to copy all the JDK 1.8 version files from inside the iris-community container into one of the local directories visible to the spark-zeppelin container. Open a CMD prompt in the project directory and execute the following command:

```
C:\iris_spark_zeppelin>docker cp iris-community:/usr/irissys/dev/java/lib/JDK18 local/jars
```

This will add a JDK18 directory containing the above jar files, along with a few others, to <project-directory>/local/jars.

Adding Some Data

No data, no machine learning. We can use the local directories mounted by the iris-community container to add some data to the data platform. I used the Iris data set (no relation to the InterSystems IRIS Data Platform). The Iris data set contains data about flowers. It has long served as the "hello world" example for machine learning (Iris flower data set). You can download or pull an InterSystems class definition for generating the data, along with code for several related examples, from GitHub (Samples-Data-Mining). We are interested in only one file from this set: DataMining.IrisDataset.cls.

Copy DataMining.IrisDataset.cls into your <project-directory>/local/samples directory. Next, open a bash shell inside the iris-community container by executing the following from a command prompt on your local system:

```
C:\>docker exec -it iris-community bash
```

From the bash shell, launch an IRIS terminal session:

```
/# iris session iris
```

IRIS asks for a username/password. If this is the first time that you are logging into IRIS in this container, use SuperUser/SYS; you will then be asked to change the password. If you have logged in before, for example through the Management Portal, then you changed the password already. Use your updated password now. Execute the following command to load the file into IRIS:

```
USER>Do $System.OBJ.Load("/samples/local/DataMining.IrisDataset.cls","ck")
```

You should see output about the above class file compiling and loading successfully. Once this code is loaded, execute the following commands to generate the data for the Iris dataset:

```
USER>Set status = ##class(DataMining.IrisDataset).load()
USER>Write status
```

The output from the second command should be 1. The database now contains data for 150 examples of Iris flowers.
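Before leaving the terminal, you can sanity-check the load. A hedged example using the standard SQL shell ($SYSTEM.SQL.Shell() is a stock utility; the exact prompt text varies by version) - the count should come back as 150:

```
USER>do $SYSTEM.SQL.Shell()
[SQL]USER>>SELECT COUNT(*) FROM DataMining.IrisDataset
```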
Launching Zeppelin and Configuring Our Notebook

First, download the Zeppelin note available here: https://github.com/denelson/DevCommunity. The name of the note is "Machine Learning Hello World". You can open the Zeppelin notebook in your web browser using the following URL:

http://localhost:8080

It looks something like this. Click the "Import note" link and import "Machine Learning Hello World.json".

The first code paragraph contains code that will load the InterSystems JDBC driver and Spark Connector. By default, Zeppelin notebooks provide the z variable for accessing the Zeppelin context. See Zeppelin Context in the Zeppelin documentation.

```
%spark.dep
//z supplies Zeppelin context
z.reset()
z.load("/home/zeppelin/jars/JDK18/intersystems-jdbc-3.0.0.jar")
z.load("/home/zeppelin/jars/JDK18/intersystems-spark-1.0.0.jar")
```

Before running the paragraph, click the down arrow next to the word "anonymous" and then select "Interpreter". On the Interpreters page, search for spark, then click the restart button on the right-hand side and then OK on the ensuing pop-up. Now return to the Machine Learning Hello World note and run the paragraph by clicking the little arrow all the way at the right. You should see output similar to that in the following screen capture.

Connecting to IRIS and Exploring the Data

Everything is now configured. We can connect code running in the spark-zeppelin container to InterSystems IRIS, running in our iris-community container, and begin exploring the data we added earlier. The following Python code connects to InterSystems IRIS, reads the table of data that we loaded in an earlier step (DataMining.IrisDataset), and then displays the first ten rows. Here are a couple of notes about the following code:

- We need to supply a username and password to IRIS. Use the password that you provided in an earlier step when you logged into IRIS and were forced to change your password. I used SuperUser/SYS1.
- "iris" in the spark.read.format("iris") snippet is an alias for the com.intersystems.spark class, the Spark Connector.
- The connection URL, including "IRIS" at the start, specifies the location of the InterSystems IRIS default Spark master server.
- The spark variable points to the Spark session supplied by the Zeppelin Spark interpreter.

```
%pyspark
uname = "SuperUser"
pwd = "SYS1"
#spark session available by default through spark variable.
#URL uses the name of the container, iris-community, as the host name.
iris = spark.read.format("iris") \
    .option("url","IRIS://iris-community:51773/USER") \
    .option("dbtable","DataMining.IrisDataset") \
    .option("user",uname) \
    .option("password",pwd) \
    .load()
iris.show(10)
```

Note: For more information on configuring the Spark connection to InterSystems IRIS, see Using the InterSystems Spark Connector in the InterSystems IRIS documentation. For more information on the Spark session and other context variables provided by Zeppelin, see SparkContext, SQLContext, SparkSession, ZeppelinContext in the Zeppelin documentation.

Running the above paragraph results in the following output. Each row represents an individual flower and records its petal length and width, its sepal length and width, and the Iris species it belongs to.

Here is some SQL-esque code for further exploration:

```
%pyspark
iris.groupBy("Species").count().show()
```

Running the paragraph produces the following output. So there are three different Iris species represented in the data, and the data represents each species equally.

Using Python's matplotlib library, we can even draw some graphs. Here is code to plot Petal Length vs. Petal Width:

```
%pyspark
%matplotlib inline
import matplotlib.pyplot as plt

#Retrieve an array of row objects from the DataFrame
items = iris.collect()
petal_length = []
petal_width = []
for item in items:
    petal_length.append(item['PetalLength'])
    petal_width.append(item['PetalWidth'])

plt.scatter(petal_width,petal_length)
plt.xlabel("Petal Width")
plt.ylabel("Petal Length")
plt.show()
```

Running the paragraph creates the following scatter plot. Even to the untrained eye, it looks like there is a pretty strong correlation between Petal Width and Petal Length. We should be able to reliably predict petal length based on petal width.

A Little Machine Learning

Note: I copied the following code from my earlier playground article, cited above.

In order to predict petal length based on petal width, we need a model of the relationship between the two. We can create such a model very easily using Spark. Here is some code that uses Spark's linear regression API to train a regression model. The code does the following:

- Creates a new Spark DataFrame containing the petal length and petal width columns. The petal width column represents the "features" and the petal length column represents the "labels". We use the features to predict the labels.
- Randomly divides the data into training (70%) and test (30%) sets.
- Uses the training data to fit the linear regression model.
- Runs the test data through the model and then displays the petal length, petal width, features, and predictions.

```
%pyspark
from pyspark.ml.regression import LinearRegression
from pyspark.ml.feature import VectorAssembler

# Transform the "Features" column(s) into the correct vector format
df = iris.select('PetalLength','PetalWidth')
vectorAssembler = VectorAssembler(inputCols=["PetalWidth"], outputCol="features")
data = vectorAssembler.transform(df)

# Split the data into training and test sets.
trainingData, testData = data.randomSplit([0.7, 0.3], 0.0)

# Configure the model.
lr = LinearRegression().setFeaturesCol("features").setLabelCol("PetalLength").setMaxIter(10)

# Train the model using the training data.
lrm = lr.fit(trainingData)

# Run the test data through the model and display its predictions for PetalLength.
predictions = lrm.transform(testData)
predictions.show(10)
```

Running the paragraph results in the following output:
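The note stops at visually inspecting the predictions. If you want a single number for how well the line fits, a small hedged addition (standard pyspark.ml.evaluation API, not part of the original notebook) could compute the root mean squared error on the test set:

```
%pyspark
from pyspark.ml.evaluation import RegressionEvaluator

# Compare the predicted PetalLength values against the actual ones in the test set.
evaluator = RegressionEvaluator(labelCol="PetalLength",
                                predictionCol="prediction",
                                metricName="rmse")
rmse = evaluator.evaluate(predictions)
print("Root Mean Squared Error (RMSE) on test data: %g" % rmse)
```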
The Regression Line

The "model" is really just a regression line through the data. It would be nice to have the slope and y-intercept of that line. It would also be nice to be able to visualize that line superimposed on our scatter plot. The following code retrieves the slope and y-intercept from the trained model and then uses them to add a regression line to the scatter plot of the petal length and width data.

```
%pyspark
%matplotlib inline
import matplotlib.pyplot as plt

# retrieve the slope and y-intercept of the regression line from the model.
slope = lrm.coefficients[0]
intercept = lrm.intercept
print("slope of regression line: %s" % str(slope))
print("y-intercept of regression line: %s" % str(intercept))

items = iris.collect()
petal_length = []
petal_width = []
for item in items:
    petal_length.append(item['PetalLength'])
    petal_width.append(item['PetalWidth'])

fig, ax = plt.subplots()
ax.scatter(petal_width,petal_length)
plt.xlabel("Petal Width")
plt.ylabel("Petal Length")
y = [slope*x+intercept for x in petal_width]
ax.plot(petal_width, y, color='red')
plt.show()
```

Running the paragraph results in the following output:

What's Next?

There is much, much more we can do. Obviously, we can load much larger and much more interesting datasets into IRIS. See, for example, the Kaggle datasets (https://www.kaggle.com/datasets). With a fully licensed IRIS we could configure sharding and see how Spark, running through the InterSystems Spark Connector, takes advantage of the parallelism sharding offers. Spark, of course, provides many more machine learning and data analysis algorithms. It supports several different languages, including Scala and R.

The article is considered as InterSystems Data Platform Best Practice.

Hi, your article helps me a lot. One more question: how can we get a fully licensed IRIS? It seems that there is no download page on the official site.

Hi! You can request a fully licensed IRIS on this page. If you want to try or use IRIS features with IRIS Community Edition:
- Try IRIS online
- Use IRIS Community from DockerHub on your laptop as is, or with different samples from Open Exchange. Check how to use the IRIS Docker image on the InterSystems Developers video channel.
- Or run Community Edition on Express IRIS images on AWS, GCP or Azure.
HTH
Article
Gevorg Arutiunian · Apr 27, 2019

How to Create New Database, Namespace and Web Application for InterSystems IRIS programmatically

Here is an ObjectScript snippet that lets you create a database, namespace and a web application for InterSystems IRIS programmatically:

```
set currentNS = $namespace
zn "%SYS"

write "Create DB ...",!
set dbName="testDB"
set dbProperties("Directory") = "/InterSystems/IRIS/mgr/testDB"
set status=##Class(Config.Databases).Create(dbName,.dbProperties)
write:'status $system.Status.DisplayError(status)
write "DB """_dbName_""" was created!",!!

write "Create namespace ...",!
set nsName="testNS"
//DB for globals
set nsProperties("Globals") = dbName
//DB for routines
set nsProperties("Routines") = dbName
set status=##Class(Config.Namespaces).Create(nsName,.nsProperties)
write:'status $system.Status.DisplayError(status)
write "Namespace """_nsName_""" was created!",!!

write "Create web application ...",!
set webName = "/csp/testApplication"
set webProperties("NameSpace") = nsName
set webProperties("Enabled") = $$$YES
set webProperties("IsNameSpaceDefault") = $$$YES
set webProperties("CSPZENEnabled") = $$$YES
set webProperties("DeepSeeEnabled") = $$$YES
set webProperties("AutheEnabled") = $$$AutheCache
set status = ##class(Security.Applications).Create(webName, .webProperties)
write:'status $system.Status.DisplayError(status)
write "Web application """_webName_""" was created!",!

zn currentNS
```

Also check these manuals:

- [Creating Database](https://irisdocs.intersystems.com/iris20181/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=Config.Databases)
- [Namespace](https://irisdocs.intersystems.com/iris20181/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=Config.Namespaces)
- [CSP Application](https://irisdocs.intersystems.com/iris20181/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=Security.Applications)

For web applications see the answer https://community.intersystems.com/post/how-do-i-programmatically-create-web-application-definition

Thank you! It's great having all 3 examples in one place :)

Just a quick note: I found that when creating a new database it was best to initially use SYS.Database so you can specify max size etc.:

```
s db=##class(SYS.Database).%New()
s db.Directory=directory
s db.Size=initialSize
s db.MaxSize=maxSize
s db.GlobalJournalState=3
s Status=db.%Save()
```

Then finalise with Config.Databases:

```
s Properties("Directory")=directory
s Status=##Class(Config.Databases).Create(name,.Properties)
s Obj=##Class(Config.Databases).Open(name)
s Obj.MountRequired=1
s Status=Obj.%Save()
```

This might not be the best way to do it, I'm open to improvements.
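For completeness, here is a hedged sketch of undoing the setup above with the corresponding Delete() methods of the same %SYS classes (not from the original post; verify the signatures against your version's class reference before relying on it). Note that deleting the database definition removes the configuration entry, not the IRIS.DAT file on disk:

```
set currentNS = $namespace
zn "%SYS"

// Remove the web application first, since it references the namespace.
set status = ##class(Security.Applications).Delete("/csp/testApplication")
write:'status $system.Status.DisplayError(status)

// Then remove the namespace and the database definition.
set status = ##class(Config.Namespaces).Delete("testNS")
write:'status $system.Status.DisplayError(status)
set status = ##class(Config.Databases).Delete("testDB")
write:'status $system.Status.DisplayError(status)

zn currentNS
```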
Article
Evgeny Shvarov · Feb 24, 2020

Dockerfile and Friends or How to Run and Collaborate to ObjectScript Projects on InterSystems IRIS

Hi Developers! Many of you publish your InterSystems ObjectScript libraries on Open Exchange and GitHub. But what do you do to ease the usage of, and collaboration on, your project for developers? In this article, I want to introduce an easy way to launch and contribute to any ObjectScript project just by copying a standard set of files to your repository. Let's go!

TLDR - copy these files from the repository into your repository:

- Dockerfile
- docker-compose.yml
- Installer.cls
- iris.script
- settings.json
- .dockerignore
- .gitattributes
- .gitignore

And you get the standard way to launch and collaborate on your project. Below is the long article on how and why this works.

NB: In this article, we will consider projects which are runnable on InterSystems IRIS 2019.1 and newer.

Choosing the launch environment for InterSystems IRIS projects

Usually, we want a developer to try the project/library and be sure that this will be a fast and safe exercise. IMHO the ideal approach to launch anything new fast and safely is the Docker container, which gives a developer a guarantee that anything he/she launches, imports, compiles and calculates is safe for the host machine: no system or code will be destroyed or spoiled. If something goes wrong, you just stop and remove the container. If the application takes an enormous amount of disk space, you wipe it out with the container and your space is back. If an application spoils the database configuration, you just delete the container with the spoiled configuration. Simple and safe like that. Docker containers give you safety and standardization.

The simplest way to run a vanilla InterSystems IRIS Docker container is to run an IRIS Community Edition image:

1. Install Docker desktop
2. Run the following in the OS terminal:

```
docker run --rm -p 52773:52773 --init --name my-iris store/intersystems/iris-community:2020.1.0.199.0
```

3. Then open the Management Portal in your host browser at:

http://localhost:52773/csp/sys/UtilHome.csp

4. Or open a terminal to IRIS:

```
docker exec -it my-iris iris session IRIS
```

5. Stop the IRIS container when you don't need it:

```
docker stop my-iris
```

OK! We run IRIS in a docker container. But you want a developer to install your code into IRIS and maybe make some settings. This is what we will discuss below.

Importing ObjectScript files

The simplest InterSystems ObjectScript project can contain a set of ObjectScript files like classes, routines, macros, and globals. Check the article on the naming convention and proposed folder structure. The question is: how do we import all this code into an IRIS container?

Here is the moment where the Dockerfile helps us: we can use it to take the vanilla IRIS container, import all the code from a repository into IRIS, and do some settings with IRIS if we need. We need to add a Dockerfile to the repo. Let's examine the Dockerfile from the ObjectScript template repo:

```
ARG IMAGE=store/intersystems/irishealth:2019.3.0.308.0-community
ARG IMAGE=store/intersystems/iris-community:2019.3.0.309.0
ARG IMAGE=store/intersystems/iris-community:2019.4.0.379.0
ARG IMAGE=store/intersystems/iris-community:2020.1.0.199.0
FROM $IMAGE

USER root

WORKDIR /opt/irisapp
RUN chown ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/irisapp

USER irisowner

COPY Installer.cls .
COPY src src

COPY iris.script /tmp/iris.script

# run iris and initial
RUN iris start IRIS \
    && iris session IRIS < /tmp/iris.script
```

The first ARG lines set the $IMAGE variable, which we then use in FROM.
This is suitable for testing/running the code on different IRIS versions, switching them just by changing which ARG line comes last before FROM to set the $IMAGE variable. Here we have:

```
ARG IMAGE=store/intersystems/iris-community:2020.1.0.199.0
FROM $IMAGE
```

This means that we are taking IRIS 2020 Community Edition build 199.

We want to import the code from the repository, which means we need to copy the files from the repository into the docker container. The lines below help to do that:

```
USER root

WORKDIR /opt/irisapp
RUN chown ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/irisapp

USER irisowner

COPY Installer.cls .
COPY src src
```

- USER root - here we switch the user to root to create a folder and copy files in docker.
- WORKDIR /opt/irisapp - in this line we set up the workdir into which we will copy files.
- RUN chown ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/irisapp - here we give the rights to the irisowner user and group, which run IRIS.
- USER irisowner - switching user from root to irisowner.
- COPY Installer.cls . - copying Installer.cls to the root of the workdir. Don't miss the dot here.
- COPY src src - copy source files from the src folder in the repo to the src folder in the workdir in docker.

In the next block we run the initial script, where we call the installer and ObjectScript code:

```
COPY iris.script /tmp/iris.script

# run iris and initial
RUN iris start IRIS \
    && iris session IRIS < /tmp/iris.script
```

- COPY iris.script /tmp/iris.script - we copy iris.script into the /tmp directory. It contains the ObjectScript we want to call to set up the container.
- RUN iris start IRIS \ - start IRIS.
- && iris session IRIS < /tmp/iris.script - start an IRIS terminal and feed the initial ObjectScript to it.

Fine! We have the Dockerfile, which imports files into docker. But we came across two other files: Installer.cls and iris.script. Let's examine them.

Installer.cls

```
Class App.Installer
{

XData setup
{
<Manifest>
  <Default Name="SourceDir" Value="#{$system.Process.CurrentDirectory()}src"/>
  <Default Name="Namespace" Value="IRISAPP"/>
  <Default Name="app" Value="irisapp" />

  <Namespace Name="${Namespace}" Code="${Namespace}" Data="${Namespace}" Create="yes" Ensemble="no">
    <Configuration>
      <Database Name="${Namespace}" Dir="/opt/${app}/data" Create="yes" Resource="%DB_${Namespace}"/>
      <Import File="${SourceDir}" Flags="ck" Recurse="1"/>
    </Configuration>
    <CSPApplication Url="/csp/${app}" Directory="${cspdir}${app}" ServeFiles="1" Recurse="1" MatchRoles=":%DB_${Namespace}" AuthenticationMethods="32"/>
  </Namespace>
</Manifest>
}

ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
  #; Let XGL document generate code for this method.
  Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "setup")
}

}
```

Frankly, we do not need Installer.cls just to import files - that could be done with one line.
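That one-liner is not shown in the article; as a minimal sketch (assuming the source was copied to /opt/irisapp/src and using the standard $System.OBJ.LoadDir API with the recurse flag), it might look like:

```
do $System.OBJ.LoadDir("/opt/irisapp/src", "ck", , 1)
```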
But often, besides importing code, we need to set up the CSP app, introduce security settings, and create databases and namespaces. In this Installer.cls we create a new database and namespace with the name IRISAPP and create the default /csp/irisapp application for this namespace. All this we perform in the <Namespace> element:

```
<Namespace Name="${Namespace}" Code="${Namespace}" Data="${Namespace}" Create="yes" Ensemble="no">
  <Configuration>
    <Database Name="${Namespace}" Dir="/opt/${app}/data" Create="yes" Resource="%DB_${Namespace}"/>
    <Import File="${SourceDir}" Flags="ck" Recurse="1"/>
  </Configuration>
  <CSPApplication Url="/csp/${app}" Directory="${cspdir}${app}" ServeFiles="1" Recurse="1" MatchRoles=":%DB_${Namespace}" AuthenticationMethods="32"/>
</Namespace>
```

And we import all the files from SourceDir with the Import tag:

```
<Import File="${SourceDir}" Flags="ck" Recurse="1"/>
```

SourceDir here is a variable, which is set to the src folder of the current directory:

```
<Default Name="SourceDir" Value="#{$system.Process.CurrentDirectory()}src"/>
```

Installer.cls with these settings gives us confidence that we create a clean new database IRISAPP into which we import arbitrary ObjectScript code from the src folder.

iris.script

Here you are welcome to provide any initial ObjectScript setup code you want to run when your IRIS container starts. E.g., here we load and run Installer.cls and then unexpire user passwords, just to avoid the initial prompt to change the password, since we don't need that prompt for development:

```
; run installer to create namespace
do $SYSTEM.OBJ.Load("/opt/irisapp/Installer.cls", "ck")
set sc = ##class(App.Installer).setup()
zn "%SYS"
Do ##class(Security.Users).UnExpireUserPasswords("*")
; call your initial methods here
halt
```

docker-compose.yml

Why do we need docker-compose.yml - couldn't we just build and run the image with the Dockerfile alone? Yes, we could. But docker-compose.yml simplifies life. Usually, docker-compose.yml is used to launch several docker images connected to one network, but it can also make launches of a single docker image easier when we deal with a lot of parameters. You can use it to pass parameters to docker, such as port mappings, volumes, and VSCode connection parameters.

```
version: '3.6'

services:
  iris:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    ports:
      - 51773
      - 52773
      - 53773
    volumes:
      - ~/iris.key:/usr/irissys/mgr/iris.key
      - ./:/irisdev/app
```

Here we declare the service iris, which uses the docker file Dockerfile and which exposes the following ports of IRIS: 51773, 52773, 53773. This service also maps two volumes: iris.key from the home directory of the host machine to the IRIS folder where it is expected, and the root folder of the source code to the /irisdev/app folder.

Docker-compose gives us a shorter, unified command to build and run the image, whatever parameters you set up in docker-compose. In any case, the command to build and launch the image is:

```
$ docker-compose up -d
```

and to open an IRIS terminal:

```
$ docker-compose exec iris iris session iris
Node: 05a09e256d6b, Instance: IRIS
USER>
```

Also, docker-compose.yml helps to set up the connection for the VSCode ObjectScript plugin.

.vscode/settings.json

The part which relates to the ObjectScript addon connection settings is this:

```
{
    "objectscript.conn" :{
      "ns": "IRISAPP",
      "active": true,
      "docker-compose": {
        "service": "iris",
        "internalPort": 52773
      }
    }
}
```

Here we see settings which are different from the default settings of the VSCode ObjectScript plugin.
Here we say that we want to connect to the IRISAPP namespace (which we create with Installer.cls):

```
"ns": "IRISAPP",
```

and there is a docker-compose setting which tells VSCode that, inside the service "iris" of the docker-compose file, it should connect to whatever port 52773 is mapped to:

```
"docker-compose": {
    "service": "iris",
    "internalPort": 52773
}
```

If we check what we have for 52773, we see that no host mapping is defined for 52773:

```
ports:
  - 51773
  - 52773
  - 53773
```

This means that a random available port on the host machine will be taken, and VSCode will connect to this IRIS on docker via that random port automatically. This is a very handy feature, because it gives you the option to run any number of docker images with IRIS on random ports and have VSCode connected to them automatically.
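If you ever need the assigned port yourself (for example, to open the Management Portal in a browser), a hedged one-liner using the standard docker-compose CLI - not part of the original template - prints the binding; the second line is example output only, since the port is random:

```
$ docker-compose port iris 52773
0.0.0.0:32774
```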
What about other files?

We also have:

- .dockerignore - a file which you can use to filter out host machine files you don't want copied into the docker image you build. Usually .git and .DS_Store are mandatory lines.
- .gitattributes - attributes for git which unify line endings for ObjectScript files in sources. This is very useful if the repo is collaborated on by Windows and Mac/Ubuntu owners.
- .gitignore - files for which you don't want git to track the changes history. Typically some hidden OS-level files, like .DS_Store.

Fine! How to make your repository docker-runnable and collaboration friendly?

1. Clone this repository.
2. Copy all these files:
- Dockerfile
- docker-compose.yml
- Installer.cls
- iris.script
- settings.json
- .dockerignore
- .gitattributes
- .gitignore
to your repository. Change the COPY src line in the Dockerfile to match the directory with ObjectScript in the repo you want to import into IRIS (or don't change it if you have it in the /src folder).

That's it. And everyone (you too) will have your code imported into IRIS in a new IRISAPP namespace.

How will people launch your project

The algorithm to execute any ObjectScript project in IRIS would be:

1. Git clone the project locally
2. Run the project:

```
$ docker-compose up -d
$ docker-compose exec iris iris session iris
Node: 05a09e256d6b, Instance: IRIS
USER>zn "IRISAPP"
```

How would any developer contribute to your project

1. Fork the repository and git clone the forked repo locally
2. Open the folder in VSCode (they also need the Docker and ObjectScript extensions installed in VSCode)
3. Right-click on docker-compose.yml -> Restart - VSCode ObjectScript will automatically connect and be ready to edit/compile/debug
4. Commit, push and pull-request changes to your repository

Here is a short gif on how this works:

That's it! Happy coding!

The link to the Management Portal in the article was corrected from http://localhost:52773/csp/UtilHome.csp. The right link is: http://localhost:52773/csp/sys/UtilHome.csp

Thanks!

@Artem.Reva could you please explain the command docker-compose exec iris iris session iris? I understand exec, but what is "iris iris session iris"?

The first iris after exec is the service name from docker-compose.yml; then comes the command which has to be executed. The iris command is a replacement for ccontrol from Cache/Ensemble, session is a subcommand of the iris tool, and the last iris is the name of the IRIS instance inside the container.

Is putting all this in the main directory of the repository necessary? I believe the two git files (.gitignore and .gitattributes) need to be there. But perhaps all files related to docker can be put in a "Docker" directory to avoid adding so many files to the main directory. My main fear is people seeing all these files and not knowing where to start.

Hi Peter!

Thanks for the question. In addition to .gitignore and .gitattributes, .vscode/settings.json should be in the root too (@Dmitry.Maslennikov please correct me if I'm wrong). All the rest:

- Dockerfile
- docker-compose.yml
- Installer.cls
- irissession.sh

could live in a dedicated folder. BUT! We use the Dockerfile to COPY Installer.cls and source files from the repo to the image we build, and the Dockerfile sees the files which sit in the same folder or in subfolders. Specialists, please correct me here if I'm wrong. So the Dockerfile could possibly live inside the source folder - not sure this is what you want to achieve.

There are some possible issues with having docker-related files in a dedicated folder. When you would like to start an environment with docker-compose, you can do it with a command like this:

```
docker-compose up -d
```

but it will work only if the docker-compose.yml file name has not changed and it lies right in the current folder. If you change its location or name, you will have to specify the new place:

```
docker-compose -f ./docker/docker-compose.yml up -d
```

It became not so simple, right?

OK, next about the Dockerfile. When you build a docker image, you have to specify the context. So, the command below just uses the file Dockerfile in the current folder, and uses the current folder as the context for the build:

```
docker build .
```

To build a docker image with a Dockerfile placed somewhere else, you should specify it, supposing you still would like to have the current folder as the context:

```
docker build -f ./docker/Dockerfile .
```

Any other files in the root, such as Installer.cls, irissession.sh or any other files which should be used during docker build, have to be available from the specified context folder. And you can't specify more than one context. So, any of those files should have some parent folder at least, and why not the root of the project. With docker-compose.yml, we forget about the docker build command, but we still have to care about docker-compose.

When I try this on my laptop I am getting the following error:

The terminal process terminated with exit code: 1

Looking at the terminal output everything seems normal up to that point. Have you encountered this issue before? David

Hi David! Could you please commit it and push to the repo on GitHub and I'll try it on my laptop? If not possible - join the discord.

Please provide more information from the log.

Hi David, could you check it with the latest beta version? It happens. If you want to open an IRIS terminal you can try the following:

Evgeny; This did work so thanks for the tip! David

Hi All! I updated the article to introduce the iris.script approach - the Dockerfile is smaller, the ObjectScript is clearer.

💡 This article is considered as InterSystems Data Platform Best Practice.

I have followed these steps both from the GitHub readme file and this more detailed article and I'm having an issue. To start, however, know I was able to follow the instructions on the DockerHub page for IRIS and I got my container running and VS Code connected and working. Since I got that working, I decided to better understand starting with this template. I downloaded the objectscript-docker-template from GitHub to my local drive K:\objectscript-docker-template. I ran the commands to install and run the IRIS container in Git Bash and I see it running in Docker Desktop. I can go to the container's command line and type 'iris TERMINAL IRIS' and I'm brought to the USER namespace. Back in the project location, I run 'docker-compose up -d' and I get a series of errors:

```
$ docker-compose up -d
Creating objectscript-docker-template_iris_1 ...
error

ERROR: for objectscript-docker-template_iris_1  Cannot create container for service iris: status code not OK but 500: {"Message":"Unhandled exception: Filesharing has been cancelled","StackTrace":" at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src . . .
```

It goes on and on so I won't paste the whole thing. If I stop the Docker container it's also deleted. Has anyone had a similar issue or can see right off the bat what I might be doing wrong? Thanks, Mike

Hi Mike! It looks like your docker space is used up. Call

```
docker system prune -f
```

and see if this fixes the problem. Caution! This will delete unused containers on your laptop; make sure you don't have sensitive data or uncommitted code in them. Usually you don't.

You may be facing an access issue: check in the docker settings that you correctly provided the list of resources that can be shared.

@Evgeny Shvarov - "Check the article on the naming and proposed folder structure." Is this supposed to link to a specific article?

In a recent example I posted, I had the need to extend the naming and proposed folder structure. It was obvious if you were reading the downloaded repository. The related article was an advertisement and a "heads up" that it just wasn't the default structure as usual.

Hi Ben! Thanks for the heads up. We have several so far:

- the proposed folder structure
- the naming convention for packages
- the naming convention for "community" packages

Thanks! I see you updated the article with links. Very helpful :)
Announcement
Anastasia Dyubaylo · Feb 14, 2020

[February 20, 2020] Office Hours: InterSystems IRIS 2020.1 Preview

Dear Community,

In advance of the upcoming release of InterSystems IRIS 2020.1 and InterSystems IRIS for Health 2020.1, we're pleased to invite you to the "Office Hours: InterSystems IRIS 2020.1 Preview" webinars. These Q&A sessions will provide a forum for you to ask questions and learn about the latest product features in the upcoming releases. The webinars will feature members of the InterSystems product management team, who will answer questions and provide additional information about the new features and enhancements.

We will be hosting two sessions on Thursday, February 20, 2020. Reserve your spot by clicking one of the buttons below for the appropriate time slot.

➡️ 8AM (EST) WEBINAR
➡️ 1PM (EST) WEBINAR

In preparation for the webinar, we encourage you to try the InterSystems IRIS or IRIS for Health 2020.1 preview kits, which are available to download from WRC Distribution*. Release Notes outlining key new features are also available to view online:

- InterSystems IRIS 2020.1 Release Notes
- InterSystems IRIS for Health 2020.1 Release Notes

* WRC login credentials required

We are waiting for you at our webinar! Register now! 👍🏼

Looks really good. Are there any good videos for InterSystems API Management, InterSystems Cloud Manager, and Machine Learning & NLP?

New video content! The recording of this webinar is now available on the InterSystems Developers YouTube channel. Enjoy watching the video! 👍🏼
Announcement
Anastasia Dyubaylo · Feb 22, 2023

[Video] Real-Life Experiences Migrating a Caché Application Portfolio to InterSystems IRIS

Hey Developers,

Enjoy watching the new video on InterSystems Developers YouTube:

⏯ Real-Life Experiences Migrating a Caché Application Portfolio to InterSystems IRIS @ Global Summit 2022

Learn about AppServices' experiences migrating our collection of applications to InterSystems IRIS. We'll cover migration-related topics at the OS, platform, and application levels.

Presenters:
🗣 @Matthew.Giesmann, Developer, Application Services, InterSystems
🗣 @Wangyi.Huang, Technical Specialist, Application Services, InterSystems

Enjoy and stay tuned! 👍
Article
Kate Lau · Mar 15, 2023

Walk through of deploying InterSystems API Manager (IAM) on AWS EC2

In this article, I walk through my steps for deploying IAM on my EC2 instance (Ubuntu).

What is IAM?

IAM is the InterSystems API Manager. You may refer to the link below to get more of an idea about IAM:
https://docs.intersystems.com/components/csp/docbook/Doc.View.cls?KEY=PAGE_apimgr

Before deploying IAM

- Check the license of your API host.
- Enable the IAM user.

Reference: https://community.intersystems.com/post/introducing-intersystems-api-manager

Deploy IAM

1. Download the image from the following link: https://wrc.intersystems.com/wrc/coDistGen.csp. I downloaded the following version to my PC.

2. Upload the image to my EC2. I used the scp command to upload the image to my cloud instance.

3. Make sure docker and docker compose are installed. If not, please refer to: https://docs.docker.com/engine/install/ubuntu/

4. Untar the image file:

```
tar zpxvf IAM-3.0.2.0-4.tar.gz
```

5. Load the image into docker:

```
sudo docker load -i iam_image.tar
```

6. Run the iam-setup.sh script:

```
source ./iam-setup.sh
```

7. Edit the file docker-compose.yml. In order to reach the IAM UI from outside the EC2 environment, replace localhost with the EC2 public address in the parameters KONG_PORTAL_GUI_HOST and KONG_ADMIN_GUI_URL:

```
vi docker-compose.yml
```

8. Start the container:

```
sudo docker compose up -d
```

9. Check the IAM UI. You can access the UI of IAM at the following link:

http://yourEC2publicAddress:8002/overview

In fact, I encountered an error: the variables ISC_IRIS_URL, ISC_IAM_IMAGE and ISC_CA_CERT were not being passed through. I suspect that iam-setup.sh was not working very well. As a workaround, I hard-coded the variables in docker-compose.yml and ran sudo docker compose up -d again.

Great article, Kate! Could you also help put this on the CN DC? Thx a lot! Michael
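To illustrate the workaround, here is a hedged, hypothetical sketch of what hard-coding those variables in docker-compose.yml might look like. The service name, image tag, and especially the license-URL format are placeholders, not from the original post; take the exact keys and values from the docker-compose.yml shipped with your IAM distribution:

```yaml
# Hypothetical fragment - replaces the ${ISC_*} substitutions that
# iam-setup.sh would normally fill in with literal values.
services:
  iam:
    image: intersystems/iam:3.0.2.0-4          # was: ${ISC_IAM_IMAGE}
    environment:
      # URL IAM uses to fetch its license from the IRIS API host (format may differ)
      ISC_IRIS_URL: "IAM:yourPassword@yourIrisHost:52773/api/iam/license"
      ISC_CA_CERT: ""                          # set if your IRIS endpoint uses TLS
```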
Announcement
Larry Finlayson · Oct 5, 2022

Managing InterSystems Servers October 24-28, 2022 - Registration space available

Managing InterSystems Servers
October 24-28, 2022, 9:00am-5:00pm US-Eastern Time (EDT)

This five-day virtual course teaches system and database administrators how to install, configure and secure InterSystems server software, configure for high availability and disaster recovery, and monitor the system. Students also learn troubleshooting techniques. This course is applicable to both InterSystems IRIS and Caché. Although the course is mostly platform independent, students can complete the exercises using either Windows or Ubuntu.

Self Register Here
Announcement
Evgeny Shvarov · Sep 15, 2022

Technical Bonuses Results for InterSystems Interoperability Contest: Building Sustainable Solutions 2022

Hi developers!

Here is the score of technical bonuses for participants' applications in the InterSystems Interoperability Contest: Building Sustainable Solutions 2022!

Nominal bonus values: Sustainability 5, Sustainability Dataset 3, Interoperability Production 3, Custom Interoperability Adapter 2, PEX 3, Embedded Python 3, Docker 2, ZPM 2, Online Demo 2, Code Quality 1, First Article on DC 2, Second Article on DC 1, Video on YouTube 3 - for a maximum total of 32.

| Project | Bonuses awarded | Total |
|---|---|---|
| appmsw-banks-ru | 3 + 2 + 2 + 2 | 9 |
| n8n-nodes-iris | 2 + 2 + 3 | 7 |
| iris-energy-isodata | 5 + 3 + 3 + 3 + 2 + 2 + 1 + 2 + 3 | 24 |
| interoperability-soap | 3 + 3 + 2 + 2 + 2 | 12 |
| samba-iris-adapter | 3 + 2 + 3 + 2 + 2 + 1 + 2 + 3 | 18 |
| interoperability-test | 3 + 2 + 2 + 2 + 1 + 2 + 1 + 3 | 16 |
| iris-megazord | 5 + 3 + 2 + 2 + 3 | 15 |
| production-monitor | 3 + 2 + 2 + 1 | 8 |
| Sustainable Machine Learning | 5 + 3 + 3 + 3 + 2 + 2 + 2 | 20 |
| Recycler | 5 + 2 + 2 | 9 |

Bonuses are subject to change upon updates. Please claim yours in the comments below or in the Discord chat.

I claim the YouTube Video bonus for interoperability-test. I have added the link in the GitHub ReadMe. Does that automatically update on Open Exchange or do I need to make a new release? https://youtu.be/LqyRVxpmxGY I wonder how some apps have a link on Open Exchange to vote in the contest?

I claim the Embedded Python bonus: the SambaService class uses Python: https://raw.githubusercontent.com/yurimarx/samba-iris-adapter/master/src/dc/samba/SambaService.cls

Hi @Evgeny.Shvarov, thanks for sharing the technical bonuses. Please note that the iris-energy-isodata app has an online demo. Thanks

Hi @Muhammad.Waseem! Could you please add the link to the online demo to the OEX app here: https://openexchange.intersystems.com/package/iris-energy-isodata

Hi @Evgeny.Shvarov! The link is already there.

Hi @Evgeny.Shvarov, please note that the iris-energy-isodata app has a YouTube video as well. Thanks

I just published an article about interoperability-soap here: https://community.intersystems.com/post/background-story-around-interoperability-soap I was not able to edit the app on Open Exchange to put the article link there :-(
Question
Evgeny Shvarov · Apr 9, 2017

Best practices for filtering widgets from different cubes in InterSystems DeepSee

Hi!

Sometimes I need to filter a widget on a dashboard from a different cube, and I face the following problem. Widget A refers to a query against Cube A, and I want to use it to filter Widget B. Widget B's pivot refers to Cube B, which has different dimensions for the same data. E.g. Cube A has the dimension Author and Cube B has the dimension Member for the same data, so there is no way to filter Widget B from Widget A.

Actually, once we filter a given Widget B with another Widget A, we add a filter expression to Widget B's MDX query which looks like a member expression from Cube A, like:

```
[Outlet].[H1].[Region].&[Asia]
```

Is there any way to alter the filter expression for Widget B, just changing the value of the last part (Asia in this case) of the filter expression?

One way to do this is by using a pivot variable. Create the same pivot variable "Region" in both pivots on which your widgets are based. These pivot variables should return the members - in your example Asia, Europe, N. America, S. America. You can define them manually, in a termlist, or use a KPI to retrieve them. For the example in the screenshot below I created a HoleFoods2 cube with an Outlet2.H1.Region2 level. This level is "incompatible" with the Outlet.H1.Region level in HoleFoods. In my manual Region pivot variable I simply defined two regions, which can be selected manually.

Once you have these two pivot variables, create a calculated dimension on each pivot using the pivot variable. In your example, in HoleFoods the expression should be Outlet.[$variable.Region]. Place the calculated dimension on Filters.

This is how I did it in HoleFoods:

and this is how I did it in HoleFoods2:

Finally, add an ApplyVariable control on one of your widgets with "*" as the target. Selecting a region will filter both widgets.
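To make the mechanics concrete: because each pivot's calculated dimension embeds $variable.Region, each cube resolves the selected value against its own level. A hedged, hypothetical illustration (using the HoleFoods sample's Amount Sold measure and the %FILTER clause) of what the two widgets' queries effectively become when Region = "Asia":

```
SELECT Measures.[Amount Sold] ON 0 FROM [HoleFoods] %FILTER [Outlet].[H1].[Region].&[Asia]

SELECT Measures.[Amount Sold] ON 0 FROM [HoleFoods2] %FILTER [Outlet2].[H1].[Region2].&[Asia]
```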
Article
Kimberly Dunn · Sep 26, 2017

InterSystems IRIS Data Platform: the Power Behind a Better Online Shopping Experience

This summer the Database Platforms department here at InterSystems tried out a new approach to our internship program. We hired 10 bright students from some of the top colleges in the US and gave them the autonomy to create their own projects which would show off some of the new features of the InterSystems IRIS Data Platform. The team consisting of Ruchi Asthana, Nathaniel Brennan, and Zhe "Lily" Wang used this opportunity to develop a smart review analysis engine, which they named Lumière. As they explain:

"A rapid increase in Internet users along with the growing power of online reviews has given birth to fields like opinion mining and sentiment analysis. Today, most people seek positive and negative opinions of a product before making a purchase. Customers find information from reviews extremely useful because they want to know what people are saying about the product they want to buy. Information from reviews is also crucial to marketing teams, who are constantly seeking customer feedback to improve the quality of their products. While it is universal that people want feedback about online products, they are often not willing to read through all the hundreds or even thousands of customer reviews that are available. Therefore our tool extracts the information both vendors and customers need so they can make the best decision without having to read through any reviews."

That sounds really great, doesn't it? Check out the rest of their whitepaper to get more details about what they were able to accomplish and how InterSystems IRIS enabled them to do it!
Article
Evgeny Shvarov · Nov 3, 2017

DeepSee Web: InterSystems Analytics Visualization with AngularJS. Part 1

There are several options for delivering a user interface (UI) for DeepSee BI solutions. The most common approaches are:

- Use native DeepSee dashboards: you get a web UI in Zen and deliver it in your web apps.
- Use the DeepSee REST API: you get the data and build your own UI widgets and dashboards.

The first approach is good because of the possibility to build BI dashboards relatively fast without coding, but you are limited to a preset widgets library, which is expandable only with a lot of development effort. The second provides a way to use any comprehensive js framework (D3, Highcharts, etc.) to visualize your DeepSee data, but you need to code widgets and dashboards on your own.

Today I want to tell you about yet another approach which combines both listed above and provides an Angular-based web UI for DeepSee dashboards - the DeepSee Web library.

What is it?

DeepSee Web (DSW) is an Angular.js web app which renders the DeepSee dashboards available to a user in a given namespace, using Highcharts.js, OpenStreetMap and some self-written js widgets.

How it works

DSW requests the dashboards and widgets metadata available in a namespace using the MDX2JSON library. To visualize a dashboard, DSW reads the list of widgets and their types and uses js-widget analogs, which have been implemented for almost every DeepSee widget. It instantiates the js widgets listed in a dashboard, requests the data according to the widgets' datasources using the DeepSee REST API and MDX2JSON, retrieves the data in JSON format and performs the visualization.

Why is it cool?

DSW is great because:

- You can develop DeepSee dashboards using the standard editor and then have an Angular UI without a single line of coding.
- You can easily expand the library of widgets with any js library or a self-written widget.
- You can introduce smart-tiles on DSW covers.
- You can display the data on a map.

What about mobile devices?

DSW works pretty well in mobile browsers (Safari, Chrome), and it has a standalone DeepSight iOS app. Here are some screenshots:

How much does it cost?

It is free. You are very welcome to download the latest release, open issues, make forks and send pull requests with fixes and new features.

Is it suitable for production usage?

Yes. DSW was started in 2014 and has had about 60 releases to date. Dozens of companies use DSW successfully in their DeepSee solutions. For example, we are using DSW for the Developer Community:

But be aware that according to the DSW license (MIT license) you are using it at your own risk.

Is it supported?

It is supported by the community. You are very welcome to open issues, make forks and send pull requests with fixes and new features. Key contributors are @Anton.Gnibeda, @Nikita.Savchenko and @Eduard.Lebedyuk.

How to install?

It's easy. First, install MDX2JSON. Download the latest installer from releases, import/compile installer.xml in the USER namespace, and run:

```
USER> D ##class(MDX2JSON.Installer).setup()
```

It will download the latest release from GitHub, create the MDX2JSON database and namespace, map it to %All and create the /MDX2JSON web app. To check that everything installed well, open localhost:57772/MDX2JSON/Test. If everything is fine it will return something like this:

```
{
  "DefaultApp":"\/dsw",
  "Mappings": {
    "Mapped":["%SYS","MDX2JSON","SAMPLES","USER"],
    "Unmapped":["DOCBOOK"]
  },
  "Parent":"18872dc518d8009bdc105425f541953d2c9d461e",
  "ParentTS":"2017-05-26 16:20:17.307",
  "Status":"OK",
  "User":"",
  "Version":2.2
}
```

Then install DSW. Download the latest release xml and import/compile it into the USER namespace.
Run:

```
USER> Do ##class(DSW.Installer).setup()
```

It will create the /dsw app and install all the js files in the /csp/dsw folder. Open localhost:57772/dsw/index.html#!/?ns=SAMPLES to see how it works.

Known issues: on some systems you need to turn on UTF8 support for non-CSP files in the CSP Gateway. Run the following in the terminal to turn it on:

```
USER> set ^%SYS("CSP","DefaultFileCharset")="utf-8"
```

Any demos?

Sure! Online demo for Samples, login=dswdemo, password=dswdemopass! Online demo for Maps, login=dswdemo, password=dswdemopass!

When should I install it?

Now! :) To be continued, stay tuned!

Evgeny, are there any plans to port it to Angular 2+? E.g. connecting to ngx-charts?

Hi, Dan! What are the benefits of using ngx-charts? Better performance? A wider chart library?

Also available on Open Exchange. The new version has an independent mobile UI for DeepSee dashboards.

Hello, I have some trouble setting up DSW with IIS. Do you have a guide? Everything works right with port 57772 (Apache private server), but I need to use port 80 (IIS live server). Thanks in advance. Best

Hi Jose! It's not a matter of DSW, it's a matter of setting up IRIS or Caché with an external web server (which is recommended, of course). E.g. check this discussion or the documentation.

Thanks for your help Evgeny, now it works right!!

Looks like the demo login is not working anymore.

The demo login was fixed. Thank you for the feedback! login=dswdemo, password=dswdemopass

I have a very important question: can this extension be used in a productive/commercial environment? You wrote: "But be aware that according to the DSW license (MIT license) you are using it at your own risk." But the extension uses HighchartsJS, and that is only free for non-commercial use. Unfortunately, no statement was made, and it may be a dangerous license trap, if my guess is correct.

Hi T! Thanks for this, you are right. Highcharts.js changes license terms through time. I haven't found the license agreement applicable to Highcharts.js 3.0, which is delivered with the current DSW release. I remember it was something like "You pay for the support". The license for the current version, Highcharts.js 7.2, is the following and can be purchased for commercial usage.

Images are broken in this article and Part 2.

Which ones? I don't see any broken images. Could you please point to URLs?

All images fail to load when privacy protection is enabled in Firefox (which is enabled by default). Copy them onto the community maybe?

Where is the demo?

Hi Gethsemani! It looks like we have shut down this demo server. But you can test it on your laptop easily. Testing with Docker:

0. Install Docker desktop.
0.1. Install VSCode, open it and install the Docker and ObjectScript plugins.
1. Download this archive.
2. Unpack and open the folder in VSCode.
3. Right-click on docker-compose.yml and choose Compose Restart. Wait until docker says "done" in the terminal.
4. Connect VSCode - click on the "disconnected" status bar item and then choose "Refresh Connection".
5. You will see the IRIS terminal. Open ZPM (package manager) and install the samples-bi module:

```
Node: 04319ab688f6, Instance: IRIS
IRISAPP>zpm
zpm: IRISAPP>install samples-bi
```

6. Then install DeepSee Web:

```
zpm: IRISAPP>install dsw
```

7. Click on the VSCode status bar again and open the Management Portal. It will be another port in your case. Open the DSW web app in the IRISAPP namespace with default credentials. Then you will see DSW working in action.

HTH

Hi Gethsemani! Recently I crafted a small dashboard on DeepSeeWeb and it is available online. You can drill down into China and the USA.
Article
Developer Community Admin · Oct 21, 2015

InterSystems Caché Benchmark: Achieving Millions of Database Accesses per Second Inexpensively

Abstract

In a recent benchmark test of an application based on InterSystems Caché, a sustainable rate of 8.9 million database accesses/second, with peaks of 16.9 million database accesses/second, was achieved. These results were from a test performed on a connected system of eight application servers, using Intel Xeon 5570 processors, and running Linux as the operating system. This benchmark shows that:

- Caché can achieve unheard-of levels of performance for an object database. It provides full persistence of data at speeds that are normally only reached by in-memory databases.
- Caché demonstrates this high performance while running on inexpensive servers.
- Caché offers excellent horizontal scalability, with throughput scaling linearly as application servers are added.
Article
Evgeny Shvarov · Mar 20, 2018

Analysing Developer Community Activity Using InterSystems Analytics Technology DeepSee

Hi, Community!

I'm sure you are using the Developer Community analytics built with InterSystems Analytics technology DeepSee: you can find DC Analytics in the InterSystems->Analytics menu. DC Analytics shows interactive dashboards on key figures of DC entities: Posts, Comments, and Members.

Since last week, this analytics project has been available to everyone, with source code and data, on the DC Github! Everyone (who is familiar with the InterSystems data platform ;) can download it, install it on Caché/Ensemble/IRIS, load the data, and have the same analytics locally. And, what is more interesting IMHO, you can improve the analytics and submit a pull request on Github!

So! What is the data? The schema of persistent classes is not complex and consists of just 3 classes: Posts, Comments, and Members. See the diagram built in ClassExplorer, courtesy of @Nikita.Savchenko.

Another portion of persistent data is the daily data on views and achievements of members. This data can be imported and is available in the Releases section in the form of a gz with globals in xml format.

Installation

How do you get this on your InterSystems Data Platform instance? You need an IDP instance of 2014.1 or newer.

1. Install MDX2JSON (30 sec).
2. Install DeepSee Web (DSW) (1 minute).
3. Create a new Namespace (name it e.g. DCANALYTICS), enable DeepSee and iKnow.
4. Go to Releases and download the latest DCAnalytics_classes*.xml file. Import it into the DCANALYTICS namespace, e.g. with $System.OBJ.Load(), Atelier, or the Class Import UI in the Control Panel.
5. Start the data import and installation. Call the setup method and provide the path to the DCAnalytics_globals.gz file:

DCANALYTICS> do ##class(Community.Utils).setup("DCAnalytics_globals.gz")

The setup does the following:
1. Imports globals for persistent classes (without indices)
2. Builds indices
3. Builds cubes

6. Set up tiles for DSW. Download from Releases the latest DSW.config.and.iKnow.files.zip, unpack it, and move the file dcanalytics.json from the archive to <your_instance>/CSP/dcanalytics/configs/. The name of dcanalytics.json should match the name of the namespace.

DONE! Open the url <server:port>/dsw/index.html?ns=DCANALYTICS and get your own DC Analytics.

There are also dashboards which use iKnow to show links between terms and articles. To set up the iKnow part of the solution, download DSW.config.and.iKnow.files.zip from Releases and move the files sets.txt and backlist.txt from the archive to <your_instance>/Mgr/DCANALYTICS/. Then run the following in the terminal:

DCANALYTICS> do ##class(Community.iKnow.Utils).setup()
DCANALYTICS> do ##class(Community.iKnow.Utils).update()
DCANALYTICS> do ##class(Community.Utils).UpdateCubes()

Open the iKnow dashboard at <server:port>/dsw/index.html#!/d/iKnow.dashboard?ns=DCANALYTICS and you will see something like this:

What's Next? - Make it better!

Hopefully, like any good developer, you don't like the implementation, don't like the style, or find the dashboards awful or insufficient - this is great! You can fix it! Let the collaboration start! So, fork it, make it better, and submit a pull request. We'll review it, merge it, and introduce it to the solution. Make your own, better Developer Community Analytics with InterSystems Data Platform!
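For convenience, here are the class import, data load, and iKnow setup condensed into a single terminal session. The method names are taken verbatim from the steps above; the /tmp paths are placeholders for wherever you saved the release files:

DCANALYTICS> do $System.OBJ.Load("/tmp/DCAnalytics_classes.xml", "ck")
DCANALYTICS> do ##class(Community.Utils).setup("/tmp/DCAnalytics_globals.gz")
DCANALYTICS> do ##class(Community.iKnow.Utils).setup()
DCANALYTICS> do ##class(Community.iKnow.Utils).update()
DCANALYTICS> do ##class(Community.Utils).UpdateCubes()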
Hi, guys! There is a community project, DSW Reports, which provides a way to prepare and send DeepSee Web reports in PDF on a schedule. So we introduced this feature to the DC online analytics to produce a weekly PDF report - like this one. If you want to receive this report on Mondays too, please leave a comment here and we'll put you on the list. And you are very welcome to report issues on how we can improve it, or to provide your pull requests.
Announcement
Anastasia Dyubaylo · Jul 6, 2018

(Webinar July 24) Sharding as the basis of Scaling in InterSystems IRIS Data Platform

Hi Everybody! We are pleased to invite you to the upcoming webinar "Sharding as the basis of Scaling in InterSystems IRIS Data Platform" on the 24th of July at 10:00 (Moscow time)! The webinar focuses on sharding technology, which offers new capabilities for horizontal scalability of data in the InterSystems IRIS platform. Parallelizing data storage and processing power allows you to dynamically scale your applications. In particular, the following topics will be discussed in detail:

- sharding basics;
- use cases where sharding is advisable;
- rapid creation of a sharded cluster with ICM;
- creating a sharded cluster live;
- advantages of using sharding with Apache Spark and JDBC.

Presenter: @Vladimir.Prushkovskiy, InterSystems Sales Engineer. Audience: The webinar is designed for developers. Note: The language of the webinar is Russian. We are waiting for you at our webinar! Register now!
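For a taste of what sharding looks like from the application side (not part of the announcement itself), here is a minimal, hypothetical ObjectScript sketch. It assumes a sharded cluster is already configured and the namespace is shard-enabled; Demo.Orders and its columns are invented names, and the SHARD KEY clause tells IRIS how to distribute rows across the data nodes.

    // Hypothetical sketch: create a sharded table via dynamic SQL.
    // Requires an already-configured sharded cluster.
    Set ddl = "CREATE TABLE Demo.Orders (OrderId INT, CustomerId INT, "
    Set ddl = ddl_"Total NUMERIC(10,2), SHARD KEY (CustomerId))"
    Set rs = ##class(%SQL.Statement).%ExecDirect(, ddl)
    If rs.%SQLCODE < 0 { Write "SQLCODE ", rs.%SQLCODE, ": ", rs.%Message, ! }

Queries against such a table fan out across the shards transparently, which is what enables the horizontal scaling (and the Spark/JDBC parallelism) the webinar covers.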
Article
Semion Makarov · Sep 10, 2017

Using SYSMON Dashboards for monitoring the work of InterSystems Caché, Ensemble and HealthShare

System Monitor is a flexible and highly configurable tool supplied with Caché (Ensemble, HealthShare) that collects the essential metrics of the operating system and Caché itself. System Monitor also notifies administrators about issues with Caché and the operating system when one or several parameters reach admin-defined thresholds. Notifications are sent by email or delivered otherwise via a custom notifications class.

Notifications can be configured with the help of the ^%SYSMONMGR tool. For email delivery, the admin needs to specify the parameters of the email server, the recipient's email address, and authentication settings. After that, the user can add the required addresses to the delivery list and test the settings by sending a test message. Once done, the tool will send email notifications about the remaining hard drive space, license expiry information, and more. Additional information about notifications can be found here.

Immediately after startup (by default, the tool is launched along with a Caché instance), System Monitor starts collecting metrics and recording them to system tables. Collected data is available via SQL. Besides, starting from version 2015.1 you can use SYSMON Dashboards to view and analyze these metrics.

SYSMON Dashboards is an open-source project for viewing and analyzing metrics. The project is supplied with a set of analytical dashboards featuring graphs for OS and Caché parameters. SYSMON Dashboards uses the DeepSee technology for analytics and for building analytical dashboards. The installation process is fairly simple. Here's what you need to do:

1. Download the latest release.
2. Import the class into any namespace (for example, USER).
3. Start the installation using the following command:

do ##class(kutac.monitor.utils.Installer).setup()

All other settings will be configured automatically. After installation, the DeepSee portal will get a set of DeepSee dashboards for viewing and analyzing performance metrics. To view these dashboards, I use DeepSeeWeb, an open-source project that uses an extended set of components for visualizing DeepSee analytical panels.

SYSMON Dashboards also includes a web interface for configuring the monitor and notifications. For detailed configuration, I recommend using the ^%SYSMONMGR utility. The SYSMON Dashboards settings page lets you manage the set of monitored metrics, as well as start and stop the monitor. Configuring email notification settings via the web interface is no different from the standard process: you need to specify the server parameters, address, and authentication details. Example of email settings configuration:

This way, by using the standard Caché tool, SYSMON Dashboards, and DeepSeeWeb, you can considerably simplify the task of monitoring InterSystems platforms.

Thanks Semion. I downloaded and installed SYSMON Dashboards on a development server, but I can't find the SYSMON Configurator tool you're mentioning and showing. Where is it located?

Sorry for the delayed reply. Go to the URL: "{yourserver:port}/csp/sysmon/index.html".
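The ^%SYSMONMGR utility mentioned above is menu-driven and runs in the %SYS namespace. A session starts like the sketch below; treat the menu text as approximate, since the exact items vary across versions (this transcript is reconstructed from the documented menu, not captured from a live system):

%SYS> do ^%SYSMONMGR

1) Start/Stop System Monitor
2) Set System Monitor Options
3) Configure System Monitor Classes
4) View System Monitor State
5) Manage Application Monitor
6) Manage Health Monitor
...
Option?

One of the submenus leads to the email notification settings the article describes; the SYSMON Dashboards web page is simply a friendlier front end over the same configuration.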