Article
Nikita Savchenko · Feb 12, 2019

How to Develop InterSystems Applications in Your Favorite IDE

This is one of my articles which was never published in English. Let's fix it!

Hello! This article is about a quite practical way of developing InterSystems solutions without using the integrated tools like Studio or Atelier. All the code of the project can be stored in the form of "traditional" source code files, edited in your favorite development environment (for example, Visual Studio Code), indexed by any version control system, and arbitrarily combined with many external tools for code analysis, preprocessing, packaging and so on. The approach described in this article is suitable for any type of project on top of InterSystems products. In my case, I developed a couple of my applications (WebTerminal, Visual Editor, Class Explorer) using this approach. This article demonstrates a development cycle which is not traditional for InterSystems, but rather a practical one which you may prefer to use for some of your developments.

TL;DR Here are some examples of projects that utilize the approach described in this article: WebTerminal, Class Explorer, Visual Editor, Entity Browser (possibly, some other projects have picked up this idea - comment below!). If you want to check the file structure of these projects, click "Open" located right after the "Repository" label, and you'll be redirected to GitHub. I've been developing these projects completely without Studio/Atelier!

Below is a description of several of the simplest ways to organize such a project development technique. Each method can be modified and expanded into a full-fledged tool for importing, assembling, or even debugging projects in InterSystems Caché; however, the purpose of this article is to provide the basics only and to show that it can work.

The described approach to development features the following:

- The entire project (its source code) is located in the file system, with any arbitrary directory structure.
- The project directory is indexed by the Git version control system and has a readme file, configs and scripts required for importing/compiling the project.
- The source code of the classes is in CLS format (as it appears in Studio/Atelier).
- Work on the project is carried out entirely in the file system; code writing happens in any external text editor or IDE.
- The main feature of this approach is that you can connect any additional tools, for example, code preprocessing (like stub replacements at the compilation stage), front-end assembly and so on.

This article will not cover ObjectScript routines, CSP and other files, but only ObjectScript class files. Work with routines can be organized in the same way as with ObjectScript classes: when necessary, by analogy with the presented example, you can implement support for importing ObjectScript routines yourself. Regarding CSP files, these are just files on the disk, so you don't need to import them at all. To make CSP files work with your InterSystems application, just copy them to the directory of your application.

The method described in the article does not require any additional tools and platforms, except for an installed version 2016.2+ of InterSystems Caché (Ensemble, HealthShare) or InterSystems IRIS. Additional assembly and preprocessing of the client code in this article is done with Node.js; however, you can use any other technology you like. Node.js is an open-source and easy-to-use platform, which is chosen here because there are many ready-to-go packages already built for the tasks we are about to perform.
Motivation Behind Development in Non-InterSystems IDEs

The question arises: why not just continue to develop in Studio, or switch to the "new studio", Atelier? What is the point of not using these IDEs at all?

The ObjectScript programming language is very different from other common languages such as C#, Java, JavaScript, Python, Golang and others. The key difference here is that the language is "closed" by itself. Out of the box, many tools come directly from InterSystems, which is slowly changing with the introduction of InterSystems Open Exchange, a collection of community-created applications and tools for InterSystems products, and the company's policy of making InterSystems more open. In my opinion, these changes are necessary to make ObjectScript a world-class player in the list of programming languages.

Moreover, historically, ObjectScript programs, as well as their source code, are stored directly in the DBMS itself. Before UDL support was introduced in InterSystems Caché 2016.2 (or CDL in version 2013.2 - read below), in order to extract the source code from the database, it was necessary to write a considerable program to export plain-text sources to files, and to put even more effort into getting the code back into the DBMS. Now, exporting and importing plain-text source code is possible with just a single command, so that you can easily organize a "traditional" model for developing solutions: editing source code files - compiling - getting results.

Before Atelier, it simply wasn't possible to develop InterSystems applications on Linux/macOS without a VM, since Caché Studio was supported only on Windows. Now that Atelier is based on the Eclipse IDE, you can develop on any platform supported by Eclipse. However, the method described in the article is completely cross-platform.

Some projects have many other sources and files besides ObjectScript classes. The question here is how to properly organize the source code of the entire project. Today, the following development cycle is used for projects built with InterSystems technology: you work on sources in Studio/Atelier, and then you export XML/CLS files to a VCS-indexed file system with the help of the embedded tools. These exported files are not intended for modification. In the case of Atelier, the development cycle is designed around Atelier only, and each and every extension has to be supported by the IDE. There is little support for external tools, build tools, code analyzers and preprocessors, and there is no support for an arbitrary project structure and so on. To sum up, mostly only what was designed initially is supported.

Finally, the most important motivation, taking into account all of the above, is to open the ObjectScript programming language to the whole world. This has already begun: InterSystems introduced InterSystems Open Exchange, ObjectScript support was developed for the Sublime Text editor, Atom and Visual Studio Code, and so on. See? That's what it is about!

Syntax highlighting in Visual Studio Code - an "external" IDE for ObjectScript

Introduction

Exporting ObjectScript program sources to UDL (Universal Definition Language) format landed completely only in InterSystems Caché 2016.2. In previous versions, starting with InterSystems Caché 2013.2, class code export was also possible, using the methods of the class %Compiler.UDL.TextServices.
It is also worth mentioning that, starting from version 2016.2, the Atelier REST API is also available for importing/exporting class definitions for InterSystems products. Before UDL, it was only possible to export and import the XML representation of classes, which was just a big mess for version control systems. Though the XML class definition also contained plain-text code you could edit, you weren't able to see clean commit diffs (example: one of my projects which still has some XMLs in it), do merge requests and so on. UDL cleans this up and opens new possibilities for developing projects on top of InterSystems products.

The result of this article is the simplest possible project organized entirely in the file system, and several scripts that create a single command to build and import the whole thing into the DBMS.

Prerequisites

Let's assume that we have an ObjectScript project which consists of class definitions (as well as, possibly, some routine code and a static front end). This is a necessary and sufficient condition to start applying the development method described in this article to an existing or a new project.

It is assumed that the machine you work on has a locally installed DBMS: IRIS/Caché/Ensemble/HealthShare version 2016.2+. To implement this method of development in earlier versions of InterSystems Caché (starting with 2013.2), you will need to adapt the suggested examples using the %Compiler.UDL.TextServices methods. If you don't have any InterSystems products installed, you can try one out here. During the installation, specify the Unicode encoding instead of 8-bit, and leave all the other items suggested by the installation wizard unchanged.

The article uses the Git version control system. If you do not have Git installed, you have to install it.

Creating a Project

The directory structure of the demonstration project is as follows:

- The source code of the project is located in the "source" directory and, in the corresponding "cls" subdirectory, there is a tree of packages and classes. In the screenshot of the project structure above, as an example, the DevProject package is shown, along with the Robot class (DevProject.Robot) and the REST subpackage.
- The import.* script imports the project into the DBMS.

The project code shown above is available on GitHub. It is suggested to clone the project to the local machine by following the instructions below:

git clone https://github.com/ZitRos/cache-dev-project

The project contains the source/cls directory, which contains the usual package hierarchy. For the demonstration, the simplest class was created, containing the Message class method, which displays the message "Welcome, Anonymous!":

Class DevProject.Robot
{

ClassMethod Message(name As %String = "Anonymous")
{
    write "Welcome, ", name, "!"
}

}

To import this and other classes into the DBMS, you can use one of the following ways:

1. Use Atelier. It doesn't make sense to perform all these steps each time we would like to test our project, hence we are going to automate this.

2. Execute the following command in the terminal window:

do $system.OBJ.ImportDir("D:/Path/To/the/Project/source/cls",,"ck /checkuptodate=all",,1)

This command recursively loads all files from the D:/Path/To/the/Project/source/cls directory into the current namespace, and also compiles those classes that have changed since the last import. Thus, reloaded classes without changes will not take time to compile.
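If you later want the same operation inside an automation script (as in option 3 below), the error-log argument of $system.OBJ.ImportDir() can be used to report the result. A minimal, hedged sketch of the ObjectScript such a script might pipe into a terminal session (the path is illustrative, and the exact shape of the error log may differ per version):

do $system.OBJ.ImportDir("/path/to/project/source/cls",,"ck /checkuptodate=all",.errors,1)
write $select($get(errors):"IMPORT STATUS: ERROR",1:"IMPORT STATUS: OK"),!
halt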
The second option also isn't the most convenient solution: every time you start working on the project, you need to open a Caché terminal, enter a login-password pair (on instances with a normal protection level enabled), switch to the desired namespace and finally execute the command saved somewhere in a notebook. Finally, it is possible to automate this using the third option.

3. Create a script to automate all the routine stuff and use just this:

import

Calling the latter command, in the case of development in almost any external IDE, can be simplified even more, to the click of a single button or to running a program which will watch files and re-import each time something changes. Thus, the entire project is in the file system, work is being done with plain-text files, and, if necessary, just a single command imports and compiles the whole project without any hassle.

The Import Script

Let's take a closer look at the script that imports a project into the DBMS. In order to do this, it needs some additional information about your InterSystems instance, namely its install location, the import namespace, as well as the username and password to log in to the system. This data is coded directly into the script; however, it could be separated into a config file.

The source code of the script is available on GitHub for Windows and *nix systems. All that needs to be done is to change several variables in the script once before starting work on the project.

The script executes the cache.exe executable file, which is located in the /bin/ directory of the installed DBMS, and passes two arguments to it: the database directory and the namespace. Then, the script sends a user name, a password, and a few simple ObjectScript commands to the instance via the terminal interface, importing classes and reporting a successful import or an error. Thus, the user gets all the necessary information about the import and compilation of the classes, as well as any errors that may have occurred during the compilation process.

Here's an example of the output of the import.bat script:

Importing project...
Node: DESKTOP-ILGFMGK, Instance: ENSEMBLE20162
USER>
Load of directory started on 06/29/2016 22:59:10
Loading file C:\Users\ZitRo\Desktop\cache-dev-project\source\cls\DevProject\Robot.cls as udl
Loading file C:\Users\ZitRo\Desktop\cache-dev-project\source\cls\DevProject\REST\Index.cls as udl
Compilation started on 06/29/2016 22:59:10 with qualifiers 'ck /checkuptodate=all'
Class DevProject.REST.Index is up-to-date.
Compiling class DevProject.Robot
Compiling routine DevProject.Robot.1
Compilation finished successfully in 0.003s.
Load finished successfully.
IMPORT STATUS: OK

Now we can ensure that the project was indeed imported:

USER > do ##class(DevProject.Robot).Message()
Welcome, Anonymous!

More Complex Example

To maximize the benefits of developing a project using InterSystems technologies, we will try to do something more attractive by adding a graphical interface and building the project with the use of the Node.js platform and the Gulp task runner. The result is a web page whose image is shown below. Emphasis will be placed on how it is possible to organize the development of such a project.

First, let's look at the architecture of the suggested solution. The project consists of static client code (HTML, CSS, JavaScript), a class on the server that describes the REST interface, and one persistent class. A client with a GET request gets a list of robots that are located on the server.
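To make the server side of this architecture more concrete, below is a rough sketch of what such a REST dispatch class could look like. It is not the actual code from the extended branch: the route names and the assumption that DevProject.Robot is a persistent class with a Name property are made up for illustration.

Class DevProject.REST.Index Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <!-- list the robots currently stored on the server -->
  <Route Url="/robots" Method="GET" Call="ListRobots"/>
  <!-- spawn a new robot (a POST would be more appropriate, as noted below) -->
  <Route Url="/spawn" Method="GET" Call="SpawnRobot"/>
</Routes>
}

/// Return the robot names as a hand-rolled JSON array.
ClassMethod ListRobots() As %Status
{
    Set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT Name FROM DevProject.Robot")
    Write "["
    Set first = 1
    While rs.%Next() {
        Write $Select(first:"",1:","), """", rs.Name, """"
        Set first = 0
    }
    Write "]"
    Quit $$$OK
}

/// Create and persist one new robot instance.
ClassMethod SpawnRobot() As %Status
{
    Set robot = ##class(DevProject.Robot).%New()
    Set robot.Name = "Robot-"_$Increment(^RobotCounter)
    Quit robot.%Save()
}

}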
Also, when you click on the "Spawn a new robot!" button, the client sends a GET request to the server, as a result of which a new instance of the Robot class is created and added to the display list. (Note: the robot creation request should actually be a POST request, but we won't complicate things much in this example.)

The technical implementation of the project can be viewed on GitHub (in the "extended" branch). In the article, further attention will be paid to the method of developing such projects. Here, unlike the previous example, the client part of the application is added, which is located in the source/static directory, and the project is built using Node.js and Gulp.

For example, besides the other code in the project, you can find some special comments like these:

<div class="version">v<!-- @echo package.version --></div>

When building the project, this comment will be replaced with the project version, which is listed in the package.json file. The build script also minimizes CSS and JavaScript code, and copies all the processed code into the `build` directory.

In the import script, unlike the previous example, the following changes were added:

- Before importing, the project is assembled (bundled).
- Files are now imported from the build directory, as they pass through the preprocessor.
- The files are copied from the build/static directory to Caché's csp/user directory with the CSP files. Thus, right after importing, the application immediately becomes available.

Detailed instructions for installing and running this project are available in the description of the repository. The result is a project that needs to be set up only once, by modifying several variables in the import.* file.

The considered development cycle is used in several of my own projects: WebTerminal, Class Explorer, Visual Editor, Entity Browser. Soon it may be used in other projects, including your own ones :)

IDE and Debugging

This development method does not provide any debugging utilities, it is as simple as that. If you use more comprehensive debugging tools for your ObjectScript code rather than simply logging something to globals, you still have to use the integrated debugging tools in InterSystems products. However, besides that, the described development method has a big advantage: the ability to use your favorite development environment for writing ObjectScript code - whether it is vim or a simple notepad, on macOS or *nix, and whatever other programming languages are used, you get the same workflow.

On the other hand, ObjectScript does not have such comprehensive support outside Studio/Atelier. This means that even syntax highlighting is currently not handled quite well by external editors, let alone autocompletion. But all this is about to change in the near future, as more and more effort is being put into open-source initiatives. In the meantime, you can use the elementary keyword-based syntax highlighting that some IDEs offer, such as IntelliJ or Visual Studio Code. If IntelliJ IDEA is your favorite IDE, you can try it right now - here is the settings file you need to import using the File -> Import Settings menu. The highlighting is quite simple and incomplete; any additions are welcome.

Conclusion

The purpose of this article is to introduce something new into the world of InterSystems application development, to present another way of developing to the public, and to contribute to the spread of ObjectScript as a programming language as a whole. Any feedback, ideas and discussions are very welcome!
Thank you!

Nice writing, Nikita! Just want to mention that there is a new community option to code ObjectScript you've probably never tried - the VSCode plugin for ObjectScript by @Dmitry.Maslennikov. A lot of developers name VSCode as their "favorite" IDE, and the plugin can do really a lot for InterSystems IRIS developers today.

This is an interesting approach, and I do like it. One question that comes to mind is handling the concept of refactoring packages or class names/files. For example, if your cache-dev-project gets restructured in a branch that would be used for testing, and the DevProject package turns into something more descriptive like RobotProject, then when the import script runs the code, the server will have both a DevProject package and a RobotProject package if you were switching between the branches. If one of the packages or classes becomes obsolete, it would be nice to have a way to delete the class from the server code. I don't think VSCode has a way to handle this. Just a little food for thought.

I would not be so sure in your doubts about VSCode. VSCode itself supports refactoring stuff; we just do not have it in the ObjectScript extension yet. Deleting obsolete classes is for sure a very interesting and quite difficult task. But it is better to solve it another way, with just a clean rebuild. Or, for example, I can add a delete action to the context menu in the server explorer, so a developer will be able to manually delete any class/routine on the server from VSCode.

Hello Matthew! Thank you for your feedback. Indeed, a good point. One idea that comes to my mind for this case is to improve the import script to file the list of classes which were ever imported and those which are used now. By using this list, the import script can resolve which classes to delete and which to keep. However, deleting classes can always introduce unwanted side effects, but within the scope of a project this should be consistent.

Dmitry, it sounds great to have a way to do refactoring. I would love to get some updates on coming features in VSCode, as I do use it often. As for adding the action to delete in the context menu, that would be a very helpful feature. There are times a class is created for new functionality and then gets deleted if a better solution is found. I would much rather manually delete one class than perform a clean rebuild. I will admit sometimes a clean rebuild is needed, but depending on the circumstances it can take longer.

File the issue, please, so I will keep it saved.
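As a rough follow-up to this thread, here is a hedged sketch of a helper the import script could call: it keeps a $LIST of previously imported classes in a global and deletes the ones that disappeared from the source tree. The class, method and global names are made up for illustration, and automatically deleting classes should of course be used with care.

Class DevProject.ImportTools
{

/// currentList is a $LIST of class names found in the source tree during this import.
ClassMethod DeleteObsoleteClasses(currentList As %List) As %Status
{
    // ^DevProject.ImportedClasses is a hypothetical global maintained between imports
    Set previousList = $Get(^DevProject.ImportedClasses)
    For i=1:1:$ListLength(previousList) {
        Set cls = $List(previousList, i)
        If '$ListFind(currentList, cls) {
            Write "Deleting obsolete class: ", cls, !
            // $system.OBJ.Delete removes the class definition from the namespace
            Do $system.OBJ.Delete(cls)
        }
    }
    // remember the current state for the next run
    Set ^DevProject.ImportedClasses = currentList
    Quit $$$OK
}

}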
Article
Evgeny Shvarov · Mar 14, 2019

Using Docker with your InterSystems IRIS development repository

Hi Community!

I think everyone keeps the source code of their projects in a repository nowadays: GitHub, GitLab, Bitbucket, etc. The same goes for InterSystems IRIS projects - check any of them on Open Exchange.

What do we do every time we start or continue working with a certain repository with InterSystems Data Platform? We need a local InterSystems IRIS machine, with the environment for the project set up and the source code imported. So every developer performs the following:

1. Check out the code from the repo
2. Install/run a local IRIS installation
3. Create a new namespace/database for the project
4. Import the code into this new namespace
5. Set up all the rest of the environment
6. Start/continue coding the project

If you dockerize your repository, this list can be shortened to three steps:

1. Check out the code from the repo
2. Run docker-compose build
3. Start/continue coding the project

Profit - no hands-on work for steps 3-4-5, which could take minutes and sometimes bring a headache. You can dockerize (almost) any of your InterSystems repos with the few following steps. Let's go!

How to dockerize the repo and what does this mean?

Basically, the idea is to have Docker installed on your machine, which will build the code and environment into a container, which will then run in Docker and work in the way the developer intended in the first place. No more "What is the OS version?" or "What else did you have on this IRIS installation?". It's a clean page every time (or rather a clean IRIS container) which we use to set up the environment (namespaces, databases, web apps, users/roles) and import code into a clean, just-created database.

Will this "dockerize" procedure greatly harm your current repo? No. You will need to add 2-3 new files in the root of the repo and follow a few rules which you can set up on your own.

Prerequisites

Download and install Docker. Download and install the IRIS Docker image. In this example, I will use the full InterSystems IRIS preview, iris:2019.1.0S.111.0, which you can download from WRC-preview; see the details. If you work with an instance which needs a key, place the iris.key in the place you will use all the time. I put it into the Home directory on my Mac.

Dockerizing the repo

To dockerize your repo you need to add three files into the root folder of your repo. Here is an example of a dockerized repo - the ISC-DEV project, which helps to import/export source code from an IRIS database. This repo has an additional Dockerfile, docker-compose.yml and Installer.cls, which I will describe below.
First is the Dockerfile, which will be used by the docker-compose build command:

Dockerfile

FROM intersystems/iris:2019.1.0S.111.0 # needs to be the same image as installed
WORKDIR /opt/app
COPY ./Installer.cls ./
COPY ./cls/ ./src/
RUN iris start $ISC_PACKAGE_INSTANCENAME quietly EmergencyId=sys,sys && \
    /bin/echo -e "sys\nsys\n" \
    # giving %ALL to the user admin
            " Do ##class(Security.Users).UnExpireUserPasswords(\"*\")\n" \
            " Do ##class(Security.Users).AddRoles(\"admin\", \"%ALL\")\n" \
    # importing and running the installer
            " Do \$system.OBJ.Load(\"/opt/app/Installer.cls\",\"ck\")\n" \
            " Set sc = ##class(App.Installer).setup(, 3)\n" \
            " If 'sc do \$zu(4, \$JOB, 1)\n" \
    # introducing OS Level authorization (to remove login/pass prompt in container)
            " Do ##class(Security.System).Get(,.p)\n" \
            " Set p(\"AutheEnabled\")=p(\"AutheEnabled\")+16\n" \
            " Do ##class(Security.System).Modify(,.p)\n" \
            " halt" \
    | iris session $ISC_PACKAGE_INSTANCENAME && \
    /bin/echo -e "sys\nsys\n" \
    | iris stop $ISC_PACKAGE_INSTANCENAME quietly
CMD [ "-l", "/usr/irissys/mgr/messages.log" ]

This Dockerfile copies Installer.cls and the source code from the /cls folder of the repo into the /src folder inside the container. It also applies some configuration: it gives the admin user the %ALL role and the non-expiring password 'SYS', introduces OS-level authorization, and runs the %Installer.

What's in the %Installer?

Class App.Installer
{

XData MyInstall [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <Default Name="NAMESPACE" Value="ISCDEV"/>
  <Default Name="DBNAME" Value="ISCDEV"/>
  <Default Name="APPPATH" Dir="/opt/app/" />
  <Default Name="SOURCESPATH" Dir="${APPPATH}src" />
  <Default Name="RESOURCE" Value="%DB_${DBNAME}" />
  <Namespace Name="${NAMESPACE}" Code="${DBNAME}-CODE" Data="${DBNAME}-DATA" Create="yes" Ensemble="0">
    <Configuration>
      <Database Name="${DBNAME}-CODE" Dir="${APPPATH}${DBNAME}-CODE" Create="yes" Resource="${RESOURCE}"/>
      <Database Name="${DBNAME}-DATA" Dir="${APPPATH}${DBNAME}-DATA" Create="yes" Resource="${RESOURCE}"/>
    </Configuration>
    <Import File="${SOURCESPATH}" Recurse="1"/>
  </Namespace>
</Manifest>
}

ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
  Return ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "MyInstall")
}

}

It creates the namespace/database ISCDEV and imports the code from the source folder /src.

Next is the docker-compose.yml file, which will be used when we run the container with the docker-compose up command:

version: '2.4'
services:
  iris:
    build: .
    restart: always
    ports:
      - 52773:52773
    volumes:
      - ~/iris.key:/usr/irissys/mgr/iris.key

This config tells Docker on what port we expect IRIS to work on our host. The first port (52773) is the host's, the second is the container's internal port (52773). In the volumes section, docker-compose.yml provides access to the iris key on your machine inside the container, in the place where IRIS is looking for it:

- ~/iris.key:/usr/irissys/mgr/iris.key

To start coding with this repo you do the following:

1. Clone/git pull the repo into any local directory.

2. Open the terminal in this directory and run:

user# docker-compose build

This will build the container.

3. Run the IRIS container with your project:

user# docker-compose up -d

Open your favorite IDE, connect to the server on localhost://52773 and develop your success with InterSystems IRIS Data Platform ;) You can use these three files to dockerize your repository.
Just put the right name for the source code in the Dockerfile, the right namespace(s) in Installer.cls and the place for iris.key in docker-compose.yml, and use the benefits of Docker containers in your day-to-day development with InterSystems IRIS.

Nice one Evgeny! I like it! I'm sure it'll help all those that want to leverage the agility of containers and our quarterly container releases.

Thank you, Luca! Besides agility, I like the saving of the developer's time on environment setup. Docker IMHO is the fastest way for a developer to start compiling the code. And what's even better - it's a standard way from project to project:

docker-compose build #(when needed)
docker-compose up -d #(always)

And I forgot to add that to open an IRIS terminal you just call the following:

user$ docker-compose exec iris iris session iris

Very nice tip, it worked fine, and with this I can use the IRIS terminal from the VSCode terminal. VSCode, when configured to work with Docker, has a shortcut action to open the terminal through the menu on the connection status.

Hello, and a great tutorial, thanks! One question though. Do you know how to pass environment variables to the %Installer? In my scenario, I would like to configure a CI/CD pipeline and define some environment variables when building the Docker container, and reference those variables within the %Installer to create, for example, a specific user account (username and password as env variables). How can I achieve this? I've tried setting the env variables within the Dockerfile with ENV someEnv="1234" and trying to get the variable with %System.Util.GetEnviron("someEnv") within %Installer, but it just returns an empty string. If you have any insight or tips, it would be appreciated. Cheers! Kari Vatjus-Anttila

Not sure why this doesn't work. Calling for experts @Dmitry.Maslennikov @Eduard.Lebedyuk

Yeah, working with environment variables is quite tricky; they may not be in the place where you would expect them. I would not recommend it for %Installer; you should focus on the Variables feature there and pass variables to the setup method when you call it. It should work. Here's an example: Installer, Dockerfile

Please consider providing sample code.

💡 This article is considered as an InterSystems Data Platform Best Practice.
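As a small follow-up to the environment-variable question above, here is a hedged sketch of the variable-passing approach suggested in the comments: set values in the pVars array when calling setup() (for example, inside the Dockerfile's iris session block) and reference them in the manifest as ${...}. The variable names here are examples only, not from the article's repo.

Set vars("CIUSER") = "builduser"          ; value can be substituted by your CI tooling at build time
Set vars("CIPASSWORD") = "buildpass"
Set sc = ##class(App.Installer).setup(.vars, 3)
; inside the <Manifest> these values are then available as ${CIUSER} and ${CIPASSWORD}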
Announcement
Anastasia Dyubaylo · Jul 17, 2019

New Video: Natural Language Processing with InterSystems IRIS

Hi Community!

You're very welcome to watch a new video on InterSystems Developers YouTube, recorded by @Benjamin.DeBoe, InterSystems Product Manager: Natural Language Processing with InterSystems IRIS

This video provides a quick overview of what kinds of problems InterSystems IRIS' NLP capabilities can solve. The technology is available in the Community Edition, so there is no reason not to spend the first five minutes of your lunch break watching the video, and the remaining time on kicking the tires!

And... you can find additional materials to the video on the InterSystems Video Portal.

Enjoy and stay tuned!

@Benjamin.DeBoe! Transferring a question from YouTube: where can IRIS NLP be found in IRIS Community Edition? Could you please help?

IRIS NLP, previously known as iKnow, is an embedded technology, meaning it's there in the form of APIs. These articles on building a domain and using the knowledge portal should be a helpful start, as is this series of step-by-step videos (which are a little older, I'll admit; start with the "fundamentals" one) and of course other articles on the developer community tagged for iKnow.
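For those who want to kick the tires right away, here is a minimal, hedged sketch of creating an NLP (iKnow) domain from a terminal session; loading sources and querying entities are covered in the articles and videos linked above, and the exact API surface may vary by version.

set domain = ##class(%iKnow.Domain).%New("CommunityPosts")
write domain.%Save(),!        ; 1 means the domain was created
write "Domain ID: ", domain.Id,!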
Announcement
Anastasia Dyubaylo · Jul 10, 2019

New Video: InterSystems Platforms and FHIR STU3

Hi Community!

A new video is already on the InterSystems Developers YouTube Channel: InterSystems Platforms and FHIR STU3

This video provides an overview of our FHIR STU3 support, with a demonstration to showcase key features, including data transformation APIs.

Takeaway: InterSystems enables me to use the FHIR STU3 standard.

Presenters: @Craig.Lee, @Marc.Mundt

You can find additional materials to the video in this InterSystems Online Learning Course.

Enjoy watching the video!
Announcement
Anastasia Dyubaylo · Jul 12, 2019

New Coding Talk: Locking in InterSystems ObjectScript

Hi Everyone!

You're very welcome to watch the new video on InterSystems Developers YouTube, recorded by @Sourabh.Sethi6829 in a new format called "Coding Talks": Locking in InterSystems ObjectScript

In this video, we do a deep dive into a very common yet complex topic: locks in InterSystems ObjectScript. We start from the very basics and move to the most complex and interesting aspects of locks and data integrity.

Recommended audience: all developers who are aware of MUMPS or Caché Objects.

The codeset you can find here. For any questions, please write to @Sourabh.Sethi6829 at sethisourabh.hit@gmail.com.

Enjoy watching this video!

Thank you for another interesting video. However, if the sound quality were better, it would make the viewing much more enjoyable. Simply use a lavalier microphone and adjust the compression for much better results.

Hi Pasi, thanks for your feedback! We will take your preferences into account.

@Sourabh.Sethi6829, very helpful video and good work! But sharing the code in Google Docs? Could you please share it on GitHub and in ObjectScript? E.g. like in this project?
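If you just want the gist before watching, here is a tiny, hedged illustration of the incremental lock pattern the talk covers (the global name and the timeout are arbitrary):

// acquire an incremental lock on one node, waiting at most 5 seconds
lock +^Account(123):5
if '$test {
    write "Could not obtain the lock - someone else holds it",!
    quit
}
// critical section: a safe read-modify-write of the node
set ^Account(123) = $get(^Account(123)) + 100
// release only the lock we added; the "-" form keeps other held locks intact
lock -^Account(123)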
Question
Kishan Ravindran · Jul 1, 2017

How to Enable iKnow Functionality in InterSystems Caché

In my Caché Studio I couldn't find a namespace for iKnow, so how can I check whether the version I am using is compatible with it? If I don't have one, can I create a new namespace in Studio? I checked out how it can be done using the InterSystems documentation, but I think the license does not apply for my user. Is there any other way I can work on this? Can anyone help me with this?

Thank you John Murray and Benjamin DeBoe for your answers. Both your documents were helpful.

Here's a way of discovering if your license includes the iKnow feature:

USER>w $system.License.GetFeature(11)
1
USER>

A return value of 1 indicates that you are licensed for iKnow. If the result is 0, then your license does not include iKnow. See here for documentation about this method, which tells you that 11 is the feature number for iKnow.

Regarding namespaces, these are created in Portal, not in Studio. See this documentation.

Thanks John, indeed, you'd need a proper license in order to work with iKnow. If the method referred to above returns 0, please contact your sales representative to request a temporary trial license and appropriate assistance for implementing your use case. Also, iKnow doesn't come as a separate namespace. You can create (regular) namespaces as you prefer and use them to store iKnow domain data. You may need to enable your web application for iKnow, which is disabled by default for security reasons in the same way DeepSee is. See this paragraph here for more details.
Announcement
Derek Robinson · Nov 22, 2017

New InterSystems Online Course: Getting Started with ICM

Hi all! We have just released a new online course, Getting Started with ICM, that provides an introduction to InterSystems Cloud Manager (ICM) -- one of the new technologies coming with the release of InterSystems IRIS!

After taking this one-hour course, you will be able to:

- Explain what ICM is and the business benefits that come with it
- Identify the major cloud computing providers and the benefits of cloud computing
- Provision a multi-node infrastructure on your selected cloud platform
- Deploy your InterSystems IRIS applications to your provisioned infrastructure
- Unprovision your infrastructure to avoid costly charges
- Run additional commands to further manage and modify your cloud deployments with ICM

We hope you enjoy the course!
Question
Mike Kadow · Nov 8, 2017

Documentation type, standard InterSystems type or Atelier type?

In trying to understand Atelier I am directed to go through its hierarchy type of documentation. Is the Atelier documentation going to continue as a hierarchy, or at some point is it going to be integrated into the InterSystems type of documentation? When looking for an answer it would be nice to use only one method. On a side note, the attached Relevant Articles seem to have nothing to do with the subject of my query.

There are currently no plans to merge the Atelier documentation with the docs for other InterSystems technologies (Caché, Ensemble, HealthShare, InterSystems IRIS Data Platform). Atelier is a separate product and will continue to have its own documentation that follows industry standards for Eclipse plug-ins.
Question
Ponnumani Gurusamy · Oct 6, 2016

What is the Difference Between a Function, a Routine, and a Procedure in InterSystems Caché

What is the difference between a function, a routine, and a procedure in ObjectScript?

I think this is what you are looking for: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GORIENT_ch_cos

A function is something that takes a bunch of inputs and returns one or more values. If the returned values are entirely determined by the inputs, and the function doesn't have any side effects (logging, perhaps, or causing state changes outside itself), then it's called a pure function.

A procedure is a function that doesn't return a value. In particular, this means that a procedure can only cause side effects. (That might include mutating an input parameter!)

A routine is either a procedure or a function, or the container that bridges the two; it should also include the instructions for accessing the function arguments and returning the result.
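To make the distinction concrete, here is a small sketch of a single routine (the container) holding both a function and a procedure; the names are made up:

    ; MyDemo.mac - the routine is the stored, compiled unit of code
MyDemo ; entry point: run it with  do ^MyDemo
    write "See Square() and LogIt() below",!
    quit

Square(x) ; a function: returns a value, call it with  write $$Square^MyDemo(4)
    quit x * x

LogIt(msg) ; a procedure: side effect only, call it with  do LogIt^MyDemo("hello")
    set ^MyDemoLog($increment(^MyDemoLog)) = msg
    quit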
Article
Mark Bolinsky · Dec 5, 2016

InterSystems Technologies on Amazon EC2: Reference Architecture

Enterprises need to grow and manage their global computing infrastructures rapidly and efficiently while simultaneously optimizing and managing capital costs and expenses. Amazon Web Services (AWS) and Elastic Compute Cloud (EC2) computing and storage services meet the needs of the most demanding Caché-based applications by providing a highly robust global computing infrastructure.

The Amazon EC2 infrastructure enables companies to rapidly provision compute capacity and/or quickly and flexibly extend their existing on-premises infrastructure into the cloud. AWS provides a rich set of services and robust, enterprise-grade mechanisms for security, networking, computation, and storage.

At the heart of AWS is Amazon EC2, a cloud computing infrastructure that supports a variety of operating systems and machine configurations (e.g., CPU, RAM, network). AWS provides preconfigured virtual machine (VM) images known as Amazon Machine Images, or AMIs, with guest operating systems including various Linux® and Windows distributions and versions. They may have additional software used as the basis for virtualized instances running in AWS. You can use these AMIs as starting points to instantiate and install or configure additional software, data, and more to create application- or workload-specific AMIs.

Care must be taken, as with any platform or deployment model, to ensure all aspects of an application environment are considered, such as performance, availability, operations, and management procedures. Specifics of each of the following areas will be covered in this document:

- Network setup and configuration. This section covers the setup of the network for Caché-based applications within AWS, including subnets to support the logical server groups for different layers and roles within the reference architecture.
- Server setup and configuration. This section covers the services and resources involved in the design of the various servers for each layer. It also includes the architecture for high availability across availability zones.
- Security. This section discusses security mechanisms in AWS, including how to configure the instance and network security to enable authorized access to the overall solution as well as between layers and instances.
- Deployment and management. This section provides details on packaging, deployment, monitoring, and management.

Architecture and Deployment Scenarios

This document provides several reference architectures within AWS as examples for building robust, well-performing, and highly available applications based on InterSystems technologies, including Caché, Ensemble, HealthShare, TrakCare, and associated embedded technologies such as DeepSee, iKnow, CSP, Zen and Zen Mojo. To understand how Caché and associated components can be hosted on AWS, let's first review the architecture and components of a typical Caché deployment and explore some common scenarios and topologies.

Caché Architecture Review

The InterSystems data platform continuously evolves to provide an advanced database management system and rapid application development environment for breakthroughs in processing and analyzing complex data models and developing web and mobile applications. This is a new generation of database technology that provides multiple modes of data access. Data is only described once in a single integrated data dictionary and is instantly available using object access, high-performance SQL, and powerful multidimensional access - all of which can simultaneously access the same data.

The available Caché high-level architecture component tiers and services are illustrated in Figure-1. These general tiers also apply to both the InterSystems TrakCare and HealthShare products as well.
Figure-1: High-level component tiers

Common Deployment Scenarios

There are numerous combinations possible for deployment; however, in this document two scenarios will be covered: a hybrid model and a complete cloud hosted model.

Hybrid Model

In this scenario, a company wants to leverage both on-premises enterprise resources and AWS EC2 resources for either disaster recovery, internal maintenance contingency, re-platforming initiatives, or short/long term capacity augmentation when needed. This model can offer a high level of availability for business continuity and disaster recovery for a failover mirror member set on-premises.

Connectivity for this model in this scenario relies on a VPN tunnel between the on-premises deployment and the AWS availability zone(s) to present AWS resources as an extension to the enterprise's data center. There are other connectivity methods, such as AWS Direct Connect; however, this is not covered as part of this document. Further details about AWS Direct Connect can be found here. Details for setting up this example Amazon Virtual Private Cloud (VPC) to support disaster recovery of your on-premises data center can be found here.

Figure-2: Hybrid model using AWS VPC for disaster recovery of on-premises

The above example shows a failover mirror pair operating within your on-premises data center with a VPN connection to your AWS VPC. The VPC illustrated provides multiple subnets in dual availability zones in a given AWS region. There are two Disaster Recovery (DR) Async mirror members (one in each availability zone) to provide resiliency.

Cloud Hosted Model

In this scenario, your Caché-based application, including both data and presentation layers, is kept completely in the AWS cloud using multiple availability zones within a single AWS region. The same VPN tunnel, AWS Direct Connect, and even pure Internet connectivity models are available.

Figure-3: Cloud hosted model supporting full production workload

The above example in Figure-3 illustrates a deployment model for supporting an entire production deployment of your application in your VPC. This model leverages dual availability zones with synchronous failover mirroring between the availability zones, along with load balanced web servers and associated application servers as ECP clients. Each tier is isolated in a separate security group for network security controls. IP addresses and port ranges are only opened as required based on your application's needs.

Storage and Compute Resources

Storage

There are multiple types of storage options available. For the purpose of this reference architecture, Amazon Elastic Block Store (Amazon EBS) and Amazon EC2 Instance Store (also called ephemeral drives) volumes are discussed for several possible use cases. Additional details of the various storage options can be found here and here.

Elastic Block Storage (EBS)

EBS provides durable block-level storage for use with Amazon EC2 instances (virtual machines), which can be formatted and mounted as traditional file systems in either Linux or Windows, and, most importantly, the volumes are off-instance storage that persists independently from the running life of a single Amazon EC2 instance, which is important for database systems. In addition, Amazon EBS provides the ability to create point-in-time snapshots of volumes, which are persisted in Amazon S3. These snapshots can be used as the starting point for new Amazon EBS volumes, and to protect data for long-term durability.
The same snapshot can be used to instantiate as many volumes as you want. These snapshots can be copied across AWS regions, making it easier to leverage multiple AWS regions for geographical expansion, data center migration, and disaster recovery. Sizes for Amazon EBS volumes range from 1 GB to 16 TB, and are allocated in 1 GB increments.

Within Amazon EBS there are three different types: Magnetic Volumes, General Purpose (SSD), and Provisioned IOPS (SSD). The following subsections provide a short description of each.

Magnetic Volumes

Magnetic volumes offer cost-effective storage for applications with moderate or bursting I/O requirements. Magnetic volumes are designed to deliver approximately 100 input/output operations per second (IOPS) on average, with a best-effort ability to burst to hundreds of IOPS. Magnetic volumes are also well-suited for use as boot volumes, where the burst capability provides fast instance startup times.

General Purpose (SSD)

General Purpose (SSD) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies, the ability to burst to 3,000 IOPS for extended periods of time, and a baseline performance of 3 IOPS/GB up to a maximum of 10,000 IOPS (at 3,334 GB). General Purpose (SSD) volumes can range in size from 1 GB to 16 TB.

Provisioned IOPS (SSD)

Provisioned IOPS (SSD) volumes are designed to deliver predictable high performance for I/O-intensive workloads, such as database workloads that are sensitive to storage performance and consistency in random access I/O throughput. You specify an IOPS rate when creating a volume, and then Amazon EBS delivers within 10 percent of the provisioned IOPS performance 99.9 percent of the time over a given year. A Provisioned IOPS (SSD) volume can range in size from 4 GB to 16 TB, and you can provision up to 20,000 IOPS per volume. The ratio of IOPS provisioned to the volume size requested can be a maximum of 30; for example, a volume with 3,000 IOPS must be at least 100 GB in size. Provisioned IOPS (SSD) volumes have a throughput limit range of 256 KB for each IOPS provisioned, up to a maximum of 320 MB/second (at 1,280 IOPS).

The architectures discussed in this document use EBS volumes, as these are more suited for production workloads that require predictable low-latency Input/Output Operations per Second (IOPS) and throughput. Care must be taken when selecting a particular VM type, because not all EC2 instance types can have access to EBS storage.

Note: Because Amazon EBS volumes are network-attached devices, other network I/O performed by an Amazon EC2 instance, and also the total load on the shared network, can affect the performance of individual Amazon EBS volumes. To allow your Amazon EC2 instances to fully utilize the Provisioned IOPS on an Amazon EBS volume, you can launch selected Amazon EC2 instance types as Amazon EBS-optimized instances. Details about EBS volumes can be found here.

EC2 Instance Storage (ephemeral drives)

EC2 instance storage consists of a preconfigured and pre-attached block of disk storage on the same physical server that hosts your operating Amazon EC2 instance. The amount of disk storage provided varies by Amazon EC2 instance type. In the Amazon EC2 instance families that provide instance storage, larger instances tend to provide both more and larger instance store volumes.
There are storage-optimized (I2) and dense-storage (D2) Amazon EC2 instance families that provide special-purpose instance storage targeted to specific use cases. For example, I2 instances provide very fast SSD-backed instance storage capable of supporting over 365,000 random read IOPS and 315,000 write IOPS, and provide cost-attractive pricing models.

Unlike EBS volumes, the storage is not permanent and can be used only for the instance's lifetime, and cannot be detached or attached to another instance. Instance storage is meant for temporary storage of information that is continually changing. In the realm of InterSystems technologies and products, items such as using Ensemble or Health Connect as an Enterprise Service Bus (ESB), application servers using Enterprise Cache Protocol (ECP), or web servers with the CSP Gateway would be great use cases for this type of storage and for storage-optimized instance types, along with using provisioning and automation tools to streamline their effectiveness and support elasticity. Details about instance store volumes can be found here.

Compute

EC2 Instances

There are numerous instance types available that are optimized for various use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacities, allowing for countless combinations to right-size the resource requirements for your application. For the purpose of this document, General Purpose M4 Amazon EC2 instance types will be referenced as a means to right-size an environment, and these instances provide EBS volume capabilities and optimizations. Alternatives are possible based on your application's capacity requirements and pricing models.

M4 instances are the latest generation of General Purpose instances. This family provides a balance of compute, memory, and network resources, and it is a good choice for many applications. Capacities range from 2 to 64 virtual CPUs and 8 to 256 GB of memory, with corresponding dedicated EBS bandwidth. In addition to the individual instance types, there are also tiered classifications such as Dedicated Hosts, Spot Instances, Reserved Instances, and Dedicated Instances, each with varying degrees of pricing, performance, and isolation. Confirm availability and details of the currently available instances here.

Availability and Operations

Web/App Server Load Balancing

External and internal load balanced web servers may be required for your Caché-based application. External load balancers are used for access over the Internet or WAN (VPN or Direct Connect), and internal load balancers are potentially used for internal traffic. AWS Elastic Load Balancing provides two types of load balancers: the Application Load Balancer and the Classic Load Balancer.

Classic Load Balancer

The Classic Load Balancer routes traffic based on application- or network-level information and is ideal for simple load balancing of traffic across multiple EC2 instances where high availability, automatic scaling, and robust security are required. The specific details and features can be found here.

Application Load Balancer

An Application Load Balancer is a load balancing option for the Elastic Load Balancing service that operates at the application layer and allows you to define routing rules based on content across multiple services or containers running on one or more Amazon EC2 instances. Additionally, there is support for WebSockets and HTTP/2. The specific details and features can be found here.
Example

In the following example, a set of three web servers is defined, with each one in a separate availability zone to provide the highest levels of availability. The web server load balancers must be configured with sticky sessions to support the ability to pin user sessions to specific EC2 instances using cookies. Traffic will be routed to the same instances as the user continues to access your application. The following diagram in Figure-4 illustrates a simple example of the Classic Load Balancer within AWS.

Figure-4: Example of a Classic Load Balancer

Database Mirroring

When deploying Caché-based applications on AWS, providing high availability for the Caché database server requires the use of synchronous database mirroring in a given primary AWS region and potentially asynchronous database mirroring to replicate data to a hot standby in a secondary AWS region for disaster recovery, depending on your uptime service level agreement requirements.

A database mirror is a logical grouping of two database systems, known as failover members, which are physically independent systems connected only by a network. After arbitrating between the two systems, the mirror automatically designates one of them as the primary system; the other member automatically becomes the backup system. External client workstations or other computers connect to the mirror through the mirror Virtual IP (VIP), which is specified during mirroring configuration. The mirror VIP is automatically bound to an interface on the primary system of the mirror.

Note: In AWS, it is not possible to configure the mirror VIP in the traditional manner, so an alternative solution has been devised. However, mirroring is supported across subnets.

The current recommendation for deploying a database mirror in AWS is to configure three instances (primary, backup, arbiter) in the same VPC across three different availability zones. This ensures that at any given time, AWS will guarantee external connectivity with at least two of these VMs with a 99.95% SLA. This provides adequate isolation and redundancy of the database data itself. Details on AWS EC2 service level agreements can be found here.

There is no hard upper limit on network latency between failover members. The impact of increasing latency differs by application. If the round trip time between the failover members is similar to the disk write service time, no impact is expected. Round trip time may be a concern, however, when the application must wait for data to become durable (sometimes referred to as a journal sync). Details of database mirroring and network latency can be found here.

Virtual IP Address and Automatic Failover

Most IaaS cloud providers lack the ability to provide a Virtual IP (VIP) address that is typically used in database failover designs. To address this, several of the most commonly used connectivity methods, specifically ECP clients and CSP Gateways, have been enhanced within Caché, Ensemble, and HealthShare to no longer rely on VIP capabilities, making them mirror-aware. Connectivity methods such as xDBC, direct TCP/IP sockets, or other direct connect protocols still require the use of a VIP. To address those, InterSystems database mirroring technology makes it possible to provide automatic failover for those connectivity methods within AWS using APIs that interact with an AWS Elastic Load Balancer (ELB) to achieve VIP-like functionality, thus providing a complete and robust high availability design within AWS.
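The call-out itself lives in the user-defined ^ZMIRROR routine (see the API Details section below). As a heavily hedged sketch, assuming the AWS CLI is installed on the instance and authorized through an IAM role, it might look like the following; the load balancer name and instance ID are placeholders, and production code would add logging and error handling:

ZMIRROR ; user-defined mirror hook routine
    quit

CheckBecomePrimaryOK() PUBLIC {
    // re-point the Classic ELB at this instance before accepting the primary role
    set cmd = "aws elb register-instances-with-load-balancer"
    set cmd = cmd_" --load-balancer-name <your-elb-name> --instances <this-instance-id>"
    set rc = $zf(-1, cmd)    ; spawn the OS command; 0 normally means the shell call succeeded
    quit 1                   ; returning 1 allows this member to become primary
}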
Additionally, AWS has recently introduced a new type of ELB called an Application Load Balancer. This type of load balancer runs at Layer 7 and supports content-based routing and applications that run in containers. Content-based routing is especially useful for Big Data type projects using a partitioned data or data sharding deployment.

Just as with a Virtual IP, this is an abrupt change in network configuration and does not involve any application logic to inform existing clients connected to the failed primary mirror member that a failover is happening. Depending on the nature of the failure, those connections can get terminated as a result of the failure itself, due to application timeout or error, as a result of the new primary forcing the old primary instance down, or due to expiration of the TCP keep-alive timer used by the client. As a result, users may have to reconnect and log in. Your application's behavior determines this.

Details about the various types of available ELB can be found here.

AWS EC2 Instance Call-out to AWS Elastic Load Balancer Method

In this model, the ELB can either have a server pool defined with both failover mirror members and potentially DR asynchronous mirror member(s), with only one of the entries active (the current primary mirror member), or a server pool with a single entry for the active mirror member.

Figure-5: API method to interact with Elastic Load Balancer (internal)

When a mirror member becomes the primary mirror member, an API call is issued from your EC2 instance to the AWS ELB to adjust/instruct the ELB of the new primary mirror member.

Figure-6: Failover to Mirror Member B using API to Load Balancer

The same model applies for promoting a DR asynchronous mirror member in the event that both the primary and backup mirror members become unavailable.

Figure-7: Promotion of DR Async mirror to primary using API to Load Balancer

As per standard recommended DR procedure, in Figure-7 above the promotion of the DR member involves a human decision due to the possibility of data loss from asynchronous replication. Once that action is taken, however, no administrative action is required on the ELB. It automatically routes traffic once the API is called during promotion.

API Details

The API call-out to the AWS load balancer resource is defined in the ^ZMIRROR routine, specifically as part of the procedure call:

$$CheckBecomePrimaryOK^ZMIRROR()

Within this procedure, insert whatever API logic or methods you choose to use from the AWS ELB REST API, command line interface, etc. An effective and secure way to interact with the ELB is to use AWS Identity and Access Management (IAM) roles, so you don't have to distribute long-term credentials to an EC2 instance. The IAM role supplies temporary permissions that Caché can use to interact with the AWS ELB. Details for using IAM roles assigned to your EC2 instances can be found here.

AWS Elastic Load Balancer Polling Method

A polling method using the CSP Gateway's mirror_status.cxw page, available in 2017.1, can be used in the ELB health monitor for each mirror member added to the ELB server pool. Only the primary mirror will respond 'SUCCESS', thus directing network traffic to only the active primary mirror member. This method does not require any logic to be added to ^ZMIRROR. Please note that most load-balancing network appliances have a limit on the frequency of running the status check.
Typically, the highest frequency is no less than 5 seconds, which is usually acceptable to support most uptime service level agreements.

An HTTP request for the following resource will test the mirror member status of the LOCAL Caché configuration:

/csp/bin/mirror_status.cxw

For all other cases, the path to these mirror status requests should resolve to the appropriate Caché server and namespace using the same hierarchical mechanism as that used for requesting real CSP pages.

Example: to test the mirror status of the configuration serving applications in the /csp/user/ path:

/csp/user/mirror_status.cxw

Note: A CSP license is not consumed by invoking a mirror status check.

Depending on whether or not the target instance is the active primary member, the Gateway will return one of the following CSP responses:

** Success (is the primary member)
===============================
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 7

SUCCESS

** Failure (is not the primary member)
===============================
HTTP/1.1 503 Service Unavailable
Content-Type: text/plain
Connection: close
Content-Length: 6

FAILED

** Failure (the Caché server does not support the mirror_status.cxw request)
===============================
HTTP/1.1 500 Internal Server Error
Content-Type: text/plain
Connection: close
Content-Length: 6

FAILED

The following figures illustrate the various scenarios of using the polling method.

Figure-8: Polling to all mirror members

As Figure-8 above shows, all mirror members are operational, and only the primary mirror member is returning "SUCCESS" to the load balancer, so network traffic will be directed to only this mirror member.

Figure-9: Failover to Mirror Member B using polling

The following diagram demonstrates the promotion of DR asynchronous mirror member(s) into the load-balanced pool; this typically assumes the same load-balancing network appliance is servicing all mirror members (geographically split scenarios are covered later in this article). As per standard recommended DR procedure, the promotion of the DR member involves a human decision due to the possibility of data loss from asynchronous replication. Once that action is taken, however, no administrative action is required on the ELB. It automatically discovers the new primary.

Figure-10: Failover and Promotion of DR Asynchronous Mirror Member using polling

Backup and Restore

There are multiple options available for backup operations. The following three options are viable for your AWS deployment with InterSystems products. The first two options incorporate a snapshot-type procedure, which involves suspending database writes to disk prior to creation of the snapshot and then resuming updates once the snapshot is successful.

The following high-level steps are taken to create a clean backup using either of the snapshot methods:

1. Pause writes to the database via the database Freeze API call (a minimal sketch of these calls appears below).
2. Create snapshots of the operating system + data disks.
3. Resume Caché writes via the database Thaw API call.
4. The backup facility archives to the backup location.

Additional steps such as integrity checks can be added at a periodic interval to ensure a clean and consistent backup. The decision points on which option to use depend on the operational requirements and policies of your organization. InterSystems is available to discuss the various options in more detail.

EBS Snapshots

EBS snapshots are very fast and efficient ways to create a point-in-time snapshot onto highly available and lower cost Amazon S3 storage.
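Here is a minimal, hedged sketch of the Freeze/Thaw calls referenced in steps 1 and 3 above, as they might be wrapped by a snapshot script (error handling trimmed to the essentials):

// suspend database writes before the EBS/LVM snapshot is taken
set sc = ##class(Backup.General).ExternalFreeze()
if 'sc { write "External freeze failed",! quit }
// ... the external tooling creates the snapshot here ...
// resume writes as soon as the snapshot has been created
set sc = ##class(Backup.General).ExternalThaw()
if 'sc { write "External thaw failed - check the system log",! }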
EBS snapshots along with InterSystems External Freeze and Thaw API capabilities allow for true 24x7 operational resiliency and assurance of clean, regular backups. There are numerous options for automating the process using either AWS-provided services such as Amazon CloudWatch Events or third-party solutions available in the marketplace such as Cloud Ranger or N2W Software Cloud Protection Manager, to name a few. Additionally, you can programmatically create your own customized backup solution via direct AWS API calls. Details on how to leverage the APIs are available here and here. Note: InterSystems does not endorse or explicitly validate any of these third-party products. Testing and validation is up to the customer.

Logical Volume Manager Snapshots

Alternatively, many of the third-party backup tools available on the market can be used by deploying individual backup agents within the VM itself and leveraging file-level backups in conjunction with Linux Logical Volume Manager (LVM) snapshots or Windows Volume Shadow Copy Service (VSS). One of the major benefits of this model is the ability to perform file-level restores of Linux and Windows based instances. A couple of points to note with this solution: since AWS and most other IaaS cloud providers do not provide tape media, all backup repositories are disk-based for short-term archiving, with the ability to leverage low-cost Amazon S3 storage and eventually Amazon Glacier for long-term retention (LTR). If using this method, it is highly recommended to use a backup product that supports de-duplication technologies to make the most efficient use of disk-based backup repositories. Some examples of these backup products with cloud support include, but are not limited to, Commvault, EMC Networker, HPE Data Protector, and Veritas Netbackup. Note: InterSystems does not endorse or explicitly validate any of these third-party products. Testing and validation is up to the customer.

Caché Online Backup

For small deployments the built-in Caché Online Backup facility is also a viable option. The InterSystems database online backup utility backs up data in database files by capturing all blocks in the databases and then writing the output to a sequential file. This proprietary backup mechanism is designed to cause no downtime for users of the production system. In AWS, after the online backup has finished, the backup output file and all other files in use by the system must be copied to an EC2 instance acting as a file share (CIFS/NFS). This process needs to be scripted and executed within the virtual machine. Online backup is the entry-level approach for smaller sites wishing to implement a low-cost backup solution. However, as databases increase in size, external backups with snapshot technology are recommended as a best practice, with advantages including the backup of external files, faster restore times, and an enterprise-wide view of data and management tools.

Disaster Recovery

When deploying a Caché based application on AWS, it is recommended that DR resources, including network, servers, and storage, be in a different AWS region or, at a minimum, in separate availability zones. The amount of capacity required in the designated DR AWS region depends on your organizational needs. In most cases 100% of the production capacity is required when operating in DR mode; however, lesser capacity can be provisioned until more is needed, following an elastic model.
Lesser capacity can take the form of fewer web and application servers, and potentially even a smaller EC2 instance type for the database server; upon promotion, the EBS volumes are attached to a larger EC2 instance type. Asynchronous database mirroring is used to continuously replicate to the DR AWS region's EC2 instances. Mirroring uses database transaction journals to replicate updates over a TCP/IP network in a way that has minimal performance impact on the primary system. Configuring journal file compression and encryption is highly recommended for these DR asynchronous mirror members.

All external clients on the public Internet who wish to access the application will be routed through Amazon Route53 as an added DNS service. Amazon Route53 is used as a switch to direct traffic to the currently active data center. Amazon Route53 performs three main functions: Domain registration – Amazon Route53 lets you register domain names such as example.com. Domain Name System (DNS) service – Amazon Route53 translates friendly domain names like www.example.com into IP addresses like 192.0.2.1. Amazon Route53 responds to DNS queries using a global network of authoritative DNS servers, which reduces latency. Health checking – Amazon Route53 sends automated requests over the Internet to your application to verify that it's reachable, available, and functional. Details of these functions can be found here. For the purposes of this document, DNS failover and Route53 health checking will be discussed. Details of health check monitoring and DNS failover can be found here and here.

Route53 works by making regular requests to each endpoint and then verifying the response. If an endpoint fails to provide a valid response, it is no longer included in DNS responses, which instead return an alternative, available endpoint. In this way, user traffic is directed away from failing endpoints and toward endpoints that are available. Using the above methods, traffic is only allowed to a specific region and a specific mirror member. This is controlled by the endpoint definition, which is the mirror_status.cxw page discussed previously in this article, presented by the InterSystems CSP Gateway. Only the primary mirror member will ever report "SUCCESS" as an HTTP 200 from the health check. The following diagram demonstrates at a high level the failover routing policy. Details of this method and others can be found here.

Figure-11: Amazon Route53 Failover Routing Policy

At any given time, only one of the regions will report online based on the endpoint monitoring. This ensures that traffic only flows to one region at a given time. There are no added steps needed for failover between the regions, since the endpoint monitoring will detect that the application in the designated primary AWS region is down and that the application is now live in the secondary AWS region. This is because the DR asynchronous mirror member has been manually promoted to primary, which then allows the CSP Gateway to report HTTP 200 to the endpoint monitoring. There are many alternatives to the solution described above, and it can be customized based on your organization's operational requirements and service level agreements.

Monitoring

Amazon CloudWatch is available to provide monitoring services for all your AWS cloud resources and your applications. Amazon CloudWatch can be used to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in AWS resources.
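For example, a minimal sketch of defining a CPU utilization alarm on the primary database EC2 instance with the AWS CLI; the instance ID, SNS topic ARN, and threshold below are placeholders:

```bash
# Alarm when average CPU on the database instance exceeds 80%
# for two consecutive 5-minute periods.
aws cloudwatch put-metric-alarm \
  --alarm-name cache-primary-cpu-high \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```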
Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, as well as custom metrics generated by your applications and services and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. Details can be found here.

Automated Provisioning

There are numerous tools available on the market today, including Terraform, CloudForms, OpenStack, and Amazon's own CloudFormation. Coupling these with other tools such as Chef, Puppet, and Ansible can provide complete Infrastructure-as-Code, supporting DevOps or simply bootstrapping your application in a completely automated fashion. Details of Amazon CloudFormation can be found here.

Network Connectivity

Depending on your application's connectivity requirements, there are multiple connectivity models available using either the Internet, a VPN, or a dedicated link using Amazon Direct Connect. The method to choose will depend on the application and user needs. The bandwidth available with each of the three methods varies, and it is best to check with your AWS representative or the Amazon Management Console for confirmation of the connectivity options available in a given region.

Security

Care needs to be taken when deciding to deploy an application with any public IaaS cloud provider. Your organization's standard security policies, or new ones developed specifically for cloud, should be followed to maintain your organization's security compliance. You will also have to understand data sovereignty, which is relevant when an organization's data is stored outside of its country and is subject to the laws of the country in which the data resides. Cloud deployments carry the added risk of data now residing outside client data centers and physical security control. The use of InterSystems database and journal encryption for data at rest (databases and journals) and data in flight (network communications), with AES and SSL/TLS encryption respectively, is highly recommended. As with all encryption key management, proper procedures need to be documented and followed per your organization's policies to ensure data safety and prevent unwanted data access or security breaches. Amazon provides extensive documentation and examples to provide for a highly secure operating environment for your Caché based applications. Be sure to review the Identity and Access Management (IAM) discussion topics found here.

Architecture Diagram Examples

The diagram below illustrates a typical Caché installation providing high availability in the form of database mirroring (both synchronous failover and DR asynchronous), application servers using ECP, and multiple load-balanced web servers.

TrakCare Example

The following diagram illustrates a typical TrakCare deployment with multiple load-balanced web servers, two print servers as ECP clients, and a database mirror configuration. The Virtual IP address is only used for connectivity not associated with ECP or the CSP Gateway. The ECP clients and CSP Gateway are mirror-aware and do not require a VIP. If you are using Direct Connect, there are several options, including multiple circuits and multi-region access, that can be enabled for disaster recovery scenarios. It is important to work with the telecommunications provider(s) to understand the high availability and disaster recovery scenarios they support.
The sample reference architecture diagram below includes high availability in the active or primary region and disaster recovery to another AWS region if the primary AWS region is unavailable. Also within this example, the database mirrors contain the TrakCare DB, TrakCare Analytics, and Integration namespaces, all within a single mirror set.

Figure-12: TrakCare AWS Reference Architecture Diagram – Physical Architecture

In addition, the following diagram is provided showing a more logical view of the architecture, with the associated high-level software products installed and their functional purpose.

Figure-13: TrakCare AWS Reference Architecture Diagram – Logical Architecture

HealthShare Example

The following diagram illustrates a typical HealthShare deployment with multiple load-balanced web servers and multiple HealthShare products, including Information Exchange, Patient Index, Personal Community, Health Insight, and Health Connect. Each of those products includes a database mirror pair for high availability across multiple availability zones. The Virtual IP address is only used for connectivity not associated with ECP or the CSP Gateway. The CSP Gateways used for web service communications between the HealthShare products are mirror-aware and do not require a VIP.

The sample reference architecture diagram below includes high availability in the active or primary region and disaster recovery to another AWS region if the primary region is unavailable.

Figure-14: HealthShare AWS Reference Architecture Diagram – Physical Architecture

In addition, the following diagram is provided showing a more logical view of the architecture, with the associated high-level software products installed, the connectivity requirements and methods, and their respective functional purposes.

Figure-15: HealthShare AWS Reference Architecture Diagram – Logical Architecture
Announcement
Olga Zavrazhnova · Dec 24, 2019

New Global Masters Reward: InterSystems Certification Voucher

Hi Community, Great news for all Global Masters lovers! Now you can redeem a Certification Voucher for 10,000 points! The voucher gives you one attempt at any exam available in the InterSystems exam system. We have a limited supply of 10 vouchers, so don't hesitate to get yours! Passing the exam allows you to claim the electronic badge that can be embedded in social media accounts to show the world that your InterSystems technology skills are first-rate. ➡️ Learn more about the InterSystems Certification Program here. NOTE: This reward is available to Global Masters members of Advocate level and above; InterSystems employees are not eligible to redeem this reward. Vouchers are non-transferable. Good for one attempt at any exam in the InterSystems exam system. Can be used for a future exam. Valid for one year from the redemption date (the Award Date). Redeem the prize and prove your mastery of our technology! 👍🏼 Already 9 available ;) How to get this voucher? @Kejia.Lin This appears to be a reward on Global Masters. You can also find a quick link in the light blue bar at the top of this page. Once you join Global Masters you can get points for interacting with the community and ultimately use these points to claim rewards, such as the one mentioned here.
Announcement
Anastasia Dyubaylo · Feb 22, 2020

New Video: Code in Any Language with InterSystems IRIS

Hi Developers, New video, recorded by @Benjamin.DeBoe, is available on InterSystems Developers YouTube: ⏯ Code in Any Language with InterSystems IRIS InterSystems Product Manager @Benjamin.DeBoe talks about combining your preferred tools and languages when building your application on InterSystems IRIS Data Platform. Try InterSystems IRIS: https://www.intersystems.com/try Enjoy watching the video! 👍🏼
Announcement
Derek Robinson · Feb 21, 2020

ICYMI: A Discussion with Jenny Ames about InterSystems IRIS

I wanted to share each of the first three episodes of our new Data Points podcast with the community here — we previously posted announcements for episodes on IntegratedML and Kubernetes — so here is our episode on InterSystems IRIS as a whole! It was great talking with @jennifer.ames about what sets IRIS apart, some of the best use cases she's seen in her years as a trainer in the field and then as an online content developer, and more. Check it out, and make sure to subscribe at the link above — Episode 4 will be released next week!
Article
Mark Bolinsky · Mar 3, 2020

InterSystems IRIS and Intel Optane DC Persistent Memory

InterSystems and Intel recently conducted a series of benchmarks combining InterSystems IRIS with 2nd Generation Intel® Xeon® Scalable Processors, also known as "Cascade Lake", and Intel® Optane™ DC Persistent Memory (DCPMM). The goals of these benchmarks are to demonstrate the performance and scalability capabilities of InterSystems IRIS with Intel's latest server technologies in various workload settings and server configurations. Along with various benchmark results, three different use cases of Intel DCPMM with InterSystems IRIS are provided in this report.

Overview

Two separate types of workloads are used to demonstrate performance and scaling: a read-intensive workload and a write-intensive workload. The reason for demonstrating these separately is to show the impact of Intel DCPMM on two different use cases: increasing database cache efficiency in a read-intensive workload, and increasing write throughput for transaction journals in a write-intensive workload. In both of these use-case scenarios, significant throughput, scalability, and performance gains for InterSystems IRIS are achieved.

The read-intensive workload leveraged a 4-socket server and massive long-running analytical queries across a dataset of approximately 1.2TB of total data. With DCPMM in "Memory Mode", benchmark comparisons yielded a significant reduction in elapsed runtime: approximately six times faster than a previous-generation Intel E7v4 series processor configuration with less memory. When comparing like-for-like memory sizes between the E7v4 and the latest server with DCPMM, there was a 20% improvement. This was due to both the increased InterSystems IRIS database cache capability afforded by DCPMM and the latest Intel processor architecture.

The write-intensive workload leverages a 2-socket server and the InterSystems HL7 messaging benchmark, which consists of numerous inbound interfaces; each inbound message undergoes several transformations and produces four outbound messages. One of the critical components in sustaining high throughput is the message durability guarantee of IRIS for Health, and transaction journal write performance is crucial in that operation. With DCPMM in "App Direct" mode presenting a DAX XFS file system for transaction journals, this benchmark demonstrated a 60% increase in message throughput.

To summarize the test results and configurations: DCPMM offers significant throughput gains when used in the proper InterSystems IRIS setting and workload. The high-level benefits are increasing database cache efficiency and reducing disk IO block reads in read-intensive workloads, and increasing write throughput for journals in write-intensive workloads. In addition, Cascade Lake based servers with DCPMM provide an excellent upgrade path for those looking to refresh older hardware and improve performance and scaling. InterSystems technology architects are available to help with those discussions and provide advice on suggested configurations for your existing workloads.

READ-INTENSIVE WORKLOAD BENCHMARK

For the read-intensive workload, we used an analytical query benchmark comparing an E7v4 (Broadwell) with 512GiB and 2TiB database cache sizes against the latest 2nd Generation Intel® Xeon® Scalable Processors (Cascade Lake) with 1TB and 2TB database cache sizes using Intel® Optane™ DC Persistent Memory (DCPMM). We ran several workloads with varying global buffer sizes to show the impact and performance gain of larger caching.
For each configuration iteration we ran a COLD and a WARM run. COLD means the database cache was not pre-populated with any data. WARM means the database cache had already been active and populated with data (or at least as much as it could hold) to reduce physical reads from disk.

Hardware Configuration

We compared an older 4-socket E7v4 (aka Broadwell) host to a 4-socket Cascade Lake server with DCPMM. This comparison was chosen because it would demonstrate performance gains for existing customers looking for a hardware refresh along with using InterSystems IRIS. In all tests, the same version of InterSystems IRIS was used so that any software optimizations between versions were not a factor. All servers have the same storage on the same storage array, so that disk performance wasn't a factor in the comparison. The working set is a 1.2TB database. The hardware configurations are shown in Figure-1 with the comparison between each of the 4-socket configurations:

Figure-1: Hardware configurations
Server #1 Configuration: Processors: 4 x E7-8890 v4 @ 2.5GHz; Memory: 2TiB DRAM; Storage: 16Gbps FC all-flash SAN @ 2TiB
Server #2 Configuration: Processors: 4 x Platinum 8280L @ 2.6GHz; Memory: 3TiB DCPMM + 768GiB DRAM; Storage: 16Gbps FC all-flash SAN @ TiB; DCPMM: Memory Mode only

Benchmark Results and Conclusions

There is a significant reduction in elapsed runtime (approximately 6x) when comparing the 512GiB buffer pool to either the 1TiB or 2TiB DCPMM buffer pool sizes. In addition, it was observed that when comparing the 2TiB E7v4 DRAM and 2TiB Cascade Lake DCPMM configurations there was a ~20% improvement as well. This 20% gain is believed to be mostly attributed to the new processor architecture and more processor cores, given that the buffer pool sizes are the same. However, this is still significant in that the 4-socket Cascade Lake server tested had only 24 x 128GiB DCPMM modules installed, and it can scale to 12TiB of DCPMM, which is about 4x the memory that E7v4 can support in the same 4-socket server footprint. The following graphs in Figure-2 depict the comparison results. In both graphs, the y-axis is elapsed time (a lower number is better) comparing the results from the various configurations.

Figure-2: Elapsed time comparison of various configurations

WRITE-INTENSIVE WORKLOAD BENCHMARK

The workload in this benchmark was our HL7v2 messaging workload using all T4-type workloads. The T4 workload used a routing engine to route separately modified messages to each of four outbound interfaces. On average, four segments of the inbound message were modified in each transformation (1-to-4 with four transforms). For each inbound message, four data transformations were executed, four messages were sent outbound, and five HL7 message objects were created in the database. Each system is configured with 128 inbound Business Services and 4800 messages sent to each inbound interface, for a total of 614,400 inbound messages and 2,457,600 outbound messages.

The measurement of throughput in this benchmark workload is "messages per second". We are also interested in (and recorded) the journal writes during the benchmark runs, because transaction journal throughput and latency are critical components in sustaining high throughput. This directly influences the message durability guarantees of IRIS for Health, and transaction journal write performance is crucial in that operation. When journal throughput suffers, application processes will block on journal buffer availability.
Hardware Configuration

For the write-intensive workload, we decided to use a 2-socket server. This is a smaller configuration than our previous 4-socket configuration in that it had only 192GiB of DRAM and 1.5TiB of DCPMM. We compared the workload of Cascade Lake with DCPMM to that of the previous 1st Generation Intel® Xeon® Scalable Processors (Skylake) server. Both servers have locally attached 750GiB Intel® Optane™ SSD DC P4800X drives. The hardware configurations are shown in Figure-3 with the comparison between each of the 2-socket configurations:

Figure-3: Write-intensive workload hardware configurations
Server #1 Configuration: Processors: 2 x Gold 6152 @ 2.1GHz; Memory: 192GiB DRAM; Storage: 2 x 750GiB P4800X Optane SSDs
Server #2 Configuration: Processors: 2 x Gold 6252 @ 2.1GHz; Memory: 1.5TiB DCPMM + 192GiB DRAM; Storage: 2 x 750GiB P4800X Optane SSDs; DCPMM: Memory Mode & App Direct Modes

Benchmark Results and Conclusions

Test-1: This test ran the T4 workload described above on the Skylake server detailed as the Server #1 Configuration in Figure-3. The Skylake server provided a sustained throughput of ~3355 inbound messages per second with a journal file write rate of 2010 journal writes/second.

Test-2: This test ran the same workload on the Cascade Lake server detailed as the Server #2 Configuration in Figure-3, specifically with DCPMM in Memory Mode. This demonstrated a significant improvement in sustained throughput: ~4684 inbound messages per second with a journal file write rate of 2400 journal writes/second. This is a 39% increase compared to Test-1.

Test-3: This test ran the same workload on the Cascade Lake server detailed as the Server #2 Configuration in Figure-3, this time using DCPMM in App Direct Mode but not actually configuring DCPMM to do anything. The purpose of this was to gauge what the performance and throughput would be when comparing Cascade Lake with DRAM only to Cascade Lake with DCPMM + DRAM. The results were not surprising in that there was a gain in throughput without DCPMM being used, albeit a relatively small one. This demonstrated an improvement in sustained throughput to ~4845 inbound messages per second with a journal file write rate of 2540 journal writes/second. This is expected behavior because DCPMM has a higher latency compared to DRAM, and with the massive influx of updates there is a penalty to performance. Another way of looking at it: there is a <5% reduction in write ingestion workload when using DCPMM in Memory Mode on the exact same server. Additionally, when comparing Skylake to Cascade Lake (DRAM only), this is a 44% increase over the Skylake server in Test-1.

Test-4: This test ran the same workload on the Cascade Lake server detailed as the Server #2 Configuration in Figure-3, this time using DCPMM in App Direct Mode, presented as a DAX XFS file system mounted for the journals. This yielded even higher throughput of ~5399 inbound messages per second with a journal file write rate of 2630 journal writes/second. This demonstrated that DCPMM in App Direct mode is the better use of DCPMM for this type of workload. Comparing these results to the initial Skylake configuration, there was a 60% increase in throughput over the Skylake server in Test-1.

InterSystems IRIS Recommended Intel DCPMM Use Cases

There are several use cases and configurations for which InterSystems IRIS will benefit from using Intel® Optane™ DC Persistent Memory.
Memory Mode

This is ideal for massive database caches for either a single InterSystems IRIS deployment or a large InterSystems IRIS sharded cluster where you want much more (or all!) of your database cached in memory. You will want to adhere to a maximum 8:1 ratio of DCPMM to DRAM, as this is important for the "hot memory" to stay in DRAM, which acts as an L4-like cache layer. This is especially important for some shared internal IRIS memory structures such as seize resources and other memory cache lines.

App Direct Mode (DAX XFS) – Journal Disk Device

This is ideal for using DCPMM as a disk device for transaction journal files. DCPMM appears to the Linux operating system as a mounted XFS file system. The benefit of using DAX XFS is that it alleviates the PCIe bus overhead and allows direct memory access from the file system (a brief setup sketch appears after the conclusion below). As demonstrated in the HL7v2 benchmark results, the write latency benefits significantly increased the HL7 messaging throughput. Additionally, the storage is persistent and durable across reboots and power cycles, just like a traditional disk device.

App Direct Mode (DAX XFS) – Journal + Write Image Journal (WIJ) Disk Device

This use case extends App Direct mode to both the transaction journals and the write image journal (WIJ). Both of these files are write-intensive and will certainly benefit from ultra-low latency and persistence.

Dual Mode: Memory + App Direct Modes

When using DCPMM in dual mode, the benefits of DCPMM are extended to allow for both massive database caches and ultra-low latency for the transaction journal and/or write image journal devices. In this use case, DCPMM appears to the operating system both as a mounted XFS file system and as RAM. This is achieved by allocating a percentage of DCPMM as DAX XFS while the remainder is allocated in Memory Mode. As mentioned previously, the DRAM installed will operate as an L4-like cache to the processors.

"Quasi" Dual Mode

To extend the use-case models a bit further, there is a "quasi" dual mode for concurrent transactional and analytic workloads (also known as HTAP workloads), where there is a high rate of inbound transactions/updates for OLTP-type work and also an analytical or massive querying need. In this case, each InterSystems IRIS node type within an InterSystems IRIS sharded cluster operates with a different DCPMM mode. In this example, InterSystems IRIS compute nodes, which handle the massive querying/analytics workload, run with DCPMM in Memory Mode so that they benefit from a massive database cache in the global buffers, while the data nodes run either in Dual Mode or in App Direct Mode with DAX XFS for the transactional workloads.

Conclusion

There are numerous options available for InterSystems IRIS when it comes to infrastructure choices. The application, workload profile, and business needs drive the infrastructure requirements, and those technology and infrastructure choices influence the success, adoption, and importance of your applications to your business. InterSystems IRIS with 2nd Generation Intel® Xeon® Scalable Processors and Intel® Optane™ DC Persistent Memory provides groundbreaking levels of scaling and throughput capabilities for the InterSystems IRIS based applications that matter to your business.
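As a practical footnote to the App Direct use cases above, here is a minimal sketch of presenting a DCPMM region to Linux as a DAX-mounted XFS file system for journal files. The region, device, and mount point names are placeholders, and the exact ndctl and mkfs options depend on your kernel and distribution:

```bash
# Create an fsdax namespace on the first persistent-memory region (example name).
ndctl create-namespace --mode=fsdax --region=region0

# Build an XFS file system on the resulting pmem block device.
# (On some kernels, DAX requires reflink to be disabled.)
mkfs.xfs -f -m reflink=0 /dev/pmem0

# Mount it with the dax option and point the journal directory at it.
mkdir -p /iris/journal
mount -o dax /dev/pmem0 /iris/journal
```

The instance's primary and alternate journal directories (and optionally the WIJ) would then be configured to use this mount.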
Benefits of InterSystems IRIS and Intel DCPMM capable servers include:

Increased memory capacity, so that multi-terabyte databases can completely reside in the InterSystems IRIS or InterSystems IRIS for Health database cache with DCPMM in Memory Mode. In comparison to reading from storage (disks), this can increase query response performance by up to six times with no code changes, due to InterSystems IRIS proven memory caching capabilities that take advantage of system memory as it increases in size.

Improved performance of high-rate data interoperability throughput applications based on InterSystems IRIS and InterSystems IRIS for Health, such as HL7 transformations, by as much as 60% in increased throughput using the same processors and only changing the transaction journal disk from the fastest available NVMe drives to DCPMM in App Direct mode as a DAX XFS file system. Exploiting both the memory-speed data transfers and the data persistence is a significant benefit to InterSystems IRIS and InterSystems IRIS for Health.

Augmented compute resources where needed for a given workload, whether read-intensive, write-intensive, or both, without over-allocating entire servers just for the sake of one resource component, with DCPMM in Mixed Mode.

InterSystems Technology Architects are available to discuss hardware architectures ideal for your InterSystems IRIS based application.

Great article, Mark! I have a few notes and questions: 1. Here's a brief comparison of different storage categories: Intel® Optane™ DC Persistent Memory has read throughput of 6.8 GB/s and write throughput of 1.85 GB/s (source). Intel® Optane™ SSD has read throughput of 2.5 GB/s and write throughput of 2.2 GB/s (source). Modern DDR4 RAM has read throughput of ~25 GB/s. While I certainly see the appeal of DC Persistent Memory if we need more memory than RAM can provide, is it useful on a smaller scale? Say I have a few hundred gigabytes of indices I need to keep in the global buffer and be able to read-access fast. Would plain DDR4 RAM be better? Costs seem comparable and a read throughput of 25 GB/s seems considerably better. 2. What RAM was used in the Server #1 configuration? 3. Why are there different CPUs between servers? 4. The workload link does not work.

6252 supports DCPMM, while 6152 does not. 6252 can be used for both DCPMM and DRAM configurations.

Hi Eduard, Thanks for your questions. 1- On a small scale I would stay with traditional DRAM. DCPMM becomes beneficial at >1TB of capacity. 2- That was DDR4 DRAM memory in both the read-intensive and write-intensive Server #1 configurations. In the read-intensive server configuration it was specifically DDR-2400, and in the write-intensive server configuration it was DDR-2600. 3- There are different CPUs in the read-intensive workload configurations because this testing is meant to demonstrate upgrade paths from older servers to new technologies and the scalability increases offered in that scenario. The write-intensive workload only used a different server in the first test to compare the previous generation to the current generation with DCPMM. The three following results then demonstrated the differences in performance within the same server, just with different DCPMM configurations. 4- Thanks. I will see what happened to the link and correct it.

Correct. The Gold 6252 series (aka "Cascade Lake") supports both DCPMM and DRAM. However, keep in mind that when using DCPMM you need to have DRAM and should adhere to no more than an 8:1 ratio of DCPMM:DRAM.
Article
Renan Lourenco · Mar 9, 2020

InterSystems IRIS for Health ENSDEMO (supports arm64)

# InterSystems IRIS for Health ENSDEMO

Yet another basic setup of ENSDEMO content into InterSystems IRIS for Health.

**Make sure you have Docker up and running before starting.**

## Setup

Clone the repository to your desired directory

```bash
git clone https://github.com/OneLastTry/irishealth-ensdemo.git
```

Once the repository is cloned, execute:

**Always make sure you are inside the main directory to execute docker-compose commands.**

```bash
docker-compose build
```

## Run your Container

After building the image you can simply execute the command below and you will be up and running 🚀:

*-d will run the container detached of your command line session*

```bash
docker-compose up -d
```

You can now access the manager portal through http://localhost:9092/csp/sys/%25CSP.Portal.Home.zen

- **Username:** SuperUser
- **Password:** SYS
- **SuperServer port:** 9091
- **Web port:** 9092
- **Namespace:** ENSDEMO

![ensdemo](https://openexchange.intersystems.com/mp/img/packages/468/screenshots/zhnwycjrflt4q7gttwsidcntxk.png)

To start a terminal session execute:

```bash
docker exec -it ensdemo iris session iris
```

To start a bash session execute:

```bash
docker exec -it ensdemo /bin/bash
```

Using the [InterSystems ObjectScript](https://marketplace.visualstudio.com/items?itemName=daimor.vscode-objectscript) Visual Studio Code extension, you can access the code straight from _vscode_

![vscode](https://openexchange.intersystems.com/mp/img/packages/468/screenshots/bgirfnblz2zym4zi2q92lnxkmji.png)

## Stop your Container

```bash
docker-compose stop
```

## Support for ZPM

```bash
zpm "install irishealth-ensdemo"
```

Nice, Renan! And ZPM it, please, too! Interesting. Is it available for InterSystems IRIS? Will do soon! Haven't tested, but I would guess yes; I will run some tests changing the version in the Dockerfile and post the outcome here. Hi, here is a similar article about ENSDEMO for IRIS and IRIS for Health: https://community.intersystems.com/post/install-ensdemo-iris Works for IRIS4Health. Also available as a ZPM module now:

USER>zpm "install irishealth-ensdemo"
[irishealth-ensdemo] Reload START (/usr/irissys/mgr/.modules/USER/irishealth-ensdemo/1.0.0/)
[irishealth-ensdemo] Reload SUCCESS
[irishealth-ensdemo] Module object refreshed.
[irishealth-ensdemo] Validate START
[irishealth-ensdemo] Validate SUCCESS
[irishealth-ensdemo] Compile START
[irishealth-ensdemo] Compile SUCCESS
[irishealth-ensdemo] Activate START
[irishealth-ensdemo] Configure START
[irishealth-ensdemo] Configure SUCCESS
[irishealth-ensdemo] MakeDeployed START
[irishealth-ensdemo] MakeDeployed SUCCESS
[irishealth-ensdemo] Activate SUCCESS
USER>

Here is the set of productions available: Is there any documentation on what the ens-demo module can do? Unfortunately, not as much as I'd like there to be. Even when ENSDEMO was part of Ensemble, information was a bit scattered all over. If you access the Ensemble documentation and search for "Demo." you can see some of the references I mentioned. (Since IRIS does not have ENSDEMO by default, the documentation has also been removed.) Thanks, @Renan.Lourenco ! Perhaps we could wrap this part of the documentation as a module too. Could be a nice extension to the app. I like your idea @Evgeny.Shvarov !! How do you envision that? A simple index with easy access like: DICOM: Link1 Link2 HL7: Link1 Link2 Or something more elaborate? Also, would that be a separate module altogether or part of the existing one? I see that the documentation pages are IRIS CSP classes. So I guess it could work if installed in IRIS.
I guess there is also a set of static files (FILECOPY could help). IMHO, the reasonable approach is to have a separate repo, ensdemo-doc, and a separate module, which would be a dependent module of irishealth-ensdemo. That way people could contribute to the documentation independently and update it independently too. I had my bit of fun with documentation before; it is not as straightforward as it appears to be. That's why I thought of having a separate index. I guess you know more about it. I'd also ping @Dmitry.Maslennikov, as he tried to make a ZPM package for the whole documentation.