Article
Eduard Lebedyuk · Mar 1, 2018
Everybody has a testing environment. Some people are lucky enough to have a totally separate environment to run production in. -- Unknown

In this series of articles, I'd like to present and discuss several possible approaches to software development with InterSystems technologies and GitLab. I will cover the following topics:

Git 101
Git flow (development process)
GitLab installation
GitLab Workflow
GitLab CI/CD
CI/CD with containers

This first part deals with the cornerstone of modern software development: the Git version control system and various Git flows.

Git 101

While the main topic we're going to discuss is software development in general and how GitLab can enable us in that endeavor, we first need to look at several underlying high-level concepts in Git's design that are important for understanding everything that follows.

Git is a version control system based on these ideas (there are many more; these are the most important ones):

Non-linear development means that while our software is released sequentially from version 1 to 2 to 3, under the table the move from version 1 to 2 happens in parallel: several developers work on a number of features and bug fixes simultaneously.

Distributed development means that a developer is independent of any central server or other developers and can easily work in their own environment.

Merging: the previous two ideas bring us to a situation where many different versions of the truth exist simultaneously, and we need to unite them back into one complete state.

Now, I'm not saying that Git invented these concepts. No. Rather, Git made them easy and popular, and that, coupled with several related innovations such as infrastructure as code and containerization, changed software development.

Core Git terms

Repository is a project that stores data and meta-information about the data. "Physically", a repository is a directory on disk. A repository stores files and directories, along with a complete history of changes for each file. A repository can be stored locally, on your own computer, or remotely, on a remote server, but there's no particular difference between local and remote repositories from Git's point of view.

Commit is a fixed state of the repository. Obviously, if each commit stored the full state of the repository, our repository would grow very big very fast. That's why a commit stores a diff, which is the difference between the current commit and its parent commit. Different commits can have a different number of parents:

0 - the first commit in the repository doesn't have a parent.
1 - business as usual: our commit changed something in the repository as it was at the parent commit.
2 - when we have two different states of the repository, we can unite them into one new state, and that new commit would have 2 parents.
>2 - can happen when we unite more than 2 different states of the repository into one new state. It isn't particularly relevant to our discussion, but it does exist.

For a parent, each commit that diffs from it is called a child commit. Each parent commit can have any number of child commits.

Branch is a reference (or pointer) to a commit. In the first illustration, you can see a repository with two commits (grey circles), the second one being the head of the master branch. After we add more commits, the repository grows into a longer chain of commits, with master pointing at the latest one. That's the simplest case: one developer works on one change at a time. (A minimal command-line sketch of these basics follows below.)
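For illustration, here's roughly what these basics look like on the command line. This is only a sketch: the repository and file names are made up, the default branch is assumed to be called master as in this article, and exact output varies by Git version.

```bash
# create a new repository (a directory with a .git subdirectory)
git init demo-repo && cd demo-repo

# first commit: has no parent
echo "hello" > readme.txt
git add readme.txt
git commit -m "Initial commit"

# second commit: one parent, stores the diff against it
echo "more text" >> readme.txt
git commit -am "Update readme"

# a branch is just a pointer to a commit
git branch develop          # create a branch at the current commit
git checkout develop        # start working on it
echo "feature work" >> readme.txt
git commit -am "Work on a feature"

# merging unites two states of the repository into a new commit
git checkout master
git merge --no-ff develop   # --no-ff forces a merge commit with two parents
git log --oneline --graph   # visualize the resulting commit tree
```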
However, usually there are many developers working simultaneously on different features, and we need a commit tree to show what's going on in our repository.

Commit tree

Let's begin from the same starting point: a repository with two commits. Now two developers are working at the same time, and to avoid interfering with each other they work in separate branches. After a while they need to unite the changes they made, and for that they create a merge request (also called a pull request), which is exactly what it sounds like: a request to unite two different states of the repository (in our case, to merge the develop branch into the master branch) into one new state. After it's appropriately reviewed and approved, the branches are merged and development continues.

Git 101 - Summary

Main concepts:

Git is a non-linear, distributed version control system.
Repository stores data and meta-information about the data.
Commit is a fixed state of the repository.
Branch is a reference to a commit.
Merge request (also called pull request) is a request to unite two different states of the repository into one new state.

If you want to read more about Git, there are books available.

Git flows

Now that the reader is familiar with basic Git terms and concepts, let's talk about how the development part of the software life cycle can be managed using Git. There are a number of practices (called flows) which describe the development process using Git, but we're going to talk about two of them:

GitHub flow
GitLab flow

GitHub flow

GitHub flow is as easy as it gets. Here it is:

Create a branch from the repository.
Commit your changes to your new branch.
Send a pull request from your branch with your proposed changes to kick off a discussion.
Commit more changes on your branch as needed. Your pull request will update automatically.
Merge the pull request once the branch is ready to be merged.

And there are several rules we need to follow:

The master branch is always deployable (and working!)
No development happens directly in the master branch.
Development happens in feature branches.
master == production* environment**
You need to deploy to production as often as possible.

* Not to be confused with "Ensemble Productions"; here "production" means LIVE.
** An environment is a configured place where your code runs - it could be a server, a VM, or even a container.

You can read more about GitHub flow here. There's also an illustrated guide.

GitHub flow is good for small projects and a good way to try things out if you're starting with Git flows. GitHub itself uses it, though, so it can be viable on big projects too. (A minimal sketch of this cycle in Git commands is shown below.)
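As a rough sketch, one GitHub-flow iteration looks something like this from the command line. The remote name, branch name, and the fact that the merge/pull request itself is opened in the GitHub or GitLab web UI are assumptions for illustration only.

```bash
# start a feature branch from an up-to-date master
git checkout master
git pull origin master
git checkout -b feature/price-calculation

# work, commit, and publish the branch
git add .
git commit -m "Add price calculation service"
git push -u origin feature/price-calculation

# open a merge/pull request targeting master in the web UI,
# then push more commits as review feedback arrives - the request updates automatically
git commit -am "Address review comments"
git push

# once approved, the branch is merged into master (via the web UI, or locally):
git checkout master
git merge --no-ff feature/price-calculation
git push origin master
```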
GitLab flow

If you're not ready to deploy to production right away, GitLab flow offers GitHub flow + environments. Here's how it works: you develop in feature branches and merge into master, same as above, but with a twist - master corresponds only to the test environment. In addition to that, you have "environment branches" which are linked to the various other environments you might have.

Usually, three environments exist (you can create more if you need them):

Test environment == master branch
PreProduction environment == preprod branch
Production environment == prod branch

The code that arrives in one of the environment branches should be moved into the corresponding environment immediately. That can be done:

Automatically (we'll get to that in parts 2 and 3)
Semi-automatically (same as automatically, except that a button authorizing the deployment must be pressed)
Manually

The whole process goes like this:

1. A feature is developed in a feature branch.
2. The feature branch is reviewed and merged into the master branch.
3. After a while (several features merged), master is merged into preprod.
4. After a while (user testing, etc.), preprod is merged into prod.
5. While we were merging and testing, several new features were developed and merged into master, so GOTO 3.

You can read more about GitLab flow here.

Conclusion

Git is a non-linear, distributed version control system.
A Git flow can be used as a guideline for the software development cycle; there are several you can choose from.

Links

Git book
GitHub flow
GitLab flow
Driessen flow (more comprehensive flow, for comparison)
Code for this article

Discussion questions

Do you use a git flow? Which one?
How many environments do you have for an average project?

What's next

In the next part we will:

Install GitLab.
Talk about some recommended tweaks.
Discuss GitLab Workflow (not to be confused with GitLab flow).

Stay tuned. This article is considered an InterSystems Data Platform Best Practice.
Announcement
Evgeny Shvarov · Apr 17, 2018
Hi, Community!
We are pleased to invite you to the InterSystems UK Developers Community Meetup on 25th of June!
The UK Developers Community Meetup is an informal meeting of developers, engineers, and devops to discuss successes, best practices and share experience with InterSystems Data Platforms!
See and discuss the agenda below.
An excellent opportunity to meet and discuss new solutions with like-minded peers and to find out what's new in InterSystems technology.
The Meetup will take place on 25th of June from 4 p.m. to 6 p.m. at The Belfry, Sutton Coldfield with food and beverages supplied.
Your stories are very welcome!
Here is the current agenda for the event:
3:30 p.m. — Welcome Coffee
4:00 p.m. — Big Data Platform for Hybrid Transaction and Analytics Processing, by Jon Payne
4:30 p.m. — Development Models and Teams, by @John.Murray
5:00 p.m. — Reporting in DeepSee Web, by @Evgeny.Shvarov
5:30 p.m. — Automatically configure a customized development environment, by @Sergei.Shutov
The agenda is not finalized, so if you want to be a presenter please comment below.
The UK Developers Meetup precedes InterSystems Joined Up Health & Care event – the annual gathering for the healthcare community to discuss and collaborate ways in which to improve care. View the agenda for JUHC here.
Join the UK Meetup!

Updated the agenda for the meetup - a new session by @Sergey.Shutov has been introduced: Automatically configure a customized development environment. The session covers the approach to, and a demo of, creating a private development environment from source control, and how changes can be automatically pushed downstream to build and test environments. It shows the use of open-source Git hooks, %Installer, and Atelier, together with Jenkins and automated unit tests.

Good, I hope we can bring this meetup program to Asia (Korea, actually).

This will also cover the use of containers for development environments.

Hi, Community members! This is just a reminder about the meetup we are having on the 25th in Birmingham, UK! And we have a time slot available! So if you want to talk about your solution or share your best practices with InterSystems Data Platform, please contact me or comment on this post!

Good, I hope we can bring this meetup program to Brazil.

We can. Request it at the InterSystems Brazil office and we'll arrange it.

We are starting in 5 minutes! Join! The right link is this one!

Here are the slides for @Evgeny.Shvarov's session "Printing and Emailing InterSystems DeepSee Reports".

Please find the slides for my presentation here: https://www.slideshare.net/secret/bnUsuAWXsZCKrp (Modern Development Environment)
Please find the repository here: https://github.com/logist/wduk18 - you will need to load your own IRIS container and change the "FROM" statement in the Dockerfile - see http://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=AFL_containers_deploy_repo for instructions. You'll also need a license key, because the REST service will not work without it. Also, don't forget to change the Docker storage driver if you want to run the examples: http://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ADOCK_additional_driver
Article
Pete Greskoff · Jun 27, 2018
In this post, I am going to detail how to set up a mirror using SSL, including generating the certificates and keys via the Public Key Infrastructure built in to InterSystems IRIS Data Platform. I did a similar post in the past for Caché, so feel free to check that out here if you are not running InterSystems IRIS. Much like the original, the goal of this is to take you from new installations to a working mirror with SSL, including a primary, backup, and DR async member, along with a mirrored database. I will not go into security recommendations or restricting access to the files. This is meant to just simply get a mirror up and running. Example screenshots are taken on a 2018.1.1 version of IRIS, so yours may look slightly different.

Step 1: Configure Certificate Authority (CA) Server

On one of your instances (in my case the one that will be the first mirror member configured), go to the Management Portal and go to the [System Administration -> Security -> Public Key Infrastructure] page. Here you will ‘Configure local Certificate Authority server’. You can choose whatever File name root (this is the file name only, no path or extension) and Directory you want to have these files in. I’ll use ‘CA_Server’ as the File name root, and the directory will be my <install-dir>/mgr/CAServer/. This will avoid future confusion when the client keys and certificates are put into the <install-dir>/mgr/ folder, as I’ll be using my first mirror member as the CA Server. Go to the next page.
You will then need to enter a password, and I’ll use ‘server_password’ in my example. You can then assign attribute values for your Distinguished Name. I’ll set Country to ‘US’ and Common Name to ‘CASrv’. You can accept defaults for validity periods, leave the email section blank, and save. You should see a message about files getting generated (.cer, .key, and .srl) in the directory you configured.
Step 2: Generate Key/Certificate For First Mirror Member

At this point, you need to generate the certificate and keys for the instance that will become your first mirror member. This time, go to the Management Portal where you will set up the first mirror member, and go to the [System Administration -> Security -> Public Key Infrastructure] page again (see screenshot above). You need to ‘Configure local Certificate Authority client’. For the ‘Certificate Authority server hostname’, you need to put either the machine name or IP address of the instance you used for step 1, and for the ‘Certificate Authority WebServer port number’ use that instance’s web server port (you can get this from the URL in that instance’s Management portal).

Make sure you are using the port number for the instance you configured as the CA Server, not the one you are setting up as the client (though they could be the same). You can put your own name as the technical contact (the phone number and email are optional) and save. You should get a message “Certificate Authority client successfully configured.”
Now you should go to ‘Submit Certificate Signing Request to Certificate Authority server’. You’ll need a file name (I’m using ‘MachineA_client’) and password (‘MachineA_password’) as well as again setting values for a Distinguished Name (Country=’US’ and Common Name=’MachineA’). Note that for each certificate you make, at least one of these values must be different than what was entered for the CA certificate. Otherwise, you may run into failures at a later step.At this point, you’ll need to go to the machine you configured to be your CA Server. From the same page, you need to ‘Process pending Certificate Signing Requests’. You should see one like this:
You should process this request, leaving default values, and ‘Issue Certificate’. You’ll need to enter your CA Server password from step 1 (‘server_password’ for me).
Finally, you need to get the certificate. Back on the first mirror member machine, from the same page, go to ‘Get Certificate(s) from Certificate Authority server’, and click ‘Get’ like here. If this is not the same machine where you configured the CA Server, you’ll need to get a copy of the CA Server certificate (‘CA_Server.cer’) also on this machine. Click ‘Get Certificate Authority Certificate’. That’s the top left button in the image above.
Step 3: Configure The Mirror On First Mirror Member

First, start the ISCAgent per this documentation (and set it to start automatically on system startup if you don’t want to have to do this every time your machine reboots). Then, in the Management Portal, go to the [System Administration -> Configuration -> Mirror Settings -> Enable Mirror Service] page to enable the service (if it isn’t already enabled). Next, go to the ‘Create a Mirror’ page in the same menu. You will need to enter a mirror name (‘PKIMIRROR’ in my case). You should click ‘Set up SSL/TLS’, and then enter the information there. The first line asks for the CA server certificate (CA_Server.cer). For ‘This server’s credentials’, you’ll need to enter the certificate and key that we generated in step 2. They will be in the <install>/mgr/ directory. You’ll also need to enter your password here (click the ‘Enter new password’ button as shown). This password is the one you chose in step 2 (‘MachineA_password’ for me). In my example, I am only allowing the TLS v1.2 protocol as shown below.
For this example, I won’t use an arbiter or a Virtual IP, so you can un-check those boxes in the ‘Create Mirror’ page. We’ll accept the defaults for ‘Compression’, ‘Parallel Dejournaling’, ‘Mirror Member Name’, and ‘Mirror Agent Port’ (since I didn’t configure the ISCAgent to be on a different port), but I’m going to change the ‘Superserver Address’ to use an IP instead of a host name (personal preference). Just make sure that the other future mirror members are able to reach this machine at the address you choose. Once you save this, take a look at the mirror monitor [System Operation -> Mirror Monitor]. It should look something like this:

Step 4: Generate Key/Certificate For Second Failover Mirror Member
This is the same process as step 2, but I’ll replace anything with ‘MachineA’ in the name with ‘MachineB’. As I mentioned before, make sure you change at least 1 of the fields in the Distinguished Name section from the CA certificate. You also need to be sure you get the correct certificate in the Get Certificate step, as you will see both client certificates.

Step 5: Join Mirror as Failover Member

Just like you did for the first mirror member, you need to start the ISCAgent and enable the mirror service for this instance (refer to step 3 for details on how to do this). Then, you can join the mirror as a failover member at [System Administration -> Configuration -> Mirror Settings -> Join as Failover]. You’ll need the ‘Mirror Name’, ‘Agent Address on Other System’ (the same as the one you configured as the Superserver address for the other member), and the instance name of the now-primary instance.
After you click ‘Next’, you should see a message indicating that the mirror requires SSL/TLS, so you should again use the ‘Set up SSL/TLS’ link. You’ll replace machine A’s files and password with machine B’s for this dialog.
Again, I’m only using TLSv1.2. Once you’ve saved that, you should be able to add information about this mirror member. Again, I’m going to change the hostnames to IP’s, but feel free to use any IP/hostname that the other member can contact this machine on. Note that the IP’s are the same for my members, as I have set this up with multiple instances on the same server.
Step 6: Authorize 2nd Failover Member on the Primary Member
Now we need to go back to the now primary instance where we created the mirror. From the [System Administration -> Configuration -> Mirror Settings -> Edit Mirror] page, you should see a box at the bottom titled ‘Pending New Members’ including the 2nd failover member that you just added. Check the box for that member and click Authorize (there should be a dialog popup to confirm). Now if you go back to [System Operation -> Mirror Monitor], it should look like this (similar on both instances):
If you see something else, wait a minute and refresh the page.
Step 7: Generate Key/Certificate for Async Member

This is the same as step 2, but I’ll replace anything with ‘MachineA’ in the name with ‘MachineC’. As I mentioned before, make sure you change at least 1 of the fields in the Distinguished Name section from the CA certificate. Make sure you get the correct certificate on the ‘Get Certificate’ page, as you will see all 3 certificates.

Step 8: Join Mirror as Async Member

This is similar to step 5. The only difference is that you have the added option for an Async Member System Type (I will use Disaster Recovery, but you’re welcome to use one of the reporting options). You’ll again see a message about requiring SSL, and you’ll need to set that up similarly (MachineC instead of MachineB). Again, you’ll see a message after saving the configuration indicating that you should add this instance as an authorized async on the failover nodes.

Step 9: Authorize Async Member on the Primary Member

Follow the same procedure as in step 6. The mirror monitor should now look like this:

Step 10: Add a Mirrored Database
Having a mirror is no fun if you can’t mirror any data, so we may as well create a mirrored database. We will also create a namespace for this database. Go to your primary instance. First, go to [System Administration -> Configuration -> System Configuration -> Namespaces] and click ‘Create New Namespace’ from that page. Choose a name for your namespace, and we’ll need to click ‘Create New Database’ next to ‘Select an existing database for Globals’. You’ll need to enter a name and directory for this new database. On the next page, be sure to change the ‘Mirrored database?’ drop-down to yes (THIS IS ESSENTIAL). The mirror name will default to the database name you chose. You can change it if you wish. We will use the default setting for all other options for the database (you can change them if you want, but this database must be journaled, as it is mirrored). Once you finish that, you will return to the namespace creation page, where you should select this new database for both ‘Globals’ and ‘Routines’. You can accept the defaults for the other options (don’t copy the namespace from anywhere).
Repeat this process for the backup and async. Make sure to use the same mirror name for the database. Since it’s a newly created mirrored database, there is no need to take a backup of the file and restore onto the other members.
Congratulations, you now have a working mirror using SSL with 3 members sharing a mirrored database! One final look, this time at the async’s mirror monitor.

Other reference documentation:

Create a mirror
Create mirrored database
Create namespace and database
Edit failover member (contains some information on adding SSL to an existing mirror)
Hi @Pete.Greskoff! Looks like @Lorenzo.Scalese contributed a zpm module for your solution! Thanks, Lorenzo!

Hi @Pete.Greskoff, @Evgeny.Shvarov - this article is very helpful. I developed PKI-Script in order to do this without manual intervention. This lib is focused on the PKI steps; for mirroring, I am preparing something else.

Using the PKI to generate certificates is NOT supported for production systems, as documented here:
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=APKI#APKI_about
I can't stress that enough. It is provided for convenience for testing purposes. For production systems or test systems that require proper security, please use certificates/keys specified from the source determined by your security admins, and make sure the procedures they specify for safeguarding the keys are in place and adhered to.
Article
Evgeniy Potapov · Dec 19, 2023
In today's digital age, effective data management and accurate information analysis are becoming essential for successful enterprise operations. InterSystems IRIS Data Platform offers two critical tools, ARCHITECT and ANALYZER, developed to deliver convenient data management.
ANALYZER is a powerful tool available within the InterSystems IRIS platform to provide extensive data analysis and visualization capabilities. This tool allows users to create summary tables and charts to analyze data. It also lets you perform detailed studies and analyses based on available data.
One of the most significant features of ANALYZER is the ability to select the data subject area before starting the analysis. It ensures accurate and targeted data analysis, as the user can pick a specific data subject area from which to extract the required data for analysis.
ANALYZER features an intuitive interface and extensive customization options, making the data analysis process more flexible and efficient. Built-in autocomplete and preview functions greatly simplify the creation of analytical reports and improve data visualization.
Completing summary tables and charts in ANALYZER allows users to visualize data clearly and informatively, streamlining informed business decision-making and identifying critical trends and patterns in the data.
To start working with ANALYZER, proceed to the Management Portal page within the InterSystems IRIS platform. Once there, select the Analytics tab and then navigate to the Analyzer section. This step allows users to access a powerful data analysis tool that provides a wide range of features for data exploration and visualization.
Visiting the ANALYZER page opens up a wide range of options for data analysis and visualization. Here are some of the main functions and elements available on this page:
New / Open / Save / Save As / Restore / Delete: These options enable you to create analysis projects, open existing ones, save current results, restore previous versions, and delete irrelevant data.
Auto-execute / Preview Mode: These functions allow you to automate tasks and view data in an easy-to-analyze format.
View: Users can choose between different data display options, such as a summary table, chart, or a combination of both, for more complete data analysis.
Change to a different Subject Area: This option allows users to select specific data subject areas from which to extract data for analysis.
Refresh and reset the contents of the dimension tree: This function ensures that the content of the dimension tree is refreshed and reset for accurate and up-to-date data analysis.
Add or edit a calculated member / Add or edit a named filter for this Subject Area / Add or edit a pivot variable for this Subject Area: These functions let users add and customize various parameters and elements for more precise data analysis.
Export current results / Export current results to printable PDF format: Users can export existing analysis results for further use in reports or presentations.
Set options for the pivot table / Configure PDF export for this pivot / Modify chart appearance / Define conditional formatting rules: These functions provide extensive customization and data visualization options for a more understandable and explicit analysis.
Now, let's take a closer look at the process of adding items to a summary table in ANALYZER:
Rows: This element allows users to add specific data as rows to a summary table for further analysis. Users can select key factors or categories to group data into rows and provide more detailed analysis.
Columns: This option allows users to add data to a summary table as columns for easy comparison and visualization. Users can pick the parameters or indicators to be presented as columns for more objective analysis.
Measures: This element lets users add specific indicators or metrics to the summary table to evaluate performance or results. Users can select various types of metrics, including sums, averages, medians, and others, for further data analysis.
Filters: This option authorizes users to apply different filters and conditions to the data in the summary table to refine the analysis. Users can select specific parameters or values to filter the data for more exact and targeted research.
These elements play a crucial role in data analysis and provide flexibility and accuracy in working with data in summary tables. Users can determine and customize these elements based on their needs and business requirements to achieve the best results.
In ANALYZER, you can customize parameters for the "Rows" and "Columns" items using the "AXIS OPTIONS" function. Check out the available parameters for adjusting the "Rows" and "Columns" axes below:
Caption: This option entitles you to set a caption for the Rows and Columns axes for more accurate and detailed identification of the data presented there.
Format: This option lets you customize the formatting and display styles of the data in the Rows and Columns axes for a more convenient and informative presentation.
Total Override: This parameter allows you to override the aggregation used to total the data in the "Rows" and "Columns" axes.
Cell Style: This parameter authorizes you to modify the cell style of the data displayed in the Rows and Columns axes for better visualization.
Member Options: This option permits you to filter, sort, and perform other operations on the items represented in the Rows and Columns axes for more proper and meticulous data analysis.
Drilldown Options: This option allows you to control the drilldown operation so that users can get more detailed information when needed.
When you configure these Rows and Columns axes settings, you get access to a flexible and intuitive way to work with data in summary tables, which ultimately lets users get the most out of their data analysis.
In ANALYZER, you can additionally customize parameters for the Measures element using the MEASURE OPTIONS function. Some of the available parameters for altering measures are presented below:
Place measures on: This option authorizes you to determine in which area (columns or rows) to place measures to make comparison and analysis of data easier.
Display measure headers: This option lets you control the display of measure headers in the summary table depending on the number of selected measures. You can decide when and how to display those headers to improve the data visualization.
Configuring these settings for metrics in ANALYZER delivers more flexible and objective data analysis, allowing users to work with various types of metrics and indicators more efficiently to maximize the benefits of data analysis.
Finally, ANALYZER provides the ability to create and customize advanced filters for more precise and targeted data filtering. Check out some of the available options for filter modifications below:
Add Condition: This option entitles you to add specific conditions to filter data based on the criteria and parameters you specify. You can determine different requirements that must be met to filter data.
Empty: If the filter is blank, you can use the "Add Condition" function to add conditions to filter the data. It will result in more precise and thorough filtering to get the desired results.
With the "ADVANCED FILTER EDITOR" in ANALYZER, users can build complex filters containing different parameters and conditions to refine the analysis and get more factual and relevant results. It gives us greater flexibility and precision when working with data in the summary table, authorizing users to apply filters according to their specific data analysis requirements and needs.
The Save function lets you preserve the current state of the summary table and any settings applied to the data. When you save a summary table, you can also save the analysis results in a format that allows you to retrieve and exploit the data for future reports, presentations, or further analysis. Similarly, preserving data makes it possible for you to return to previously saved results quickly and conveniently, and continue analyzing or visualizing the data.
So, bottom line:
ANALYZER is a powerful data analysis tool on the InterSystems IRIS platform that provides a variety of data manipulation capabilities. It equips users with convenient and intuitive tools to create summary tables, configure analysis and data visualization parameters, and apply various filters and conditions to obtain specific results. Save and Restore features allow users to save progress and quickly return to previously performed analyses for further work. Analyzer is an essential tool for deeper and clearer data analysis, helping users make more informed decisions based on facts and figures.
Article
Evgeniy Potapov · Dec 5, 2023
In today's digital age, effective data management and accurate information analysis are becoming essential for successful enterprise operations. InterSystems IRIS Data Platform offers two critical tools designed to provide convenient data management: ARCHITECT and ANALYZER.
ARCHITECT: ARCHITECT is a powerful tool created for developing and managing applications on the InterSystems IRIS platform. A vital feature of ARCHITECT is the ability to produce and customize complex data models. It allows users to handle the data structure with flexibility to meet the specific requirements of their applications. ARCHITECT simplifies the development process by providing a user-friendly interface for dealing with business logic and application functionality, as well as adapting to changing business needs. A crucial part of working with ARCHITECT is creating and customizing complex data models, so let's take a look at step-by-step instructions on how to do it.
In order to start using the ARCHITECT functionality, you need to go to the corresponding control panel. To accomplish that, go to the IRIS management section and select the "Analytics" tab. Then, proceed to the "Architect" section.
When in the Architect menu, click the "New" tab at the top of the screen.
It will open the "CREATE A NEW DATA MODEL DEFINITION" menu.
Definition Type: This field selects the type of data model definition. In the context of ARCHITECT, it can be "Cube" or "Subject Area". "Cube" is typically used to define multidimensional data, whereas "Subject Area" describes only a narrow portion of data.
If the "Cube" type is selected, you can fill in the following fields as shown below:
Cube Name:
In this mandatory field, you should specify a unique name for the data cube you want to create. This name will be used to identify the data cube in the system.
Display Name:
Here, you can appoint a name that will be displayed in the user interface for this data cube. This name can be more descriptive and precise than the cube name to give the user a better understanding.
Cube Source:
This section specifies the data source for the cube. "Class Cube" is selected, and a "Source Class", which must already exist in the system, is specified here. This class provides the data to fill the cube.
Class Name for the Cube:
In this compulsory field, you should mention the class name used to create the data cube. In addition to that name, you must include the package name to identify the class correctly.
Class Description:
This field allows you to enter a description or context for the data cube class you are creating, making it easier to understand its purpose and content. This description can come in handy for organizing and structuring data in the system.
If the "Subject Area" type is selected, the next fields can be populated as follows:
Subject Area Name:
In this obligatory field, you should state a unique name for the created subject area. This name will be utilized to identify this subject area in the system.
Display Name:
Here, you can select a name to display in the user interface for this subject area. This name can be more descriptive and straightforward than the subject area name since we generally need it for user convenience.
Base Cube:
This field is required to be completed since it specifies an existing cube that will be employed as the base cube for this subject area. The data from this cube will be available within this subject area.
Filter:
It is an MDX expression that filters the given subject area. It allows you to restrict the available data for particular conditions or criteria.
Class Name for the Subject Area:
This mandatory field specifies the class name used to create the subject area. In addition to this name, you must include the package name to identify the class in a proper manner.
Class Description:
This field lets you enter a description or context for the subject area class being created, simplifying the understanding of its purpose and content. This description can be beneficial for arranging and structuring data in the system.
When you have selected Definition Type, go to the "Open" tab on the ARCHITECT main menu.
Depending on the Definition Type you have chosen, the cube may be available in two tabs in ARCHITECT:
If the "Cube" type was picked, the created cube will be available in the "Cubes" tab.
If the "Subject Area" type was selected, the created subject area will be available in the "Subject Areas" tab.
Once the cube is opened, the following fields will be displayed:
Source Class:
This field indicates the source class associated with the data used to assemble this cube. In our case, it will be "Community.Member".
Model Elements:
This section lists all the items associated with the desired cube. Elements might include Measures, Dimensions, Listings, Listing Fields, Calculated Members, Named Sets, Relationships, and Expressions.
Add Element:
This button authorizes you to add a new element to the designated cube for further analysis.
Undo:
This function is employed to undo the last action or change made as a part of editing this cube.
Expand All:
This feature expands all items in the list for easy viewing and access to details.
Collapse All:
This function collapses all items in the list for effortless navigation and content review.
Reorder:
This action entitles you to alter the order of elements in the cube for uncomplicated display and data analysis.
To add a new element to the cube, click "Add Element", and the following menu "ADD ELEMENT TO CUBE" will appear:
Cube Name:
It is the name of the cube to which you will add a new element.
Enter New Element Name:
It is the field where you should enter a new name for the data item you want to add.
Select an element to add to this cube definition:
In this area, you pick the type of element you want to add. Options may include "Measure", "Data Dimension", "Time Dimension", "Age Dimension", "iKnow Dimension", "Shared Dimension", "Hierarchy", "Level", "Property", "Listing", "ListingField", "Calculated Member", "Calculated Dimension", "Named Set", "Relationship" and "Expression".
Data dimensions specify how to group data by values other than time:
In this part, I will explain how data dimensions help group data into specific values other than temporal ones.
After successfully adding the desired element to the cube, it is vital to perform the cube compilation and assembly process to update and optimize the data. Cube compilation ensures that all changes and new elements are integrated into the cube structure in a proper way, while assembly secures performance optimization and data updates.
It is crucial to remember that the build will not be available until the compilation is completed successfully. It guarantees that the build process is based on accurate and up-to-date data processed and prepared during compilation.
When the compilation and build are complete, it is vital to click the "Save" tab to ensure that any alterations done to the cube structure have been saved successfully. It will guarantee that the latest changes and updates made within the cube are now available for future use and analysis.
ARCHITECT is an integral part of the InterSystems IRIS platform, which provides extensive capabilities for developing and managing data models. It equips users with unique flexibility to create and modify complex data models, allowing them to customize the data structure to meet the requirements of a specific application. The compile-and-build system enables efficient data updating and optimization, securing high performance and accuracy across various business scenarios.
With ARCHITECT, users can modify cube elements, including measurements, relationships, expressions, and others, and perform further analyses based on the updated data. It makes it easy to adapt applications to changing business requirements and respond quickly to a company's growing needs.
The save functionality allows users to preserve all modifications and customizations, ensuring reliability and convenience when working with data. ARCHITECT is central to successful data management and provides a wide range of tools for producing and optimizing data structures to facilitate efficient data analysis and informed business decisions.
Announcement
Ronnie Hershkovitz · Feb 18
Hi Community,
We're pleased to invite you to the upcoming webinar in Hebrew:
👉 Hebrew Webinar: GenAI + RAG - Leveraging InterSystems IRIS as your Vector DB👈
📅 Date & time: February 26th, 3:00 PM IDT
Have you ever gone to the public library and asked the librarian for your shopping list? Probably not.
That would be akin to asking a Large Language Model (LLM) a question based on a matter internal to your organization. It's only trained on public data. Using Retrieval Augmented Generation (RAG), we can give the LLM the proper context to answer questions that are relevant to your organization's data. All we need is a Vector DB. Lucky for you, this is one of IRIS's newest features. This webinar will include a theoretical overview, technical details, and a live demo, with a Q&A period at the end.
Presenter: @Ariel.Glikman, Sales Engineer InterSystems Israel
➡️ Register today!
Yay! InterSystems Israel webinars are back 🤩
Announcement
Evgeny Shvarov · Mar 17
Hi Developers!
Here are the technology bonuses for the InterSystems AI Programming Contest: Vector Search, GenAI and AI Agents, which will give you extra points in the voting:
AI Agent solution - 5
Vector Search usage - 4
Embedded Python - 3
LLM AI or LangChain usage: Chat GPT, Bard, and others - 3
IntegratedML usage - 3
Docker container usage - 2
ZPM Package deployment - 2
Online Demo - 2
Implement InterSystems Community Idea - 4
Find a bug in Vector Search, or Integrated ML, or Embedded Python - 2
First Article on Developer Community - 2
Second Article On DC - 1
First Time Contribution - 3
Video on YouTube - 3
Suggest a new idea - 1
See the details below.
AI Agent solution - 5 points
Build an AI agent solution leveraging any of the popular AI agent enabling platforms such as Zapier, Make, N8N, Pydantic, and/or InterSystems IRIS Interoperability. Learn more in this post.
Feel free to use any different AI Agent enabling platform, but please confirm in advance here in the comments.
Vector Search - 4 points
Starting from the 2024.1 release, InterSystems IRIS contains a new technology, vector search, that allows you to build vectors over InterSystems IRIS data and search the indexed vectors. Use it in your solution and collect 4 bonus points (see the sketch below). Here is the demo project that leverages it.
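To give a flavor of the syntax, here is a rough sketch of what Vector Search usage can look like in IRIS SQL. The table, column, embedding length, and values are invented for illustration; check the Vector Search documentation for the exact functions and options supported by your IRIS version.

```sql
-- hypothetical table with a vector column (3-dimensional embeddings, kept tiny for readability)
CREATE TABLE Demo.Docs (
  Title VARCHAR(200),
  Embedding VECTOR(DOUBLE, 3)
);

-- store an embedding produced by your model of choice
INSERT INTO Demo.Docs (Title, Embedding)
VALUES ('Sample document', TO_VECTOR('0.12,0.03,0.48', DOUBLE, 3));

-- nearest-neighbour style search against a query embedding
SELECT TOP 5 Title
FROM Demo.Docs
ORDER BY VECTOR_DOT_PRODUCT(Embedding, TO_VECTOR('0.09,0.10,0.52', DOUBLE, 3)) DESC;
```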
Embedded Python - 3 points
Use Embedded Python in your application and collect 3 extra points. You'll need at least InterSystems IRIS 2021.2 for it. Embedded Python template and examples.
LLM AI or LangChain usage: Chat GPT, Gemini, Mistral, and others - 3 points
Collect 3 bonus points for building a solution that uses LangChain libs or Large Language Models (LLM) such as ChatGPT, Bard, and other AI engines like PaLM, LLaMA, and more. AutoGPT usage counts too.
A few examples already could be found in Open Exchange: iris-openai, chatGPT telegram bot.
Here is an article with langchain usage example.
IntegratedML usage - 3 points
Use IntegratedML SQL extension of InterSystems IRIS and IRIS Cloud SQL and collect 3 extra bonus points. Here is an IntegratedML demo template.
Docker container usage - 2 points
The application gets a 'Docker container' bonus if it uses InterSystems IRIS running in a docker container. Here is the simplest template to start from.
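For example, getting a Community Edition instance running locally can be as simple as the following; the image tag, container name, and port mappings are just examples, so adjust them to the image you actually use.

```bash
# pull and run an IRIS Community Edition container (hypothetical tag shown)
docker run --name iris-demo -d \
  -p 1972:1972 -p 52773:52773 \
  containers.intersystems.com/intersystems/iris-community:latest-em

# check that the instance started and the Management Portal port is answering
docker logs iris-demo
curl -I http://localhost:52773/csp/sys/UtilHome.csp
```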
ZPM Package deployment - 2 points
You can collect the bonus if you build and publish the ZPM(InterSystems Package Manager) package for your Full-Stack application so it could be deployed with:
zpm "install your-multi-model-solution"
command on IRIS with ZPM client installed.
ZPM client. Documentation.
Online Demo of your project - 2 points

Collect 2 more bonus points if you provision your project to the cloud as an online demo. You can do it on your own, or you can use this template - here is an Example. Here is the video on how to use it.
Implement Community Opportunity Idea - 4 points
Implement any idea from the InterSystems Community Ideas portal which has the "Community Opportunity" status. This will give you 4 additional bonus points.
Find a bug in Vector Search, IntegratedML or Embedded Python - 2 points

We want broader adoption of InterSystems Vector Search, IntegratedML, and Embedded Python, so we encourage you to report the bugs you encounter while developing your application with IRIS so that we can fix them. Please submit the bugs you find in Vector Search, IntegratedML, and Embedded Python, along with how to reproduce them. You can collect 2 bonus points for the first reproducible bug and 1 point for the second bug for every technology. E.g., if you find 2 bugs in Vector Search and one bug in Embedded Python, you can collect 2+1+2 = 5 points.
Article on Developer Community - 2 points
Write a brand new article on Developer Community that describes the features of your project and how to work with it. Collect 2 points for the article.
The Second article on Developer Community - 1 point
You can collect one more bonus point for the second article or the translation regarding the application. The 3rd and more will not bring more points but the attention will all be yours.
First Time Contribution - 3 points
Collect 3 bonus points if you participate in InterSystems Open Exchange contests for the first time!
Video on YouTube - 3 points
Make a YouTube video that demonstrates your product in action and collect 3 bonus points for each video.
Suggest a new idea - 1 point
Suggest a new idea regarding Vector Search, Generative AI or Machine Learning to get 1 point per person (not for each idea).
The list of bonuses is subject to change. Stay tuned!
Good luck in the competition!
Announcement
Celeste Canzano · Feb 19
Hello IRIS community,
InterSystems Certification is developing a certification exam for InterSystems IRIS Developer professionals, and if you match the exam candidate description given below, we would like you to beta test the exam. The exam will be available for beta testing on March 5, 2025, and the beta testing must be completed by April 20, 2025.
What are my responsibilities as a beta tester?
You must schedule and take the exam by April 20, 2025. The exam will be administered in an online proctored environment free of charge (the standard fee of $150 per exam is waived for all beta testers). The InterSystems Certification team will then perform a careful statistical analysis of all beta test data to set a passing score for the exam. The analysis of the beta test results will take 6-8 weeks, and once the passing score is established, you will receive an email notification from InterSystems Certification informing you of the results. If your score on the exam is at or above the passing score, you will have earned the certification!
Note: Beta test scores are completely confidential.
Interested in participating? Read the Exam Details and Instructions below:
Exam Details
Exam title: InterSystems IRIS Developer Professional
Candidate description: A back-end software developer who:
writes and executes efficient, scalable, maintainable, and secure code on (or adjacent to) InterSystems IRIS using best practices for the development life cycle,
effectively communicates development needs to systems and operations teams (e.g., database architecture strategy),
integrates InterSystems IRIS with modern development practices and patterns, and
is familiar with the different data models and modes of access for InterSystems IRIS (ObjectScript, Python, SQL, JDBC/ODBC, REST, language gateways, etc.)
Number of questions: 62
Time allotted to take exam: 2 hours
Recommended preparation: Review the content below before taking the exam.
Classroom Training
Developing with InterSystems Objects and SQL (classroom, 5 days)
Online Learning:
Getting Started with InterSystems IRIS for Coders (program, 20h 30m)
Using SQL in InterSystems IRIS (learning path, 3h 45m)
Managing InterSystems IRIS for Developers (learning path, 2h 30m)
Analyzing Data with InterSystems IRIS BI (learning path, varies)
Deploying and Testing InterSystems Products Using CI/CD Pipelines (course, 25m)
Recommended practical experience: At least 2 years of experience developing with InterSystems IRIS and a basic understanding of ObjectScript is recommended.
Exam practice questions
A set of practice questions is provided here to help familiarize candidates with question formats and approaches.
Exam format
The questions are presented in two formats: multiple choice and multiple response. Access to InterSystems IRIS Documentation will be available during the exam.
DISCLAIMER: Please note this exam has a 2-hour time limit. While InterSystems documentation will be available during the exam, candidates will not have time to search the documentation for every question. Thus, completing the recommended preparation before taking the exam, and searching the documentation only when absolutely necessary during the exam, are both strongly encouraged!
System requirements for beta testing
Working camera & microphone
Dual-core CPU
At least 2 GB available of RAM memory
At least 500 MB of available disk space
Minimum internet speed:
Download - 500kb/s
Upload - 500kb/s
Exam topics and content
The exam contains questions that cover the areas for the stated role as shown in the exam topics chart immediately below. All questions are based on InterSystems IRIS v2024.1+.
Topic
Subtopic
Knowledge, skills, and abilities
1. Best practices: Architecture
1.1 Determines database storage strategy in InterSystems IRIS
Determines which databases should be included in a namespace
Recommends database architecture based on expected data growth
Structures data to support global mappings
Identifies implications of mirroring on application performance and availability
Identifies implications of configuration settings when designing for scale (buffers, locks, process memory)
Identifies implications of IRIS upgrades on database architecture
Identifies implications of security requirements on database architecture
Identifies costs and benefits of using InterSystems interoperability functionality
Identifies benefits and tradeoffs for using InterSystems IRIS BI to augment usage of object and relational models
Identifies secure REST API design best practices
1.2. Determines data structures
Differentiates between registered object, serial object, and persistent classes
Determines indexes to add/update to improve performance
Describes relationship between globals, objects, and SQL
Determines when streams are the appropriate data type
Describes InterSystems IRIS support for JSON and XML
1.3. Plans data lifecycle
Evaluates strategies for data storage and retrieval (e.g., MDX, SQL, object)
Manages data life cycles (aka CRUD)
Describes expected application performance as a function of data volumes, users, and processes
2. Best practices: Development lifecycle
2.1 Uses recommended development tools and workflows with InterSystems IRIS
Uses Visual Studio Code to connect to InterSystems IRIS and develop client-side and server-side code
Uses InterSystems IRIS debugging tools (e.g., uses debugger in VS Code)
Identifies components required in Compose files used for container development
Enumerates available development tools (e.g., %SYS.MONLBL, ^PROFILE, and ^TRACE)
Describes options for automatically documenting code
Chooses background execution strategy
2.2 Integrates InterSystems IRIS with CI/CD pipelines
Describes deployment options for InterSystems IRIS (e.g., containers vs InterSystems IRIS installer)
Manages changes to CPF file to support continuous deployment
Uses the %UnitTest framework to write and run unit tests
Runs integration tests to confirm expectations in other applications
Runs system checks to check functional and non-functional requirements at production scale
Identifies implications of promoting changes
2.3 Uses source control with InterSystems IRIS
Describes options for integrating InterSystems IRIS with source control systems
Mitigates effects of importing an updated class/schema definition
3. Best practices: Data retrieval
3.1 Uses Python with InterSystems IRIS
Identifies Embedded Python capabilities in InterSystems IRIS
Describes features of different options for using Python with InterSystems IRIS (e.g., Embedded, Native API, etc.)
3.2 Connects to InterSystems IRIS
Configures JDBC/ODBC connections to InterSystems IRIS
3.3. Uses SQL with InterSystems IRIS
Differentiates between embedded SQL and dynamic SQL
Leverages IRIS-specific SQL features (e.g., implicit join, JSON)
Interprets query plans
Identifies automatically collected statistics via SQL Statement Index
Evaluates strategies for table statistics gathering (e.g., import, tune, representative data)
Evaluates SQL security strategies
3.4 Creates REST services
Creates REST services and differentiates between implementation options
Describes API monitoring and control features available in InterSystems API Manager
Secures REST services
Documents REST Services
4. Best practices: Code
4.1 Writes defensive code
Chooses strategy for error handling
Diagnoses and troubleshoots system performance and code execution performance
Manages and monitors process memory
Manages processes (including background processes)
Describes general system limits in IRIS (e.g., max string vs stream, # of properties)
4.2 Writes secure code
Implements database and data element encryption
Connects securely to external systems
Prevents SQL injection attacks (e.g., sanitizing, concatenating vs parameterizing)
Prevents remote code execution
Leverages InterSystems IRIS security model
4.3 Ensures data integrity
Differentiates between journaling behavior inside vs outside transactions
Minimizes requirements for journal volumes and performance
Manages transactions
Enumerates causes for automatic transaction rollbacks
4.4 Implements concurrency controls
Describes functionality of locking mechanisms with respect to stateful and stateless applications
Follows best practices when using locks
Chooses between row locks and table locks
Instructions:
Please review the following instructions for scheduling and buying an exam:
From our exam store, log in with your InterSystems Single Sign-On (SSO) account.
If necessary, please register for an account.
Select InterSystems IRIS Developer Professional - Beta and click Get Started.
Verify system compatibility as instructed. The Safe Exam Browser download requires administrative privileges on your device.
Run the setup test to ensure the device satisfies the exam requirements.
Schedule your exam – this must be done before checking out. The exam must be taken at least 24 hours after scheduling, but within 30 days of scheduling.
Review the InterSystems Certification Program Agreement.
Confirm your appointment. You will receive an email from Certiverse with your exam appointment details.
You can access your reservations and history through the Exam Dashboard available through the MY EXAMS menu.
Below are important considerations that we recommend to optimize your testing experience:
Read the Taking InterSystems Exams and Exam FAQs pages to learn about the test-taking experience.
Read the InterSystems Certification Exam Policies.
On the day of your exam, log in to Certiverse at least 10 minutes before your scheduled time, launch the exam under MY EXAMS, and wait for the proctor to connect.
Please have your valid government ID ready for identification. The proctor will walk you through the process of securing your room and releasing the exam to you.
You may cancel or reschedule your appointment without penalty as long as the action is taken at least 24 hours in advance of your appointment. The voucher code will reactivate and you can use it to reschedule the exam.
Please contact certification@intersystems.com if you have any questions or need assistance, and we encourage you to share any feedback about the exam, whether positive or negative.
Announcement
Thomas Dyar · Oct 21, 2020
GA releases are now published for the 2020.3 versions of InterSystems IRIS and IRIS for Health, with IntegratedML!
This is the first InterSystems IRIS release that includes IntegratedML, a new feature that brings "best of breed" machine learning to analysts and developers via simple and intuitive SQL syntax. Developers can now easily train and deploy powerful predictive models from within IRIS, right where their data lives. Documentation for IntegratedML is available as a User Guide. Virtual Summit 2020 features a number of sessions and an Experience Lab featuring IntegratedML, see overview here. See also content about IntegratedML on the Learning Services IntegratedML Resource Guide.
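As a taste of that SQL syntax, here is a rough sketch of the IntegratedML workflow. The table and column names below are invented, and the exact options available depend on the configured ML provider, so see the User Guide for details.

```sql
-- define a model that predicts one column from the other columns of a table (hypothetical schema)
CREATE MODEL ReadmissionModel PREDICTING (WillReadmit) FROM Hospital.Encounters;

-- train it on the data currently in the table
TRAIN MODEL ReadmissionModel;

-- optionally validate it against a held-out table to see accuracy metrics
VALIDATE MODEL ReadmissionModel FROM Hospital.EncountersHoldout;

-- use the trained model in ordinary SQL queries
SELECT TOP 10 PatientID, PREDICT(ReadmissionModel) AS PredictedReadmission
FROM Hospital.NewEncounters;
```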
For this release:
Standard and Community Edition containers are available from the InterSystems Container Registry (ICR, documented here)
Community Edition containers are also available from Docker Hub
Kits (and container tarballs) are available from the WRC Software Distribution site
NOTE: Full installation kits are provided for a subset of server platforms on the WRC, giving customers who do not use containers the option to use IntegratedML now and to upgrade later to the 2021.1 Extended Maintenance release.
The build number for these releases is 2020.3.0.302.0.
Community Edition containers can be pulled from ICR using the following commands:
docker pull containers.intersystems.com/intersystems/iris-ml-community:2020.3.0.302.0
docker pull containers.intersystems.com/intersystems/irishealth-ml-community:2020.3.0.302.0
If you want to use IntegratedML images with ZPM included, use the following image:
intersystemsdc/iris-ml-community:2020.3.0.302.0-zpm
which you can run as:
docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/iris-ml-community:2020.3.0.302.0-zpm
Article
Murray Oldfield · Nov 18, 2019
The following steps show you how to display a sample list of metrics available from the `/api/monitor` service.
In the last post, I gave an overview of the service that exposes IRIS metrics in Prometheus format. This post shows how to set up and run [IRIS preview release 2019.4](https://community.intersystems.com/post/intersystems-iris-and-iris-health-20194-preview-published) in a container and then list the metrics.
This post assumes you have Docker installed. If not, go and do that now for your platform :)
### Step 1. Download and run the IRIS preview in docker
Follow the download instructions at [Preview Distributions](https://wrc.intersystems.com/wrc/coDistPreview.csp "Preview Distributions") to download the **Preview Licence Key** and an **IRIS Docker image**. For the example, I have chosen **InterSystems IRIS for Health 2019.4**.
Follow the instructions at [First Look InterSystems Products in Docker Containers](https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=AFL_containers "Run IRIS in a container instructions"). If you are familiar with containers, jump to the section titled: **Download the InterSystems IRIS Docker Image**.
The following terminal output illustrates the process I used to load the Docker image. The docker load command may take a couple of minutes to run:
$ pwd
/Users/myhome/Downloads/iris_2019.4
$ ls
InterSystems IRIS for Health (Container)_2019.4.0_Docker(Ubuntu)_12-31-2019.ISCkey irishealth-2019.4.0.379.0-docker.tar
$ docker load -i irishealth-2019.4.0.379.0-docker.tar
762d8e1a6054: Loading layer [==================================================>] 91.39MB/91.39MB
e45cfbc98a50: Loading layer [==================================================>] 15.87kB/15.87kB
d60e01b37e74: Loading layer [==================================================>] 12.29kB/12.29kB
b57c79f4a9f3: Loading layer [==================================================>] 3.072kB/3.072kB
b11f1f11664d: Loading layer [==================================================>] 73.73MB/73.73MB
22202f62822e: Loading layer [==================================================>] 2.656GB/2.656GB
50457c8fa41f: Loading layer [==================================================>] 14.5MB/14.5MB
bc4f7221d76a: Loading layer [==================================================>] 2.048kB/2.048kB
4db3eda3ff8f: Loading layer [==================================================>] 1.491MB/1.491MB
Loaded image: intersystems/irishealth:2019.4.0.379.0
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
intersystems/irishealth 2019.4.0.379.0 975a976ad1f4 3 weeks ago 2.83GB
For simplicity, copy the key file to a folder location you will use for persistent storage and rename it to `iris.key`:
$ mkdir -p /Users/myhome/iris/20194
$ cp 'InterSystems IRIS for Health (Container)_2019.4.0_Docker(Ubuntu)_12-31-2019.ISCkey' /Users/myhome/iris/20194/iris.key
$ cd /Users/myhome/iris/20194
$ ls
iris.key
Start IRIS using the folder you created for persistent storage:
$ docker run --name iris --init --detach --publish 52773:52773 --volume `pwd`:/external intersystems/irishealth:2019.4.0.379.0 --key /external/iris.key
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
009e52c121f0 intersystems/irishealth:2019.4.0.379.0 "/iris-main --key /e…" About a minute ago Up About a minute (healthy) 0.0.0.0:52773->52773/tcp iris
Cool! You can now connect to the [System Management Portal](http://localhost:52773/csp/sys/%25CSP.Portal.Home.zen "Home SMP") on the running container. I used the login/password _SuperUser/SYS_; you will be prompted to change the password the first time you log in.
Navigate to Web Applications: `System > Security Management > Web Applications`.
You will see a web application, `/api/monitor`; this is the service that exposes IRIS metrics.
>**You do not have to do anything to return metrics, it just works.**
### Step 2. Preview metrics
In later posts, we will _scrape_ this endpoint with Prometheus or SAM to collect metrics at set intervals. But for now, let us see the full list of metrics returned for this instance. A simple way, for example on Linux and OSX, is to issue an HTTP GET using the `curl` command. For example, on my (pretty much inactive) container the list starts with:
$ curl localhost:52773/api/monitor/metrics
:
:
iris_cpu_usage 0
iris_csp_activity{id="127.0.0.1:52773"} 56
iris_csp_actual_connections{id="127.0.0.1:52773"} 8
iris_csp_gateway_latency{id="127.0.0.1:52773"} .588
iris_csp_in_use_connections{id="127.0.0.1:52773"} 1
iris_csp_private_connections{id="127.0.0.1:52773"} 0
iris_csp_sessions 1
iris_cache_efficiency 35.565
:
:
And so on. The list can be very long on a production system. I have dumped the full list at the end of the post.
Another useful way is to use the [Postman application](https://www.getpostman.com "POSTMAN"), but there are other ways. Assuming you have installed Postman for your platform, you can issue an HTTP GET and see the metrics returned.
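If you prefer to pull the metrics from within IRIS itself, for example to post-process them, a minimal ObjectScript sketch along these lines should also work (it assumes the instance is listening on localhost:52773 as in the container above, and filters for the CSP metrics just as an example):
Set req = ##class(%Net.HttpRequest).%New()
Set req.Server = "localhost", req.Port = 52773
Set sc = req.Get("/api/monitor/metrics")
If $SYSTEM.Status.IsOK(sc) {
    While 'req.HttpResponse.Data.AtEnd {
        Set line = req.HttpResponse.Data.ReadLine()
        If line [ "iris_csp" { Write line, ! }
    }
}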
## Summary
That is all for now. In the next post, I will start with collecting the data in _Prometheus_ and look at a sample _Grafana_ dashboard.
### Full list from preview container
A production system will have more metrics available. As you can see from the labels, some metrics are per database (for example, `{id="IRISLOCALDATA"}`) or per process type for CPU (for example, `{id="CSPDMN"}`).
iris_cpu_pct{id="CSPDMN"} 0
iris_cpu_pct{id="CSPSRV"} 0
iris_cpu_pct{id="ECPWorker"} 0
iris_cpu_pct{id="GARCOL"} 0
iris_cpu_pct{id="JRNDMN"} 0
iris_cpu_pct{id="LICENSESRV"} 0
iris_cpu_pct{id="WDSLAVE"} 0
iris_cpu_pct{id="WRTDMN"} 0
iris_cpu_usage 0
iris_csp_activity{id="127.0.0.1:52773"} 57
iris_csp_actual_connections{id="127.0.0.1:52773"} 8
iris_csp_gateway_latency{id="127.0.0.1:52773"} .574
iris_csp_in_use_connections{id="127.0.0.1:52773"} 1
iris_csp_private_connections{id="127.0.0.1:52773"} 0
iris_csp_sessions 1
iris_cache_efficiency 35.850
iris_db_expansion_size_mb{id="ENSLIB"} 0
iris_db_expansion_size_mb{id="HSCUSTOM"} 0
iris_db_expansion_size_mb{id="HSLIB"} 0
iris_db_expansion_size_mb{id="HSSYS"} 0
iris_db_expansion_size_mb{id="IRISAUDIT"} 0
iris_db_expansion_size_mb{id="IRISLOCALDATA"} 0
iris_db_expansion_size_mb{id="IRISSYS"} 0
iris_db_expansion_size_mb{id="IRISTEMP"} 0
iris_db_free_space{id="ENSLIB"} .055
iris_db_free_space{id="HSCUSTOM"} 2.3
iris_db_free_space{id="HSLIB"} 113
iris_db_free_space{id="HSSYS"} 9.2
iris_db_free_space{id="IRISAUDIT"} .094
iris_db_free_space{id="IRISLOCALDATA"} .34
iris_db_free_space{id="IRISSYS"} 6.2
iris_db_free_space{id="IRISTEMP"} 20
iris_db_latency{id="ENSLIB"} 0.030
iris_db_latency{id="HSCUSTOM"} 0.146
iris_db_latency{id="HSLIB"} 0.027
iris_db_latency{id="HSSYS"} 0.018
iris_db_latency{id="IRISAUDIT"} 0.017
iris_db_latency{id="IRISSYS"} 0.020
iris_db_latency{id="IRISTEMP"} 0.021
iris_db_max_size_mb{id="ENSLIB"} 0
iris_db_max_size_mb{id="HSCUSTOM"} 0
iris_db_max_size_mb{id="HSLIB"} 0
iris_db_max_size_mb{id="HSSYS"} 0
iris_db_max_size_mb{id="IRISAUDIT"} 0
iris_db_max_size_mb{id="IRISLOCALDATA"} 0
iris_db_max_size_mb{id="IRISSYS"} 0
iris_db_max_size_mb{id="IRISTEMP"} 0
iris_db_size_mb{id="HSLIB",dir="/usr/irissys/mgr/hslib/"} 1321
iris_db_size_mb{id="HSSYS",dir="/usr/irissys/mgr/hssys/"} 21
iris_db_size_mb{id="ENSLIB",dir="/usr/irissys/mgr/enslib/"} 209
iris_db_size_mb{id="IRISSYS",dir="/usr/irissys/mgr/"} 113
iris_db_size_mb{id="HSCUSTOM",dir="/usr/irissys/mgr/HSCUSTOM/"} 11
iris_db_size_mb{id="IRISTEMP",dir="/usr/irissys/mgr/iristemp/"} 21
iris_db_size_mb{id="IRISAUDIT",dir="/usr/irissys/mgr/irisaudit/"} 1
iris_db_size_mb{id="IRISLOCALDATA",dir="/usr/irissys/mgr/irislocaldata/"} 1
iris_directory_space{id="HSLIB",dir="/usr/irissys/mgr/hslib/"} 53818
iris_directory_space{id="HSSYS",dir="/usr/irissys/mgr/hssys/"} 53818
iris_directory_space{id="ENSLIB",dir="/usr/irissys/mgr/enslib/"} 53818
iris_directory_space{id="IRISSYS",dir="/usr/irissys/mgr/"} 53818
iris_directory_space{id="HSCUSTOM",dir="/usr/irissys/mgr/HSCUSTOM/"} 53818
iris_directory_space{id="IRISTEMP",dir="/usr/irissys/mgr/iristemp/"} 53818
iris_directory_space{id="IRISAUDIT",dir="/usr/irissys/mgr/irisaudit/"} 53818
iris_disk_percent_full{id="HSLIB",dir="/usr/irissys/mgr/hslib/"} 10.03
iris_disk_percent_full{id="HSSYS",dir="/usr/irissys/mgr/hssys/"} 10.03
iris_disk_percent_full{id="ENSLIB",dir="/usr/irissys/mgr/enslib/"} 10.03
iris_disk_percent_full{id="IRISSYS",dir="/usr/irissys/mgr/"} 10.03
iris_disk_percent_full{id="HSCUSTOM",dir="/usr/irissys/mgr/HSCUSTOM/"} 10.03
iris_disk_percent_full{id="IRISTEMP",dir="/usr/irissys/mgr/iristemp/"} 10.03
iris_disk_percent_full{id="IRISAUDIT",dir="/usr/irissys/mgr/irisaudit/"} 10.03
iris_ecp_conn 0
iris_ecp_conn_max 2
iris_ecp_connections 0
iris_ecp_latency 0
iris_ecps_conn 0
iris_ecps_conn_max 1
iris_glo_a_seize_per_sec 0
iris_glo_n_seize_per_sec 0
iris_glo_ref_per_sec 7
iris_glo_ref_rem_per_sec 0
iris_glo_seize_per_sec 0
iris_glo_update_per_sec 2
iris_glo_update_rem_per_sec 0
iris_journal_size 2496
iris_journal_space 50751.18
iris_jrn_block_per_sec 0
iris_jrn_entry_per_sec 0
iris_jrn_free_space{id="WIJ",dir="default"} 50751.18
iris_jrn_free_space{id="primary",dir="/usr/irissys/mgr/journal/"} 50751.18
iris_jrn_free_space{id="secondary",dir="/usr/irissys/mgr/journal/"} 50751.18
iris_jrn_size{id="WIJ"} 100
iris_jrn_size{id="primary"} 2
iris_jrn_size{id="secondary"} 0
iris_license_available 31
iris_license_consumed 1
iris_license_percent_used 3
iris_log_reads_per_sec 5
iris_obj_a_seize_per_sec 0
iris_obj_del_per_sec 0
iris_obj_hit_per_sec 2
iris_obj_load_per_sec 0
iris_obj_miss_per_sec 0
iris_obj_new_per_sec 0
iris_obj_seize_per_sec 0
iris_page_space_per_cent_used 0
iris_phys_mem_per_cent_used 95
iris_phys_reads_per_sec 0
iris_phys_writes_per_sec 0
iris_process_count 29
iris_rtn_a_seize_per_sec 0
iris_rtn_call_local_per_sec 10
iris_rtn_call_miss_per_sec 0
iris_rtn_call_remote_per_sec 0
iris_rtn_load_per_sec 0
iris_rtn_load_rem_per_sec 0
iris_rtn_seize_per_sec 0
iris_sam_get_db_sensors_seconds .000838
iris_sam_get_jrn_sensors_seconds .001024
iris_system_alerts 0
iris_system_alerts_new 0
iris_system_state 0
iris_trans_open_count 0
iris_trans_open_secs 0
iris_trans_open_secs_max 0
iris_wd_buffer_redirty 0
iris_wd_buffer_write 0
iris_wd_cycle_time 0
iris_wd_proc_in_global 0
iris_wd_size_write 0
iris_wd_sleep 10002
iris_wd_temp_queue 42
iris_wd_temp_write 0
iris_wdwij_time 0
iris_wd_write_time 0
iris_wij_writes_per_sec 0
Hi Murray!
This is excellent; I love this work and am glad it's making its way into the API.
For some reason, though, I am unable to add this as a direct Prometheus data source in Grafana, and I am wondering if there is a model or version prerequisite for Grafana.
I can see the metrics with curl, wget, Postman, a browser, et al., but when I add the data source to Grafana it fails the test.
Any ideas? Great article!
I have some questions:
Are there any Interoperability metrics?
How do I add my own custom metrics?
Hi Ron, I should have been clearer. The metrics are in a format to be consumed by Prometheus (or SAM). Once in Prometheus, they go into a database that Grafana connects to as a Prometheus data source. You want to do it this way to get the full functionality of Prometheus queries + Grafana visualisation. We did try using a connector directly to IRIS, but that really limits the functionality (it was SimpleJSON). I will be publishing some example Grafana templates specific to IRIS soon, but the link here to Mikhail's post has an example of connecting to Grafana near the end. If you mean metrics for Ensemble, HealthShare, etc., then no, not at the moment. However, they are on the roadmap.
You can add custom metrics, though; see the IRIS documentation, section "Create Application Metrics". A rough sketch of such a class is shown below.
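As a sketch of what such an application metrics class can look like (the class and sensor names here are hypothetical; see the documentation section above for the authoritative pattern):
Class MyApp.Monitor.Metrics Extends %SYS.Monitor.SAM.Abstract
{

/// Prefix used for the metric names exposed by /api/monitor/metrics
Parameter PRODUCT = "myapp";

/// Called by the monitor; report one value per sensor
Method GetSensors() As %Status
{
    Do ..SetSensor("orders_queued", 42)
    Do ..SetSensor("active_sessions", 7)
    Quit $$$OK
}

}
The class then has to be registered for your namespace as described in that documentation section before its sensors appear in the /api/monitor/metrics output.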
This will be very powerful when you start to combine telemetry from all the services that make up an application: from the OS, IRIS, and the application itself.
💡 The article is considered an InterSystems Data Platform Best Practice.
Hi, have you run any tests on 2020.1? I have a strange issue: around 40 seconds of response time from a host with no load.
Hi, I have not tested on 2020.1. Are you saying there is no change in any of the metrics after 40 seconds on a busy system?
No, the metrics URL responds after 40 seconds. The system has some databases, but it is not in production or in use.
The open source IRIS AWS CloudWatch integration https://openexchange.intersystems.com/package/CloudWatch-IRIS uses the same Monitor REST API, along with messages.log (formerly cconsole.log), as a data source.
The iris_db_latency metric measures latency to the database using a random read, per the documentation. Is it possible that this random read could be either a physical or logical read, or is it always one or the other? That could make a big difference in how the metric is used. Thanks.
Announcement
Celeste Canzano · Jun 27
Hello InterSystems EHR community,
InterSystems Certification is currently developing a certification exam for InterSystems EHR Reports specialists, and if you match the exam candidate description given below, we would like you to beta test the exam! The exam will be available for beta testing starting June 30, 2025.
Please note, only candidates who have taken the TrakCare Reporting course are eligible to take the beta. Interested in the beta but haven't completed the course? It's not too late - simply complete the online portion of the course before August 25th. Please see Required Training under Exam Details below for more information.
Beta testing will be completed September 1, 2025.
What are my responsibilities as a beta tester?
You will schedule and take the exam by September 1st. The exam will be administered in an online proctored environment free of charge (the standard fee of $150 per exam is waived for all beta testers). The InterSystems Certification team will then perform a careful statistical analysis of all beta test data to set a passing score for the exam. The analysis of the beta test results will take 6-8 weeks, and once the passing score is established, you will receive an email notification from InterSystems Certification informing you of the results. If your score on the exam is at or above the passing score, you will have earned the certification!
Note: Beta test scores are completely confidential.
Interested in participating? Read the Exam Details below.
Exam Details
Exam title: InterSystems EHR Reports Specialist
Candidate description: An IT specialist who
Uses Logi Report Designer to design and author InterSystems Reports
Sets up, tests, and supports InterSystems Reports in InterSystems EHR
Works with stored procedure developers
Knows how to create and edit report layouts in InterSystems EHR
Required Training:
Completion of the online portion of the TrakCare Reporting course is required to be eligible to take this exam.
The online portion of the TrakCare Reporting course is available here to InterSystems employees. If you are an InterSystems customer and would like to take the course, then please contact your account manager.
Number of questions: 50
Time allotted to take exam: 2 hours
Recommended preparation: Complete available InterSystems EHR learning content:
Online Learning:
TrakCare MEUI Essentials (online course, 40 mins)
TrakCare MEUI Layout Editor (online course, 4 hours).
Online Documentation:
Logi Report Designer Documentation
Recommended practical experience:
At least 3 months full-time experience with creating reports using Logi Report Designer along with basic knowledge of InterSystems EHR is recommended.
Exam practice questions
A set of practice questions is provided here to familiarize candidates with question formats and approaches.
Exam format
The questions are presented in two formats: multiple choice and multiple response.
System requirements for beta testing
Working camera & microphone
Dual-core CPU
At least 2 GB of available RAM
At least 500 MB of available disk space
Minimum internet speed:
Download - 500kb/s
Upload - 500kb/s
Exam topics and content
The exam contains questions that cover the areas for the stated role as shown in the exam topics chart immediately below.
Topic
Subtopic
Knowledge, skills, and abilities
1. Creates InterSystems Reports using Logi Report Designer within InterSystems EHR
1.1 Describes what the specification is saying
Recalls what data sources and procedures are, and how to access the sources of data
Identifies what parameters are used from the specification
Distinguishes between different page report component types (e.g. cross tabs, banded objects, normal tables)
1.2 Identifies the components of InterSystems Reports
Distinguishes between catalogues and reports
Recalls the features of a catalogue
Catalogues connections and terms
Accesses the catalogue manager in the designer
Identifies which data source types are used in reporting
Identifies the data source connection and how to modify it
Identifies what is required to use a JDBC connection
Recalls what a stored procedure is
Recalls when and why to update a stored procedure
Distinguishes between different data sources and their use cases
Recalls the importance of binding parameters
Manages catalogues using reference entities
Recalls how to change the SQL type of a database field (e.g. dates)
Identifies how to reuse sub-reports
Recalls the different use cases for sub-reports
Describes how to use parameters within a sub-report
Recalls how to configure the parameters that the sub-report requires
Recalls how to link a field on a row to filter sub-reports
Recalls the potential impact of updating stored procedures on the settings
1.3 Uses Logi Report Designer to design and present data
Distinguishes between the different formats of reports
Determines when and how to use different kinds of page report component types
Recalls the meaning of each band and where they appear (e.g. page header vs banded page header)
Recalls how to add groups and work with single vs. multiple groups
Differentiates between the types of summaries
Uses tools to manage, organize, and group data and pages including effectively using page breaks
Identifies when to use formulas
Uses formulas to format data and tables
Determines how best to work with images, including using dynamic images
Uses sub-reports effectively
Inserts standard page headers and footers into the report
Recalls how to embed fonts into the report
Applies correct formatting, localization, and languages
2. Integrates InterSystems reporting within InterSystems EHR
2.1 Understands InterSystems EHR report architecture
Recalls how to set up a report manager entry
Recalls how many user-inputted parameters can be used in InterSystems EHR
Recalls how to set up a menu for a report and how to add a menu to a header
Recalls what a security group is and adds menus to security group access
Configures the InterSystems EHR layout webcommon.report
Differentiates between different types of layout fields
3. Supports InterSystems Reports
3.1 Verifies printing setup
Debugs using the menu or preview button
Tests the report by making sure it runs as expected
Demonstrates how to run reports with different combinations of parameters
Tests report performance with a big data set
Identifies error types
3.2 Uses print history
Identifies use cases for the print history feature
Recalls the steps to retry printing after a failed print
Uses print history to verify that parameters are correctly passed to the stored procedure
Recalls how to identify whether a report was successfully previewed or encountered errors
Interested in participating? Eligible candidates who have completed the TrakCare Reporting course are encouraged to schedule and take the exam via our exam delivery platform. Candidates who are interested in participating in the beta but do not have the required training are encouraged to complete the online portion of the TrakCare Reporting course before August 25th.
If you have any questions, please contact certification@intersystems.com.
Announcement
Yann de Cambourg · Jun 3, 2022
About the job
The ideal candidate will be responsible for conceptualizing and executing clear, quality code to develop the best software. You will test your code, identify errors, and iterate to ensure quality code. You will also support our customers and partners by troubleshooting any of their software issues.
Responsibilities
Detect and troubleshoot software issues
Write clear quality code for software and applications and perform test reviews
Develop, implement, and test APIs
Provide input on software development projects
Application at yann.decambourg@synodis.fr - www.synodis.fr
Qualifications
French Speaking
Comfort using programming languages and relational databases
Strong debugging and troubleshooting skills
3+ years of development experience with InterSystems IRIS or InterSystems Ensemble
Discussion
Yuri Marx · Dec 31, 2021
For me the best moments were:
1 - Global Masters WON the 2021 Influitive BAMMIE Award for Most Passionate Community
2 - Tech Article contests
3 - InterSystems Programming Contests
4 - 10,000+ DC members
5 - Partner directory and business services
6 - 500+ applications on OEX
7 - Open Virtual Summits
8 - Prizes from GM points
9 - Free online learning courses
10 - Discord channels
11 - Innovations from IRIS Data Platform
12 - Multilanguage communities, including the Portuguese, Spanish, and Chinese communities
13 - Advent of Code
So cool! Thanks for sharing, Yuri! 🤟
Wow, @Yuri.Gomes!
Thanks for the highlights!
This was a wonderful year - and mostly because of you, InterSystems Developer Community members!
Thank you! Looking forward to seeing what 2022 will bring!
I agree 100%!
Question
Ankur Shah · Jan 4, 2022
Hi Team,
I want to implement a build/release pipeline for an InterSystems IRIS REST API with VSTS, without Docker containers or other tools.
Can you please provide a step-by-step guide for this?
Thanks,
Ankur Shah
Most CI/CD processes are now container-based, and doing this without Docker makes the process much more complex. It is also not quite clear what you want to achieve.
In any case, this task is quite complex and depends very much on what kind of application you have, how you build it right now, in some cases the OS, and even which other languages and technologies the application uses. You may contact me directly; I can help with this, as I have experience with it.
We are using Angular as the front end and InterSystems IRIS as the backend. We created a CI/CD pipeline for the Angular project with VSTS, without Docker containers. We want to implement a CI/CD pipeline for InterSystems IRIS in the same way.
The goal is to move our IRIS code from the staging server to the production server with the help of a CI/CD pipeline. Moreover, we don't have any experience with Docker and are not sure what additional infrastructure is required to use Docker containers.
Hi.
I implemented a CI/CD pipeline for IRIS in AWS without containers! I use CodeCommit, which is a Git service, and CodeDeploy, which is a deployment service.
When source code (.cls files) is pushed to CodeCommit, CodeDeploy pulls the source files from CodeCommit and deploys them to the application server. The application server has IRIS installed and uses Interoperability to monitor the deployed files.
When Interoperability detects the files, it executes $SYSTEM.OBJ.DeletePackage(path) and $SYSTEM.OBJ.ImportDir(path, "*.cls;*.mac;*.int;*.inc;*.dfi", flag).
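For reference, the deployment step described above might look roughly like this (the package name, path, and qualifiers are hypothetical; check the $SYSTEM.OBJ class reference for the exact arguments you need):
// Remove the previously deployed package, then import and compile everything from the drop folder
Set path = "/opt/deploy/incoming/"
Do $SYSTEM.OBJ.DeletePackage("MyApp")
Set sc = $SYSTEM.OBJ.ImportDir(path, "*.cls;*.mac;*.int;*.inc;*.dfi", "ck", .errors, 1)
If '$SYSTEM.Status.IsOK(sc) { Do $SYSTEM.Status.DisplayError(sc) }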
Hi,
By mistake I accepted your answer. Can you please help me with this? It would be great if you could provide a video or tutorial.
Thanks,
Ankur Shah
VSTS supports Git; I would recommend using it for version control.
As for the CI/CD pipeline, check this series of articles.
Might be a bit late to the party, but we use Studio project exports in one of our projects to create build artifacts, mainly because we are working with customers that do not support containers or other methods of deployment.
Here is the snippet:
ClassMethod CreateStudioExport() As %Status
{
#Dim rSC As %Status
#Dim tSE As %Exception.StatusException
#Dim tProject as %Studio.Project
Try {
set tRelevantFiles = ..GetAllRelevantFiles()
set tProject = ##class(%Studio.Project).%New()
set tProject.Name = "My Studio Export"
set tIterator = tRelevantFiles.%GetIterator()
while tIterator.%GetNext(.key , .classToAdd ) {
write "Adding "_classToAdd_" to project export",!
$$$ThrowOnError(tProject.AddItem(classToAdd))
}
$$$ThrowOnError(tProject.%Save())
zwrite tProject
$$$ThrowOnError(tProject.Export("/opt/app/studio-project.xml", "ck", 0, .errorLog, "UTF-8"))
Set rSC = $$$OK
} Catch tSE {
zwrite errorLog
Set rSC = tSE.AsStatus()
Quit
}
Quit rSC
}
ClassMethod GetAllRelevantFiles() As %DynamicArray
{
set tt=##class(%SYS.Python).Import("files")
set string = tt."get_all_cls"("/opt/app/src/src")
return ##class(%DynamicArray).%FromJSON(string)
}
Here is the python script:
import os
import json  # Used to gather relevant files during a build pipeline step!

def normalize_file_path(file, directory):
    # Remove the first part of the directory to normalize the class name
    class_name = file[len(directory):].replace("\\", ".").replace("/", ".")
    if class_name.startswith("."):
        class_name = class_name[1:]
    return class_name

def is_relevant_file(file):
    file_lower = file.lower()
    return file_lower.endswith(".cls") \
        or file_lower.endswith(".inc") \
        or file_lower.endswith(".gbl") \
        or file_lower.endswith(".csp") \
        or file_lower.endswith(".lut") \
        or file_lower.endswith(".hl7")

def get_all_cls(directory):
    all_files = [val for sublist in [[os.path.join(i[0], j) for j in i[2]] for i in os.walk(directory)] for val in sublist]
    all_relevant_files = list(filter(is_relevant_file, all_files))
    normalized = list(map(lambda file: normalize_file_path(file, directory), all_relevant_files))
    print(normalized)
    return json.dumps(normalized)
It is all rather hacky, and you will probably have to use the snippets I provided as a basis and implement the rest yourself.
What we do is:
Spin up a Docker container with Python enabled in the build pipeline, with the source files mounted to /opt/app/src
Execute the CreateStudioExport() method in said docker container
Copy the newly created studio export to the build pipeline host
Tag the studio export as artifact and upload it to a file storage
Maybe this helps! Let me know if you have questions!
Announcement
Anastasia Dyubaylo · Feb 8, 2023
Community webinars are back!
And we're thrilled to invite you to the webinar of George James Software, partners of InterSystems:
👉 "Demo of Deltanji: source control tailored for InterSystems IRIS" 👈
Join this webinar to learn how the Deltanji source control can seamlessly integrate into your development lifecycle and see a demonstration.
🗓️ Date & Time: Thursday, February 23, 4 pm GMT | 5 pm CET | 11 am ET
🗣️ Speakers from George James Software:
@George.James, CEO
@John.Murray, Senior Product Engineer
@Laurel.James, Marketing and Business Development Manager
👉 What will be discussed during the webinar:
How the source control integrates with your system is imperative in ensuring it works seamlessly behind the scenes without interruption. Deltanji source control, by George James Software, understands the internal workings of InterSystems IRIS and provides a solution that can seamlessly handle its unique needs. It has client integrations for VS Code, Studio, Management Portal, and Management Portal Productions.
This demo will show how Deltanji goes beyond the traditional CI/CD pipeline, automating the project lifecycle from development through to deployment, making it the perfect source control companion for organizations with continually evolving systems. It will also include a demo of the new Production component driver, which enables highly granular management of interoperability Productions with tight integration into the Management Portal.
Using Deltanji will improve the quality of both the development process and the overall health of a system - ensuring developers are working on the correct code and preventing regressions and other problems that can result in poor code quality and increased costs. Deltanji has been widely adopted by software consultants, large international organizations, and everyone in-between who works with InterSystems environments.
Sounds very interesting and useful!
Don't miss this opportunity to learn more about Deltanji source control and its use in InterSystems projects!
>> REGISTER HERE <<
Hi Devs,
Don't miss the upcoming webinar with George James Software!
43 people have already registered. 😎
Registration continues >> click here <<
Hurry, the webinar starts in 3 days! 🔥
Developers,
The webinar will take place tomorrow at 4 pm GMT | 5 pm CET | 11 am ET!
Don't miss this opportunity to learn how the Deltanji source control can seamlessly integrate into your development lifecycle and see a demonstration.
>> Register here <<
Hey Community,
The webinar will start in 10 mins! Please join us in Zoom.
Or enjoy watching the live stream on YouTube.
See you ;)
Hey everyone!
The recording of this webinar is available on DC YouTube:
▶️ [Webinar] Demo of Deltanji: source control tailored for InterSystems IRIS
Big applause to our awesome speakers @George.James, @John.Murray, and @Laurel.James.
And thanks everyone for joining!
Thanks for having us!