Article
Evgeny Shvarov · Jan 17, 2020
Hi Developers!
Recently we published images for the InterSystems IRIS Community Edition and InterSystems IRIS for Health Community Edition containers on Docker Hub.
What is that?
There is a repository that publishes them; in fact, they are the same IRIS Community Edition containers you find in the official InterSystems listing, but with the ObjectScript Package Manager (ZPM) client pre-loaded.
So if you run a container with IRIS CE or IRIS CE for Health, you can immediately start using ZPM and install packages from the Community Registry or any other registry.
What does this mean for you?
It means that anyone can deploy any of your InterSystems ObjectScript applications in three commands:
run IRIS container;
open terminal;
install your application as ZPM package.
It is safe, fast and cross-platform.
It's really handy if you want to test an interesting new ZPM package without harming any of your systems.
Suppose you have Docker Desktop installed. You can run the image, which will pull the latest container if you don't have it locally:
$ docker run --name iris-ce -d --publish 52773:52773 intersystemsdc/iris-community:2019.4.0.383.0-zpm-dev
Or the following for InterSystems IRIS for Health:
$ docker run --name iris-ce -d --publish 52773:52773 intersystemsdc/irishealth-community:2019.4.0.383.0-zpm-dev
Open a terminal to it:
$ docker exec -it iris-ce iris session iris
Node: e87717c3d95d, Instance: IRIS
USER>
Install a ZPM module:
USER>zpm
zpm: USER>install objectscript-math
[objectscript-math] Reload START
[objectscript-math] Reload SUCCESS
[objectscript-math] Module object refreshed.
[objectscript-math] Validate START
[objectscript-math] Validate SUCCESS
[objectscript-math] Compile START
[objectscript-math] Compile SUCCESS
[objectscript-math] Activate START
[objectscript-math] Configure START
[objectscript-math] Configure SUCCESS
[objectscript-math] Activate SUCCESS
zpm: USER>q
USER>w ##class(Math.Math).LeastCommonMultiple(134,382)
25594
USER>
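If you were only trying the package out, you can remove it again from the same ZPM shell when you're done (a quick sketch; the uninstall command removes the module and its resources):
USER>zpm
zpm: USER>uninstall objectscript-math
zpm: USER>q
USER>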
Happy coding with ObjectScript and ZPM!
Since the docker pull command is not strictly necessary, I omitted it from the text above, but here it is just in case:
For InterSystems IRIS:
docker pull intersystemsdc/iris-community:2019.4.0.383.0-zpm-dev
For InterSystems IRIS for Health:
docker pull intersystemsdc/irishealth-community:2019.4.0.383.0-zpm-dev
Announcement
Anastasia Dyubaylo · Oct 17, 2019
Hi Developers,
New Coding Talk, recorded by @Evgeny.Shvarov, is available on InterSystems Developers YouTube:
🎯 Creating REST API with InterSystems IRIS, ObjectScript and Docker
In this video you will learn how to create a REST API with InterSystems IRIS and ObjectScript from scratch using a GitHub template, VSCode, and a Docker container.
Please check the additional links:
GitHub Template
ObjectScript Docker Template App on Open Exchange
Feel free to ask your questions in the comments to this post.
Enjoy watching the video! 👍🏼
Announcement
Anastasia Dyubaylo · Oct 22, 2019
Hi Community,
We're pleased to invite you to Europe's biggest data science gathering, the Data Natives Conference 2019, on 25-26 November in Berlin, Germany! Join us and learn how InterSystems technology can support your AI & ML initiatives to help you shape our world! 🔥
What's more?
Take part in @Benjamin.DeBoe's keynote "From data swamps to clean data lakes", in which he will present best practices for sustainable AI/ML. And visit the InterSystems booth on the 1st floor to discuss real-world use cases and state-of-the-art tools with our experts!
So, remember:
⏱ Time: November 25-26, 2019
📍Venue: Kühlhaus Berlin, Berlin, Germany
✅ Registration: GET YOUR TICKET HERE
We look forward to welcoming you in Berlin!
Article
Evgeny Shvarov · Nov 18, 2019
Hi Developers!
Recently we announced two new challenges on Global Masters: 'Bugs Bounty' and 'Pull Requests'.
We are getting a lot of submissions to these challenges that are not quite what we expect, so I hope this post will shed some light on them.
'Bugs Bounty'
OK! What are we expecting from 'Bugs Bounty'?
There are a lot of Open Exchange solutions that come with public open-source repositories on GitHub: a project and its repo, another project and its repo, yet another one and its repo, and many more on Open Exchange.
The idea of the challenge is to install a solution or tool that has a GitHub repo, test it, and if you find a bug, submit an issue - e.g. to the repo of this project.
And then submit the link to the issue to the 'Bugs Bounty' challenge on Global Masters - we will check the link and send you the points.
If the repo maintainer closes the issue, send it to us again and get even more points!
That's it! What is the next one?
'Pull Request'
This challenge is somewhat related to 'Bugs Bounty' - it could be a fix for a bug you found in the previous challenge. Let me explain it a bit.
We invite you not only to find bugs but also to fix them! The way you do that on GitHub is with Pull Requests, or PRs.
How do you make a PR? You fork the repo, clone it to your local machine, fix the bug, commit and push the changes to your fork, and then create a pull request. Check the video about GitHub Pull Requests and the video on How to Make GitHub Pull Requests with InterSystems IRIS.
Once you have created it, you can submit the link to the pull request to the challenge and collect your GM points. Check this example of a PR.
Moreover, if the repo maintainer accepts your pull request, you get even more points!
And you can fix not only the bugs you find - you can submit a pull request for any issue listed in the repo that you know how to fix.
And it doesn't have to be a bug fix - it could be an enhancement, too.
I hope this clears things up - we are looking forward to new issues and pull requests!
Article
Eduard Lebedyuk · Sep 26, 2022
Welcome to the next chapter of [my CI/CD series](https://community.intersystems.com/post/continuous-delivery-your-intersystems-solution-using-gitlab-index), where we discuss possible approaches toward software development with InterSystems technologies and GitLab.
Today, let's talk about interoperability.
# Issue
When you have an active interoperability production, you have two separate process flows: a working production that processes messages and a CI/CD process flow that updates code, production configuration and [system default settings](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ECONFIG_other_default_settings).
Clearly, the CI/CD process affects interoperability. But the questions are:
- What exactly happens during an update?
- What do we need to do to minimize or eliminate production downtime during an update?
# Terminology
- Business Host (BH) - one configurable element of Interoperability Production: Business Service (BS), Business Process (BP, BPL), or Business Operation (BO).
- Business Host Job (Job) - InterSystems IRIS job that runs Business Host code and is managed by Interoperability production.
- Production - interconnected collection of Business Hosts.
- System Default Settings (SDS) - values that are specific to the environment where InterSystems IRIS is installed.
- Active Message - a request which is currently being processed by one Business Host Job. One Business Host Job can have at most one Active Message. A Business Host Job that does not have an Active Message is idle.
# What's going on?
Let's start with the Production Lifecycle.
## Production Start
First of all, Production can be started. Only one production per namespace can run simultaneously, and in general (unless you really know what and why you're doing it), only one production should be run per namespace, ever. Switching back and forth in one namespace between two or more different productions is not recommended. Starting production starts all enabled Business Hosts defined in the production. Failure of some Business Hosts to start does not affect Production start.
Tips:
- Start the production from the System Management Portal or by calling: `##class(Ens.Director).StartProduction("ProductionName")`
- Execute arbitrary code on Production start (before any Business Host Job is started) by implementing an `OnStart` method (see the sketch after these tips)
- Production start is an auditable event. You can always see who did that and when in the Audit Log.
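Here is a minimal sketch of a production class that overrides the start hook from the tip above (the class name and the logging global are hypothetical; check the Ens.Production class reference for the exact method signatures):

```objectscript
Class Demo.Production Extends Ens.Production
{

XData ProductionDefinition
{
<Production Name="Demo.Production" LogGeneralTraceEvents="false">
</Production>
}

/// Runs when the production starts, before any Business Host Job is started
ClassMethod OnStart(pTimeStarted As %String)
{
    // record the start time somewhere visible, e.g. in a global
    set ^Demo.ProductionLog($increment(^Demo.ProductionLog)) = "Started at "_pTimeStarted
}

}
```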
## Production Update
After Production has been started, [Ens.Director](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=ENSLIB&CLASSNAME=Ens.Director) continuously monitors the running production. Two production states exist: the _target state_, defined in the production class and System Default Settings, and the _running state_ - the currently running jobs with the settings applied when those jobs were created. If the target and running states are identical, everything is good, but if there's a difference, production could (and should) be updated. Usually, you see that as a red `Update` button on the Production Configuration page in the System Management Portal.
Updating production means an attempt to get the current Production state to match the target Production state.
When you run `##class(Ens.Director).UpdateProduction(timeout=10, force=0)` to update the production, it does the following for each Business Host:
1. Compares active settings to production/SDS/class settings
2. If, and only if, (1) shows a mismatch, the Business Host is marked as out-of-date and requiring an update.
After running this for each Business Host, `UpdateProduction` builds the set of changes:
- Business Hosts to stop
- Business Hosts to start
- Production settings to update
And after that, applies them.
This way, “updating” settings without changing anything results in no production downtime.
Tips:
- Update the production from the System Management Portal or by calling: `##class(Ens.Director).UpdateProduction(timeout=10, force=0)`
- The default System Management Portal update timeout is 10 seconds. If you know that processing your messages takes more than that, call `Ens.Director:UpdateProduction` with a larger timeout (see the example below).
- Update Timeout is a production setting, and you can change it to a larger value. This setting applies to the System Management Portal.
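For example, if your messages can take several minutes to complete, an update call from code might look like this (a minimal sketch; the 600-second timeout is an arbitrary illustrative value):

```objectscript
// give in-flight messages up to 10 minutes to finish before jobs are recycled
set sc = ##class(Ens.Director).UpdateProduction(600)
if 'sc { write $system.Status.GetErrorText(sc), ! }
```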
## Code Update
`UpdateProduction` DOES NOT UPDATE the BHs with out-of-date code. This is a safety-oriented behavior, but if you want to automatically update all running BHs if the underlying code changes, follow these steps:
First, load and compile like this:
```objectscript
do $system.OBJ.LoadDir(dir, "", .err, 1, .load)
do $system.OBJ.CompileList(load, "curk", .errCompile, .listCompiled)
```
Now, `listCompiled` contains all the items which were actually compiled (use [git diffs](https://github.com/intersystems-ru/GitLab/blob/master/isc/git/Diff.cls) to minimize the loaded set) thanks to the `u` flag. Use this `listCompiled` to get a $lb of all the classes which were compiled:
```objectscript
set classList = ""
// walk the compiled items and collect just the class names (strip the .cls extension)
set class = $o(listCompiled(""))
while class'="" {
  set classList = classList _ $lb($p(class, ".", 1, *-1))
  set class = $o(listCompiled(class))
}
```
And after that, calculate a list of BHs which need a restart:
```sql
SELECT %DLIST(Name) bhList
FROM Ens_Config.Item
WHERE 1=1
AND Enabled = 1
AND Production = :production
AND ClassName %INLIST :classList
```
Finally, after obtaining `bhList` stop and start affected hosts:
```objectscript
// two passes: stop=1 stops the affected hosts, stop=0 starts them again
for stop = 1, 0 {
  for i=1:1:$ll(bhList) {
    set host = $lg(bhList, i)
    set sc = ##class(Ens.Director).TempStopConfigItem(host, stop, 0)
  }
  set sc = ##class(Ens.Director).UpdateProduction()
}
```
## Production Stop
Productions can be stopped, which means sending a request to all Business Host Jobs to shut down (safely, after they are done with their active messages, if any).
Tips:
- Stop the production from the System Management Portal or by calling: `##class(Ens.Director).StopProduction(timeout=10, force=0)`
- The default System Management Portal stop timeout is 120 seconds. If you know that processing your messages takes more than that, call `Ens.Director:StopProduction` with a larger timeout.
- Shutdown Timeout is a production setting. You can change that to a larger value. This setting applies to the System Management Portal.
- Execute arbitrary code on Production stop by implementing an `OnStop` method
- Production stop is an auditable event; you can always see who did that and when in the Audit Log.
The important thing here is that Production is a sum total of the Business Hosts:
- Starting production means starting all enabled Business Hosts.
- Stopping production means stopping all running Business Hosts.
- Updating production means calculating a subset of Business Hosts which are out of date, so they are first stopped and immediately after that started again. Additionally, a newly added Business Host is only started, and a Business Host deleted from production is just stopped.
That brings us to the Business Hosts lifecycle.
## Business Host Start
Business Hosts are composed of identical Business Host Jobs (according to the Pool Size setting value). Starting a Business Host means starting all of its Business Host Jobs. They are started in parallel.
Individual Business Host Job starts like this:
1. Interoperability JOBs a new process that would become a Business Host Job.
2. The new process registers as an Interoperability job.
3. Business Host code and Adapter code are loaded into process memory.
4. Settings related to a Business Host and Adapter are loaded into memory. The order of precedence is:
a. Production Settings (overrides System Default and Class Settings).
b. System Default Settings (overrides Class Settings).
c. Class Settings.
5. Job is ready and starts accepting messages.
After (4) is done, the Job can’t change settings or code, so when you import new/same code and new/same systems default settings, it does not affect currently running Interoperability jobs.
## Business Host Stop
Stopping a Business Host Job means:
1. Interoperability orders Job to stop accepting any more messages/inputs.
2. If there's an active message, the Business Host Job has `timeout` seconds to process it (by completing it - finishing the `OnMessage` method for a BO, `OnProcessInput` for a BS, the current state's `S` method for BPL BPs, and the `On*` method for BPs).
3. If an active message has not been processed by the timeout and `force=0`, the production update fails for that Business Host (and you'll see a red Update button in the SMP).
4. Stop succeeds if anything on this list is true:
- No active message
- Active message was processed before the `timeout`
- Active message was not processed before the timeout BUT `force=1`
5. Job is deregistered with Interoperability and halts.
## Business Host Update
Business Host update means stopping the currently running Jobs for the Business Host and starting new Jobs.
## Business Rules, Routing Rules, and DTLs
All Business Hosts immediately start using new versions of Business Rules, Routing Rules, and DTLs as they become available. A restart of a Business Host is not required in this situation.
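For example, recompiling a routing rule or DTL class is all it takes; running jobs pick up the newly generated code for the next message (a minimal sketch; the class name is hypothetical):

```objectscript
// recompile the transformation; no Business Host restart is required
do $system.OBJ.Compile("Demo.MyTransformation", "ck")
```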
# Offline updates
Sometimes, however, Production updates require downtime of individual Business Hosts.
## Rules depend on new code
Consider the situation. You have a current Routing Rule X which routes messages to either Business Process A or B based on arbitrary criteria.
In a new commit, you add, simultaneously:
- Business Process C
- A new version of Routing Rule X, which routes messages to A, B, or C.
In this scenario, you can't just load the rule first and update the production second, because the newly compiled rule would immediately start routing messages to Business Process C, which InterSystems IRIS might not have compiled yet, or which Interoperability has not yet updated to use.
In this case, you need to disable the Business Host with a Routing Rule, update the code, update production and enable the Business Host again.
Notes:
- If you update a production using a [production deployment file](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=EGDV_deploying), it automatically disables/enables all affected BHs.
- For InProc invoked hosts, the compilation invalidates the cache of the particular host held by the caller.
## Dependencies between Business Hosts
Dependencies between Business Hosts are critical. Imagine you have Business Processes A and B, where A sends messages to B.
In a new commit, you add, simultaneously:
- A new version of Process A, which sets a new property X in a request to B
- A new version of Process B which can process a new property X
In this scenario, we MUST update Process B first and A second. You can do this in one of two ways:
- Disable Business Hosts for the duration of the update
- Split the update into two: first, update Process B only, and after that, in a separate update, start sending messages to it from Process A.
A more challenging variation on this theme, where new versions of Processes A and B are incompatible with old versions, requires Business Host downtime.
## Queues
If you know that after the update, a Business Host will not be able to process old messages, you need to guarantee that the Business Host Queue is empty before the update. To do that, disable all Business Hosts that send messages to the Business Host and wait till its queue becomes empty.
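A hedged sketch of waiting for a Business Host's queue to drain before updating it (this assumes the queue name matches the Business Host name, which is the default, and that `Ens.Queue.GetCount` is available; check the Ens.Queue class reference for the exact API):

```objectscript
// poll until the target Business Host has no queued messages
while ##class(Ens.Queue).GetCount("Demo.TargetProcess") > 0 {
    hang 5
}
```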
## State change in BPL Business Processes
First, a little intro into how BPL BPs work. After you compile a BPL BP, two classes are generated in a package with the same name as the full BPL class name:
- `Thread1` class contains methods S1, S2, ... SN, which correspond to activities within BPL
- The `Context` class has all the context variables and also the next state which BPL would execute (e.g., `S5`)
Also, the BPL class itself is persistent and stores the requests currently being processed.
BPL works by executing `S` methods in the `Thread` class and correspondingly updating the BPL class table, the `Context` table, and the `Thread1` table, where one message "being processed" is one row in the BPL table. After the request is processed, BPL deletes the BPL, `Context`, and `Thread` entries. Since BPL BPs are asynchronous, one BPL job can simultaneously process many requests by saving information between `S` calls and switching between different requests.
For example, a BPL processes one request until it gets to a `sync` activity - waiting for an answer from a BO. It saves the current context to disk, with the `%NextState` property (in the `Thread1` class) set to the response activity's `S` method, and works on other requests until the BO answers. After the BO answers, BPL loads the Context back into memory and executes the method corresponding to the state saved in the `%NextState` property.
Now, what happens when we update the BPL?
First, we need to check that at least one of the two conditions is satisfied:
- During the update, the Context table is empty, meaning no active messages are being worked on.
- The New States are the same as the old States, or new States are added after the old States.
If at least one condition is satisfied, we are good to go. There are either no pre-update requests for post-update BPL to process, or States are added at the end, meaning old requests can also go there (assuming that pre-update requests are compatible with post-update BPL activities and processing).
But what if you have active requests in processing and BPL changes state order? Ideally, if you can wait, disable BPL callers and wait till the Queue is empty. Validate that the Context table is also empty. Remember that the Queue shows only unprocessed requests, and the Context table stores requests which are being worked on, so you can have a situation where a very busy BPL shows zero Queue size, and that's normal.
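Here is a small sketch of that check, assuming the BPL class is Demo.MyBPL, so its generated context table is Demo_MyBPL.Context:

```objectscript
// count in-flight BPL requests; zero means it is safe to update the BPL
set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT COUNT(*) AS Cnt FROM Demo_MyBPL.Context")
if rs.%Next(), rs.Cnt = 0 {
    write "No active requests - safe to update the BPL", !
}
```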
After that, disable the BPL, perform the update and enable all previously disabled Business Hosts.
If that's not possible (usually in a case where there is a very long BPL, i.e., I remember updating one that took around a week to process a request, or the update window is too short), use [BPL versioning](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=EBPLR_process).
Alternatively, you can write an update script. In this update script, map old next states to new next states and run it on `Thread1` table so that updated BPL can process old requests. BPL, of course, must be disabled for the duration of the update.
That said, it's an extremely rare situation, and usually, you don't have to do this, but if you ever need to do that, that's how.
# Conclusion
Interoperability implements a sophisticated algorithm to minimize the number of actions required to bring Production up to date after an underlying code change. Call UpdateProduction with a safe timeout on every SDS update. For every code update, you need to decide on an update strategy.
Minimizing the amount of compiled code by using [git diffs](https://github.com/intersystems-ru/GitLab/blob/master/isc/git/Diff.cls) helps with the compilation time, but "updating" the code with itself and recompiling it or "updating" the settings with the same values does not trigger or require a Production Update.
Updating and compiling Business Rules, Routing Rules, and DTLs makes them immediately accessible without a Production Update.
Finally, Production Update is a safe operation and usually does not require downtime.
# Links
- [Ens.Director](https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=ENSLIB&CLASSNAME=Ens.Director)
- [Building git diffs](https://github.com/intersystems-ru/GitLab/blob/master/isc/git/Diff.cls)
- [System Default Settings](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ECONFIG_other_default_settings)
Author would like to thank @James.MacKeith, @Dmitry.Zasypkin, and @Regilo.Souza for their invaluable help with this article.
Article
Developer Community Admin · Apr 13, 2023
What is InterSystems IRIS?
InterSystems IRIS is a high-performance data platform designed for developing and deploying mission-critical applications. It is a unified data platform that combines transaction processing, analytics, and machine learning in a single product.
InterSystems IRIS provides a comprehensive set of data management and development tools that enable developers to build, integrate, and deploy applications with ease. It supports a wide range of data models, including relational, object-oriented, hierarchical, and document-based models, and provides a powerful set of APIs for accessing data.
InterSystems IRIS is used in a variety of industries, including healthcare, finance, logistics, and more, to power critical applications such as electronic health records, financial trading systems, and supply chain management platforms. It is known for its scalability, reliability, and ease of use, and is used by some of the world's largest organizations to manage their most important data-driven applications.
What type of database is InterSystems IRIS?
InterSystems IRIS is a multi-model database, which means that it supports multiple data models including relational, object-oriented, and document-based models. It is designed to be highly flexible and adaptable, allowing developers to choose the data model that best fits their application's requirements.
InterSystems IRIS supports standard SQL for relational data management, and it also provides advanced indexing capabilities and query optimization to improve performance. In addition, it supports NoSQL document-oriented data storage, allowing developers to work with unstructured and semi-structured data. The object-oriented data model in InterSystems IRIS allows developers to work with complex data structures and build object-oriented applications.
The multi-model architecture of InterSystems IRIS provides developers with the flexibility to work with different types of data in a single database, simplifying application development and management. This makes it a popular choice for building high-performance, data-driven applications in a variety of industries.
What is the InterSystems IRIS database?
InterSystems IRIS is a high-performance database management system (DBMS) that is designed to handle a wide variety of data management tasks. It is developed by InterSystems Corporation, a software company that specializes in providing data management, interoperability, and analytics solutions to businesses and organizations around the world.
InterSystems IRIS is a powerful and flexible database platform that can handle both structured and unstructured data, and can be used for a variety of applications, including transaction processing, analytics, and machine learning. It provides a rich set of features and tools for managing data, including support for SQL, object-oriented data modeling, multi-dimensional data analysis, and integrated development and deployment tools.
One of the key features of InterSystems IRIS is its ability to handle large amounts of data with high performance and scalability. It uses advanced caching and indexing techniques to optimize data access, and can be configured to work with a wide range of hardware configurations and operating systems.
InterSystems IRIS also includes advanced security features, such as role-based access control, encryption, and auditing, to ensure the confidentiality, integrity, and availability of data.
Overall, InterSystems IRIS is a powerful and flexible database platform that can help businesses and organizations manage their data more effectively and efficiently.
What is InterSystems IRIS HealthShare?
InterSystems IRIS HealthShare is a healthcare-specific platform that builds on top of InterSystems IRIS database and integration engine to provide a comprehensive solution for healthcare organizations. It is designed to enable healthcare organizations to securely and efficiently share patient data across different systems and applications, while also providing advanced analytics and insights to improve patient care.
InterSystems IRIS HealthShare includes a wide range of features and tools, including:
Health Information Exchange (HIE) capabilities that enable healthcare organizations to securely exchange patient data across different systems and providers.
Master Patient Index (MPI) functionality that ensures accurate patient identification and record matching, even in the face of incomplete or inconsistent data.
Clinical Viewer and Patient Portal that enable patients and clinicians to view and interact with patient data in a secure and intuitive way.
Analytics and Business Intelligence tools that enable healthcare organizations to analyze patient data and identify patterns and trends that can improve patient outcomes and drive operational efficiencies.
Interoperability capabilities that enable healthcare organizations to connect to and exchange data with a wide range of external systems and devices.
Overall, InterSystems IRIS HealthShare is a powerful and flexible platform that can help healthcare organizations improve the quality of patient care while also reducing costs and improving operational efficiency.
How is InterSystems IRIS data stored?
InterSystems IRIS stores data using a hierarchical, multi-dimensional data model. Each element of it is called a Global. A Global is a persistent, hierarchical data structure that can be thought of as a collection of nodes organized into a tree-like structure. A Global's name begins with a caret (^) character, and each node in the tree is identified by the Global name plus a unique list of subscripts.
A Global can store a wide variety of data types, including strings, numbers, and binary data, and can be accessed using a variety of programming languages and APIs, including SQL, object-oriented programming, and Web services.
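For example, in ObjectScript a Global is written and read directly by name and subscripts (the names below are purely illustrative):

```objectscript
// store and read hierarchical data in a Global
set ^Patient(1, "Name") = "Jane Doe"
set ^Patient(1, "Visits", "2023-01-15") = "Annual checkup"
// prints: Jane Doe
write ^Patient(1, "Name"), !
// prints the first visit subscript: 2023-01-15
write $order(^Patient(1, "Visits", "")), !
```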
InterSystems IRIS also provides a flexible and scalable indexing system that enables efficient retrieval of data from Globals. The indexing system allows developers to define custom indexes on specific attributes of the data, which can be used to quickly retrieve subsets of data that meet specific criteria.
In addition to Globals, InterSystems IRIS also supports other data storage mechanisms, including relational tables, multidimensional arrays, and JSON documents. Relational tables are based on the SQL standard and provide a structured, tabular way to store data. Multidimensional arrays are used to store data that is organized into matrices or cubes, while JSON documents are used to store unstructured or semi-structured data.
Overall, InterSystems IRIS provides a flexible and powerful data storage system that can handle a wide variety of data types and data models, making it well-suited for a wide range of applications and use cases.
Is InterSystems IRIS a programming language?
InterSystems IRIS is not a programming language itself, but it provides support for a variety of programming languages and APIs. Some of the programming languages that are supported by InterSystems IRIS include:
ObjectScript: InterSystems' proprietary programming language, which is used for developing applications that run on the InterSystems IRIS platform.
SQL: InterSystems IRIS provides full support for the SQL programming language, which can be used to interact with relational data stored in InterSystems IRIS.
Java and .NET: InterSystems IRIS provides support for both the Java and .NET programming languages, which can be used to develop applications that interact with InterSystems IRIS.
REST and SOAP APIs: InterSystems IRIS provides support for both RESTful and SOAP-based APIs, which can be used to develop Web services that interact with InterSystems IRIS.
Node.js: InterSystems IRIS also provides support for Node.js, a popular JavaScript runtime environment, which can be used to develop server-side applications that interact with InterSystems IRIS.
Python: InterSystems provides support for the Python programming language, which can be used both as one of the languages to develop applications that run on the InterSystems IRIS platform and as a client API.
Overall, while InterSystems IRIS is not a programming language in and of itself, it provides a wide range of tools and APIs that enable developers to build and deploy applications using a variety of programming languages and frameworks.
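As a small illustration of the first two items above, here is a sketch of an ObjectScript class that mixes ObjectScript with embedded SQL (the class name is hypothetical):

```objectscript
Class Demo.LanguageSample
{

/// Count the classes defined in the current namespace using embedded SQL
ClassMethod Run()
{
    write "Hello from ObjectScript!", !
    &sql(SELECT COUNT(*) INTO :total FROM %Dictionary.ClassDefinition)
    if SQLCODE = 0 { write "Classes in this namespace: ", total, ! }
}

}
```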
Which model is best for InterSystems IRIS data?
InterSystems IRIS supports a variety of data models, including hierarchical, relational, multidimensional, and document-based (JSON). The best model for your specific use case will depend on a variety of factors, including the nature of the data, the types of queries and analysis you need to perform, and the overall architecture of your application. Here are some general guidelines for choosing the best data model for your InterSystems IRIS implementation:
Hierarchical Model: If your data has a hierarchical structure, such as patient records in a healthcare application or parts and subassemblies in a manufacturing system, a hierarchical model may be the best choice. Hierarchical models are optimized for fast traversal of tree-like structures and can provide excellent performance for certain types of queries and updates.
Relational Model: If your data is highly structured and requires complex queries or joins, a relational model may be the best choice. Relational databases are well-suited for handling large amounts of structured data and provide powerful querying and reporting capabilities.
Multidimensional Model: If your data is organized into matrices or cubes, such as financial data or scientific data, a multidimensional model may be the best choice. Multidimensional databases are optimized for fast querying and analysis of complex data structures.
Document-Based Model: If your data is unstructured or semi-structured, such as social media posts or log files, a document-based model may be the best choice. Document databases are optimized for storing and querying unstructured data and can provide excellent performance for certain types of queries.
What is InterSystems IRIS famous for?
InterSystems IRIS is famous for several reasons, including:
High Performance: InterSystems IRIS is known for its high performance and scalability, making it a popular choice for data-intensive applications that require fast and reliable data access.
Integration Capabilities: InterSystems IRIS provides powerful integration capabilities, allowing it to connect to and exchange data with a wide range of external systems and applications. This makes it an ideal choice for organizations that need to integrate data from multiple sources or build complex data-driven applications.
Flexibility: InterSystems IRIS supports a wide range of data models, programming languages, and APIs, giving developers the flexibility to choose the tools and approaches that work best for their specific needs.
Healthcare Focus: InterSystems IRIS is widely used in the healthcare industry, where it is known for its powerful clinical data management capabilities and support for industry-standard data exchange formats.
Developer Community: InterSystems has a large and active developer community, which provides support, resources, and best practices for building and deploying applications using InterSystems IRIS.
Overall, InterSystems IRIS has earned a reputation as a powerful and flexible data management platform that can support a wide range of use cases and industries, making it a popular choice for organizations around the world.
Article
Muhammad Waseem · Apr 17, 2023
Hi Community! In this article, I will introduce my application, iris-mlm-explainer.
This web application connects to InterSystems Cloud SQL to create, train, and validate ML models, make predictions, and display a dashboard of all the trained models with an explanation of how a fitted machine learning model works. The dashboard provides interactive plots on model performance, feature importances, feature contributions to individual predictions, partial dependence plots, SHAP (interaction) values, visualization of individual decision trees, etc.
Prerequisites
You should have an account with InterSystems Cloud SQL
You should have Git installed locally.
You should have Python3 installed locally.
Getting Started
We will follow the steps below to create and view the explainer dashboard of a model:
Step 1 : Clone/git pull the repo
Step 2 : Log in to the InterSystems Cloud SQL Service Portal
Step 2.1 : Add and Manage Files
Step 2.2 : Import DDL and data files
Step 2.3 : Create Model
Step 2.4 : Train Model
Step 2.5 : Validate Model
Step 3 : Activate Python virtual environment
Step 4 : Run Web Application for prediction
Step 5 : Explore the Explainer dashboard
Step 1 : Clone/git pull the repo
So let us start with the first step.
Create a folder and clone/git pull the repo into any local directory:
git clone https://github.com/mwaseem75/iris-mlm-explainer.git
Step 2 : Log in to InterSystems Cloud SQL Service Portal
Log in to InterSystems Cloud Service Portal
Select the running deployment
Step 2.1 : Add and Manage Files
Click on Add and Manage Files
The repo contains USA_Housing_tables_DDL.sql (DDL to create tables), USA_Housing_train.csv (training data), and USA_Housing_validate.csv (validation data) under the datasets folder. Select the upload button to add these files.
Step 2.2 : Import DDL and data files
Click on Import files, select the DDL or DML statement(s) radio button, then click the Next button
Click on the InterSystems IRIS radio button and then click the Next button
Select the USA_Housing_tables_DDL.sql file and then press the Import Files button
Click on Import in the confirmation dialog to create the tables
Click on SQL Query Tools to verify that the tables were created
Import data files
Click on Import files, select the CSV data radio button, then click the Next button
Select the USA_Housing_train.csv file and click the Next button
Select the USA_Housing_train.csv file from the dropdown list, check 'Import rows as a header row' and 'Field names in header row match column names in selected table', and click Import Files
Click on Import in the confirmation dialog
Make sure 4000 rows are updated
Repeat the same steps to import the USA_Housing_validate.csv file, which contains 1500 records
Step 2.3 : Create Model
Click on IntegratedML Tools and select the Create Panel.
Enter USAHousingPriceModel in the Model Name field, select the usa_housing_train table, and select Price in the 'field to predict' dropdown. Click the Create Model button to create the model
Step 2.4 : Train Model
Select the Train Panel, select USAHousingPriceModel from the 'model to train' dropdown list, and enter USAHousingPriceModel_t1 in the 'train model name' field
The model will be trained once the Run Status shows completed
Step 2.5 : Validate Model
Select the Validate Panel, select USAHousingPriceModel_t1 from the 'trained model to validate' dropdown list, select usa_housing_validate from the 'table to validate model from' dropdown list, and click the Validate Model button
Click on Show Validation Metrics to view the metrics
Click on the chart icon to view the Prediction vs. Actual graph
Step 3 : Activate Python virtual environment
The repository already contains a Python virtual environment folder (venv) along with all the required libraries.
All we need to do is activate the environment. On Unix or macOS:
$ source venv/bin/activate
On Windows:
venv\scripts\activate
Step 4 : Set InterSystems Cloud SQL connection parameters
The repo contains a config.py file. Just open it and set the parameters. Use the same values as in InterSystems Cloud SQL.
Step 4 : Run Web Application for prediction
Run the command below in the virtual environment to start our main application
python app.py
Navigate to http://127.0.0.1:5000/ to run the application
Enter the age of the house, number of rooms, number of bedrooms, and area population to get the prediction
Step 5 : Explore the Explainer dashboard
Finally, run the command below in the virtual environment to start the explainer dashboard application
python expdash.py
Navigate to http://localhost:8050/ to run the application
The application will list all the trained models along with our USAHousingPriceModel. Click the "go to dashboard" hyperlink to view the model explainer
Feature Importances: which features had the biggest impact?
Quantitative metrics for model performance: how close is the predicted value to the observed?
Prediction: how has each feature contributed to the prediction?
Adjust the feature values to change the prediction
SHAP Summary: ordering features by SHAP values
Interactions Summary: ordering features by SHAP interaction values
Decision Trees: displaying individual decision trees inside the Random Forest
Thanks I love it
Article
Evgeny Shvarov · Feb 12, 2023
Hi Developers!
As you know, InterSystems IRIS Interoperability solutions contain different elements, such as productions, business rules, business processes, data transformations, and record mappers. Sometimes we create and modify these elements with UI tools. And of course we need a handy and robust way to source-control the changes made with the UI tools.
For a long time this was either a manual procedure (export the class, element, global, etc.) or a cumbersome settings exercise, so the time saved by source-control UI automation competed with the time lost setting up and maintaining those settings.
Now the problem doesn't exist anymore, thanks to two approaches: package-first development and the IPM package git-source-control by @Timothy.Leavitt.
The details are below!
Disclaimer: this relates to a client-side approach to development, where the elements of the Interoperability production are files in the repository.
So, this article will not be long at all, as the solution is fantastically simple.
I assume you develop with Docker, and once you build the dev environment Docker image with IRIS, you load the solution as an IPM module. This is called "package-first" development, and there is a related video and article. The basic idea is that the dev-environment Docker image with IRIS gets the solution loaded as a package, exactly as it would be deployed on a client's server.
To make a package-first dev environment for your solution, add a module.xml to the repository, describe all the elements of the solution in it, and call the zpm "load repository/folder" command at the build phase of the Docker image.
I can demonstrate the idea with the example template: IRIS Interoperability template and its module.xml. Here is how the package is loaded during docker build:
zpm "load /home/irisowner/irisdev/ -v":1:1
the source.
Notice the following two lines placed before the package is loaded: they install git-source-control and set it as the source control class. Because of them, source control starts working automatically for ALL the Interoperability elements in the package and exports them into the proper folders in the proper format:
zpm "install git-source-control"
do ##class(%Studio.SourceControl.Interface).SourceControlClassSet("SourceControl.Git.Extension")
the source
How is it possible?
Recently, the git-source-control app added support for IPM packages that are loaded in dev mode. It reads the folder to export to and the structure of the sources from module.xml. @Timothy.Leavitt can provide more details.
If we check the list of IPM modules in the terminal after the environment is built, we can see that the loaded module is indeed in dev mode:
USER>zpm
=============================================================================
|| Welcome to the Package Manager Shell (ZPM). ||
|| Enter q/quit to exit the shell. Enter ?/help to view available commands ||
=============================================================================
zpm:USER>list
git-source-control 2.1.0
interoperability-sample 0.1.2 (DeveloperMode)
sslclient 1.0.1
zpm:USER>
Let's try?
I cloned this repository, opened it in VSCode, and built the image. Below I test the Interoperability UI and source control: I make a change in the UI, and it immediately appears in the sources and diffs:
It works! That's it!
In conclusion, here is what is needed to have source control for Interoperability UI elements in your project:
1. Add two lines to iris.script while building the Docker image:
zpm "install git-source-control"
do ##class(%Studio.SourceControl.Interface).SourceControlClassSet("SourceControl.Git.Extension")
And load your solution as a module after that, e.g. like here:
zpm "load /home/irisowner/irisdev/ -v":1:1
2. Or you can start a new project by creating a repository from the Interoperability template.
Thanks for reading! Comments and feedback are welcome!

If I understand correctly, the trick is to load the "src/gbl/SYS.xml" file that holds the configuration of the git-source-control module.
If we create a new file like a BP or a DTL, we still have to add it to the source control module with the UI.
Then, if we do so, we have to update the "src/gbl/SYS.xml" file with the new file.
Am I right?

The global is not needed. With this approach nothing is needed: no global, and all the elements of the production are already "in source control". No manual adding via the UI is needed either. @Timothy.Leavitt, help me out, but it looks like git-source-control gets the elements from module.xml: if you have these classes or packages in module.xml, they are in source control.

@Evgeny.Shvarov you're correct that no further configuration is needed - although if you want to commit directly from the IDE / Web UI you should set up the username/email for attribution.
At a technical level, see: https://github.com/intersystems/git-source-control/blob/main/cls/SourceControl/Git/PackageManagerContext.cls
git-source-control doesn't reference module.xml directly; there's a method in IPM to get the package to which a given "InternalName" (e.g., Foo.Bar.CLS) belongs, so it calls that.

Great! So, @Guillaume.Rongier7183, make sure the IPM module.xml includes the Interoperability elements, and no other settings are needed.

Great write-up (thank you for the animated GIFs!) This makes it so simple for people to get started - appreciate the write-up.

Thank you, Ben!
Announcement
Larry Finlayson · Feb 13, 2023
Using InterSystems Embedded Analytics March 6-10, 2023 – Virtual 9:00am-5:00pm US-Eastern Time (EST)
This course is offered only a few times each year so take advantage of this session!
This five-day course teaches developers and business intelligence users how to embed real-time analytics capabilities in their applications using InterSystems IRIS Business Intelligence and Natural Language Processing.
The course presents the basics of building data models from transactional data using the Architect, exploring those models and building pivot tables and charts using the Analyzer, as well as creating dashboards for presenting pivot tables, meters, and other interactive widgets.
The course also covers securing models, tools and elements (dashboards, pivot tables, etc.) using the InterSystems security infrastructure.
Additionally, the course presents topics such as customizing the User Portal, troubleshooting and deployment.
This course is applicable for users of InterSystems IRIS Data Platform, InterSystems IRIS for Health, and DeepSee.
Register Here
Article
Eduard Lebedyuk · Mar 1, 2018
Everybody has a testing environment. Some people are lucky enough to have a totally separate environment to run production in. -- Unknown

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

- Git 101
- Git flow (development process)
- GitLab installation
- GitLab Workflow
- GitLab CI/CD
- CI/CD with containers

This first part deals with the cornerstone of modern software development - the Git version control system and various Git flows.

# Git 101

While the main topic we're going to discuss is software development in general and how GitLab can enable us in that endeavor, let's first talk about Git - or rather, several underlying high-level concepts in the Git design that are important for a better understanding of the later concepts.

That said, Git is a version control system based on these ideas (there are many more; these are the most important ones):

- Non-linear development means that while our software is released consecutively from version 1 to 2 to 3, under the table the move from version 1 to 2 is done in parallel - several developers develop a number of features/bug-fixes simultaneously.
- Distributed development means that a developer is independent from one central server or other developers and can develop in his own environment easily.
- Merging - the previous two ideas bring us to the situation where many different versions of truth exist simultaneously, and we need to unite them back into one complete state.

Now, I'm not saying that Git invented these concepts. No. Rather, Git made them easy and popular, and that, coupled with several related innovations (i.e. infrastructure as code/containerization), changed software development.

## Core Git terms

Repository is a project that stores data and meta-information about the data.

- "Physically" a repository is a directory on a disk.
- A repository stores files and directories.
- A repository also stores a complete history of changes for each file.

A repository can be stored:

- Locally, on your own computer
- Remotely, on a remote server

But there's no particular difference between local and remote repositories from the Git point of view.

Commit is a fixed state of the repository. Obviously, if each commit stored the full state of the repository, our repository would grow very big very fast. That's why a commit stores a diff, which is the difference between the current commit and its parent commit.

Different commits can have a different number of parents:

- 0 - the first commit in the repository doesn't have parents.
- 1 - business as usual - our commit changed something in the repository as it was during the parent commit.
- 2 - when we have two different states of the repository, we can unite them into one new state. That state and that commit would have 2 parents.
- >2 - can happen when we unite more than 2 different states of the repository into one new state. It wouldn't be particularly relevant to our discussion, but it does exist.

Now, for a parent, each commit that diffs from it is called a child commit. Each parent commit can have any number of child commits.

Branch is a reference (or pointer) to a commit. Here's how it looks:

In this image we can see the repository with two commits (grey circles); the second one is the head of the master branch. After we add more commits, our repository starts to look like this:

That's the simplest case. One developer works on one change at a time. However, usually there are many developers working simultaneously on different features, and we need a commit tree to show what's going on in our repository.

## Commit tree

Let's begin from the same starting point. Here's the repository with two commits:

But now, two developers are working at the same time, and to not interfere with each other they work in separate branches:

After a while they need to unite the changes they made, and for that they create a merge request (also called a pull request) - which is exactly what it sounds like - a request to unite two different states of the repository (in our case, we want to merge the develop branch into the master branch) into one new state. After it's appropriately reviewed and approved, our repository looks like this:

And the development continues:

## Git 101 - Summary

Main concepts:

- Git is a non-linear, distributed version control system.
- A repository stores data and meta-information about the data.
- A commit is a fixed state of the repository.
- A branch is a reference to a commit.
- A merge request (also called a pull request) is a request to unite two different states of the repository into one new state.

If you want to read more about Git, there are books available.

# Git flows

Now that the reader is familiar with basic Git terms and concepts, let's talk about how the development part of the software life cycle can be managed using Git. There are a number of practices (called flows) which describe the development process using Git, but we are going to talk about two of them:

- GitHub flow
- GitLab flow

## GitHub flow

GitHub flow is as easy as it gets. Here it is:

1. Create a branch from the repository.
2. Commit your changes to your new branch.
3. Send a pull request from your branch with your proposed changes to kick off a discussion.
4. Commit more changes on your branch as needed. Your pull request will update automatically.
5. Merge the pull request once the branch is ready to be merged.

And there are several rules we need to follow:

- The master branch is always deployable (and working!)
- There is no development going on directly in the master branch
- Development is going on in the feature branches
- master == production* environment**
- You need to deploy to production as often as possible

* Do not confuse this with "Ensemble Productions"; here "production" means LIVE.

** An environment is a configured place where your code runs - it could be a server, a VM, or even a container.

Here's how it looks:

You can read more about GitHub flow here. There's also an illustrated guide.

GitHub flow is good for small projects and for trying things out if you're starting out with Git flows. GitHub uses it, though, so it can be viable on big projects too.

## GitLab flow

If you're not ready to deploy to production right away, GitLab flow offers GitHub flow + environments. Here's how it works: you develop in feature branches, same as above, and merge into master, same as above, but here's the twist: master equals only the test environment. In addition to that, you have "environment branches" which are linked to the various other environments you might have.

Usually, three environments exist (you can create more if you need them):

- Test environment == master branch
- PreProduction environment == preprod branch
- Production environment == prod branch

The code that arrives into one of the environment branches should be moved into the corresponding environment immediately; that can be done:

- Automatically (we'll be at it in parts 2 and 3)
- Semi-automatically (same as automatically, except a button authorizing the deployment should be pressed)
- Manually

The whole process goes like this:

1. A feature is developed in a feature branch.
2. The feature branch is reviewed and merged into the master branch.
3. After a while (several features merged), master is merged into preprod.
4. After a while (user testing, etc.), preprod is merged into prod.
5. While we were merging and testing, several new features were developed and merged into master, so GOTO 3.

Here's how it looks:

You can read more about GitLab flow here.

# Conclusion

- Git is a non-linear, distributed version control system.
- A Git flow can be used as a guideline for the software development cycle; there are several that you can choose from.

# Links

- Git book
- GitHub flow
- GitLab flow
- Driessen flow (more comprehensive flow, for comparison)
- Code for this article

# Discussion questions

- Do you use a Git flow? Which one?
- How many environments do you have for an average project?

# What's next

In the next part we will:

- Install GitLab.
- Talk about some recommended tweaks.
- Discuss GitLab Workflow (not to be confused with GitLab flow).

Stay tuned.

The article is considered an InterSystems Data Platform Best Practice.
Announcement
Evgeny Shvarov · Apr 17, 2018
Hi, Community!
We are pleased to invite you to the InterSystems UK Developers Community Meetup on the 25th of June!
The UK Developers Community Meetup is an informal meeting of developers, engineers, and DevOps specialists to discuss successes and best practices, and to share experience with InterSystems Data Platforms!
See and discuss the agenda below.
An excellent opportunity to meet and discuss new solutions with like-minded peers and to find out what's new in InterSystems technology.
The Meetup will take place on the 25th of June from 4 p.m. to 6 p.m. at The Belfry, Sutton Coldfield, with food and beverages supplied.
Your stories are very welcome!
Here is the current agenda for the event:
3:30 p.m. — Welcome Coffee
4:00 p.m. — Big Data Platform for Hybrid Transaction and Analytics Processing, by Jon Payne
4:30 p.m. — Development Models and Teams, by @John.Murray
5:00 p.m. — Reporting in DeepSee Web, by @Evgeny.Shvarov
5:30 p.m. — Automatically configure a customized development environment, by @Sergei.Shutov
The agenda is not finalized, so if you want to be a presenter please comment below.
The UK Developers Meetup precedes the InterSystems Joined Up Health & Care event – the annual gathering for the healthcare community to discuss and collaborate on ways to improve care. View the agenda for JUHC here.
Join the UK Meetup!

Updated the agenda for the meetup - a new session by @Sergey.Shutov has been introduced: Automatically configure a customized development environment. The session covers the approach and a demo of creating a private development environment from source control, and how changes can be automatically pushed downstream to build and test environments. It shows the use of open-source Git hooks, %Installer, and Atelier, with Jenkins and automated unit tests.

Good, I hope to have this meetup program in Asia (actually Korea).

This will also cover the use of containers for development environments.

Hi, Community members! It's just a reminder about the meetup we are having on the 25th in Birmingham, UK! And we have a time slot available! So if you want to talk about your solution or share your best practices with InterSystems Data Platform, please contact me or comment on this post!

Good, I hope to have this meetup program in Brazil.

We can. Request it at the InterSystems Brazil office and we'll arrange it.

We are starting in 5 minutes! Join!

The right link is this one!

Here are the slides for @Evgeny.Shvarov's session "Printing and Emailing InterSystems DeepSee Reports".

Please find the slides for my presentation here: https://www.slideshare.net/secret/bnUsuAWXsZCKrp (Modern Development Environment)
Please find the repository here: https://github.com/logist/wduk18 - you will need to load your own IRIS container and change the "FROM" statement in the Dockerfile - see http://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=AFL_containers_deploy_repo for instructions. You'll also need a license key because the REST service will not work without it. Also, don't forget to change the Docker storage driver if you want to run the examples: http://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ADOCK_additional_driver
Article
Pete Greskoff · Jun 27, 2018
In this post, I am going to detail how to set up a mirror using SSL, including generating the certificates and keys via the Public Key Infrastructure built in to InterSystems IRIS Data Platform. I did a similar post in the past for Caché, so feel free to check that out here if you are not running InterSystems IRIS. Much like the original, the goal of this is to take you from new installations to a working mirror with SSL, including a primary, backup, and DR async member, along with a mirrored database. I will not go into security recommendations or restricting access to the files. This is meant to just simply get a mirror up and running. Example screenshots are taken on a 2018.1.1 version of IRIS, so yours may look slightly different.

Step 1: Configure Certificate Authority (CA) Server

On one of your instances (in my case the one that will be the first mirror member configured), go to the Management Portal and go to the [System Administration -> Security -> Public Key Infrastructure] page. Here you will 'Configure local Certificate Authority server'.

You can choose whatever File name root (this is the file name only, no path or extension) and Directory you want to have these files in. I'll use 'CA_Server' as the File name root, and the directory will be my <install-dir>/mgr/CAServer/. This will avoid future confusion when the client keys and certificates are put into the <install-dir>/mgr/ folder, as I'll be using my first mirror member as the CA Server. Go to the next page.
You will then need to enter a password, and I’ll use ‘server_password’ in my example. You can then assign attribute values for your Distinguished Name. I’ll set Country to ‘US’ and Common Name to ‘CASrv’. You can accept defaults for validity periods, leave the email section blank, and save. You should see a message about files getting generated (.cer, .key, and .srl) in the directory you configured.
Step 2: Generate Key/Certificate For First Mirror Member
At this point, you need to generate the certificate and keys for the instance that will become your first mirror member. This time, go to the Management Portal where you will set up the first mirror member, and go to the [System Administration -> Security -> Public Key Infrastructure] page again (see screenshot above). You need to ‘Configure local Certificate Authority client’. For the ‘Certificate Authority server hostname’, you need to put either the machine name or IP address of the instance you used for step 1, and for the ‘Certificate Authority WebServer port number’ use that instance’s web server port (you can get this from the URL in that instance’s Management Portal). Make sure you are using the port number for the instance you configured as the CA Server, not the one you are setting up as the client (though they could be the same). You can put your own name as the technical contact (the phone number and email are optional) and save. You should get a message “Certificate Authority client successfully configured.”
Now you should go to ‘Submit Certificate Signing Request to Certificate Authority server’. You’ll need a file name (I’m using ‘MachineA_client’) and password (‘MachineA_password’), as well as values for a Distinguished Name (Country=’US’ and Common Name=’MachineA’). Note that for each certificate you make, at least one of these values must be different from what was entered for the CA certificate; otherwise, you may run into failures at a later step.
At this point, you’ll need to go to the machine you configured to be your CA Server. From the same page, you need to ‘Process pending Certificate Signing Requests’. You should see one like this:
You should process this request, leaving default values, and ‘Issue Certificate’. You’ll need to enter your CA Server password from step 1 (‘server_password’ for me).
Finally, you need to get the certificate. Back on the first mirror member machine, from the same page, go to ‘Get Certificate(s) from Certificate Authority server’, and click ‘Get’ like here:
If this is not the same machine where you configured the CA Server, you’ll also need to get a copy of the CA Server certificate (‘CA_Server.cer’) on this machine. Click ‘Get Certificate Authority Certificate’; that’s the top left button in the image above.
Step 3: Configure The Mirror On First Mirror Member
First, start the ISCAgent per this documentation (and set it to start automatically on system startup if you don’t want to have to do this every time your machine reboots). Then, in the Management Portal, go to the [System Administration -> Configuration -> Mirror Settings -> Enable Mirror Service] page to enable the service (if it isn’t already enabled). Next, go to the ‘Create a Mirror’ page in the same menu.
You will need to enter a mirror name (‘PKIMIRROR’ in my case). You should click ‘Set up SSL/TLS’, and then enter the information there. The first line asks for the CA server certificate (CA_Server.cer). For ‘This server’s credentials’, you’ll need to enter the certificate and key that we generated in step 2. They will be in the <install>/mgr/ directory. You’ll also need to enter your password here (click the ‘Enter new password’ button as shown). This password is the one you chose in step 2 (‘MachineA_password’ for me). In my example, I am only allowing the TLS v1.2 protocol, as shown below.
For this example, I won’t use an arbiter or a Virtual IP, so you can un-check those boxes in the ‘Create Mirror’ page. We’ll accept the defaults for ‘Compression’, ‘Parallel Dejournaling’, ‘Mirror Member Name’, and ‘Mirror Agent Port’ (since I didn’t configure the ISCAgent to be on a different port), but I’m going to change the ‘Superserver Address’ to use an IP instead of a host name (personal preference). Just make sure that the other future mirror members are able to reach this machine at the address you choose. Once you save this, take a look at the mirror monitor [System Operation -> Mirror Monitor]. It should look something like this:
Step 4: Generate Key/Certificate For Second Failover Mirror Member
This is the same process as step 2, but I’ll replace anything with ‘MachineA’ in the name with ‘MachineB’. As I mentioned before, make sure you change at least one of the fields in the Distinguished Name section from the CA certificate. You also need to be sure you get the correct certificate in the Get Certificate step, as you will see both client certificates.
Step 5: Join Mirror as Failover Member
Just like you did for the first mirror member, you need to start the ISCAgent and enable the mirror service for this instance (refer to step 3 for details on how to do this). Then, you can join the mirror as a failover member at [System Administration -> Configuration -> Mirror Settings -> Join as Failover]. You’ll need the ‘Mirror Name’, the ‘Agent Address on Other System’ (the same as the one you configured as the Superserver address for the other member), and the instance name of the now-primary instance.
After you click ‘Next’, you should see a message indicating that the mirror requires SSL/TLS, so you should again use the ‘Set up SSL/TLS’ link. You’ll replace machine A’s files and password with machine B’s for this dialog.
Again, I’m only using TLS v1.2. Once you’ve saved that, you should be able to add information about this mirror member. Again, I’m going to change the hostnames to IPs, but feel free to use any IP/hostname that the other member can use to contact this machine. Note that the IPs are the same for my members, as I have set this up with multiple instances on the same server.
Step 6: Authorize 2nd Failover Member on the Primary Member
Now we need to go back to the now-primary instance where we created the mirror. From the [System Administration -> Configuration -> Mirror Settings -> Edit Mirror] page, you should see a box at the bottom titled ‘Pending New Members’ that includes the 2nd failover member you just added. Check the box for that member and click Authorize (there should be a dialog popup to confirm). Now if you go back to [System Operation -> Mirror Monitor], it should look like this (similar on both instances):
If you see something else, wait a minute and refresh the page.
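If you prefer to confirm each member's role from a terminal instead of the Mirror Monitor, the %SYSTEM.Mirror interface exposes a few simple queries. This is a minimal sketch for convenience only; the method names are from my recollection of recent IRIS versions, so check the class reference for %SYSTEM.Mirror on your release before relying on them:
USER>write $SYSTEM.Mirror.IsMember() ; 1 if this instance is a mirror member
USER>write $SYSTEM.Mirror.IsPrimary() ; 1 only on the current primary
USER>write $SYSTEM.Mirror.GetMemberType() ; member type, e.g. failover vs. async
USER>write $SYSTEM.Mirror.GetMemberStatus() ; current status, e.g. primary or backup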
Step 7: Generate Key/Certificate for Async Member
This is the same as step 2, but I’ll replace anything with ‘MachineA’ in the name with ‘MachineC’. As I mentioned before, make sure you change at least one of the fields in the Distinguished Name section from the CA certificate. Make sure you get the correct certificate in the ‘Get Certificate’ page, as you will see all 3 certificates.
Step 8: Join Mirror as Async Member
This is similar to step 5. The only difference is that you have the added option for an Async Member System Type (I will use Disaster Recovery, but you’re welcome to use one of the reporting options). You’ll again see a message about requiring SSL, and you’ll need to set that up similarly (MachineC instead of MachineB). Again, you’ll see a message after saving the configuration indicating that you should add this instance as an authorized async on the failover nodes.
Step 9: Authorize Async Member on the Primary Member
Follow the same procedure as in step 6. The mirror monitor should now look like this:
Step 10: Add a Mirrored Database
Having a mirror is no fun if you can’t mirror any data, so we may as well create a mirrored database. We will also create a namespace for this database. Go to your primary instance. First, go to [System Administration -> Configuration -> System Configuration -> Namespaces] and click ‘Create New Namespace’ on that page. Choose a name for your namespace, then click ‘Create New Database’ next to ‘Select an existing database for Globals’. You’ll need to enter a name and directory for this new database. On the next page, be sure to change the ‘Mirrored database?’ drop-down to Yes (THIS IS ESSENTIAL). The mirror name will default to the database name you chose; you can change it if you wish. We will use the default settings for all other database options (you can change them if you want, but this database must be journaled, as it is mirrored). Once you finish that, you will return to the namespace creation page, where you should select this new database for both ‘Globals’ and ‘Routines’. You can accept the defaults for the other options (don’t copy the namespace from anywhere).
Repeat this process for the backup and async. Make sure to use the same mirror name for the database. Since it’s a newly created mirrored database, there is no need to take a backup of the file and restore onto the other members.
Congratulations, you now have a working mirror using SSL with 3 members sharing a mirrored database! One final look, this time at the async’s mirror monitor:
Other reference documentation:
Create a mirror
Create mirrored database
Create namespace and database
Edit failover member (contains some information on adding SSL to an existing mirror)
Hi @Pete.Greskoff! Looks like @Lorenzo.Scalese contributed a zpm module for your solution! Thanks, Lorenzo!
Hi @Pete.Greskoff, @Evgeny.Shvarov, this article is very helpful. I developed PKI-Script in order to do this without manual intervention. This library is focused on the PKI steps; for mirroring, I am preparing something else.
Using the PKI to generate certificates is NOT supported for production systems, as documented here:
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=APKI#APKI_about
I can't stress that enough. It is provided for convenience for testing purposes. For production systems or test systems that require proper security, please use certificates/keys specified from the source determined by your security admins, and make sure the procedures they specify for safeguarding the keys are in place and adhered to.
Article
Evgeniy Potapov · Dec 19, 2023
In today's digital age, effective data management and accurate information analysis are becoming essential for successful enterprise operations. InterSystems IRIS Data Platform offers two critical tools, ARCHITECT and ANALYZER, developed to deliver convenient data management.
ANALYZER is a powerful tool available within the InterSystems IRIS platform to provide extensive data analysis and visualization capabilities. This tool allows users to create summary tables and charts to analyze data. It also lets you perform detailed studies and analyses based on available data.
One of the most significant features of ANALYZER is the ability to select the data subject area before starting the analysis. It ensures accurate and targeted data analysis, as the user can pick a specific data subject area from which to extract the required data for analysis.
ANALYZER features an intuitive interface and extensive customization options, making the data analysis process more flexible and efficient. Built-in autocomplete and preview functions greatly simplify the creation of analytical reports and improve data visualization.
Completing summary tables and charts in ANALYZER allows users to visualize data clearly and informatively, streamlining informed business decision-making and identifying critical trends and patterns in the data.
To start working with ANALYZER, proceed to the Management Portal page within the InterSystems IRIS platform. Once there, select the Analytics tab and then navigate to the Analyzer section. This step allows users to access a powerful data analysis tool that provides a wide range of features for data exploration and visualization.
Visiting the ANALYZER page opens up a wide range of options for data analysis and visualization. Here are some of the main functions and elements available on this page:
New / Open / Save / Save As / Restore / Delete: These options enable you to create analysis projects, open existing ones, save current results, restore previous versions, and delete irrelevant data.
Auto-execute / Preview Mode: These functions allow you to automate tasks and view data in an easy-to-analyze format.
View: Users can choose between different data display options, such as a summary table, chart, or a combination of both, for more complete data analysis.
Change to a different Subject Area: This option allows users to select specific data subject areas from which to extract data for analysis.
Refresh and reset the contents of the dimension tree: This function ensures that the content of the dimension tree is refreshed and reset for accurate and up-to-date data analysis.
Add or edit a calculated member / Add or edit a named filter for this Subject Area / Add or edit a pivot variable for this Subject Area: These functions let users add and customize various parameters and elements for more precise data analysis.
Export current results / Export current results to printable PDF format: Users can export existing analysis results for further use in reports or presentations.
Set options for the pivot table / Configure PDF export for this pivot / Modify chart appearance / Define conditional formatting rules: These functions provide extensive customization and data visualization options for a more understandable and explicit analysis.
Now, let's take a closer look at the process of adding items to a summary table in ANALYZER:
Rows: This element allows users to add specific data as rows to a summary table for further analysis. Users can select key factors or categories to group data into rows and provide more detailed analysis.
Columns: This option allows users to add data to a summary table as columns for easy comparison and visualization. Users can pick the parameters or indicators to be presented as columns for more objective analysis.
Measures: This element lets users add specific indicators or metrics to the summary table to evaluate performance or results. Users can select various types of metrics, including sums, averages, medians, and others, for further data analysis.
Filters: This option authorizes users to apply different filters and conditions to the data in the summary table to refine the analysis. Users can select specific parameters or values to filter the data for more exact and targeted research.
These elements play a crucial role in data analysis and provide flexibility and accuracy in working with data in summary tables. Users can determine and customize these elements based on their needs and business requirements to achieve the best results.
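Under the hood, a pivot built from these elements corresponds to an MDX query: rows and columns become axes, measures are placed on an axis (usually columns), and filters become the WHERE clause. The following is a minimal sketch of running such a query from an ObjectScript terminal with %DeepSee.ResultSet; the cube name "MemberCube" and its level and measure names are illustrative assumptions rather than objects defined in this article:
USER>set mdx = "SELECT [Measures].[Posts] ON COLUMNS, [CountryD].[H1].[Country].MEMBERS ON ROWS "
USER>set mdx = mdx_"FROM [MemberCube] WHERE [DateD].[H1].[Year].&[2023]"
USER>set rs = ##class(%DeepSee.ResultSet).%New()
USER>set sc = rs.%PrepareMDX(mdx)
USER>if $System.Status.IsOK(sc) { set sc = rs.%Execute() }
USER>if $System.Status.IsOK(sc) { do rs.%Print() } else { do $System.Status.DisplayError(sc) }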
In ANALYZER, you can customize parameters for the "Rows" and "Columns" items using the "AXIS OPTIONS" function. Check out the available parameters for adjusting the "Rows" and "Columns" axes below:
Caption: This option entitles you to set a caption for the Rows and Columns axes for more accurate and detailed identification of the data presented there.
Format: This option lets you customize the formatting and display styles of the data in the Rows and Columns axes for a more convenient and informative presentation.
Total Override: This parameter allows you to override the aggregation used to total the data in the "Rows" and "Columns" axes.
Cell Style: This parameter authorizes you to modify the cell style of the data displayed in the Rows and Columns axes for better visualization.
Member Options: This option permits you to filter, sort, and perform other operations on the items represented in the Rows and Columns axes for more proper and meticulous data analysis.
Drilldown Options: This option allows you to control the drilldown operation so that users can get more detailed information when needed.
When you configure these Rows and Columns axes settings, you get access to a flexible and intuitive way to work with data in summary tables, which ultimately lets users get the most out of their data analysis.
In ANALYZER, you can additionally customize parameters for the Measures element using the MEASURE OPTIONS function. Some of the available parameters for altering measures are presented below:
Place measures on: This option authorizes you to determine in which area (columns or rows) to place measures to make comparison and analysis of data easier.
Display measure headers: This option lets you control the display of measure headers in the summary table depending on the number of selected measures. You can decide when and how to display those headers to improve the data visualization.
Configuring these settings for metrics in ANALYZER delivers more flexible and objective data analysis, allowing users to work with various types of metrics and indicators more efficiently to maximize the benefits of data analysis.
Finally, ANALYZER provides the ability to create and customize advanced filters for more precise and targeted data filtering. Check out some of the available options for filter modifications below:
Add Condition: This option entitles you to add specific conditions to filter data based on the criteria and parameters you specify. You can determine different requirements that must be met to filter data.
Empty: If the filter is blank, you can use the "Add Condition" function to add the conditions needed to filter the data. This results in more precise and thorough filtering to get the desired results.
With the "ADVANCED FILTER EDITOR" in ANALYZER, users can build complex filters containing different parameters and conditions to refine the analysis and get more factual and relevant results. It gives us greater flexibility and precision when working with data in the summary table, authorizing users to apply filters according to their specific data analysis requirements and needs.
The Save function lets you preserve the current state of the summary table and any settings applied to the data. When you save a summary table, you can also save the analysis results in a format that allows you to retrieve and exploit the data for future reports, presentations, or further analysis. Similarly, preserving data makes it possible for you to return to previously saved results quickly and conveniently, and continue analyzing or visualizing the data.
So, bottom line:
ANALYZER is a powerful data analysis tool on the InterSystems IRIS platform that provides a variety of data manipulation capabilities. It equips users with convenient and intuitive tools to create summary tables, configure analysis and data visualization parameters, and apply various filters and conditions to obtain specific results. Save and Restore features allow users to save progress and quickly return to previously performed analyses for further work. Analyzer is an essential tool for deeper and clearer data analysis, helping users make more informed decisions based on facts and figures.
Article
Evgeniy Potapov · Dec 5, 2023
In today's digital age, effective data management and accurate information analysis are becoming essential for successful enterprise operations. InterSystems IRIS Data Platform offers two critical tools designed to provide convenient data management: ARCHITECT and ANALYZER.
ARCHITECT is a powerful tool created for developing and managing applications on the InterSystems IRIS platform. A vital feature of ARCHITECT is the ability to produce and customize complex data models. It allows users to handle the data structure with the flexibility to meet the specific requirements of their applications. ARCHITECT simplifies the development process by providing a user-friendly interface for dealing with business logic and application functionality, as well as adapting to changing business needs. A crucial part of working with ARCHITECT is creating and customizing complex data models, so let's take a look at step-by-step instructions on how to do it.
In order to start using the ARCHITECT functionality, you need to go to the corresponding control panel. To accomplish that, go to the IRIS management section and select the "Analytics" tab. Then, proceed to the "Architect" section.
When in the Architect menu, click the "New" tab at the top of the screen.
It will open the "CREATE A NEW DATA MODEL DEFINITION" menu.
Definition Type: This field selects the type of data model definition. In the context of ARCHITECT, it can be "Cube" or "Subject Area". A "Cube" is typically used to define multidimensional data, whereas a "Subject Area" describes a narrower, filtered view built on top of an existing cube.
If the "Cube" type is selected, you can fill in the following fields as shown below:
Cube Name:
In this mandatory field, you should specify a unique name for the data cube you want to create. This name will be used to identify the data cube in the system.
Display Name:
Here, you can appoint a name that will be displayed in the user interface for this data cube. This name can be more descriptive and precise than the cube name to give the user a better understanding.
Cube Source:
This section specifies the data source for the cube. Typically, "Class Cube" is selected and a "Source Class" that already exists in the system is specified here; this class provides the data that fills the cube.
Class Name for the Cube:
In this compulsory field, you should mention the class name used to create the data cube. In addition to that name, you must include the package name to identify the class correctly.
Class Description:
This field allows you to enter a description or context for the data cube class you are creating, making it easier to understand its purpose and content. This description can come in handy for organizing and structuring data in the system.
If the "Subject Area" type is selected, the next fields can be populated as follows:
Subject Area Name:
In this obligatory field, you should state a unique name for the created subject area. This name will be utilized to identify this subject area in the system.
Display Name:
Here, you can select a name to display in the user interface for this subject area. This name can be more descriptive and straightforward than the subject area name since we generally need it for user convenience.
Base Cube:
This field is required to be completed since it specifies an existing cube that will be employed as the base cube for this subject area. The data from this cube will be available within this subject area.
Filter:
It is an MDX expression that filters the given subject area, allowing you to restrict the available data to particular conditions or criteria (a sketch of how this looks in the generated class follows this list).
Class Name for the Subject Area:
This mandatory field specifies the class name used to create the subject area. In addition to this name, you must include the package name to identify the class in a proper manner.
Class Description:
This field lets you enter a description or context for the subject area class being created, simplifying the understanding of its purpose and content. This description can be beneficial for arranging and structuring data in the system.
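For reference, here is a minimal sketch of what a generated Subject Area class might look like once these fields are filled in. The class, cube, and level names are illustrative assumptions (they reuse the hypothetical "MemberCube" cube), and the filterSpec attribute holds the MDX filter described above:
Class Community.USMembers Extends %DeepSee.SubjectArea
{

/// The subject area exposes the base cube restricted by the MDX filter in filterSpec.
XData SubjectArea [ XMLNamespace = "http://www.intersystems.com/deepsee/subjectarea" ]
{
<subjectArea name="USMembers" displayName="US Members"
  baseCube="MemberCube"
  filterSpec="[CountryD].[H1].[Country].&amp;[US]"/>
}

}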
When you have selected Definition Type, go to the "Open" tab on the ARCHITECT main menu.
Depending on the Definition Type you have chosen, the cube may be available in two tabs in ARCHITECT:
If the "Cube" type was picked, the created cube will be available in the "Cubes" tab.
If the "Subject Area" type was selected, the created subject area will be available in the "Subject Areas" tab.
Once the cube is opened, the following fields will be displayed:
Source Class:
This field indicates the source class associated with the data used to assemble this cube. In our case, it will be "Community.Member".
Model Elements:
This section lists all the items associated with the desired cube. Elements might include Measures, Dimensions, Listings, Listing Fields, Calculated Members, Named Sets, Relationships, and Expressions.
Add Element:
This button authorizes you to add a new element to the designated cube for further analysis.
Undo:
This function is employed to undo the last action or change made as a part of editing this cube.
Expand All:
This feature expands all items in the list for easy viewing and access to details.
Collapse All:
This function collapses all items in the list for effortless navigation and content review.
Reorder:
This action entitles you to alter the order of elements in the cube for uncomplicated display and data analysis.
To add a new element to the cube, click "Add Element", and the following menu "ADD ELEMENT TO CUBE" will appear:
Cube Name:
It is the name of the cube to which you will add a new element.
Enter New Element Name:
It is the field where you should enter a new name for the data item you want to add.
Select an element to add to this cube definition:
In this area, you pick the type of element you want to add. Options may include "Measure", "Data Dimension", "Time Dimension", "Age Dimension", "iKnow Dimension", "Shared Dimension", "Hierarchy", "Level", "Property", "Listing", "ListingField", "Calculated Member", "Calculated Dimension", "Named Set", "Relationship" and "Expression".
Data dimensions specify how to group data by values other than time: for example, by country, category, or any other non-temporal attribute of the source records, in contrast to time dimensions. A sketch of how these elements end up looking in the generated cube class is shown below.
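For reference, this is a minimal, hypothetical sketch of a cube definition class containing one data dimension (with a hierarchy and level) and one measure. The source class "Community.Member" comes from this article, but the property names "Country" and "PostCount", the cube name "MemberCube", and the element names are illustrative assumptions:
Class Community.Cube Extends %DeepSee.CubeDefinition [ DependsOn = Community.Member ]
{

/// One data dimension grouped by a non-temporal property, plus one SUM measure.
XData Cube [ XMLNamespace = "http://www.intersystems.com/deepsee" ]
{
<cube name="MemberCube" displayName="Members" sourceClass="Community.Member">
  <dimension name="CountryD" displayName="Country" hasAll="true" allCaption="All Countries">
    <hierarchy name="H1">
      <level name="Country" sourceProperty="Country"/>
    </hierarchy>
  </dimension>
  <measure name="Posts" sourceProperty="PostCount" aggregate="SUM"/>
</cube>
}

}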
After successfully adding the desired element to the cube, it is vital to compile and then build the cube to update and optimize the data. Cube compilation ensures that all changes and new elements are properly integrated into the cube structure, while the build regenerates and optimizes the stored data.
It is crucial to remember that the build will not be available until the compilation is completed successfully. It guarantees that the build process is based on accurate and up-to-date data processed and prepared during compilation.
When the compilation and build are complete, it is vital to click the "Save" tab to ensure that any alterations done to the cube structure have been saved successfully. It will guarantee that the latest changes and updates made within the cube are now available for future use and analysis.
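If you prefer to script this step, the same compile-and-build cycle can be driven from an ObjectScript terminal. This is a minimal sketch; the class name "Community.Cube" and the logical cube name "MemberCube" are the assumed names from the sketch above:
USER>set sc = $SYSTEM.OBJ.Compile("Community.Cube", "ck")
USER>if $System.Status.IsOK(sc) { set sc = ##class(%DeepSee.Utils).%BuildCube("MemberCube") }
USER>if $System.Status.IsError(sc) { do $System.Status.DisplayError(sc) }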
ARCHITECT is an integral part of the InterSystems IRIS platform, which provides extensive capabilities for developing and managing data models. It equips users with unique flexibility to create and modify complex data models, allowing them to customize the data structure to meet the requirements of a specific application. The compile-and-build system enables efficient data updating and optimization, securing high performance and accuracy across various business scenarios.
With ARCHITECT, users can modify cube elements, including measurements, relationships, expressions, and others, and perform further analyses based on the updated data. It makes it easy to adapt applications to changing business requirements and respond quickly to a company's growing needs.
The save functionality allows users to preserve all modifications and customizations, ensuring reliability and convenience when working with data. ARCHITECT is central to successful data management and provides a wide range of tools for producing and optimizing data structures to facilitate efficient data analysis and informed business decisions.
Announcement
Ronnie Hershkovitz · Feb 18
Hi Community,
We're pleased to invite you to the upcoming webinar in Hebrew:
👉 Hebrew Webinar: GenAI + RAG - Leveraging InterSystems IRIS as your Vector DB👈
📅 Date & time: February 26th, 3:00 PM IDT
Have you ever gone to the public library and asked the librarian for your shopping list? Probably not.
That would be akin to asking a Large Language Model (LLM) a question about a matter internal to your organization: it is only trained on public data. Using Retrieval Augmented Generation (RAG), we can give the LLM the proper context to answer questions that depend on your organization's data. All we need is a Vector DB. Lucky for you, this is one of IRIS's newest features. This webinar will include a theoretical overview, technical details, and a live demo, with a Q&A period at the end.
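As a small taste of the kind of thing the demo covers, here is a minimal, hypothetical sketch of using the IRIS vector capability (available in recent IRIS versions) for RAG-style retrieval from ObjectScript. The table, column, and variable names are illustrative, and the embeddings are assumed to be comma-separated value strings produced by an external embedding model:
USER>set sql = "CREATE TABLE Demo.DocChunk (Chunk VARCHAR(4000), Embedding VECTOR(DOUBLE, 384))"
USER>set rs = ##class(%SQL.Statement).%ExecDirect(, sql)
USER>set sql = "INSERT INTO Demo.DocChunk (Chunk, Embedding) VALUES (?, TO_VECTOR(?, DOUBLE, 384))"
USER>set rs = ##class(%SQL.Statement).%ExecDirect(, sql, chunkText, chunkEmbedding)
USER>set sql = "SELECT TOP 3 Chunk FROM Demo.DocChunk ORDER BY VECTOR_COSINE(Embedding, TO_VECTOR(?, DOUBLE, 384)) DESC"
USER>set rs = ##class(%SQL.Statement).%ExecDirect(, sql, questionEmbedding)
USER>while rs.%Next() { write rs.%Get("Chunk"), ! }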
Presenter: @Ariel.Glikman, Sales Engineer InterSystems Israel
➡️ Register today!
Yay! InterSystems Israel webinars are back 🤩