Hi Developers!

Recently we published images for the InterSystems IRIS Community Edition and InterSystems IRIS Community for Health containers on Docker Hub.

What is that?

These images are published from a dedicated repository and are, in fact, the same IRIS Community Edition containers you find in the official InterSystems listing, but with the ObjectScript Package Manager (ZPM) client pre-loaded.

So if you run this container with IRIS CE or IRIS CE for Health, you can immediately start using ZPM and install packages from the Community Registry or any other registry.

What does this mean for you?

It means that anyone can deploy any of your InterSystems ObjectScript applications in three commands:

  • run IRIS container;
  • open terminal;
  • install your application as ZPM package.

It is safe, fast and cross-platform.

It's really handy if you want to try out an interesting new ZPM package without harming any of your systems.

Suppose you have Docker Desktop installed. You can run the image, and Docker will pull the latest container if you don't have it locally.
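A minimal sketch of all three steps (the image name is the Developer Community build of IRIS CE with ZPM on board; webterminal is just an example package):

    # 1) run the IRIS CE container, publishing the web port
    docker run --name iris-ce -d --publish 52773:52773 intersystemsdc/iris-community

    # 2) open an IRIS terminal inside the container
    docker exec -it iris-ce iris session IRIS

    # 3) in the IRIS session, install a package from the Community Registry:
    #    USER> zpm "install webterminal"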


I'm always on the lookout for tools that make the development and testing of my interfaces more efficient. A couple of years ago I came across HL7 Spy, from Inner Harbour Software. It quickly became my go-to tool for running message comparison reports for interface engine migrations, gathering message statistics, and troubleshooting message receipt and delivery. It also offers enhanced functionality, such as fetching messages via SFTP, that other tools don't provide.

I've recently been working with HL7 Spy's author, Jon Reis, to enable support for fetching messages directly from the Ensemble message store. Its SQL Loader feature now has native Caché/IRIS support, and I've contributed a small server-side class to support the extraction of messages using it.
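To give a rough idea of what such an extraction can look like (this query runs against the standard Ensemble HL7 message table; the column choices are illustrative, and the contributed class itself isn't reproduced here):

    -- fetch HL7 messages created in a given time window
    SELECT ID, TimeCreated, RawContent
    FROM EnsLib_HL7.Message
    WHERE TimeCreated BETWEEN '2019-01-01 00:00:00' AND '2019-01-02 00:00:00'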


Hi everybody!

As you are likely aware, the new version of InterSystems IRIS for Health (I4H) is already available on Docker Hub. It's the Community version: free and fully functional. There have been comments about it in other articles and posts, so today I won't add anything about its features. Instead, I want to explore "the mystery of the disappearance, or better, the absence of our persistent data when we run a container with the durable option" (I didn't find a terrifying font to emphasize the thriller... the post editor is not terrific for styling 🙂).
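As a quick reminder of the setup we will be talking about, here is a minimal sketch of running the container with durable %SYS (the image name and host path are illustrative):

    docker run --name iris4h -d \
      --publish 52773:52773 \
      --volume /data/durable:/durable \
      --env ISC_DATA_DIRECTORY=/durable/iconfig \
      intersystems/irishealth-community:latest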


Some time ago I developed an application that tackles a problem familiar to many developers: updating multiple UAT or PRODUCTION sites with the latest software patches that have been developed and tested on your DEV server and now need to be deployed to every site running that software.

 

In principle the solution works as follows:

1) Prepare an XML export of affected classes, routines, CSP pages, HL7 definitions, and so on

2) Optionally create a global export of any new globals or changes to existing globals (a sketch of both steps follows)
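As an illustration of steps 1 and 2 (the class, global, and file names are hypothetical), both exports can be produced with $system.OBJ.Export():

    // step 1: export the classes, routines and CSP pages touched by the patch
    set items = "MyApp.Patient.cls,MyApp.Utils.mac,MyAppHome.csp"
    set sc = $system.OBJ.Export(items, "/deploy/patch.xml")
    write:'sc $system.Status.DisplayError(sc)

    // step 2: optionally export new or changed globals the same way
    set sc = $system.OBJ.Export("MyLookupTable.GBL", "/deploy/patch-globals.xml")
    write:'sc $system.Status.DisplayError(sc)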


The following steps show you how to display a sample list of metrics available from the /api/monitor service.

In the last post, I gave an overview of the service that exposes IRIS metrics in Prometheus format. This post shows how to set up and run the IRIS preview release 2019.4 in a container and then list the metrics.


This post assumes you have Docker installed. If not, go and do that now for your platform :)
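For instance, assuming the container publishes the web server port on 52773, listing the metrics is a one-liner once the container is up:

    # list current IRIS metrics in Prometheus format
    curl http://localhost:52773/api/monitor/metrics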


Released with no formal announcement in IRIS preview release 2019.4 is the /api/monitor service exposing IRIS metrics in Prometheus format. Big news for anyone wanting to use IRIS metrics as part of their monitoring and alerting solution. The API is a component of the new IRIS System Alerting and Monitoring (SAM) solution that will be released in an upcoming version of IRIS.


Loading your IRIS data into your Google Cloud BigQuery data warehouse and keeping it current can be a hassle with bulky commercial third-party off-the-shelf ETL platforms, but it is made dead simple using the iris2bq utility.

Let's say IRIS is contributing to the workload for a hospital system: routing DICOM images, ingesting HL7 messages, posting FHIR resources, or pushing CCDAs to the next provider in a transition of care. Natively, IRIS persists these objects at various stages of the pipeline through the nature of the business processes and anything you included along the way. Let's send that up to Google BigQuery to augment and complement the rest of our data warehouse data, and ETL (Extract Transform Load) or ELT (Extract Load Transform) to our heart's desire.

A reference architecture diagram may be worth a thousand words, but 3 bullet points may work out a little bit better:

  • It exports the data from IRIS into DataFrames
  • It saves them into GCS as .avro files to keep the schema along with the data: this avoids having to specify/create the BigQuery table schemas beforehand.
  • It starts BigQuery jobs to import those .avro files into the respective BigQuery tables you specify.

 

Article
Alberto Fuentes · Oct 23, 2019 2m read
Unit Tests for Data Transforms

Would you like to be sure your data transforms work as expected with a single command? And what about writing unit tests for your data transforms in a quick and simple way? 

When talking about interoperability, there are usually a lot of data transforms involved. Those data transforms are used to convert data between different systems or applications in your code, so they perform a very important job.
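As a sketch of where we are heading (the transform, message, and file names are hypothetical), a unit test for a data transform can extend %UnitTest.TestCase, call the transform's generated Transform() method, and assert on the target:

    Class MyApp.Test.PatientTransform Extends %UnitTest.TestCase
    {

    Method TestTransform()
    {
        // load a sample source message from a file
        set source = ##class(EnsLib.HL7.Message).ImportFromFile("/tmp/sample-adt.hl7")
        do $$$AssertTrue($isobject(source), "Source message loaded")

        // run the data transform
        set sc = ##class(MyApp.DT.PatientTransform).Transform(source, .target)
        do $$$AssertStatusOK(sc, "Transform completed")

        // check a mapped field on the resulting message
        do $$$AssertEquals(target.GetValueAt("PID:5.1"), "DOE", "Family name mapped")
    }

    }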


Every developer has made the mistake of accidentally leaving temporary debug code in place when they meant to remove it after debugging is complete.  The great thing about writing in ObjectScript is that there is a way to make temporary code truly temporary and automatically self-destruct!   This can also be done in such a way that the code has no chance of making it into your source control stream, which can be helpful as well.
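The article's own trick isn't reproduced here, but as a rough approximation of the idea (names and date are hypothetical), debug output can be guarded by an expiry date so it disables itself automatically even if you forget to remove it:

    // hypothetical sketch: debug output guarded by an expiry date,
    // so it goes silent automatically after 31 Jan 2020
    if +$horolog < +$zdateh("2020-01-31",3) {
        write "DEBUG: order status = ", status, !
    }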

Article
Guillaume Rongier · Jul 4, 2019 1m read
Install EnsDemo on IRIS

As you may know, the EnsDemo samples from Ensemble are no longer available on IRIS.

This is a good thing: IRIS is cloud-oriented, so it must be light and fast. The new way of sharing samples or modules is through git, continuous integration, and OpenExchange.

But, in some cases you want to go back to your good old samples from EnsDemo to get inspiration or best practices.

Good news: there is a git repository for that:
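The link is in the original post; as a sketch of the general pattern (the repository URL and path are placeholders), you clone the samples and load them into your namespace with $system.OBJ.LoadDir():

    // after: git clone https://github.com/<user>/ensdemo.git
    // load and compile everything under the cloned folder, recursively
    do $system.OBJ.LoadDir("/path/to/ensdemo/src", "ck", , 1)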


This is more for my own memory than anything else, but I thought I'd share it because it often comes up in comments but is not in the InterSystems documentation. 

There is a wonderful utility called ^REDEBUG that increases the level of logging going into mgr\cconsole.log. 

You activate it by

a) start terminal/login

b) zn "%SYS"

c) do ^REDEBUG


PHP, from the very beginning, has been renowned (and criticized) for supporting integration with a lot of libraries, as well as with almost all the databases on the market. However, for some mysterious reason, it did not support hierarchical databases based on globals.

Globals are structures for storing hierarchical information. They are somewhat similar to a key-value database, with the only difference being that the key can be multi-level:
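For example, in ObjectScript (a hypothetical ^Person global):

    set ^Person(1, "name") = "Alice"
    set ^Person(1, "address", "city") = "London"
    set ^Person(1, "address", "zip") = "E1 6AN"
    // prints London
    write ^Person(1, "address", "city")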


Here is an ObjectScript snippet which lets you create a database, namespace, and web application in InterSystems IRIS:

    set currentNS = $namespace

    zn "%SYS"

    write "Create DB ...",!
    set dbName="testDB"
    set dbProperties("Directory") = "/InterSystems/IRIS/mgr/testDB"
    set status=##Class(Config.Databases).Create(dbName,.dbProperties)
    write:'status $system.Status.DisplayError(status)
    write "DB """_dbName_""" was created!",!!


    write "Create namespace ...",!
    set nsName="testNS"
    //DB for globals
    set nsProperties("Globals") = dbName
    //DB for routines
    set nsProperties("Routines") = dbName
    set status=##Class(Config.Namespaces).Create(nsName,.nsProperties)
    write:'status $system.Status.DisplayError(status)
    write "Namespace """_nsName_""" was created!",!!


    write "Create web application ...",!
    set webName = "/csp/testApplication"
    set webProperties("NameSpace") = nsName
    set webProperties("Enabled") = $$$YES
    set webProperties("IsNameSpaceDefault") = $$$YES
    set webProperties("CSPZENEnabled") = $$$YES
    set webProperties("DeepSeeEnabled") = $$$YES
    set webProperties("AutheEnabled") = $$$AutheCache
    set status = ##class(Security.Applications).Create(webName, .webProperties)
    write:'status $system.Status.DisplayError(status)
    write "Web application """webName""" was created!",!

    zn currentNS

Article
Guillaume Rongier · Apr 9, 2019 3m read
IRIS/Ensemble as an ETL

IRIS and Ensemble are designed to act as an ESB/EAI. This means they are built to process lots of small messages.

But sometimes, in real life, we have to use them as an ETL. The downside is not that they can't do it, but that it can take a long time to process millions of rows at once.

To improve performance, I have created a new SQL outbound adapter that works only with JDBC.

BatchSqlOutboundAdapter

Extends EnsLib.SQL.OutboundAdapter to add batch and fetch support on JDBC connections.


The Amazon Web Services (AWS) Cloud provides a broad set of infrastructure services, such as compute resources, storage options, and networking that are delivered as a utility: on-demand, available in seconds, with pay-as-you-go pricing. New services can be provisioned quickly, without upfront capital expense. This allows enterprises, start-ups, small and medium-sized businesses, and customers in the public sector to access the building blocks they need to respond quickly to changing business requirements.

 

Updated: 2-Apr, 2021 


I am often asked by customers, vendors or internal teams to explain CPU capacity planning for large production databases running on VMware vSphere.

In summary there are a few simple best practices to follow for sizing CPU for large production databases:

  • Plan for one vCPU per physical CPU core.
  • Consider NUMA and ideally size VMs to keep CPU and memory local to a NUMA node.
  • Right-size virtual machines. Add vCPUs only when needed.

Generally this leads to a couple of common questions:


Database systems have very specific backup requirements that, in enterprise deployments, require forethought and planning. For database systems, the operational goal of a backup solution is to create a copy of the data in a state that is equivalent to when the application is shut down gracefully.  Application-consistent backups meet these requirements, and Caché provides a set of APIs that facilitate integration with external solutions to achieve this level of backup consistency.
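As a sketch of the integration pattern (to be run in the %SYS namespace, with the snapshot itself taken by the external tool), the key calls are ExternalFreeze() and ExternalThaw():

    // quiesce database writes before the external snapshot is taken
    set sc = ##class(Backup.General).ExternalFreeze()
    write:'sc $system.Status.DisplayError(sc)

    // ... the external solution takes the storage/VM snapshot here ...

    // resume normal write daemon activity
    set sc = ##class(Backup.General).ExternalThaw()
    write:'sc $system.Status.DisplayError(sc)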


Note (June 2019): A lot has changed; for the latest details, go here

Note (Sept 2018): There have been big changes since this post first appeared. I suggest using the Docker container version; the project and details for running it as a container are still in the same place, published on GitHub, so you can download, run, and modify it if you need to.


In this post I show strategies for backing up Caché using external backup, with examples of integrating with snapshot-based solutions. The majority of solutions I see today are deployed on Linux on VMware, so much of the post shows how solutions integrate VMware snapshot technology as examples.


This post provides guidelines for configuration, system sizing and capacity planning when deploying Caché 2015 and later on a VMware ESXi 5.5 and later environment.


Index

This is a list of all the posts in the data platforms capacity planning and performance series, in order, along with a general list of my other posts. I will update it as new posts in the series are added.


You will notice that I wrote some of the posts before IRIS was released, so they refer to Caché. I will revisit the posts over time, but in the meantime, the advice for configuration is generally the same for Caché and IRIS. Some command names may have changed; the most obvious example is that anywhere you see the ^pButtons command, you can replace it with ^SystemPerformance.

Capacity Planning and Performance Series

Generally, posts build on previous ones, but you can also just dive into subjects that look interesting.



One of the great availability and scaling features of Caché is the Enterprise Cache Protocol (ECP). With consideration during application development, distributed processing using ECP allows a scale-out architecture for Caché applications. Application processing can scale to very high rates, from a single application server up to the combined processing power of 255 application servers, with no application changes.


++Update: August 2, 2018

This article provides a sample reference architecture for robust, high-performing, and highly available applications based on InterSystems technologies, applicable to Caché, Ensemble, HealthShare, TrakCare, and associated embedded technologies such as DeepSee, iKnow, Zen, and Zen Mojo.

Azure has two different deployment models for creating and working with resources: Azure Classic and Azure Resource Manager. The information detailed in this article is based on the Azure Resource Manager (ARM) model.


The other Technology Architects and I often have to explain to customers and vendors Caché's IO requirements and the way Caché applications use storage systems. The following tables are useful for explaining the typical Caché IO profile and the requirements of a transactional database application to customers and vendors.  The original tables were created by Mark Bolinsky.

In future posts I will discuss more about storage IO, so I am also posting these tables now as a reference for those articles. 


This post shows an approach to sizing shared memory requirements for database applications running on InterSystems data platforms, including global and routine buffers, gmheap, and locksize, as well as some performance tips you should consider when configuring servers and when virtualizing Caché applications. As ever, when I talk about Caché I mean the whole data platform (Ensemble, HealthShare, iKnow, and Caché).
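As a rough, annotated illustration (the values are placeholders, not recommendations, and the comment lines are added here for explanation), these areas map to settings in the [config] section of the CPF file:

    [config]
    ; 8 KB global buffers sized to 4096 MB (third comma-separated field)
    globals=0,0,4096,0,0,0
    ; routine buffer space in MB
    routines=512
    ; generic memory heap (gmheap) in KB
    gmheap=262144
    ; lock table size in bytes
    locksiz=16777216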


A list of other posts in this series is here


This week I am going to look at CPU, one of the primary hardware food groups :) A customer asked me to advise on the following scenario: their production servers are approaching end of life, and it's time for a hardware refresh. They are also thinking of consolidating servers through virtualization and want to right-size capacity, either bare-metal or virtualized. Today we will look at CPU; in later posts I will explain the approach for right-sizing the other key food groups: memory and IO.

So the questions are:
