If you're deploying to more than one environment/region/cloud/customer, you will inevitably encounter the issue of configuration management.

While all (or at least several) of your deployments can share the same source code, some parts, such as configuration (settings, passwords), differ from deployment to deployment and must be managed somehow.

In this article, I will offer several tips on that topic, focusing mainly on container deployments.
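
For example, a common approach for container deployments is to keep deployment-specific values out of the image and read them from environment variables injected at runtime. A minimal sketch in ObjectScript (the variable names are hypothetical):

    // Read deployment-specific settings injected into the container
    Set dbHost = $SYSTEM.Util.GetEnviron("APP_DB_HOST")
    Set dbPassword = $SYSTEM.Util.GetEnviron("APP_DB_PASSWORD")
    // Fall back to a default for local development
    If dbHost="" Set dbHost = "localhost"

This way, the same image can be promoted through all environments unchanged; only the injected variables differ.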

Article
Evgeny Shvarov · Aug 13, 2021 4m read
Building Analytics Solution with IRIS

Hi developers!

How to build an analytics solution with InterSystems IRIS?

To begin with, let's agree on what an analytics solution is - and this could be a very wide topic. Let's limit the set of solutions to those you can present in the Analytics contest.

There are three kinds of analytics solutions that we will review here: monitoring, interactive analytics, and reporting.

Monitoring

The typical monitoring solution consists of an online dashboard with KPIs that are actively updated.

The key use case of monitoring is to visually observe KPIs over fresh data at every moment, in order to react in case of an emergency.

Interactive Analytics

This type of solution provides a set of interactive dashboards with filters and drill-downs.

The key use case is to explore the data with filters and drill-downs, making business decisions based on graph and table visualizations.

Reporting

A reporting solution provides a set of (usually static) reports, in the form of HTML or PDF documents, that deliver the data in graphical and textual form in a predesigned layout and can be sent via email.

The typical use case of a reporting system is to obtain reports for a given period that illustrate the status of the product, process, service, sales, etc. that is crucial for the business.

How can InterSystems products be used to build such solutions? Let's discuss this below!
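
For the monitoring case, for instance, an IRIS BI (DeepSee) dashboard widget is typically fed by a KPI class. A minimal sketch (the class name, cube, and MDX query are hypothetical, not from this article):

    /// A KPI that feeds a dashboard widget from an MDX query
    Class MyApp.BI.SalesKPI Extends %DeepSee.KPI
    {

    XData KPI [ XMLNamespace = "http://www.intersystems.com/deepsee/kpi" ]
    {
    <kpi name="SalesKPI" sourceType="mdx" mdx="SELECT [Measures].[Amount] ON 0 FROM [SalesCube]">
    <property name="Amount" columnNo="1"/>
    </kpi>
    }

    }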


This is the third post of a series explaining how to create an end-to-end Machine Learning system.

Training a Machine Learning Model

When you work with machine learning, it is common to hear this word: training. But do you know what training means in an ML pipeline? Training could mean the whole development process of a machine learning model OR the specific step in that process that uses training data and results in a machine learning model.
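
This excerpt doesn't show the stack used in the series, but to illustrate the second, narrower meaning: in IRIS IntegratedML, for example, training is a single SQL step that consumes a training table and produces a model. A sketch with hypothetical model and table names:

    // Declare what the model predicts and which table holds the training data
    Set result = ##class(%SQL.Statement).%ExecDirect(, "CREATE MODEL FlowerModel PREDICTING (Species) FROM iris_train")
    // The training step itself: fit the model to the training data
    Set result = ##class(%SQL.Statement).%ExecDirect(, "TRAIN MODEL FlowerModel")
    If result.%SQLCODE < 0 Write result.%Message,!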


Global Summit is so close now, and so many people from so many companies are going to be there. I'm sure somebody already uses Docker or even Kubernetes in their work; I do. I would like to share my experience and thoughts about what could be better, and I want to hear about other people's experience: how you use Docker, what issues you have faced, and how you solved them. I think InterSystems will help us find a time and place where we could do it, and I hope @Luca Ravazzolo will join us.


Introduction

This document is intended to provide a survey of various High Availability (HA) strategies that can be used in conjunction with InterSystems Caché, Ensemble, and HealthShare Foundation. This document also provides an overview of the various types of system outages that can occur, as well as how each strategy would handle a given outage, with the goal of helping you choose the right strategy for your specific deployment.

The strategies surveyed in this document are based on three different HA technologies:


If you create your own language extensions to ObjectScript, you mostly have to find the proper %ZLANGC00, %ZLANGV00, or %ZLANGF00 routine and add the extensions manually. A few utilities already do it automatically (ZPM, ZME, ..). This utility allows you to add your extensions programmatically (e.g., at first run, or during installation). I found this quite useful for my Docker-based demos, as it all happens at startup time.
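
For context, here is roughly what the manual approach looks like: each label in the %ZLANGC00 routine (which lives in the %SYS namespace) becomes a new command. A minimal sketch with a hypothetical ZHELLO command:

    %ZLANGC00 ; user-defined command extensions, saved in the %SYS namespace
    ZHELLO ; implements a ZHELLO command, callable from any namespace
        Write "Hello from a custom command!",!
        Quit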

It is related to these articles


Requirement: Transform source XML message to target JSON.

Step 1: Create a JSON-equivalent XML using any online tool.

Step 2: Use the Studio add-in utility to create persistent classes from the JSON-equivalent XML and compile them; the target persistent classes are now ready.

Step 3: Do the same for the source XML (or XSD) to generate the source persistent classes, and compile them.

Step 4: Build the DTL by selecting the root nodes of the source and target persistent classes, complete the mappings, and compile it. The result looks roughly like the sketch below.
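
A DTL class along these lines is produced (all class and property names here are hypothetical placeholders for your generated persistent classes):

    Class MyApp.XMLToJSON Extends Ens.DataTransformDTL [ DependsOn = (MyApp.Source.Record, MyApp.Target.Record) ]
    {

    XData DTL [ XMLNamespace = "http://www.intersystems.com/dtl" ]
    {
    <transform sourceClass='MyApp.Source.Record' targetClass='MyApp.Target.Record' create='new' language='objectscript'>
    <assign value='source.Name' property='target.Name' action='set'/>
    </transform>
    }

    }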


I recently started to study interoperability and I found the official documentation very helpful in understanding how it works, though I still had some trouble implementing it myself. With the help I got from my coworkers, I managed to create a Demo of a system and learn through practice. Because of that, I decided to write this to help others with "getting their hands dirty" and share the help I got.


About regulations

Personal data privacy regulations have become an indispensable requirement for projects dealing with personal data. Compliance with these laws is based on four principles:

Article
David Loveluck · Sep 28, 2015 1m read
DTL TechFAQ

Ensemble is based on message flow, and a data transformation is a way to convert from one message type to another. DTL (Data Transformation Language) adds a layer to this - it provides a graphical way to do the conversion. This is really helpful because most of the time, people with domain-specific knowledge may not have extensive coding skills. However, you always have the ability to do some coding, so if you need or want to, this is available.

DTL has several components: the data transformation engine, the language itself, and the DTL editor.
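
Once compiled, a DTL transformation can also be invoked directly from ObjectScript code, not just from a business process. A sketch with a hypothetical transformation class:

    // Run a compiled DTL transformation against a source message
    Set tSC = ##class(MyApp.MyTransformation).Transform(tSource, .tTarget)
    If $System.Status.IsError(tSC) Write $System.Status.GetErrorText(tSC),!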


Hi,

I have written an article about how to install the InterSystems Cache driver in a Docker container and then deploy it using Azure Functions:

How to run a (Python) Azure Function as a Docker container & Deploy it using Bicep | Victor Sanner

This might be useful to others, especially the Dockerfile, which I have copied below. It builds a Debian Docker container and installs the InterSystems Cache driver, which Python can then use :)


EHR (Electronic Health Record) systems are modeled in a proprietary format/structure and are not based on market standards such as FHIR or HL7. Some of these systems can exchange data between their proprietary format and FHIR or other market standards, but others cannot. InterSystems has two platforms that can translate proprietary formats into market standards: InterSystems HealthShare Connect and InterSystems IRIS for Health.


QEWD is assumed by most people to only integrate with IRIS (or Cache) via a connection through IRIS's high-performance C interface.  This requires QEWD (and its Node.js environment) to be installed and configured on the same machine as IRIS.

I'm frequently asked if QEWD can run on a separate server (or servers), and access IRIS (or Cache) over a network connection.  The answer is yes it can, but the information on how to set it up in this way has been admittedly a bit tricky to discover.


Presenter: André Cerri
Task: Use third-party visualization tools to present your DeepSee data
Approach: Use DeepSee REST services to access DeepSee data from third-party tools
 

Come see examples of how you can use popular 3rd party data visualization tools to access your DeepSee data.

 

Content related to this session, including slides, video and additional learning content can be found here.
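
As a taste of the approach, the DeepSee REST services can be called from any HTTP client. A sketch in ObjectScript (the server, port, namespace, credentials, cube, and exact endpoint path are all assumptions; check the documentation for your version):

    Set req = ##class(%Net.HttpRequest).%New()
    Set req.Server = "localhost", req.Port = 57772
    Set req.ContentType = "application/json"
    Do req.SetHeader("Authorization", "Basic "_$System.Encryption.Base64Encode("user:password"))
    // Execute an MDX query through the DeepSee REST API
    Do req.EntityBody.Write("{""MDX"":""SELECT [Measures].[%COUNT] ON 0 FROM [PatientsCube]""}")
    Set sc = req.Post("/api/deepsee/v1/SAMPLES/Data/MDXExecute")
    If $System.Status.IsOK(sc) Write req.HttpResponse.Data.Read(32000)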


Introduction

To overcome the performance limitations of traditional relational databases, applications - ranging from those running on a single machine to large, interconnected grids - often use in-memory databases to accelerate data access. While in-memory databases and caching products increase throughput, they suffer from a number of limitations including lack of support for large data sets, excessive hardware requirements, and limits on scalability.
