Announcement
Jamie Kantor · Nov 6, 2020

InterSystems Certification Survey - Please give us your thoughts!

Hello, everyone! Now that Virtual Summit 2020 is over, we'd like to remind everyone that we would still love to hear your input on Certification topics via a very brief survey (less than 8 minutes). You can even indicate whether you would like to be a Subject Matter Expert for us in the future and what kinds of incentives you'd like to receive for doing so. Please let us know your thoughts: https://www.surveymonkey.com/r/2ZVHNPS

Thanks,
Jamie Kantor, Certification Manager
Announcement
Anastasia Dyubaylo · Jan 19, 2021

New Video: Best Applications of InterSystems Programming Contest Series

Hey Developers,

Our next community session from Virtual Summit 2020 is already on InterSystems Developers YouTube:

🏆 Best Applications of InterSystems Programming Contest Series 🏆

Learn about the winning applications from a series of online contests for InterSystems developers. Developers will share their experiences of participating in the InterSystems coding marathon and show demos of their winning projects.

Winning projects:
⬇️ IRIS-History-Monitor
⬇️ IRIS4Health-FHIR-Analytics
⬇️ SQL-Builder
⬇️ BlocksExplorer
⬇️ IRIS-ML-Suite

Presenters:
🗣 @Anastasia.Dyubaylo, Community Manager, InterSystems
🗣 @Henrique.GonçalvesDias, System Management Specialist / Database Administrator, Sao Paulo Federal Court
🗣 @José.Pereira, Business Intelligence Developer, Shift Consultoria e Sistemas Ltda
🗣 @Henry.HamonPereira, System Analyst, BPlus Tecnologia
🗣 @Dmitry.Maslennikov, Co-founder, CTO and Developer Advocate, CaretDev Corp
🗣 @Renato.Banzai, Machine Learning Engineer Coordinator, Itaú Unibanco

So, big applause for all the speakers! Great developers! Great apps!

We hope you enjoyed our series of online contests in 2020. Stay tuned for the next contests this year! 😉 Enjoy!
Article
Evgeny Shvarov · Jan 24, 2021

Deploying InterSystems IRIS Data Using ZPM Package Manager

Hi developers!

Often we need to deploy some data along with the code of an application. For InterSystems IRIS developers, the question could sound like: "How can I deploy the data I have in globals?" Here I want to suggest one of the approaches: deploying global data using the ZPM package manager.

## Exporting Globals Data

Suppose you have an IRIS database server with a global you want to deploy. The ZPM package manager can deploy files, so you need to export the global into a file and build the package with this file. ZPM can deploy globals in XML format, so we first need to export the global into an XML file. E.g., if the global you need to export is named "DataD", the following command in the IRIS terminal will export the global DataD into an XML file:

```
d $System.OBJ.Export("DataD.GBL","/irisrun/repo/data/DataD.xml")
```

## What the Resource Looks Like

To build the package with a global, we should introduce a resource element in the module XML like:

```
<Resource Name="DataD.GBL"/>
```

See the example in the documentation. Notice that this resource element will look for a DataD.XML file, not DataD.GBL as you might expect. ZPM will look for the DataD.XML file in the /gbl folder inside the folder listed in the <SourcesRoot> element.

## Example

Here is a sample repository, iris-dataset-countries, which contains a package that deploys a global with data on different countries. Here is the module XML:

```
<?xml version="1.0" encoding="UTF-8"?>
<Export generator="Cache" version="25">
  <Document name="dataset-countries.ZPM">
    <Module>
      <Name>dataset-countries</Name>
      <Description>Module imports the data of Country passengers in dc.data.Tinanic class</Description>
      <Version>1.0.0</Version>
      <Packaging>module</Packaging>
      <SourcesRoot>src</SourcesRoot>
      <Resource Name="dc.data.Country.CLS"/>
      <Resource Name="dc.data.CountryD.GBL"/>
    </Module>
  </Document>
</Export>
```

And we can see the resource:

```
<Resource Name="dc.data.CountryD.GBL"/>
```

which is located in the /src/gbl/dc.data.CountryD.XML file in the repository. So when ZPM loads the module into IRIS, it imports the global according to the module.xml. You can test-install the global (and the class that lets you query it) with:

```
USER>zpm "install dataset-countries"
```

Or you are welcome to play with global packaging with the Countries or Titanic datasets.
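As a sanity check before publishing, you can load the exported XML back by hand in a scratch namespace – this is essentially what ZPM does for a .GBL resource on install. A minimal terminal sketch (the SCRATCH namespace and file paths are illustrative, not part of the package above):

```
USER>d $System.OBJ.Export("DataD.GBL","/tmp/DataD.xml")
USER>zn "SCRATCH"
SCRATCH>d $System.OBJ.Load("/tmp/DataD.xml","ck")
SCRATCH>zw ^DataD
```

If `zw ^DataD` shows the same nodes as on the source system, the export is good to package.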
Announcement
Anastasia Dyubaylo · Feb 2, 2021

InterSystems Multi-Model Contest: Give Us Your Feedback!

Hey developers,

As you know, the InterSystems Multi-Model contest has already ended. Now we want to get feedback from those developers who were unable to participate. Please answer some questions to help us improve our contests!

👉🏼 Quick survey: InterSystems Multi-Model Contest Survey

Or please share your thoughts in the comments to this post!
Article
Alexey Maslov · Oct 20, 2020

Parallel Processing of Multi-Model Data in InterSystems IRIS and Caché

As we all know, InterSystems IRIS has an extensive range of tools for improving the scalability of application systems. In particular, much has been done to facilitate the parallel processing of data, including the use of parallelism in SQL query processing and the most attention-grabbing feature of IRIS: sharding. However, many mature developments that started back in Caché and have been carried over into IRIS actively use the multi-model features of this DBMS, which allow different data models to coexist within a single database. For example, the [HIS qMS](https://openexchange.intersystems.com/package/HIS-qMS) database contains semantic relational (electronic medical records) as well as traditional relational (interaction with PACS) and hierarchical data models (laboratory data and integration with other systems). Most of the listed models are implemented using [SP.ARM](https://openexchange.intersystems.com/company/SP-ARM)'s qWORD tool (a mini-DBMS based on direct access to globals). Therefore, unfortunately, it is not possible to use the new capabilities of parallel query processing for scaling, since these queries do not use IRIS SQL access.

Meanwhile, as the size of the database grows, most of the problems inherent in large relational databases also become pressing for non-relational ones. This is a major reason why we are interested in parallel data processing as one of the tools that can be used for scaling.

In this article, I would like to discuss those aspects of parallel data processing that I have been dealing with over the years when solving tasks that are rarely mentioned in discussions of Big Data. I am going to focus on the technological transformation of databases, or, rather, technologies for transforming databases.

It's no secret that the data model, storage architecture, and software and hardware platform are usually chosen at the early stages of system development, often when the project is still far from completion. However, some time will pass, and it is quite often the case that several years after the system was deployed, the data needs to be migrated for one reason or another. Here are just a few commonly encountered tasks (all examples are taken from real life):

1. A company is planning to go international, and its database with 8-bit encoding must be converted to Unicode.
2. An outdated server is being replaced with a new one, but it is impossible to seamlessly transfer journals between servers (using the Mirroring or Shadowing features of IRIS) due to licensing restrictions or a lack of capabilities to meet existing needs, such as, for example, when you are trying to solve task (1).
3. You find that you need to change the distribution of globals among databases, for example, moving a large global with images to a separate database.

You might be wondering what is so difficult about these scenarios. All you need to do is stop the old system, export the data, and then import it into the new system. But if you are dealing with a database that is several hundred gigabytes (or even several terabytes) in size, and your system is running in 24x7 mode, you won't be able to solve any of the mentioned tasks using the standard IRIS tools.

## Basic approaches to task parallelization

### "Vertical" parallelization

Suppose that you could break a task down into several component tasks. If you are lucky, you find out that you can solve some of them in parallel.
For example, the following steps can all be performed at the same time for several reports:

- Preparing data for a report (calculations, data aggregation…)
- Applying style rules
- Printing out the reports

One report can still be in the preparation phase while another is already being printed, etc. This approach is nothing new. It has been used ever since the advent of batch data processing, that is, for the last 60 years. However, even though it is not a new concept, it is still quite useful. You will only realize a noticeable acceleration effect when all subtasks have a comparable execution time, and this is not always the case.

### "Horizontal" parallelization

When the order of operations for solving a task consists of iterations that can be performed in an arbitrary order, they can be performed at the same time. For example:

- Contextual search in a global:
  - Split the global into sub-globals ($Order by the first index).
  - Search separately in each of them.
  - Assemble the search results.
- Transferring a global to another server via a socket or ECP:
  - Break the global into parts.
  - Pass each of them separately.

Common features of these tasks:

- Identical processing in the subtasks (down to sharing the same parameters),
- The correctness of the final result does not depend on the order of execution of these subtasks,
- There is only a weak connection between the subtasks and the "parent" task, at the level of result reporting, where any required postprocessing is not a resource-intensive operation.

These simple examples suggest that horizontal parallelism is natural for data transformation tasks, and indeed, it is. In what follows, we will mainly focus on this type of parallel processing.

## "Horizontal" parallelization

### One of the approaches: MapReduce

MapReduce is a distributed computing model that was introduced by [Google](https://ru.bmstu.wiki/Google_Inc.). It is used to run parallel computations over very large amounts of data. Popular open-source implementations are built on a combination of [Apache Hadoop](https://ru.bmstu.wiki/Apache_Hadoop) and [Mahout](https://ru.bmstu.wiki/Apache_Mahout). Basic steps of the model: Map (distribution of tasks between handlers), the actual processing, and Reduce (combining the processing results). For the interested reader who would like to know more, I can recommend the series of articles by Timur Safin on his approach to creating a MapReduce tool in IRIS/Caché, which starts with [Caché MapReduce - an Introduction to BigData and the MapReduce Concept (Part I)](https://community.intersystems.com/post/cach%C3%A9-mapreduce-introduction-bigdata-and-mapreduce-concept). Note that due to the "innate ability" of IRIS to write data to the database quickly, the Reduce step, as a rule, turns out to be trivial, as in the [distributed version of WordCount](https://community.intersystems.com/post/cach%C3%A9-mapreduce-putting-it-all-together-%E2%80%93-wordcount-example-part-iii#comment-14196). When dealing with database transformation tasks, it may turn out to be completely unnecessary. For example, if you used parallel handlers to move a large global to a separate database, then we do not need anything else.
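Returning to the contextual-search example above: IRIS ships with a work queue manager, the %SYSTEM.WorkMgr class (discussed again at the end of this article), that makes this map-style fan-out easy to express. Below is a minimal sketch, not taken from any of the projects mentioned here: it assumes a global ^Data whose first-level subscripts split it into chunks and whose second-level nodes hold string values, and it collects matches in ^Hits. Class and global names are illustrative, and the Initialize/Queue/WaitForComplete calls are the documented %SYSTEM.WorkMgr ones; treat this as a sketch rather than production code.

```
Class Demo.ParallelSearch
{

/// Worker: scan one chunk ^Data(chunk,*) and record matches in ^Hits(chunk,*).
/// Assumes data nodes of the form ^Data(chunk,key)=value.
ClassMethod SearchChunk(chunk As %String, pattern As %String) As %Status
{
    Set key = ""
    For {
        Set key = $Order(^Data(chunk, key), 1, value)
        Quit:key=""
        If value [ pattern { Set ^Hits(chunk, key) = value }
    }
    Quit $$$OK
}

/// Manager: queue one work unit per first-level subscript, then wait.
ClassMethod Run(pattern As %String) As %Status
{
    Kill ^Hits
    Set queue = $SYSTEM.WorkMgr.Initialize(, .sc)
    Quit:$$$ISERR(sc) sc
    Set chunk = ""
    For {
        Set chunk = $Order(^Data(chunk))
        Quit:chunk=""
        Set sc = queue.Queue("##class(Demo.ParallelSearch).SearchChunk", chunk, pattern)
        Quit:$$$ISERR(sc)
    }
    Quit:$$$ISERR(sc) sc
    // Blocks until every queued work unit has reported its status
    Quit queue.WaitForComplete()
}

}
```

The Reduce step here is exactly the trivial kind mentioned above: the workers write their results straight into ^Hits, so "assembling" the results costs nothing.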
### How many servers?

The creators of parallel computing models such as MapReduce usually extend them to several servers (the so-called data processing nodes), but in database transformation tasks, one such node is usually sufficient. The fact of the matter is that it does not make sense to connect several processing nodes (for example, via the Enterprise Cache Protocol, ECP), since the CPU load required for data transformation is relatively small, which cannot be said about the amount of data involved in the processing. In this case, the initial data is used once, which means that you should not expect any performance gain from distributed caching.

Experience has shown that it is often convenient to use two servers whose roles are asymmetric. To simplify the picture somewhat:

- The source database is mounted on one server (*Source DB*).
- The converted database is mounted on the second server (*Destination DB*).
- Horizontal parallel data processing is configured on only one of these servers; the operating processes on this server are the *master* processes relative to the processes running on the second server, which are, in turn, the *slave* processes. When you use ECP, these are the DBMS system processes (ECPSrvR, ECPSrvW, and ECPWork), and when using a socket-oriented data transfer mechanism, these are the child processes of TCP connections.

We can say that this approach to distributing tasks combines horizontal parallelism (used to distribute the load within the master server) with vertical parallelism (used to distribute "responsibilities" between the master and slave servers).

## Tasks and tools

Let us consider the most general task of transforming a database: transferring all or part of the data from the source database to the destination database while possibly performing some kind of re-encoding of globals (this can be a change of encoding, a change of collation, etc.). In this case, the old and new databases are local on different database servers. Let us list the subtasks to be solved by the architect and developer:

1. Distribution of roles between servers.
2. Choice of data transmission mechanism.
3. Choice of the strategy for transferring globals.
4. Choice of the tool for distributing tasks among several processes.

Let's take a bird's-eye view of them.

### Distribution of roles between servers

As you may already know, even if IRIS is installed with Unicode support, it is also able to mount 8-bit databases (both local and remote). However, the opposite is not true: the 8-bit version of IRIS will not work with a Unicode database, and there will be inevitable <WIDE CHAR> errors if you try. This must be taken into account when deciding which of the servers – the source or the destination – will be the master if the character encoding is changed during data transformation. However, it is impossible to decide on an ultimate solution here without considering the next task:

### Choice of data transmission mechanism

You can choose from the following options:

1. If the license and DBMS versions on both servers allow ECP to be used, consider ECP as a transport.
2. If not, then the simplest solution is to deal with both databases (the source and the destination) locally on the destination system. To do this, the source database file must be copied to the appropriate server via any available file transport, which, of course, will take additional time (for copying the database file over the network) and space (to store a copy of the database file).
3. In order to avoid wasting time copying the file (at least), you can implement your own mechanism for exchanging data between the server processes via a TCP socket. This approach may be useful if:
   - ECP cannot be used for some reason, e.g., due to the incompatibility of the DBMS versions serving the source and destination databases (for example, the source DBMS is a very old version),
   - Or it is impossible to stop users from working on the source system, and therefore the data modifications that occur in the source database while it is being transferred must be reflected in the destination database.

My priorities when choosing an approach are pretty evident: if ECP is available and the source database remains static while it is transferred – option 1; if ECP is not available, but the database is still static – option 2; if the source database is modified – option 3. If we combine these considerations with the choice of master server, then we can produce the following possibility matrix:

| **Is the source DB static during transmission?** | **Is the ECP protocol available?** | **Location of the source DB** | **Master system** |
| --- | --- | --- | --- |
| Yes | Yes | Remote on the target system | Target |
| Yes | No | Local (copy) on the target system | Target |
| No | It does not matter, as we will use our own mechanism for transferring data over TCP sockets. | Local (original) on the source system | Source |

### Choice of the strategy for transferring globals

At first glance, it might seem that you can simply pass globals one by one by reading the Global Directory. However, the sizes of globals in the same database can vary greatly: I recently encountered a situation where globals in a production database ranged in size between 1 MB and 600 GB. Let's imagine that we have nWorkers worker processes at our disposal, and there is at least one global ^Big for which it is true that:

Size(^Big) > (Summary Size of All ^Globals) / nWorkers

Then, no matter how successfully the task of transferring the remaining globals is distributed between the worker processes, the process that ends up being assigned the transfer of ^Big will still be busy long after the other processes have finished with the rest of the globals. You can improve the situation by pre-ordering the globals by size and starting the processing with the largest ones first, but in cases where the size of ^Big deviates significantly from the average value for all globals (which is a typical case for the HIS qMS database):

Size(^Big) >> (Summary Size of All ^Globals) / nWorkers

this strategy will not help you much, since it inevitably leads to a delay of many hours. Hence, you cannot avoid splitting the large global into parts to allow it to be processed by several parallel processes. I wish to emphasize that this task (number 3 in my list) turned out to be the most difficult among those discussed here, and it took most of my (rather than the CPU's!) time to solve.

### Choice of the tool for distributing tasks among several processes

The way we interact with the parallel processing mechanism can be described as follows:

- We create a pool of background worker processes.
- A queue is created for this pool.
- The initiator process (let's call it the _local manager_), having a plan that was prepared in advance in step 3, places _work units_ in the queue; as a rule, a _work unit_ comprises the name and actual arguments of a certain class method.
- The worker processes retrieve work units from the queue and perform the processing, which boils down to calling the class method with the actual arguments passed in the work unit.
- After receiving confirmation from all worker processes that the processing of all queued work units is complete, the local manager releases the worker processes and finalizes the processing if required.

Fortunately, IRIS provides an excellent parallel processing engine that fits nicely into this scheme, implemented in the %SYSTEM.WorkMgr class. We will use it in a running example that we will explore across a planned series of articles.

In the next article, I plan to focus on clarifying the solution to task number 3 in more detail. In the third article, which will appear if you show some interest in my writing, I will talk about the nuances of solving task number 4, including, in particular, the limitations of %SYSTEM.WorkMgr and ways of overcoming them.

Awesome! Nice article. Very much looking forward to your views on %SYSTEM.WorkMgr, which has been getting a lot of attention to help us serve the most demanding SQL workloads from our customers. You'll also be able to monitor some of its key metrics, such as active worker jobs and average queue length, in SAM starting with 2021.1.

Interesting article, Alexey! But we are still eagerly waiting for the 2nd part – when should we expect it?

Thanks for your interest. I'll try to fulfill it ASAP.
Announcement
Nikolay Solovyev · Nov 4, 2020

InterSystems Package Manager ZPM 0.2.8 release

We have released a new version of ZPM (Package Manager). New in this release:

1) Interoperability support – support for dtl, bpl, lut, esd, x12 in module.xml

It is now possible to refer to DTL and BPL files; ZPM stores and uses them as CLS:

```
<Resource Name="ResourceTest.TestDTL.dtl"/>
<Resource Name="ResourceTest.TestBPL.bpl"/>
```

Other interoperability components, such as LUT, ESD, and X12, should be in XML format with the .xml extension and are expected in a folder named i14y (a numeronym for "interoperability"):

```
<Resource Name="Test_HIPAA_5010.X12"/>
<Resource Name="hl7_2.5.LUT"/>
```

2) DFI files support

Allows multiple ways of defining DFI resources.

By full name:

```
<Resource Name="Sample Operational Reports-Auditing Overview.dashboard.DFI"/>
```

Wildcard with the folder name:

```
<Resource Name="Sample Operational Reports-*.DFI"/>
```

Just everything:

```
<Resource Name="*.DFI"/>
```

Limited by keywords (used only during the export):

```
<Resource Name="*.DFI" Keywords="myapp"/>
```
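Putting the new resource types together, a module.xml for a package that ships interoperability artifacts and DFI dashboards might look like the sketch below. The module name and version are illustrative (the resource names are the ones from this announcement), and the layout follows the usual ZPM conventions: class-like sources under the folder in <SourcesRoot>, LUT/X12 files as XML under the i14y folder.

```
<?xml version="1.0" encoding="UTF-8"?>
<Export generator="Cache" version="25">
  <Document name="my-interop-app.ZPM">
    <Module>
      <Name>my-interop-app</Name>
      <Version>0.1.0</Version>
      <Packaging>module</Packaging>
      <SourcesRoot>src</SourcesRoot>
      <Resource Name="ResourceTest.TestDTL.dtl"/>
      <Resource Name="ResourceTest.TestBPL.bpl"/>
      <Resource Name="hl7_2.5.LUT"/>
      <Resource Name="Test_HIPAA_5010.X12"/>
      <Resource Name="*.DFI" Keywords="myapp"/>
    </Module>
  </Document>
</Export>
```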
Article
Mikhail Khomenko · Nov 25, 2020

InterSystems Kubernetes Operator Deep Dive: Introduction to Kubernetes Operators

## Introduction

Several resources tell us how to run IRIS in a Kubernetes cluster, such as Deploying an InterSystems IRIS Solution on EKS using GitHub Actions and Deploying InterSystems IRIS solution on GKE Using GitHub Actions. These methods work, but they require that you create Kubernetes manifests and Helm charts, which might be rather time-consuming.

To simplify IRIS deployment, InterSystems developed an amazing tool called InterSystems Kubernetes Operator (IKO). A number of official resources explain IKO usage in detail, such as New Video: Intersystems IRIS Kubernetes Operator and InterSystems Kubernetes Operator.

Kubernetes documentation says that operators replace a human operator who knows how to deal with complex systems in Kubernetes. They provide system settings in the form of custom resources. An operator includes a custom controller that reads these settings and performs the steps the settings define to correctly set up and maintain your application. The custom controller is a simple pod deployed in Kubernetes. So, generally speaking, all you need to do to make an operator work is deploy a controller pod and define its settings in custom resources.

You can find a high-level explanation of operators in How to explain Kubernetes Operators in plain English. Also, a free O’Reilly ebook is available for download.

In this article, we’ll have a closer look at what operators are and what makes them tick. We’ll also write our own operator.

## Prerequisites and Setup

To follow along, you’ll need to install the following tools:

kind
```
$ kind --version
kind version 0.9.0
```
golang
```
$ go version
go version go1.13.3 linux/amd64
```
kubebuilder
```
$ kubebuilder version
Version: version.Version{KubeBuilderVersion:"2.3.1"…
```
kubectl
```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.11"...
```
operator-sdk
```
$ operator-sdk version
operator-sdk version: "v1.2.0"…
```

## Custom Resources

API resources are an important concept in Kubernetes. These resources enable you to interact with Kubernetes via HTTP endpoints that can be grouped and versioned. The standard API can be extended with custom resources, which require that you provide a Custom Resource Definition (CRD).
Have a look at the Extend the Kubernetes API with CustomResourceDefinitions page for detailed info. Here is an example of a CRD:

```
$ cat crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: irises.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: irises
    singular: iris
    kind: Iris
    shortNames:
    - ir
  validation:
    openAPIV3Schema:
      required: ["spec"]
      properties:
        spec:
          required: ["replicas"]
          properties:
            replicas:
              type: "integer"
              minimum: 0
```

In the above example, we define the API GVK (Group/Version/Kind) resource as example.com/v1alpha1/Iris, with replicas as the only required field. Now let’s define a custom resource based on our CRD:

```
$ cat crd-object.yaml
apiVersion: example.com/v1alpha1
kind: Iris
metadata:
  name: iris
spec:
  test: 42
  replicas: 1
```

In our custom resource, we can define any fields in addition to replicas, which is required by the CRD. After we deploy the above two files, our custom resource should become visible to standard kubectl. Let’s launch Kubernetes locally using kind, and then run the following kubectl commands:

```
$ kind create cluster
$ kubectl apply -f crd.yaml
$ kubectl get crd irises.example.com
NAME                 CREATED AT
irises.example.com   2020-11-14T11:48:56Z
$ kubectl apply -f crd-object.yaml
$ kubectl get iris
NAME   AGE
iris   84s
```

Although we’ve set a replica count for our IRIS, nothing actually happens at the moment. That's expected: we need to deploy a controller – the entity that can read our custom resource and perform some settings-based actions. For now, let’s clean up what we’ve created:

```
$ kubectl delete -f crd-object.yaml
$ kubectl delete -f crd.yaml
```

## Controller

A controller can be written in any language. We’ll use Golang as Kubernetes’ “native” language. We could write the controller’s logic from scratch, but the good folks from Google and Red Hat gave us a leg up. They have created two projects that can generate operator code requiring only minimal changes – kubebuilder and operator-sdk. These two are compared at the kubebuilder vs operator-sdk page, as well as here: What is the difference between kubebuilder and operator-sdk #1758.

## Kubebuilder

It is convenient to start our acquaintance with Kubebuilder at the Kubebuilder book page. The Tutorial: Zero to Operator in 90 minutes video from the Kubebuilder maintainer might help as well. Sample implementations of the Kubebuilder project can be found in the sample-controller-kubebuilder and kubebuilder-sample-controller repositories.

Let’s scaffold a new operator project:

```
$ mkdir iris
$ cd iris
$ go mod init iris # Creates a new module, name it iris
$ kubebuilder init --domain myardyas.club # An arbitrary domain, used below as a suffix in the API group
```

Scaffolding includes many files and manifests. The main.go file, for instance, is the entry point of the code. It imports the controller-runtime library, and instantiates and runs a special manager that keeps track of the controller run. There is nothing to change in any of these files. Let’s create the CRD:

```
$ kubebuilder create api --group test --version v1alpha1 --kind Iris
Create Resource [y/n]
y
Create Controller [y/n]
y
…
```

Again, a lot of files are generated. These are described in detail on the Adding a new API page. For example, you can see that a file for kind Iris is added in api/v1alpha1/iris_types.go. In our first sample CRD, we defined the required replicas field. Let’s create an identical field here, this time in the IrisSpec structure. We’ll also add the DeploymentName field.
The replica count should also be visible in the Status section, so we need to make the following changes:

```
$ vim api/v1alpha1/iris_types.go
…
type IrisSpec struct {
    // +kubebuilder:validation:MaxLength=64
    DeploymentName string `json:"deploymentName"`

    // +kubebuilder:validation:Minimum=0
    Replicas *int32 `json:"replicas"`
}
…
type IrisStatus struct {
    ReadyReplicas int32 `json:"readyReplicas"`
}
…
```

After editing the API, we’ll move on to editing the controller boilerplate. All the logic should be defined in the Reconcile method (this example is mostly taken from mykind_controller.go). We also add a couple of auxiliary methods and rewrite the SetupWithManager method.

```
$ vim controllers/iris_controller.go
…
import (
    ...
    // Leave the existing imports and add these packages
    apps "k8s.io/api/apps/v1"
    core "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/record"
)

// Add the Recorder field to enable Kubernetes events
type IrisReconciler struct {
    client.Client
    Log      logr.Logger
    Scheme   *runtime.Scheme
    Recorder record.EventRecorder
}
…
// +kubebuilder:rbac:groups=test.myardyas.club,resources=iris,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=test.myardyas.club,resources=iris/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;delete
// +kubebuilder:rbac:groups="",resources=events,verbs=create;patch

func (r *IrisReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
    ctx := context.Background()
    log := r.Log.WithValues("iris", req.NamespacedName)

    // Fetch the Iris object by name
    log.Info("fetching Iris resource")
    iris := testv1alpha1.Iris{}
    if err := r.Get(ctx, req.NamespacedName, &iris); err != nil {
        log.Error(err, "unable to fetch Iris resource")
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    if err := r.cleanupOwnedResources(ctx, log, &iris); err != nil {
        log.Error(err, "failed to clean up old Deployment resources for Iris")
        return ctrl.Result{}, err
    }

    log = log.WithValues("deployment_name", iris.Spec.DeploymentName)

    log.Info("checking if an existing Deployment exists for this resource")
    deployment := apps.Deployment{}
    err := r.Get(ctx, client.ObjectKey{Namespace: iris.Namespace, Name: iris.Spec.DeploymentName}, &deployment)
    if apierrors.IsNotFound(err) {
        log.Info("could not find existing Deployment for Iris, creating one...")
        deployment = *buildDeployment(iris)
        if err := r.Client.Create(ctx, &deployment); err != nil {
            log.Error(err, "failed to create Deployment resource")
            return ctrl.Result{}, err
        }
        r.Recorder.Eventf(&iris, core.EventTypeNormal, "Created", "Created deployment %q", deployment.Name)
        log.Info("created Deployment resource for Iris")
        return ctrl.Result{}, nil
    }
    if err != nil {
        log.Error(err, "failed to get Deployment for Iris resource")
        return ctrl.Result{}, err
    }

    log.Info("existing Deployment resource already exists for Iris, checking replica count")
    expectedReplicas := int32(1)
    if iris.Spec.Replicas != nil {
        expectedReplicas = *iris.Spec.Replicas
    }
    if *deployment.Spec.Replicas != expectedReplicas {
        log.Info("updating replica count", "old_count", *deployment.Spec.Replicas, "new_count", expectedReplicas)
        deployment.Spec.Replicas = &expectedReplicas
        if err := r.Client.Update(ctx, &deployment); err != nil {
            log.Error(err, "failed to update Deployment replica count")
            return ctrl.Result{}, err
        }
        r.Recorder.Eventf(&iris, core.EventTypeNormal, "Scaled", "Scaled deployment %q to %d replicas", deployment.Name, expectedReplicas)
        return ctrl.Result{}, nil
    }

    log.Info("replica count up to date", "replica_count", *deployment.Spec.Replicas)

    log.Info("updating Iris resource status")
    iris.Status.ReadyReplicas = deployment.Status.ReadyReplicas
    if err := r.Client.Status().Update(ctx, &iris); err != nil {
        log.Error(err, "failed to update Iris status")
        return ctrl.Result{}, err
    }

    log.Info("resource status synced")
    return ctrl.Result{}, nil
}

// Delete the deployment resources that no longer match the iris.spec.deploymentName field
func (r *IrisReconciler) cleanupOwnedResources(ctx context.Context, log logr.Logger, iris *testv1alpha1.Iris) error {
    log.Info("looking for existing Deployments for Iris resource")
    var deployments apps.DeploymentList
    if err := r.List(ctx, &deployments, client.InNamespace(iris.Namespace), client.MatchingField(deploymentOwnerKey, iris.Name)); err != nil {
        return err
    }
    deleted := 0
    for _, depl := range deployments.Items {
        if depl.Name == iris.Spec.DeploymentName {
            // Leave the Deployment if its name matches the one in the Iris resource
            continue
        }
        if err := r.Client.Delete(ctx, &depl); err != nil {
            log.Error(err, "failed to delete Deployment resource")
            return err
        }
        r.Recorder.Eventf(iris, core.EventTypeNormal, "Deleted", "Deleted deployment %q", depl.Name)
        deleted++
    }
    log.Info("finished cleaning up old Deployment resources", "number_deleted", deleted)
    return nil
}

func buildDeployment(iris testv1alpha1.Iris) *apps.Deployment {
    deployment := apps.Deployment{
        ObjectMeta: metav1.ObjectMeta{
            Name:            iris.Spec.DeploymentName,
            Namespace:       iris.Namespace,
            OwnerReferences: []metav1.OwnerReference{*metav1.NewControllerRef(&iris, testv1alpha1.GroupVersion.WithKind("Iris"))},
        },
        Spec: apps.DeploymentSpec{
            Replicas: iris.Spec.Replicas,
            Selector: &metav1.LabelSelector{
                MatchLabels: map[string]string{
                    "iris/deployment-name": iris.Spec.DeploymentName,
                },
            },
            Template: core.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{
                    Labels: map[string]string{
                        "iris/deployment-name": iris.Spec.DeploymentName,
                    },
                },
                Spec: core.PodSpec{
                    Containers: []core.Container{
                        {
                            Name:  "iris",
                            Image: "store/intersystems/iris-community:2020.4.0.524.0",
                        },
                    },
                },
            },
        },
    }
    return &deployment
}

var (
    deploymentOwnerKey = ".metadata.controller"
)

// Specifies how the controller is built to watch a CR and other resources
// that are owned and managed by that controller
func (r *IrisReconciler) SetupWithManager(mgr ctrl.Manager) error {
    if err := mgr.GetFieldIndexer().IndexField(&apps.Deployment{}, deploymentOwnerKey, func(rawObj runtime.Object) []string {
        // grab the Deployment object, extract the owner...
        depl := rawObj.(*apps.Deployment)
        owner := metav1.GetControllerOf(depl)
        if owner == nil {
            return nil
        }
        // ...make sure it's an Iris...
        if owner.APIVersion != testv1alpha1.GroupVersion.String() || owner.Kind != "Iris" {
            return nil
        }
        // ...and if so, return it
        return []string{owner.Name}
    }); err != nil {
        return err
    }

    return ctrl.NewControllerManagedBy(mgr).
        For(&testv1alpha1.Iris{}).
        Owns(&apps.Deployment{}).
        Complete(r)
}
```

To make the events logging work, we need to add yet another line to the main.go file:

```
if err = (&controllers.IrisReconciler{
    Client:   mgr.GetClient(),
    Log:      ctrl.Log.WithName("controllers").WithName("Iris"),
    Scheme:   mgr.GetScheme(),
    Recorder: mgr.GetEventRecorderFor("iris-controller"),
}).SetupWithManager(mgr); err != nil {
```

Now everything is ready to set up the operator. Let’s install the CRD first using the Makefile target install:

```
$ cat Makefile
…
# Install CRDs into a cluster
install: manifests
	kustomize build config/crd | kubectl apply -f -
...
$ make install
```

You can have a look at the resulting CRD YAML file in the config/crd/bases/ directory. Now check that the CRD exists in the cluster:

```
$ kubectl get crd
NAME                      CREATED AT
iris.test.myardyas.club   2020-11-17T11:02:02Z
```

Let’s run our controller in another terminal, locally (not in Kubernetes) – just to see if it actually works:

```
$ make run
...
2020-11-17T13:02:35.649+0200 INFO controller-runtime.metrics metrics server is starting to listen {"addr": ":8080"}
2020-11-17T13:02:35.650+0200 INFO setup starting manager
2020-11-17T13:02:35.651+0200 INFO controller-runtime.manager starting metrics server {"path": "/metrics"}
2020-11-17T13:02:35.752+0200 INFO controller-runtime.controller Starting EventSource {"controller": "iris", "source": "kind source: /, Kind="}
2020-11-17T13:02:35.852+0200 INFO controller-runtime.controller Starting EventSource {"controller": "iris", "source": "kind source: /, Kind="}
2020-11-17T13:02:35.853+0200 INFO controller-runtime.controller Starting Controller {"controller": "iris"}
2020-11-17T13:02:35.853+0200 INFO controller-runtime.controller Starting workers {"controller": "iris", "worker count": 1}
…
```

Now that we have the CRD and the controller installed, all we need to do is create an instance of our custom resource. A template can be found in the config/samples/test_v1alpha1_iris.yaml file. In this file, we need to make changes similar to those in crd-object.yaml:

```
$ cat config/samples/test_v1alpha1_iris.yaml
apiVersion: test.myardyas.club/v1alpha1
kind: Iris
metadata:
  name: iris
spec:
  deploymentName: iris
  replicas: 1
$ kubectl apply -f config/samples/test_v1alpha1_iris.yaml
```

After a brief delay caused by the need to pull an IRIS image, you should see the running IRIS pod:

```
$ kubectl get deploy
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
iris   1/1     1            1           119s
$ kubectl get pod
NAME                   READY   STATUS    RESTARTS   AGE
iris-6b78cbb67-vk2gq   1/1     Running   0          2m42s
$ kubectl logs -f -l iris/deployment-name=iris
```

You can open the IRIS portal using the kubectl port-forward command:

```
$ kubectl port-forward deploy/iris 52773
```

Go to http://localhost:52773/csp/sys/UtilHome.csp in your browser.

What if we change the replica count in the CRD? Let’s make and apply this change:

```
$ vi config/samples/test_v1alpha1_iris.yaml
…
replicas: 2
$ kubectl apply -f config/samples/test_v1alpha1_iris.yaml
```

You should now see another Iris pod appear.
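You can also double-check the scale-up with the label selector that buildDeployment puts on the pods (these commands are illustrative; their output will vary):

```
$ kubectl get pod -l iris/deployment-name=iris
$ kubectl get deploy iris -o jsonpath='{.spec.replicas}'
```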
The generated Kubernetes events record the change:

```
$ kubectl get events
…
54s   Normal   Scaled              iris/iris         Scaled deployment "iris" to 2 replicas
54s   Normal   ScalingReplicaSet   deployment/iris   Scaled up replica set iris-6b78cbb67 to 2
```

Log messages in the terminal where the controller is running report a successful reconciliation:

```
2020-11-17T13:09:04.102+0200 INFO controllers.Iris replica count up to date {"iris": "default/iris", "deployment_name": "iris", "replica_count": 2}
2020-11-17T13:09:04.102+0200 INFO controllers.Iris updating Iris resource status {"iris": "default/iris", "deployment_name": "iris"}
2020-11-17T13:09:04.104+0200 INFO controllers.Iris resource status synced {"iris": "default/iris", "deployment_name": "iris"}
2020-11-17T13:09:04.104+0200 DEBUG controller-runtime.controller Successfully Reconciled {"controller": "iris", "request": "default/iris"}
```

Okay, our controller seems to be working. Now we’re ready to deploy it inside Kubernetes as a pod. For that, we need to build the controller's Docker image and push it to a registry. This can be any registry that works with Kubernetes – DockerHub, ECR, GCR, and so on. We’ll use the local (kind) Kubernetes, so let’s deploy the controller to a local registry using the kind-with-registry.sh script available from the Local Registry page. We can simply remove the current cluster and recreate it:

```
$ kind delete cluster
$ ./kind-with-registry.sh
$ make install
$ docker build . -t localhost:5000/iris-operator:v0.1 # Dockerfile is autogenerated by kubebuilder
$ docker push localhost:5000/iris-operator:v0.1
$ make deploy IMG=localhost:5000/iris-operator:v0.1
```

The controller will be deployed into the iris-system namespace (alternatively, you can scan all pods to find the namespace with kubectl get pod -A):

```
$ kubectl -n iris-system get po
NAME                                      READY   STATUS    RESTARTS   AGE
iris-controller-manager-bf9fd5855-kbklt   2/2     Running   0          54s
```

Let’s check the logs:

```
$ kubectl -n iris-system logs -f -l control-plane=controller-manager -c manager
```

You can experiment with changing the replica count in the CRD and observe how these changes are reflected in the number of IRIS instances.

## Operator-SDK

Another handy tool for generating operator code is Operator SDK. To get an initial idea of this tool, have a look at this tutorial. You should install operator-sdk first. For our simple use case, the process looks similar to the one we went through with kubebuilder (you can delete/create the kind cluster with the Docker registry before continuing). Run in another directory:

```
$ mkdir iris
$ cd iris
$ go mod init iris
$ operator-sdk init --domain=myardyas.club
$ operator-sdk create api --group=test --version=v1alpha1 --kind=Iris # Answer two ‘yes’
```

Now change the IrisSpec and IrisStatus structures in the same file – api/v1alpha1/iris_types.go. We’ll use the same iris_controller.go file as we did with kubebuilder. Don’t forget to add the Recorder field in the main.go file. Because kubebuilder and operator-sdk use different versions of the Golang packages, you should add a context in the SetupWithManager function in controllers/iris_controller.go:

```
ctx := context.Background()
if err := mgr.GetFieldIndexer().IndexField(ctx, &apps.Deployment{}, deploymentOwnerKey, func(rawObj runtime.Object) []string {
```

Then, install the CRD and the operator (make sure that the kind cluster is running):

```
$ make install
$ docker build . -t localhost:5000/iris-operator:v0.2
$ docker push localhost:5000/iris-operator:v0.2
$ make deploy IMG=localhost:5000/iris-operator:v0.2
```

You should now see the CRD, operator pod, and IRIS pod(s) similar to the ones we saw with kubebuilder.

## Conclusion

Although a controller includes a lot of code, you’ve seen that changing the IRIS replica count is just a matter of changing a line in a custom resource. All the complexity is hidden in the controller implementation. We’ve looked at how a simple operator can be created using handy scaffolding tools.

Our operator cares only about IRIS replicas. Now imagine that we actually need to have the IRIS data persisted on disk – this would require a StatefulSet and Persistent Volumes. Also, we would need a Service and, perhaps, an Ingress for external access. We should be able to set the IRIS version and system password, Mirroring and/or ECP, and so on. You can imagine the amount of work InterSystems had to do to simplify IRIS deployment by hiding all the IRIS-specific logic inside the operator code. In the next article, we’re going to look at IRIS Operator (IKO) in more detail and investigate its possibilities in more complex scenarios.

Great intro to K8s CRDs, @Mikhail.Khomenko! Thank you @Luca.Ravazzolo -)
Announcement
Anastasia Dyubaylo · May 4, 2022

InterSystems Grand Prix Contest Kick-off Webinar 2022

Hi Community,

We're pleased to invite all the developers to the upcoming InterSystems Grand Prix Contest 2022 kick-off webinar! We'll share the details of our mega Grand Prix Contest 2022 and describe how you can win up to $22,000 in prizes!

Unlike our other InterSystems Developer Community contests, this contest allows you to use any element of our data platform – IntegratedML, Native API, multi-model, Analytics and NLP, Open API and Interoperability, IKO, etc. – in your project. In this webinar, we'll talk about the topics to expect from participants and show you how to develop, build, and deploy applications on the InterSystems IRIS data platform.

Date & Time: Monday, May 9 – 11:00 AM EDT

Speakers:
🗣 @Alexander.Woodhead, InterSystems Technical Specialist
🗣 @Robert.Kuszewski, InterSystems Product Manager
🗣 @Jeffrey.Fried, InterSystems Director of Product Management
🗣 @Dean.Andrews2971, InterSystems Head of Developer Relations
🗣 @Evgeny.Shvarov, InterSystems Developer Ecosystem Manager

We will be happy to talk to you at our webinar in Zoom.

✅ Register for the kick-off today!

Hey everyone, The kick-off will start in an hour! Please join us in Zoom. Or enjoy watching the stream on YouTube: https://youtu.be/hjFtYog-FQQ

Hey Developers, The recording of this webinar is available on InterSystems Developers YouTube! Please welcome: ⏯ InterSystems Grand Prix Contest Kick-off Webinar 2022
Announcement
Anastasia Dyubaylo · May 26, 2022

Webinar in Spanish: "SAM: Monitoring InterSystems IRIS with Grafana and Prometheus"

Hi Community,

We're pleased to invite you to the upcoming webinar in Spanish called "SAM: Monitoring InterSystems IRIS with Grafana and Prometheus".

Date & Time: June 15, 4:00 PM CEST
Speaker: @Pierre-Yves.Duquesnoy, Sales Engineer, InterSystems Iberia

The webinar is aimed at system administrators who want to monitor one or more InterSystems IRIS platforms and their applications at a glance. It is also aimed at DevOps engineers who have to add application or interoperability metrics to the monitoring.

SAM (System Alerting & Monitoring) is the perfect tool to monitor InterSystems IRIS data platforms. It offers an overall picture of the systems to be monitored and facilitates alert management in any monitored system. During the webinar, we'll show the SAM architecture and its installation, and we'll start to monitor different InterSystems IRIS clusters. We'll also extend the existing metrics to monitor other application metrics.

➡️ Register today and enjoy!
Announcement
Riccardo Andriuzzi · May 27, 2022

MSC - job / InterSystems developer - great opportunity, Turin, Italy

MSC Mediterranean Shipping Company continues to invest in and lead the world container market with ships, an extensive container fleet, intermodal services, and dedicated staff for its customers. The Company’s evolution into a leading brand requires consistency to market, and therefore it is paramount to standardise data, processes, and management information.

Established in 1998, MSC Technology provides development and technology support for the MSC transportation divisions and is composed of highly accomplished technology professionals. Today, with a team of 1000 plus, MSC Technology provides the best, most interactive maritime software solutions available in the industry. With an emphasis on equal employment opportunities and a collaborative approach to growing our expertise and solving complex problems, we are a trusted strategic partner with a great journey ahead of us.

InterSystems Developer

Under the governance of the internal program, the InterSystems developer designs, develops, and helps operate the Event Processing solution based on the IRIS Data Platform. The MSC Logistics department is the main customer of this platform. It applies to the computation of equipment activities and their corresponding message streams. The main area of design and development concerns data integration and validation through predefined business rules and processing to the database. Subsequent integration technologies will be required to complete the process automation.

The developer will work with the global teams (India, USA, Europe), with external partners/vendors, and in coordination with the in-place Community of Practices to ensure the coherence of the development with the program architecture. The developer will need to deliver several work packages with local/offshore resources. The developer will play a key role during the release process in coordination with both the development teams and the Application operational teams. The developer will need to understand a) business and application requirements, performance, upgrade and maintainability requirements, and b) existing components available on the market and within MSC, to provide his expertise in a proactive manner to the project teams.

Key Responsibilities

- Design, develop, document, and maintain his/her work packages with transparency
- Enforce non-functional requirements such as reliability and performance in his/her project designs
- Work closely with the development teams and existing system providers to provide and design solutions
- Ensure knowledge transfer with development and operation teams for the corresponding platforms
- Problem isolation and resolution, with the expectation that continuous improvements and enhancements are implemented
- Perform and coordinate development and operation across our multi-region environment
- Work with external 3rd parties to integrate the technologies

Additional Responsibilities

- Identify opportunities for integrating technical components
- Advise, promote, coach, and coordinate the use of these tools
- Actively participate in solution identification; review, assess, and prepare decisions to be taken by the Program’s Management and Architecture. The developer ensures that all Projects and Applications effectively use technical components as per the validated Technical Architecture Design
- Application performance (response time and availability) will be paramount to the success of the Program. Thus, the developer is to propose and lead the setup of a specific governance model to ensure that the right emphasis and focus is given to performance-related topics
- Participate in the building of application development standards, a best-practices repository, and control mechanisms
- Build technical component user guides for other IT and Program teams
- Contribute to the development of training material and process documentation

Qualifications and Experience

At least 4 years’ experience with InterSystems technology (IRIS/Caché), with experience in the following subjects:

- COS (Caché ObjectScript) and OO (object-oriented) development using classes
- SQL (both embedded and dynamic), class queries, and SPs (stored procedures)
- Proven experience with Interoperability (aka Ensemble) in both designing and implementing solutions in a complex production; experience with the use of various adapters (e.g. SQL, REST, HTTP)
- Working with XML and JSON files
- Developing with a modern IDE (e.g. Visual Studio Code) or Eclipse, with some Studio knowledge
- Developing with source control (preferably DevOps) with experience using Git

Benefits? Healthcare? Comp band? Remote possible?

Hello Fabian, we offer a relocation package for people coming from abroad, so full-remote working is not possible. Healthcare coverage is included. I remain at your disposal for any details.

Best regards,
Riccardo

Riccardo Andriuzzi
riccardo.andriuzzi@msc.com
Question
prashanth ponugoti · Feb 28, 2022

InterSystems IRIS Community version does not have Samples Namespace

Hi Community,

I have installed InterSystems IRIS Community edition on my personal laptop to do some POCs. Here are the defined namespaces available:

%SYS
HSCUSTOM
HSLIB
HSSYS
PPONUGOTINS
USER

How do I create samples in it? Please advise me.

Thanks,
Prashanth

https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ASAMPLES

Hi! Please check out this post: https://community.intersystems.com/node/498271 HTH

If you want to discover IRIS for Health with some samples, the best way is to install ZPM (the community package manager). More info here: https://community.intersystems.com/post/install-zpm-one-line Then you have access to almost all applications on OpenExchange. Here's an example with csvgen-ui: https://openexchange.intersystems.com/package/csvgen-ui

```
zpm "install csvgen-ui"
```

On OpenExchange you will find many examples of REST APIs, web apps, and so on.
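Following the ASAMPLES documentation linked above, the classic route is to create a namespace (e.g. SAMPLES) in the Management Portal, clone the https://github.com/intersystems/Samples-Data repository, and load it from the terminal. A minimal sketch (the local path is illustrative):

```
SAMPLES> do $SYSTEM.OBJ.LoadDir("/home/me/Samples-Data/", "ck", , 1)
```

The "ck" qualifiers compile the loaded items, and the final argument makes the load recursive.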
Announcement
Jeff Fried · Mar 2, 2022

Maintenance releases for Caché, Ensemble, and InterSystems IRIS are Generally Available

NOTE: we previously found an issue with the 2021.1.1.324.0 builds. The 2021.1.1 maintenance releases have been removed from the WRC and have been replaced with 2021.1.2.336.0 builds. 2021.1.2 containers will be available shortly.

Two new sets of maintenance releases are now available:

- Caché 2018.1.6, Ensemble 2018.1.6, and HSAP 2018.1.6
- InterSystems IRIS 2020.1.2, IRIS for Health 2020.1.2, and HealthShare Health Connect 2020.1.2

Installation kits and containers can be downloaded from the WRC Software Distribution site. Container images for the Enterprise Editions of InterSystems IRIS and IRIS for Health and all corresponding components are available from the InterSystems Container Registry.

These are maintenance releases with many updates across a wide variety of areas. For information about the corrections in these releases, refer to the documentation for that version, which includes Release Notes, an Upgrade Checklist, and a Release Changes list, as well as the Class Reference and a full set of guides, references, tutorials, and articles. All documentation can be reached via docs.intersystems.com.

Build numbers for these releases are shown in the table below:

| Version | Product | Build number |
| --- | --- | --- |
| 2018.1.6 | Caché and Ensemble | 2018.1.6.717.0 |
| 2018.1.6 | Caché Evaluation | 2018.1.6.717.0su |
| 2018.1.6 | HealthShare Health Connect (HSAP) | 2018.1.6HS.9063.0 |
| 2020.1.2 | InterSystems IRIS | 2020.1.2.517.0 |
| 2020.1.2 | IRIS for Health | 2020.1.2.517.0 |
| 2020.1.2 | HealthShare Health Connect | 2020.1.2.517.0 |
| 2020.1.2 | IRIS Studio | 2021.1.2.517.0 |
| 2021.1.2 | InterSystems IRIS | 2021.1.2.336.0 |
| 2021.1.2 | IRIS for Health | 2021.1.2.336.0 |
| 2021.1.2 | HealthShare Health Connect | 2021.1.2.336.0 |
| 2021.1.2 | IRIS Studio | 2021.1.2.336.0 |

Container images of InterSystems IRIS and IRIS for Health and all corresponding components are available from the InterSystems Container Registry using the following commands for the 2020.1.2 release:

```
docker pull containers.intersystems.com/intersystems/iris:2020.1.2.517.0
docker pull containers.intersystems.com/intersystems/irishealth:2020.1.2.517.0
docker pull containers.intersystems.com/intersystems/iris-arm64:2020.1.2.517.0
docker pull containers.intersystems.com/intersystems/irishealth-arm64:2020.1.2.517.0
```

For a full list of the available images, please refer to the ICR documentation.

Do I understand correctly that Caché Evaluation and IRIS Studio (full kit) are only available through the WRC, which is not available to everyone, but only to supported customers? It seemed to me that these products are designed for everyone. By the way, the same goes for ODBC, JDBC, and CSPGateway.

@Vitaliy.Serdtsev - you could download the Community Edition of InterSystems IRIS from evaluation.InterSystems.com and just install Studio from that if you wish. Also, is there a reason you want the SingleUser version of Caché rather than the InterSystems IRIS Community Edition (which is freely available to everyone)?

It's only 2 weeks since the WRC said that 2018.1.5 would be the last ever version of Caché and Ensemble! But it's really nice to have a new version; hopefully it fixes known bugs.

What you heard from the WRC was in error - 2018.1.6 has been planned for over a year. It definitely fixes known bugs (otherwise why release a maintenance release? ;) )

> you could download the Community Edition of InterSystems IRIS and just install Studio from that if you wish

That's exactly what I'm doing now. But agreed, downloading hundreds of megabytes for the sake of Studio and the drivers is inefficient.
> Also, is there a reason you want the SingleUser version of Caché rather than the InterSystems IRIS Community Edition

IRIS does not interest me in this case. Over the years I have "tormented" the WRC about the bugs I found in Caché (in some way I acted as a free beta tester), until the technical support ended. Some fixes at that time were included only in future versions of Caché, which I can no longer check. I would be interested to check out these fixes on the free version. Maybe there are other reasons why there is still a single-user version of Caché, especially only for those who already have a full-featured version, but for me they are not obvious.

Hi Vitaliy - InterSystems IRIS has been on the market for more than 3 years now and is the platform the general market is developing on. We've worked to make it easy to get InterSystems IRIS and the VS Code dev tooling, and are working to make independent components easily available as well. By far the vast majority of users that want Caché are existing customers - so you can get this easily through the WRC.

@Vitaliy.Serdtsev - you make a fair point about downloading the full Eval kit just for a few pieces of it ... unfortunately that is the only distribution mechanism that I know of currently available for non-supported customers. Thank you for all the time you've spent over the years helping to find ways to make Caché more robust!! It sounds like you've been able to identify quite a few areas of improvement. There was significant effort put into streamlining things and solving known issues in the rebirth of Caché as InterSystems IRIS. You might find it interesting to download the Community Edition (the IRIS version of the SU kit, but much more powerful with fewer restrictions) and see if the bugs you reported are still present there or not. What do you think about that approach?

I got clarification that we still create an SU version of Caché which is available in the WRC because some of our large partners/APs rely on it for people to learn on (which makes sense). Obviously we won't want new prospects playing with it, which is why it is no longer publicly available and instead we provide the InterSystems IRIS Community Edition.

Hi Jeffrey.

> By far the vast majority of users that want Caché are existing customers - so you can get this easily through the WRC.

I have indicated why I already can't get Caché now, even the SU version: "<..>, until the technical support ended. <..> in future versions of Caché, which I can no longer check."

> You might find it interesting to download the Community Edition <..> and see if the bugs you reported are still present there or not. What do you think about that approach?
> <..> we still create an SU version of Caché <..> because some of our large partners/APs rely on it for people to learn on <..>

I have already written that, like your large partners/APs who prefer to stay on Caché, IRIS does not interest me in this case. Why would I check something that I won't be using in the foreseeable future, even if there are fixes there? And yes, as a hobby, I have been using IRIS CE for a long time.
Announcement
Anastasia Dyubaylo · Feb 23, 2022

[Video] Creating Virtual Models with InterSystems IRIS Adaptive Analytics

Hey Community,

Enjoy watching the new video on InterSystems Developers YouTube channel:

⏯ Creating Virtual Models with InterSystems IRIS Adaptive Analytics

See how InterSystems IRIS Adaptive Analytics lets you quickly and easily create virtual data models, helping you make better business decisions. Adaptive Analytics includes built-in AI to improve query speed based on common requests by automatically creating data aggregates. Business users and analysts can access data using their business intelligence tool of choice.

Stay tuned!
Announcement
Anastasia Dyubaylo · Apr 15, 2022

[Video] Creating Pivot Tables with InterSystems IRIS Business Intelligence

Hey Developers,

A new video is already on InterSystems Developers YouTube channel:

⏯ Creating Pivot Tables with InterSystems IRIS Business Intelligence

Learn how to use the Analyzer tool from InterSystems IRIS Business Intelligence to create pivot tables for display on a dashboard. This video shows the central components of the Analyzer tool and demonstrates how to construct a pivot table.

Enjoy and stay tuned!
Announcement
Anastasia Dyubaylo · May 3, 2022

[Video] Embed Business Intelligence into your Applications with InterSystems IRIS

Hey Community,

A new video is already on InterSystems Developers YouTube channel:

⏯ Embed Business Intelligence into your Applications with InterSystems IRIS

Learn how InterSystems IRIS Business Intelligence works, and get an introduction to commonly used dashboards and user interfaces. InterSystems IRIS® Business Intelligence offers a complete set of tools for embedding business intelligence into your applications, making it easier to get insights into your data.

Enjoy and stay tuned!