Article
Sergey Mikhailenko · Oct 20, 2020

InterSystems: Solution for Technical Support and DBMS-Interoperability Administration

In this article, we'll talk about an application that I use every day to monitor applications and integration solutions on the InterSystems IRIS platform and to find errors when they occur. While looking for a solution for logging object changes in InterSystems IRIS, Ensemble, and Caché DBMS, I came across a great article about [logging with macros](https://community.intersystems.com/post/logging-using-macros-intersystems-cach%C3%A9). Inspired by the idea, I forked the project that the article described and adapted it to my specific needs. The resulting solution is implemented as a subclass of the %CSP.Util.Pane panel class, which provides the main command window, the Run button, and command configuration. The application enables viewing and editing global arrays, executing queries (including JDBC and ODBC), emailing search results as zipped XLS files, viewing and editing objects, as well as several simple graphs for system protocols. The apptools-admin application is based on jQuery-UI, UiKit, chart.js, and jsgrid.js. You are welcome to have a look at the [source code](https://openexchange.intersystems.com/package/apptools-admin).

### Installation

All installation methods are described in detail in the repo. However, the simplest approach is to use the package manager command:

```
zpm "install apptools-admin"
[apptools-admin] Reload START
[apptools-admin] Reload SUCCESS
[apptools-admin] Module object refreshed.
[apptools-admin] Validate START
[apptools-admin] Validate SUCCESS
[apptools-admin] Compile START
[apptools-admin] Compile SUCCESS
[apptools-admin] Activate START
[apptools-admin] Configure START
http://hp-msw:52773/apptools/apptools.core.LogInfo.cls
http://hp-msw:52773/apptools/apptools.Tabs.PanelUikitPermissMatrx.cls?autoload=Matrix
[apptools-admin] Configure SUCCESS
[apptools-admin] Activate SUCCESS
```

Open the first suggested link in the browser's address field. In the loaded panel, enter `?` and press the "Execute" button.
The application then displays command examples.

![](https://lh5.googleusercontent.com/Tsh6XG7TAcQJHcxWPFIWU8FK6rPFYhxzTvxtiKvjw_QAKxGicy_sJt0WhTcG8zBXNvkQzLlRQPTN4juAk8vOn3gyUXJREfgPs9rqUoM8)

### Commands

In the panel, you can run utilities, view and edit globals, and execute queries. Each launch is saved in history in the context of the namespace, so it can be found and repeated. In this context, "launch" means starting the execution of commands, and "commands" means everything we enter in the panel. This screenshot shows an example of a view command for the global array `^%apptools.History`:

![](https://lh4.googleusercontent.com/Viy-pXX3dVlNrfUX7SV4Alxb9pM3I-uDKAYgHRVJKP1hK9BuvkMIuP6oPfDNYrmJb-VTl8b12Fy61q63O-nH0FYG2u8zIeux2e-vvl1h)

As you know, automatic error detection and notification can be handled by popular solutions like Prometheus. But often the severity of errors can be assessed visually. Very often I need to quickly get information about bugs in production in all namespaces. For this, I implemented a utility:

`##class(apptools.core.Production).FindAndDrawAllErr`

It runs a daily error search for each namespace that contains running productions and allows you to view these errors with a quick transition to visual tracing. You can run this utility like any other in the apptools panel with the `xec` prefix.

![](https://lh3.googleusercontent.com/0olzck-lNvNLsCwBphoTWLwZdSZJrNpb3qbkul4WuuXD9NnMnwpXofCsay9FxVW8S4iWvZD7L3z-s5UrKpicBofeXUrHsAfeQrnkEU8C-fjXqcdV3dmVGBcZOtgnSuFxWAHI-2Dr)

All useful commands can be saved in the global, in the context of the namespace, to be found and repeated at any time.

![](https://lh4.googleusercontent.com/qBRtuZL_gOFOZD92CQOr0-w-NH8PfpVhIpQZYZENmblg8_jpW-dN_pF7bKiPAcWjkE3Tew6pU0k0NsLelUE1KFcCd4Xhl3bF4SjNdtttUGqNq0_eW6GtTIiP9iBx7bjJ2UnAkrF8)

### Globals

A large part of the apptools-admin application is dedicated to working with globals.
Globals can be viewed in reverse order, as well as by applying a filter to both the reference and the data. The displayed nodes can be edited or deleted.

![](https://lh4.googleusercontent.com/_FfwdGX_A11k4ue8vZ51_3qwuVvTJd8a0UgFqDPsRJICYuUGmcRMFjOxdG1sdHkLJR3Ea7m30BHSpx33wjDDCd5qVvN01ewWUefSfgNaTzA9Z9HK2iFdYmZZ9yLYuTlTSHFAGfqJ)

You can enter the `*` wildcard after the global name to get a list of globals with additional characteristics. A second `*` adds a new field, Allocated MB. A third one adds the Used MB field. This syntax resolves to a union of the two reports, and the asterisks divide a typically long report into manageable sections.

![](https://lh6.googleusercontent.com/1osTx0tWcdQlteMHlFjIw3K6SEjH_3gO6EpTUEsfyPgR3_ns8LR3mMIPQGt31ToANPqx0fB_Fkjh6tc6WeUSwS9_8bYx5UgRjHnOkUF0o0izVz7dBB9eok2skmsCoWZbeB7gk_kY)

When you get a report as a list of globals (as in the screenshot above), you can follow the active links to view the global itself. You can also view and edit the global in the standard way from the Management Portal by clicking `R` or `W` in the `Permission` field. Quite often, writing to a global is used to log the states of variables and objects when debugging a project. I use special macros for this:

`set $$$AppL("MSW","anyText")=$$$AppObJs(%request)`

In this example, `$$$AppL` forms a reference to a global with the `^log` prefix and the date and time in the subscript value. `$$$AppObJs` is the object serialization macro.

![](https://lh4.googleusercontent.com/AOTb2Axpzo_YlYNJacWM4k9RAdO0OmYlkYjnUtvEWM3Djc9VQL6NTuEo1mXR5m5K-PtHtsRVUXNwsd7lwkKjuicOvRCq5j2Mwx5P2eBN8lpyPiFacue4riVFkmakPidY5P5-Iyrw)

You can view the protocol global in the panel, and the object can be displayed in the window fully formatted.

### Query

The function that sees almost as much use as globals is the query. You run this function by entering a statement as a command. For example, you can execute an SQL statement.
![](https://lh3.googleusercontent.com/uPQs2IAuSpdORQXaxy_rlzSFmaB9RxKoiVWRGyLsG_tthobEpxU8uBunOxOTi695q9yDCHr0Xjez8IE-U8HKWKOzpvczDmmgaFrcmHfCpo6hMXsxJCP05LtdeohTiTrrooYuSRyh)

You can also save the result in the global `^mtempSQLGN`.

![](https://lh4.googleusercontent.com/HE44MxizdxfluYQyuEvEs1k7vmSNganzvoxPWTGfYnjJwgYWD7u9fBlCmHUFT2LOPzfLp8vBC23yJyDDYvnMZU7gwIoJjKaVMhv5WQ8Da2-_F-NrNdjpcYyd4V0BakEiRCcrbejZ)

Subsequently, the result saved in the global can be displayed in the panel.

![](https://lh3.googleusercontent.com/yJVXXpPBZMsT-eXI0FCaWs6f7YvWpMH4zBIKAv-ejtpCdAxqK8fSh3YEy_IbF-aPv9ijRLZXcgy026xLLEAS449CtVjzeKiv2coQa9eK7OmyIbCFOs7pLxJa7Trw525xO3DJFsMH)

### Converting Reports to Excel Format

One of the things missing from the standard Management Portal was the ability to execute queries against JDBC or ODBC sources configured in the database, output the results in XLS format, and then archive and send the file via email. To achieve this in the application, you simply select the Upload to Excel file checkbox before executing the command. This feature saves a lot of time in my daily routine and allows me to incorporate ready-made modules into new applications and integration solutions.

![](https://lh4.googleusercontent.com/LhyeRllHAL6q-rBiRNbCAgGOflKF8OZjomLMCjVapJ2qbvhouPS44dIHmbwt4I3-LmADhgaSNPg-u57am73bcdNGTH97rWtdL1FEmXHI5O9eQYyTBINjidT2H8TGIrXIc6kt4MnV)

To enable this functionality, you first need to configure the path for creating files on the server, user credentials, and the mail server. For that, in turn, you need to edit the nodes of the global that holds the program settings, `^%apptools.Setting`.

![](https://lh4.googleusercontent.com/cTDe7pUN7bhHYiweuWbdL0bXsF98UoVCPsyLt84xlp-vCEH5edjvTgxiNfPIZbKRnCGpUk1m8mr0aPKHFMs0JdIDdwqS53wCF_997Z3KrRrBqv6jKCam0zlPkklC_YTxm8gRXhPb)

### Saving Reports to Globals

Quite often, you need to save the results of a report execution to a global.
For this, you can use these procedures:

| | Function |
|------------|----------------------------------------|
| For JDBC: | ##class(apptools.core.sys).SqlToDSN |
| For ODBC: | ##class(apptools.core.sys).SaveGateway |
| For SQL: | ##class(apptools.core.sys).SaveSQL |
| For Query: | ##class(apptools.core.sys).SaveQuery |

For example, the `##class(apptools.core.sys).SaveQuery` function saves the result of the query `%SYSTEM.License:Counts` to the global `^mtempGN`.

![](https://lh5.googleusercontent.com/VTwoteSkKE0MRg00CojD8HCpcK7CNAr8wAVldyVRp3dweYbXampTmhfAkwdIqGdj6H3zkJQ4_qdnCugQkkdpkga1hbXCghSHyZ5pIOufqwu5vcEv9YF3zdE__AwHaPN-5DeK2t9k)

You can then display what you've saved in the panel with the following command:

`result ^mtempGN("%SYSTEM.License:Counts", 0)`

![](https://lh5.googleusercontent.com/KCIekwZw3guq79GWxVdHYdAbWQc4u97-dr-hWT26lYE2oEzUTSkwCE4ki1zvNqRFBg6dKQshSqcy3YSgUbjFKgX3v7Ecpa5Bm_NEQuZhP8Fn8p1gzrmAdTR-Cg9jBeVcNWGukW3a)

### Enhanced Functionality Modules

What else has simplified and automated my work? Changes that enabled me to execute custom modules when forming a query string. I can embed new functionality into the report on the fly, like active links for additional operations on the data. Let's see some examples. We display the query result in the browser using the function:

`##class(apptools.core.LogInfoPane).DrawSQL`

![](https://lh3.googleusercontent.com/2s0tgxOgbOBLy-Pt1e8bx_gKJWNe5YQ6AWLRUCU02TcpTiUscKYeoBEce2qdzCGlbAPzIukRn5EuJ9jwu8eATPCH13zoR8A2fQoAWZfx3RpieD_8rACgikcCZpcIoAIofxlzv2mT)

Let's add the word-marking function `##class(apptools.core.LogInfo).MarkRed` as parameter 5.

![](https://lh6.googleusercontent.com/OyotzU3vmjoXw_MzA6amZbpPlpbL-li71OH5JRw7sAfiVoEsAvi8wSfY588kzdyXTURtGtinj0WvIKDhNLGyy50BD40E7NEQSpNv2Iv85lQisJaMBquvheuXVrMravp6OlNxkcqI)

In the same way, you can supplement the output with additional features, for example, active links or tooltips.
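The idea of plugging a marking function into the report renderer is not ObjectScript-specific. Here is a minimal Python analogy of that pattern, a sketch only: all names below (`draw_table`, `mark_red`) are illustrative and not part of apptools.

```python
from typing import Callable, Iterable, Optional

def mark_red(value: str, word: str) -> str:
    """Wrap every occurrence of `word` in a red <span> (illustrative)."""
    return value.replace(word, f'<span style="color:red">{word}</span>')

def draw_table(rows: Iterable[dict],
               decorator: Optional[Callable[[str], str]] = None) -> str:
    """Render rows as an HTML table, applying `decorator` to each cell.

    The optional decorator plays the role of the marking function
    passed to the report as an extra parameter.
    """
    decorate = decorator or (lambda v: v)
    html = ["<table>"]
    for row in rows:
        cells = "".join(f"<td>{decorate(str(v))}</td>" for v in row.values())
        html.append(f"<tr>{cells}</tr>")
    html.append("</table>")
    return "\n".join(html)

rows = [{"pid": 1184, "state": "error"}, {"pid": 2042, "state": "ok"}]
print(draw_table(rows, decorator=lambda v: mark_red(v, "error")))
```

The renderer never needs to know what the decorator does, which is what makes it easy to bolt on highlighting, active links, or tooltips after the fact.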
The globals editor in this solution is implemented according to the same principle. Here's a list of functions for outputting globals and queries in tabular form:

| | Function |
|--------------------|-------------------------------------------------------------------------------|
| For globals: | ##class(apptools.core.LogInfoPane).DrawArray("^mtempSQLGN") |
| For SQL: | ##class(apptools.core.LogInfoPane).DrawSQL("select * From %SYS.ProcessQuery") |
| For Query: | ##class(apptools.core.LogInfoPane).DrawSQL("query %SYSTEM.License:Counts") |
| For global result: | ##class(apptools.core.LogInfoPane).DrawSQL("result ^mtempSQLGN") |

### Working with the apptools.core.Parameter Class

This link will open the CSP application in a browser in the context of an instance on which apptools-admin is installed:

`http://localhost:52773/apptools/apptools.Form.Exp.cls?NSP=APP&SelClass=apptools.core.Parameter`

Or select the active link in the panel.

![](https://lh4.googleusercontent.com/dt3oFX7Aum3yuJ4lvOtmhUWqm55GyFPPRGbsW7phZWRAnnJkB5xE0CdD3ddFEnS0-5xzSD_ydNe2hXp8Eqk1R39aioTZunY7bymF4EkPfaukm86sfFb-YrQp5Mx_KOyU9sr9cGbR)

The CSP application will be loaded for editing instances of stored classes, in this example `apptools.core.Parameter`.

![](https://lh5.googleusercontent.com/OLXVridH04HDDbufXUp9kZ70h08ptXrRcvDRThemPEira4KANa2ECTVGUJm7nuc3crqAnerWcJMToyipqM4YZCcnwqWRVXbOFKN0ZakCvpqrpdMsQZ0yXtCBgUt8z2U_JOmKnSFF)

![](https://lh4.googleusercontent.com/8JO4JQRssC22LGhab6dZiG6PnD2NRYIQtYsY9zj-Z99IIHxVpekzsxfV3Pw04SgxEqto_JepTXcht6vBu17D834Z_7_Hh-Yr5GmXSOsI5axLf7vIHxUi-tmTwcJH9DFlomurpgCH)

### Creating an apptools.core.Parameter Instance via the Table Navigator

Open this link in a browser in the context of the instance on which apptools-admin is installed:

`http://localhost:52773/apptools/apptools.Form.Exp.cls?panel=AccordionExp&NSP=APP`

Or select the active link in the panel.
![](https://lh3.googleusercontent.com/5me2dJ5aItW6iixR4mBfVRMJHDZfXRnq_pkGrlmCtCTeKzsRx0MbopN0YcLdvEsoWs46Aqw_0fVJGk6L88AaFNWeajDgggpwYawRTbdUIhHRxWo7pFiv3dqj_JdO5wgmm_uZTL5_)

The CSP application will be loaded for navigating the stored classes with the ability to edit them.

![](https://lh3.googleusercontent.com/TjtNmzFRS8fTOpsRU0XAXYCHYHNpenI9H1WsEPEtVz7bT2jhakKjsfdJ_inLXX-cBsu5PlKgSJjIS3VoHXD6dqzEm0PDrhy2eOPFT-BoHx6ToPB6Jio21lN1bloGk1xtdlRR7Gd-)

### An Example of a Simple CSP Application

Open this link in a browser in the context of the instance on which apptools-admin is installed:

`http://localhost:52773/apptools/apptools.Tabs.PanelSample.cls`

Or select the active link in the panel.

![](https://lh3.googleusercontent.com/mqMOjz96bd2YmJdIQtdVYsZhYDFW73BMJtjQh2q1wzzKYrzE39kWGTd2-M0kpBQlxIT2bkv2V7o7ieIlyV7aU8XNF29oI3spIoLGJJAHOKppLrTVrrR2XwOJHAQgLXM3TEQPWGGj)

This example also shows the ability to edit instances of the `apptools.core.Parameter` class.

![](https://lh4.googleusercontent.com/sFXN0QJJb1UuyNJPhykvDOJeOEWC3RrO7oV1dqYixKnPlgEDFJdBqj5bORhaXlftxvngbu-UdgCqvG2UEr7_hKUhjGtJk6jrDNgc43f7DwWCmuDnFubMuIcavHAh7Z1--R72Pf_Q)

### Graphs

To visualize database growth, the application offers a page that displays a graph of the monthly measured database size. This graph is derived retrospectively from the "Expand" records in the IRIS messages.log (cconsole.log for Caché), starting from the current day. The program traverses the protocol, finds the database expansion records, and subtracts the incremental megabytes from the current database size. The result is a graph of database growth. For example, the screenshot below shows such a graph for InterSystems IRIS, built from the protocol file.

![](https://lh4.googleusercontent.com/EbO0ZVyJwj1EgKF9SR6BpKPyBERj3cNgK4ckrDrzVWVu35LUlQAINvsbArTJ946XQWBhDUzS_dm4m3ize-RM7EjyRLQkesaNvNQOvK8FuUGwKx_8gqYlCMvmC2Xy2ih0xgZKx_q-)

Another example below: a graph of events in the system based on the system protocol messages.log (cconsole.log).
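The retrospective computation described above (walking the expansion records backwards and subtracting each increment from the current size) can be sketched in a few lines. This is only an illustration of the arithmetic: the log format, function name, and numbers are made up, not the apptools implementation.

```python
from datetime import date

def growth_history(current_size_mb: float, expands: list) -> list:
    """Reconstruct past database sizes from expansion records.

    `expands` is a list of (day, added_mb) tuples, newest first.
    Walking backwards, the size *before* each expansion is the
    running size minus the megabytes that the expansion added.
    """
    history = [(date.today(), current_size_mb)]
    size = current_size_mb
    for day, added_mb in expands:
        size -= added_mb
        history.append((day, size))
    return history

# Illustrative data: the database is 500 MB today and was expanded twice.
expands = [(date(2020, 10, 1), 50.0), (date(2020, 9, 1), 30.0)]
for day, size in growth_history(500.0, expands):
    print(day, size)
```

Plotting the resulting (day, size) pairs gives exactly the kind of growth curve shown in the screenshots.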
![](https://lh6.googleusercontent.com/s3Uz-F88rFnBWSCS5_m4vtCQL9kdS2dEL101oWtlfmmNpfjF1PgtPppI2GC1g3syXIr39X1dUBO0O-gC5mDXcT1k6xOkXrz19TeRqpRAWrNG_FL6kMoyAZqS2N7mIDjG2BKpPy_j)

### Summary

The application discussed in this article was designed to help me perform my daily tasks. It includes a set of modules that you can use as building blocks for a custom administrator tool. I would be very glad if you found it useful in your work. You are welcome to add your wishes and suggestions as tasks in the project [repo](https://github.com/SergeyMi37/apptools-admin).
Announcement
Anastasia Dyubaylo · Dec 4, 2020

New Video: Building a REST API with InterSystems IRIS

Hi Community, See how to build a REST API with InterSystems IRIS in just five minutes, leveraging Docker containers: ⏯ Building a REST API with InterSystems IRIS Subscribe to InterSystems Developers YouTube and stay tuned!
Announcement
Anastasia Dyubaylo · Dec 11, 2020

New Video: The Freedom of Visualization Choice: InterSystems BI

Hi Community, Please welcome the new video on InterSystems Developers YouTube: ⏯ The Freedom of Visualization Choice: InterSystems BI Find out how three visualization tools available to InterSystems customers — InterSystems IRIS Business Intelligence, Microsoft Power BI, and InterSystems Reports — can be used to answer one data question. See a demonstration of each and learn about the benefits of each tool and what makes them different. You can find additional materials for this video in this InterSystems Online Learning Course. Enjoy watching this video! 👍🏼
Announcement
Anastasia Dyubaylo · Jan 7, 2021

InterSystems Multi-Model Contest Kick-off Webinar

Hi Developers, We're pleased to invite all developers to the upcoming InterSystems Multi-Model Contest kick-off webinar! In this webinar, we will demonstrate the APIs for each data model in action.

Date & Time: Monday, January 11 — 10:00 AM EDT

Speakers:
🗣 @Benjamin.DeBoe, InterSystems Product Manager
🗣 @Robert.Kuszewski, InterSystems Product Manager - Developer Experience
🗣 @Evgeny.Shvarov, InterSystems Developer Ecosystem Manager

We will be happy to talk to you at our webinar! Don't miss it! ➡️ JOIN THE WEBINAR HERE

Hey Developers, The recording of this webinar is available on InterSystems Developers YouTube! Please welcome: ⏯ InterSystems Multi-Model Contest Kick-off Webinar Big applause to our speakers! 👏🏼 And thanks to everyone for joining our webinar!
Article
Mikhail Khomenko · Jan 21, 2021

InterSystems Kubernetes Operator Deep Dive: Part 2

In the previous article, we looked at one way to create a custom operator that manages the IRIS instance state. This time, we're going to take a look at a ready-to-go operator, InterSystems Kubernetes Operator (IKO). The official documentation will help us navigate the deployment steps.

### Prerequisites

To deploy IRIS, we need a Kubernetes cluster. In this example, we'll use Google Kubernetes Engine (GKE), so we'll need a Google account, a Google Cloud project, and the gcloud and kubectl command-line utilities. You'll also need to install the Helm3 utility:

```
$ helm version
version.BuildInfo{Version:"v3.3.4"...}
```

Note: Be aware that on the Google free tier, not all resources are free.

It doesn't matter in our case which type of GKE we use: zonal, regional, or private. After we create one, let's connect to the cluster. We've created a cluster called "iko" in a project called "iko-project". Use your own project name in place of "iko-project" in the text below. This command adds the cluster to our local clusters configuration:

```
$ gcloud container clusters get-credentials iko --zone europe-west2-b --project iko-project
```

### Install IKO

Let's deploy IKO into our newly created cluster. The recommended way to install packages to Kubernetes is Helm. IKO is no exception and can be installed as a Helm chart. Choose Helm version 3, as it's more secure. Download IKO from the WRC page InterSystems Components, creating a free developer account if you do not already have one. At the time of writing, the latest version is 2.0.223.0. Download the archive and unpack it. We will refer to the unpacked directory as the current directory. The chart is in the chart/iris-operator directory.
If you just deploy this chart, you will receive an error when describing the deployed pods:

```
Failed to pull image "intersystems/iris-operator:2.0.0.223.0": rpc error: code = Unknown desc = Error response from daemon: pull access denied for intersystems/iris-operator, repository does not exist or may require 'docker login'.
```

So, you need to make the IKO image available from the Kubernetes cluster. Let's push this image into Google Container Registry first:

```
$ docker load -i image/iris_operator-2.0.0.223.0-docker.tgz
$ docker tag intersystems/iris-operator:2.0.0.223.0 eu.gcr.io/iko-project/iris-operator:2.0.0.223.0
$ docker push eu.gcr.io/iko-project/iris-operator:2.0.0.223.0
```

After that, we need to direct IKO to use this new image by editing the Helm values file:

```
$ vi chart/iris-operator/values.yaml
...
operator:
  registry: eu.gcr.io/iko-project
...
```

Now, we're ready to deploy IKO into GKE:

```
$ helm upgrade iko chart/iris-operator --install --namespace iko --create-namespace
$ helm ls --all-namespaces --output json | jq '.[].status'
"deployed"
$ kubectl -n iko get pods # Should be Running with Readiness 1/1
```

Let's look at the IKO logs:

```
$ kubectl -n iko logs -f --tail 100 -l app=iris-operator
…
I1212 17:10:38.119363 1 secure_serving.go:116] Serving securely on [::]:8443
I1212 17:10:38.122306 1 operator.go:77] Starting Iris operator
```

The Custom Resource Definition irisclusters.intersystems.com was created during IKO deployment. You can look at the API schema it supports, although it is quite long:

```
$ kubectl get crd irisclusters.intersystems.com -oyaml | less
```

One way to look at all available parameters is to use the "explain" command:

```
$ kubectl explain irisclusters.intersystems.com
```

Another way is using jq.
For instance, viewing all top-level configuration settings:

```
$ kubectl get crd irisclusters.intersystems.com -ojson | jq '.spec.versions[].schema.openAPIV3Schema.properties.spec.properties | to_entries[] | .key'
"configSource"
"licenseKeySecret"
"passwordHash"
"serviceTemplate"
"topology"
```

Using jq in this way (viewing the configuration fields and their properties), we can find out the following configuration structure:

```
configSource
  name
licenseKeySecret
  name
passwordHash
serviceTemplate
  metadata
    annotations
  spec
    clusterIP
    externalIPs
    externalTrafficPolicy
    healthCheckNodePort
    loadBalancerIP
    loadBalancerSourceRanges
    ports
    type
topology
  arbiter
    image
    podTemplate
      controller
        annotations
      metadata
        annotations
      spec
        affinity
          nodeAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAntiAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
        args
        env
        imagePullSecrets
        initContainers
        lifecycle
        livenessProbe
        nodeSelector
        priority
        priorityClassName
        readinessProbe
        resources
        schedulerName
        securityContext
        serviceAccountName
        tolerations
    preferredZones
    updateStrategy
      rollingUpdate
      type
  compute
    image
    podTemplate
      (same structure as for arbiter; resources additionally has limits and requests)
    preferredZones
    replicas
    storage
      accessModes
      dataSource
        apiGroup
        kind
        name
      resources
        limits
        requests
      selector
      storageClassName
      volumeMode
      volumeName
    updateStrategy
      rollingUpdate
      type
  data
    image
    mirrored
    podTemplate
      (same structure as for compute)
    preferredZones
    shards
    storage
      (same structure as for compute)
    updateStrategy
      rollingUpdate
      type
```

There are so many settings, but you don't need to set them all; the defaults are suitable. You can see configuration examples in the iris_operator-2.0.0.223.0/samples directory. To run a minimally viable IRIS, we need to specify only a few settings, such as the IRIS (or IRIS-based application) version, storage size, and license key. A note about the license key: we'll use a Community IRIS, so we don't need a key. We cannot just omit the setting, but we can create a secret containing a pseudo-license.
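The jq drill-down shown above can also be reproduced in a few lines of Python once the CRD is exported as JSON. The schema below is a mock fragment shaped like the `kubectl get crd ... -ojson` output, for illustration only, not the full IKO CRD:

```python
import json

# Mock fragment shaped like `kubectl get crd ... -ojson` output.
crd_json = json.loads("""
{"spec": {"versions": [{"schema": {"openAPIV3Schema": {"properties": {
  "spec": {"properties": {
    "configSource": {}, "licenseKeySecret": {}, "passwordHash": {},
    "serviceTemplate": {}, "topology": {}
  }}
}}}}]}}
""")

def top_level_settings(crd: dict) -> list:
    """List top-level spec properties, like the jq one-liner in the text."""
    keys = []
    for version in crd["spec"]["versions"]:
        props = version["schema"]["openAPIV3Schema"]["properties"]
        keys.extend(props["spec"]["properties"].keys())
    return sorted(keys)

print(top_level_settings(crd_json))
```

Descending further into nested `properties` keys in the same way yields the full configuration tree listed above.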
License secret generation is simple:

```
$ touch iris.key # remember that a real license file is used in most cases
$ kubectl create secret generic iris-license --from-file=iris.key
```

An IRIS description understandable by IKO is:

```
$ cat iko.yaml
apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: iko-test
spec:
  passwordHash: '' # use a default password SYS
  licenseKeySecret:
    name: iris-license # the Secret created above
  topology:
    data:
      image: intersystemsdc/iris-community:2020.4.0.524.0-zpm # take a Community IRIS
      storage:
        resources:
          requests:
            storage: 10Gi
```

Send this manifest into the cluster:

```
$ kubectl apply -f iko.yaml
$ kubectl get iriscluster
NAME       DATA   COMPUTE   MIRRORED   STATUS     AGE
iko-test   1                           Creating   76s
$ kubectl -n iko logs -f --tail 100 -l app=iris-operator
db.Spec.Topology.Data.Shards = 0
I1219 15:55:57.989032 1 iriscluster.go:39] Sync/Add/Update for IrisCluster default/iko-test
I1219 15:55:58.016618 1 service.go:19] Creating Service default/iris-svc.
I1219 15:55:58.051228 1 service.go:19] Creating Service default/iko-test.
I1219 15:55:58.216363 1 statefulset.go:22] Creating StatefulSet default/iko-test-data.
```

We see that some resources (a Service and a StatefulSet) are going to be created in the "default" namespace. In a few seconds, you should see an IRIS pod there:

```
$ kubectl get po -w
NAME              READY   STATUS              RESTARTS   AGE
iko-test-data-0   0/1     ContainerCreating   0          2m10s
```

Wait a little until the IRIS image is pulled, that is, until Status becomes Running and Ready becomes 1/1. You can check what type of disk was created:

```
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS   REASON   AGE
pvc-b356a943-219e-4685-9140-d911dea4c106   10Gi       RWO            Delete           Bound    default/iris-data-iko-test-data-0   standard                5m
```

The reclaim policy "Delete" means that when you remove the Persistent Volume, the GCE persistent disk will also be removed.
There is another policy, "Retain", which allows Google persistent disks to survive the deletion of Kubernetes Persistent Volumes. You can define a custom StorageClass to use this policy and other non-default settings. An example is present in IKO's documentation: Create a storage class for persistent storage.

Now, let's check our newly created IRIS. In general, traffic to pods goes through Services or Ingresses. By default, IKO creates a service of the ClusterIP type with a name taken from the iko.yaml metadata.name field:

```
$ kubectl get svc iko-test
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)              AGE
iko-test   ClusterIP   10.40.6.33   <none>        1972/TCP,52773/TCP   14m
```

We can call this service using port-forward:

```
$ kubectl port-forward svc/iko-test 52773
```

Navigate a browser to http://localhost:52773/csp/sys/UtilHome.csp and type _system/SYS. You should see the familiar IRIS user interface (UI).

### Custom Application

Let's replace a pure IRIS with an IRIS-based application. First, download the COVID-19 application. We won't consider a complete, continuous deployment here, just the minimal steps:

```
$ git clone https://github.com/intersystems-community/covid-19.git
$ cd covid-19
$ docker build --no-cache -t covid-19:v1 .
```

As our Kubernetes is running in the Google cloud, let's use Google Container Registry as image storage. We assume here that you have a Google Cloud account allowing you to push images. Use your own project name in the commands below:

```
$ docker tag covid-19:v1 eu.gcr.io/iko-project/covid-19:v1
$ docker push eu.gcr.io/iko-project/covid-19:v1
```

Let's go to the directory with iko.yaml, change the image there, and redeploy it. You should consider removing the previous example first:

```
$ cat iko.yaml
...
    data:
      image: eu.gcr.io/iko-project/covid-19:v1
...
$ kubectl delete -f iko.yaml
$ kubectl -n iko delete deploy -l app=iris-operator
$ kubectl delete pvc iris-data-iko-test-data-0
$ kubectl apply -f iko.yaml
```

This recreates the IRIS pod with the new image.
This time, let's provide external access via an Ingress resource. To make it work, we should deploy an Ingress controller (we choose nginx for its flexibility). To provide traffic encryption (TLS), we will also add yet another component, cert-manager. To install both of these components, we use Helm 3:

```
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm upgrade nginx-ingress \
  --namespace nginx-ingress \
  ingress-nginx/ingress-nginx \
  --install \
  --atomic \
  --version 3.7.0 \
  --create-namespace
```

Look at the nginx service IP (it's dynamic, but you can make it static):

```
$ kubectl -n nginx-ingress get svc
NAME                                     TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
nginx-ingress-ingress-nginx-controller   LoadBalancer   10.40.0.103   xx.xx.xx.xx   80:32032/TCP,443:32374/TCP   88s
```

Note: your IP will differ. Go to your domain registrar and create a domain name for this IP. For instance, create an A-record:

```
covid19.myardyas.club = xx.xx.xx.xx
```

Some time will pass until this new record propagates across DNS servers.
The end result should be similar to:

```
$ dig +short covid19.myardyas.club
xx.xx.xx.xx
```

Having deployed the Ingress controller, we now need to create an Ingress resource itself (use your own domain name):

```
$ cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: iko-test
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    certmanager.k8s.io/cluster-issuer: lets-encrypt-production # Cert-manager will be deployed below
spec:
  rules:
  - host: covid19.myardyas.club
    http:
      paths:
      - backend:
          serviceName: iko-test
          servicePort: 52773
        path: /
  tls:
  - hosts:
    - covid19.myardyas.club
    secretName: covid19.myardyas.club
$ kubectl apply -f ingress.yaml
```

After a minute or so, IRIS should be available at http://covid19.myardyas.club/csp/sys/UtilHome.csp (remember to use your domain name) and the COVID-19 application at http://covid19.myardyas.club/dsw/index.html (choose the namespace IRISAPP).

Note: Above, we've exposed the HTTP IRIS port. If you need to expose the TCP super-server port (1972 or 51773) via nginx, you can read the instructions at Exposing TCP and UDP services.

### Add Traffic Encryption

The last step is to add traffic encryption. Let's deploy cert-manager for that:

```
$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/v0.10.0/deploy/manifests/00-crds.yaml
$ helm upgrade cert-manager \
  --namespace cert-manager \
  jetstack/cert-manager \
  --install \
  --atomic \
  --version v0.10.0 \
  --create-namespace
$ cat lets-encrypt-production.yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: lets-encrypt-production
spec:
  acme:
    # Set your email. Let's Encrypt will send notifications about certificate expiration
    email: mvhoma@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: lets-encrypt-production
    solvers:
    - http01:
        ingress:
          class: nginx
$ kubectl apply -f lets-encrypt-production.yaml
```

Wait a few minutes until cert-manager notices the IRIS application ingress and goes to Let's Encrypt for a certificate. You can observe the Order and Certificate resources in progress:

```
$ kubectl get order
NAME                               STATE   AGE
covid19.myardyas.club-3970469834   valid   52s
$ kubectl get certificate
NAME                    READY   SECRET                  AGE
covid19.myardyas.club   True    covid19.myardyas.club   73s
```

This time, you can visit a more secure version of the site: https://covid19.myardyas.club/dsw/index.html.

### About the Native Google Ingress Controller and Managed Certificates

Google supports its own ingress controller, GCE, which you can use in place of the nginx controller. However, it has some drawbacks, for instance, a lack of rewrite-rules support, at least at the time of writing. You can also use Google-managed certificates in place of cert-manager. It's handy, but the initial retrieval of a certificate and any update of Ingress resources (like a new path) causes tangible downtime. Also, Google-managed certificates work only with GCE, not with nginx, as noted in Managed Certificates.

### Next Steps

We've deployed an IRIS-based application into the GKE cluster. To expose it to the Internet, we've added an Ingress controller and a certificate manager. We've tried out the IrisCluster configuration to highlight that setting up IKO is simple. You can read about more settings in the Using the InterSystems Kubernetes Operator documentation. A single data server is good, but the real fun begins when we add ECP, mirroring, and monitoring, which are also available with IKO. Stay tuned and read the upcoming article in our Kubernetes operator series to take a closer look at mirroring.
Article
Yuri Marx Pereira Gomes · Feb 18, 2021

Do security scan in your InterSystems IRIS container

There are many options for doing a full security scan of your Docker images; the most popular is the Anchore community edition. Anchore uses the main public vulnerability databases available, including CVE. Installing Anchore is very easy (source: https://engine.anchore.io/docs/quickstart/); follow these steps:

1. Create a folder in your OS and download the Anchore docker-compose file to the created folder:

```
curl -O https://engine.anchore.io/docs/quickstart/docker-compose.yaml
```

2. Run:

```
docker-compose up -d
```

3. Check Docker service availability (services with "up" status):

```
docker-compose ps
```

4. Check Anchore service availability (services up, with the product version in the last row):

```
docker-compose exec api anchore-cli system status
```

5. Now wait for the vulnerability database sync (about 30 to 120 minutes, depending on the internet speed). You can check the progress by running this command:

```
docker-compose exec api anchore-cli system feeds list
```

When all feeds are synced, you can begin to use Anchore. Doing a security scan is simple, but you need to know the name of the Docker image to be scanned. I will scan the latest InterSystems IRIS Community image (replace the image name with your own as needed):

```
# docker-compose exec api anchore-cli image add store/intersystems/iris-community:2020.4.0.524.0
# docker-compose exec api anchore-cli image wait store/intersystems/iris-community:2020.4.0.524.0
```

You will see this message until the analysis ends:

```
Status: analyzing
Waiting 5.0 seconds for next retry.
```
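The add/wait/check sequence lends itself to scripting, for example from a CI job. A small sketch that only builds the command lines as shown in this article (nothing here talks to a live Anchore engine, and the helper name is illustrative):

```python
def scan_commands(image: str) -> list:
    """Build the docker-compose/anchore-cli command lines for one image scan."""
    base = "docker-compose exec api anchore-cli image"
    return [
        f"{base} add {image}",                                       # register the image
        f"{base} wait {image}",                                      # block until analyzed
        f"{base} vuln {image} all",                                  # list vulnerabilities
        f"docker-compose exec api anchore-cli evaluate check {image}",  # pass/fail verdict
    ]

for cmd in scan_commands("store/intersystems/iris-community:2020.4.0.524.0"):
    print(cmd)
```

Piping each string through `subprocess.run(cmd, shell=True, check=True)` would turn this into a one-shot gate that fails the build when the final evaluate step returns a non-zero exit code.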
With the image added, you can see the analysis status/content:

```
docker-compose exec api anchore-cli image content store/intersystems/iris-community:2020.4.0.524.0

os: available
files: available
npm: available
gem: available
python: available
java: available
binary: available
go: available
malware: available
```

Once the status is analyzed, it is possible to list the vulnerabilities found:

```
docker-compose exec api anchore-cli image vuln store/intersystems/iris-community:2020.4.0.524.0 all
```

Finally, to find out whether your image passed the security scan, run:

```
docker-compose exec api anchore-cli evaluate check store/intersystems/iris-community:2020.4.0.524.0
```

See more details in this tutorial: https://anchore.com/blog/docker-image-security-in-5-minutes-or-less/. Enjoy!

Thanks @YURI MARX GOMES

Well explained
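If you want to script the wait loop above rather than rely on `image wait`, the polling logic is easy to reproduce. Below is a minimal sketch in Python; `wait_for_analysis` assumes the compose stack from the quickstart is running and `docker-compose` is on your PATH, while `parse_status` is pure string handling and needs neither:

```python
import re
import subprocess
import time

def parse_status(output: str) -> str:
    # Pull the status word out of lines like "Status: analyzing"
    # or "Analysis Status: analyzed" in anchore-cli output.
    match = re.search(r"Status:\s*(\w+)", output)
    return match.group(1) if match else "unknown"

def wait_for_analysis(image: str, interval: float = 5.0) -> None:
    # Poll "anchore-cli image get" until the image reaches the
    # analyzed state, mirroring what "image wait" does for us.
    while True:
        result = subprocess.run(
            ["docker-compose", "exec", "-T", "api",
             "anchore-cli", "image", "get", image],
            capture_output=True, text=True)
        if parse_status(result.stdout) == "analyzed":
            return
        time.sleep(interval)
```

This is just the retry loop the CLI prints ("Waiting 5.0 seconds for next retry."), made explicit so you can hook it into your own CI scripts.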
Announcement
Anastasia Dyubaylo · May 30, 2022

InterSystems Grand Prix Contest 2022: Voting time!

Hey Developers,

Let the voting week begin! It's time to cast your votes for the best applications in the Grand Prix Programming Contest!

🔥 You decide: VOTE HERE 🔥

How to vote? Details below.

**Experts nomination:** InterSystems' experienced jury will choose the best apps to nominate for the prizes in the Experts Nomination. Please welcome our experts:

⭐️ @Alexander.Woodhead, Technical Specialist
⭐️ @Steven.LeBlanc, Product Specialist
⭐️ @Alexander.Koblov, Senior Support Specialist
⭐️ @Daniel.Kutac, Senior Sales Engineer
⭐️ @Eduard.Lebedyuk, Senior Cloud Engineer
⭐️ @Steve.Pisani, Senior Solution Architect
⭐️ @Timothy.Leavitt, Development Manager
⭐️ @Thomas.Dyar, Product Specialist
⭐️ @Andreas.Dieckow, Product Manager
⭐️ @Benjamin.DeBoe, Product Manager
⭐️ @Carmen.Logue, Product Manager
⭐️ @Luca.Ravazzolo, Product Manager
⭐️ @Stefan.Wittmann, Product Manager
⭐️ @Raj.Singh5479, Product Manager
⭐️ @Robert.Kuszewski, Product Manager
⭐️ @Jeffrey.Fried, Director of Product Management
⭐️ @Dean.Andrews2971, Head of Developer Relations
⭐️ @Evgeny.Shvarov, Developer Ecosystem Manager

**Community nomination:** For each user, the higher score is selected from the two categories below:

| Conditions | 1st place | 2nd place | 3rd place |
|---|---|---|---|
| If you have an article posted on DC and an app uploaded to Open Exchange (OEX) | 9 | 6 | 3 |
| If you have at least 1 article posted on DC or 1 app uploaded to OEX | 6 | 4 | 2 |
| If you make any valid contribution to DC (posted a comment/question, etc.) | 3 | 2 | 1 |

| Level | 1st place | 2nd place | 3rd place |
|---|---|---|---|
| VIP Global Masters level or ISC Product Managers | 15 | 10 | 5 |
| Ambassador GM level | 12 | 8 | 4 |
| Expert GM level or DC Moderators | 9 | 6 | 3 |
| Specialist GM level | 6 | 4 | 2 |
| Advocate GM level or ISC Employees | 3 | 2 | 1 |

**Blind vote!** The number of votes for each app will be hidden from everyone. Once a day we will publish the leaderboard in the comments to this post. The order of projects on the contest page will be as follows: the earlier an application was submitted to the competition, the higher it will be in the list.

P.S.
Don't forget to subscribe to this post (click on the bell icon) to be notified of new comments.

To take part in the voting, you need to:

1. Sign in to Open Exchange – DC credentials will work.
2. Make any valid contribution to the Developer Community – answer or ask questions, write an article, contribute applications on Open Exchange – and you'll be able to vote. Check this post on the options to make helpful contributions to the Developer Community.

If you change your mind, cancel your choice and give your vote to another application! Support the application you like!

Note: contest participants are allowed to fix bugs and make improvements to their applications during the voting week, so don't miss out: subscribe to application releases!

So! After the first day of voting we have:

**Expert Nomination, Top 3**

1. Docker InterSystems Extension by @Dmitry.Maslennikov
2. webterminal-vscode by @John.Murray
3. Water Conditions in Europe by @Evgeniy.Potapov

➡️ Voting is here.

**Community Nomination, Top 3**

1. webterminal-vscode by @John.Murray
2. Docker InterSystems Extension by @Dmitry.Maslennikov
3. Disease Predictor by @Yuri.Gomes

➡️ Voting is here.

Experts, we are waiting for your votes! 🔥 Participants, improve & promote your solutions!

Seems to be some confusion about whose (or which) app was first in the Community section after the first day:

Here are the results after 2 days of voting:

**Expert Nomination, Top 3**

1. Water Conditions in Europe by @Evgeniy.Potapov
2. Docker InterSystems Extension by @Dmitry Maslennikov
3. test-dat by @Oliver.Wilms

➡️ Voting is here.

**Community Nomination, Top 3**

1. iris-megazord by @José.Pereira
2. webterminal-vscode by @John Murray
3. Docker InterSystems Extension by @Dmitry Maslennikov

➡️ Voting is here.

So, the voting continues. Please support the application you like!

Devs!
Here are the top 5 for now:

**Expert Nomination, Top 5**

1. Water Conditions in Europe by @Evgeniy.Potapov
2. CloudStudio by @Sean.Connelly
3. iris-fhir-client by @Muhammad.Waseem
4. iris-megazord by @José.Pereira
5. Docker InterSystems Extension by @Dmitry Maslennikov

➡️ Voting is here.

**Community Nomination, Top 5**

1. iris-megazord by @José.Pereira
2. webterminal-vscode by @John.Murray
3. iris-fhir-client by @Muhammad.Waseem
4. CloudStudio by @Sean.Connelly
5. Docker InterSystems Extension by @Dmitry Maslennikov

➡️ Voting is here.

Who is gonna be the winner?! 😱

Only 1 day left till the end of the voting period. Support our participants with your votes!

Last day of voting! ⌛ Don't miss the opportunity to support the application you like! Our contestants need your votes!

📢 Developers! Last call! Only a few hours left until the end of voting! Cast your votes for the applications you like!
Question
Ephraim Malane · Apr 25, 2022

InterSystems IRIS Backup mirror member is very slow

Hi All,

When I log into the backup mirror member, it becomes very slow to load and navigate. I checked the messages log and saw error messages about database mirror latency and a database disk issue, which looks fine to me when I check. Please have a look at the screenshots below and advise what the issue could be.

When I run `df -h` through SSH: 200G is the volume size, 194G is used space, 6.5G is available space, and 97% is %Use.

Database view from the Management Portal for the same DB volume:

Disk space error message from the messages log:

The error below sometimes shows when the system tries to load:

Hello Ephraim, this seems like a better question for InterSystems Support / WRC, who can look at the full log and system to try to determine what's happening. From a file-system disk space perspective, the database is almost filling the disk, but there is free space WITHIN the database from the IRIS perspective. Regarding the performance side of things, what did you check such that it looks fine? CPU / memory / disk? When did the warnings start, and are they ongoing? What's being done on the system; anything abnormal?

Ephraim, does the `top` command show a large swap space? I have seen backup servers with smaller physical memory get their programs into swap space, which, as you know, is extremely slow. John

Hello Ephraim, your database is 193GB and has 141GB of free space (73% free). There is 6.5GB of space on the OS file system. I recommend you:
- Compact globals in the database
- Compact free space in the database
- Return free space in the database

All of these can be done after running the following:

```
USER>zn "%sys"
%SYS>do ^DATABASE
```
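As an aside, the `df -h` arithmetic above (194G used of a 200G volume ≈ 97%) can be reproduced programmatically if you want to monitor the mount yourself; a minimal sketch using only the Python standard library, where the path argument is whatever your database volume is mounted on:

```python
import shutil

def usage_percent(path: str) -> float:
    # Mirror the %Use column of `df -h` for the filesystem holding `path`.
    usage = shutil.disk_usage(path)
    return round(usage.used / usage.total * 100, 1)

# On the database mount in the situation described above,
# this would report roughly 97.
print(usage_percent("/"))
```

Note this reports filesystem usage, not the free space inside the IRIS database, which (as pointed out above) is a separate figure.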
Announcement
Anastasia Dyubaylo · May 16, 2022

[WEBINAR] What's New in InterSystems IRIS 2022.1

Hey Developers,

We're pleased to invite you to the upcoming InterSystems webinar "What's New in InterSystems IRIS 2022.1"!

Date: Tuesday, May 24, 2022
Time: 11:00 AM EDT

In this webinar, we'll highlight some of the new capabilities of InterSystems IRIS and InterSystems IRIS for Health, including:

- Full support for application development using Python
- Speed and scalability enhancements, including Adaptive SQL and SQL Loader
- Support for Apache Kafka for real-time use cases
- New cloud services, and expanded support for cloud adapters and Kubernetes
- Support for new and updated operating systems and client frameworks

There will be time for Q&A at the end.

Speakers:
🗣 @Benjamin.DeBoe, Product Manager, InterSystems
🗣 @Robert.Kuszewski, Product Manager, Developer Experience, InterSystems

➡️ Register for the webinar today!
Article
Lucas Enard · May 3, 2022

Formation on InterSystems' interoperability framework using ONLY Python

This **formation**, accessible on [my GitHub](https://github.com/LucasEnard/formation-template-python), will cover, in half an hour, how to read and write csv and txt files, insert into and read from the **IRIS database** and a **remote database** using Postgres, and how to use a Flask API, all of that with the interoperability framework using ONLY Python, following the [PEP8 convention](https://peps.python.org/pep-0008/).

This formation can mostly be done using copy-paste and will guide you through every step before challenging you with a global exercise.

We are available to answer any question or doubt in the comments of this post, on Teams, or even by mail at lucas.enard@intersystems.com. We would really appreciate any feedback and remarks regarding any and every aspect of this formation.

# 1. **Ensemble / Interoperability Formation**

The goal of this formation is to learn InterSystems' interoperability framework using Python, and in particular the use of:

* Productions
* Messages
* Business Operations
* Adapters
* Business Processes
* Business Services
* REST Services and Operations

**TABLE OF CONTENTS:**

- [1. **Ensemble / Interoperability Formation**](#1-ensemble--interoperability-formation)
- [2. Framework](#2-framework)
- [3. Adapting the framework](#3-adapting-the-framework)
- [4. Prerequisites](#4-prerequisites)
- [5. Setting up](#5-setting-up)
  - [5.1. Docker containers](#51-docker-containers)
  - [5.2. Management Portal and VSCode](#52-management-portal-and-vscode)
  - [5.3. Having the folder open inside the container](#53-having-the-folder-open-inside-the-container)
  - [5.4. Register components](#54-register-components)
  - [5.5. The solution](#55-the-solution)
- [6. Productions](#6-productions)
- [7. Business Operations](#7-business-operations)
  - [7.1. Creating our object classes](#71-creating-our-object-classes)
  - [7.2. Creating our message classes](#72-creating-our-message-classes)
  - [7.3. Creating our operations](#73-creating-our-operations)
  - [7.4. Adding the operations to the production](#74-adding-the-operations-to-the-production)
  - [7.5. Testing](#75-testing)
- [8. Business Processes](#8-business-processes)
  - [8.1. Simple BP](#81-simple-bp)
  - [8.2. Adding the process to the production](#82-adding-the-process-to-the-production)
  - [8.3. Testing](#83-testing)
- [9. Business Service](#9-business-service)
  - [9.1. Simple BS](#91-simple-bs)
  - [9.2. Adding the service to the production](#92-adding-the-service-to-the-production)
  - [9.3. Testing](#93-testing)
- [10. Getting access to an extern database using a db-api](#10-getting-access-to-an-extern-database-using-a-db-api)
  - [10.1. Prerequisites](#101-prerequisites)
  - [10.2. Creating our new operation](#102-creating-our-new-operation)
  - [10.3. Configuring the production](#103-configuring-the-production)
  - [10.4. Testing](#104-testing)
  - [10.5. Exercise](#105-exercise)
  - [10.6. Solution](#106-solution)
- [11. REST service](#11-rest-service)
  - [11.1. Prerequisites](#111-prerequisites)
  - [11.2. Creating the service](#112-creating-the-service)
  - [11.3. Testing](#113-testing)
- [12. Global exercise](#12-global-exercise)
  - [12.1. Instructions](#121-instructions)
  - [12.2. Hints](#122-hints)
    - [12.2.1. bs](#1221-bs)
      - [12.2.1.1. Get information](#12211-get-information)
      - [12.2.1.2. Get information with requests](#12212-get-information-with-requests)
      - [12.2.1.3. Get information with requests and using it](#12213-get-information-with-requests-and-using-it)
      - [12.2.1.4. Get information solution](#12214-get-information-solution)
    - [12.2.2. bp](#1222-bp)
      - [12.2.2.1. Average number of steps and dict](#12221-average-number-of-steps-and-dict)
      - [12.2.2.2. Average number of steps and dict : hint](#12222-average-number-of-steps-and-dict--hint)
      - [12.2.2.3. Average number of steps and dict : with map](#12223-average-number-of-steps-and-dict--with-map)
      - [12.2.2.4. Average number of steps and dict : the answer](#12224-average-number-of-steps-and-dict--the-answer)
    - [12.2.3. bo](#1223-bo)
  - [12.3. Solutions](#123-solutions)
    - [12.3.1. obj & msg](#1231-obj--msg)
    - [12.3.2. bs](#1232-bs)
    - [12.3.3. bp](#1233-bp)
    - [12.3.4. bo](#1234-bo)
  - [12.4. Testing](#124-testing)
  - [12.5. Conclusion of the global exercise](#125-conclusion-of-the-global-exercise)
- [13. Conclusion](#13-conclusion)

# 2. Framework

This is the IRIS Framework.

![FrameworkFull](https://raw.githubusercontent.com/thewophile-beep/formation-template/master/misc/img/FrameworkFull.png)

The components inside of IRIS represent a production. Inbound adapters and outbound adapters enable us to use different kinds of formats as input and output for our database. The composite applications will give us access to the production through external applications like REST services.

The arrows between all of these components are **messages**. They can be requests or responses.

# 3. Adapting the framework

In our case, we will read lines from a csv file and save them into the IRIS database and in a .txt file. We will then add an operation that enables us to save objects in an external database too, using a db-api. This database will be located in a Docker container, using PostgreSQL.

Finally, we will see how to use composite applications to insert new objects into our database or to consult this database (in our case, through a REST service).

The framework adapted to our purpose gives us: WIP

![FrameworkAdapted](https://raw.githubusercontent.com/thewophile-beep/formation-template/master/misc/img/FrameworkAdapted.png)

# 4. Prerequisites

For this formation, you'll need:

* VSCode: https://code.visualstudio.com/
* The InterSystems addons suite for VSCode: https://intersystems-community.github.io/vscode-objectscript/installation/
* Docker: https://docs.docker.com/get-docker/
* The Docker addon for VSCode.
* Automatically done: [Postgres prerequisites](#101-prerequisites)
* Automatically done: [Flask prerequisites](#111-prerequisites)

# 5. Setting up

## 5.1. Docker containers

In order to have access to the InterSystems images, we need to go to the following URL: http://container.intersystems.com. After connecting with our InterSystems credentials, we will get our password to connect to the registry. In the Docker VSCode addon, in the image tab, by pressing "Connect Registry" and entering the same URL as before (http://container.intersystems.com) as a generic registry, we will be asked for our credentials. The login is the usual one, but the password is the one we got from the website.

From there, we should be able to build and compose our containers (with the `docker-compose.yml` and `Dockerfile` files given).

## 5.2. Management Portal and VSCode

This repository is ready for [VS Code](https://code.visualstudio.com/). Open the locally-cloned `formation-template-python` folder in VS Code. If prompted (bottom right corner), install the recommended extensions.

## 5.3. Having the folder open inside the container

**It is really important** to be *inside* the container before coding, mainly to have autocompletion enabled. For this, Docker must be running before opening VSCode. Then, inside VSCode, when prompted (in the bottom right corner), reopen the folder inside the container so you will be able to use the Python components within it. The first time you do this it may take several minutes while the container is readied.

[More information here](https://code.visualstudio.com/docs/remote/containers)

![Architecture](https://code.visualstudio.com/assets/docs/remote/containers/architecture-containers.png)

By opening the folder remotely, you enable VS Code and any terminals you open within it to use the Python components within the container. Configure these to use `/usr/irissys/bin/irispython`.

## 5.4. Register components

In order to register the components we create in Python with the production, we need to use the `register_component` function from the `grongier.pex._utils` module.
**IMPORTANT**: The components were already registered in advance (except for the HelloWorldOperation and the [global exercise](#12-global-exercise)).

For the HelloWorldOperation and for the global exercise, here are the steps to register components:

We advise you to use the built-in Python console to add the component manually when you are first working on the project. You will find those commands in the `misc/register.py` file. To use them, first create the component, then start a terminal in VSCode (it will automatically be in the container if you followed steps [5.2.](#52-management-portal-and-vscode) and [5.3](#53-having-the-folder-open-inside-the-container)).

To launch an IrisPython console, enter:

```
/usr/irissys/bin/irispython
```

Then enter:

```
from grongier.pex._utils import register_component
```

Now you can register your component using something like:

```
register_component("bo","HelloWorldOperation","/irisdev/app/src/python/",1,"Python.HelloWorldOperation")
```

This line will register the class `HelloWorldOperation` from the module `bo`, whose file is located at `/irisdev/app/src/python/` (which is the right path if you follow this course), under the name `Python.HelloWorldOperation` in the Management Portal.

Note that if you don't change the name of the file, the class, or the path, a component that has been registered once can be modified in VSCode without needing to be registered again. Just don't forget to restart it in the Management Portal.

## 5.5. The solution

If at any point in the formation you feel lost or need further guidance, the `solution` branch on GitHub holds all the corrections and a working [production](#6-productions).

# 6. Productions

A **production** is the base of all our work on IRIS. It must be seen as the shell of our [framework](#2-framework) that will hold the **services**, **processes** and **operations**.
Everything in the production inherits two functions: the `on_init` function, which runs when an instance of the class is created, and the `on_tear_down` function, which runs when the instance is killed. These are useful to set variables, or to close a file that was opened for writing.

Note that **a production** with almost all the services, processes and operations **was already created**.

If you are asked to connect, use the username `SuperUser` and the password `SYS`.

Then, we will go through the [Interoperability] and [Configure] menus and click [Production]:

![ProductionMenu](https://user-images.githubusercontent.com/77791586/164473827-ffa2b322-095a-46e3-8c8b-16d467a80485.png)

If the **production isn't open**: go to the [Interoperability] and [Configure] menu, then click [Production]. Now click [Open], then choose `iris` / `Production`.

If the **production isn't in iris/production**, note that it is important to choose the namespace `IRISAPP` in the Management Portal.

![SwitchNamespace](https://user-images.githubusercontent.com/77791586/166930683-fb1232a1-8895-4eb8-bc60-35f42c79ef9e.png)

From here you can go directly to [Business Operations](#7-business-operations). But if you are interested in how to create a production, the steps to create one, if needed or just for information, are:

Go to the Management Portal and connect using the username `SuperUser` and the password `SYS`. Then, we will go through the [Interoperability] and [Configure] menus:

![ProductionMenu](https://user-images.githubusercontent.com/77791586/164473827-ffa2b322-095a-46e3-8c8b-16d467a80485.png)

We then have to press [New], select the [Formation] package and choose a name for our production:

![ProductionCreation](https://user-images.githubusercontent.com/77791586/164473884-5c7aec69-c45d-4062-bedc-2933e215da22.png)

Immediately after creating our production, we will need to click on [Production Settings] just above the [Operations] section.
In the right sidebar menu, we will have to activate [Testing Enabled] in the [Development and Debugging] part of the [Settings] tab (don't forget to press [Apply]).

![ProductionTesting](https://user-images.githubusercontent.com/77791586/164473965-47ab1ba4-85d5-46e3-9e15-64186b5a457e.png)

In this first production we will now add Business Operations.

# 7. Business Operations

A **Business Operation** (BO) is a specific operation that enables us to send requests from IRIS to an external application / system. It can also be used to directly save in IRIS what we want.

A BO also has an `on_message` function that is called every time the instance receives a message from any source. This allows us to receive information and send it, as seen in the framework, to an external client.

We will create those operations locally in VSCode, that is, in the `src/python/bo.py` file. Saving this file will compile them in IRIS.

To start, we will design the simplest operation possible and try it out. In the `src/python/bo.py` file we will create a class called `HelloWorldOperation` that writes a message in the logs when it receives any request. To do so, we just have to add the following to the `src/python/bo.py` file, right after the import line and just before the class FileOperation:

```python
class HelloWorldOperation(BusinessOperation):

    def on_message(self, request):
        self.log_info("Hello World!")
```

Now we need to register it with our production, add it to the production and finally try it out. To register it, follow step by step [How to register a component](#54-register-components). Now go to the Management Portal and click on the [Production] tab.

To add the operation, we use the Management Portal. By pressing the [+] sign next to [Operations], we have access to the [Business Operation Wizard]. There, we choose the operation class we just created in the scrolling menu.
![OperationCreation](https://user-images.githubusercontent.com/77791586/164474068-49c7799c-c6a2-4e1e-8489-3788c50acb86.png)

Now double click on the operation we just created and press [Start], then start the production.

**IMPORTANT**: To test the operation, select the `Python.HelloWorldOperation` **operation** and go to the [Actions] tab in the right sidebar menu; we should be able to **test** the **operation** (if it doesn't work, [activate testing](#6-productions), check that the production is started, and reload the operation by double clicking it and clicking [Restart]).

**Testing HelloWorldOperation**

Using the test function of our Management Portal, we will send the operation a message.

Use as `Request Type`: `Ens.Request` in the scrolling menu (or almost any other message type).

Then click [Call test service].

Then, by going to the [Visual Trace] and clicking the white square, you should read: "Hello World!".

Well done, you have created your first full Python operation on IRIS.

Now, for our first big operations, we will save the content of a message in the local database and write the same information locally in a .txt file. We need a way of storing this message first.

## 7.1. Creating our object classes

We will use `dataclass` to hold information in our [messages](#72-creating-our-message-classes).

In our `src/python/obj.py` file we have, for the imports:

```python
from dataclasses import dataclass
```

and for the code:

```python
@dataclass
class Formation:
    id_formation:int = None
    nom:str = None
    salle:str = None
```

The `Formation` class will be used as a Python object to store information from a csv and send it to the [business process](#8-business-processes).

**Your turn to create your own object class**

In the same way, create the `Training` class, in the same file, that will be used to send information from the [business process](#8-business-processes) to the multiple operations, to store it in the IRIS database or write it down in a .txt file. We only need to store a `name`, which is a string, and a `room`, which is a string.

Try it by yourself before checking the solution.

Solution: the final form of the `obj.py` file:

```python
from dataclasses import dataclass

@dataclass
class Formation:
    id_formation:int = None
    nom:str = None
    salle:str = None

@dataclass
class Training:
    name:str = None
    room:str = None
```

## 7.2. Creating our message classes

These messages will contain a `Formation` object or a `Training` object, located in the `obj.py` file created in [7.1](#71-creating-our-object-classes).

Note that messages, requests and responses all inherit from the `grongier.pex.Message` class.

In the `src/python/msg.py` file we have, for the imports:

```python
from dataclasses import dataclass
from grongier.pex import Message

from obj import Formation,Training
```

and for the code:

```python
@dataclass
class FormationRequest(Message):
    formation:Formation = None
```

Again, the `FormationRequest` class will be used as a message to store information from a csv and send it to the [business process](#8-business-processes).

**Your turn to create your own message class**

In the same way, create the `TrainingRequest` class, in the same file; it will be used to send information from the [business process](#8-business-processes) to the multiple operations, to store it in the IRIS database or write it down in a .txt file. We only need to store a `training`, which is a `Training` object.

Try it by yourself before checking the solution.

Solution: the final form of the `msg.py` file:

```python
from dataclasses import dataclass
from grongier.pex import Message

from obj import Formation,Training

@dataclass
class FormationRequest(Message):
    formation:Formation = None

@dataclass
class TrainingRequest(Message):
    training:Training = None
```

## 7.3. Creating our operations

Now that we have all the elements we need, we can create our operations. Note that any Business Operation inherits from the `grongier.pex.BusinessOperation` class.

All of our operations will be in the file `src/python/bo.py`; to differentiate them we will have to create multiple classes, as seen right now in the file, where all the classes for our operations are already there, but of course almost empty for now.

When an operation receives a message/request, it automatically dispatches the message/request to the correct function, depending on the type of the message/request specified in the signature of each function. If the type of the message/request is not handled, it is forwarded to the `on_message` function.

Now, we will create an operation that stores data in our database. In the `src/python/bo.py` file we have, for the code of the class `IrisOperation`:

```python
class IrisOperation(BusinessOperation):
    """
    It is an operation that writes trainings in the iris database
    """

    def insert_training(self, request:TrainingRequest):
        """
        It takes a `TrainingRequest` object, inserts a new row into the
        `iris.training` table, and returns a `TrainingResponse` object

        :param request: The request object that will be passed to the function
        :type request: TrainingRequest
        :return: A TrainingResponse message
        """
        sql = """
        INSERT INTO iris.training
        ( name, room )
        VALUES( ?, ? )
        """
        name = request.training.name
        room = request.training.room
        iris.sql.exec(sql,name,room)
        return None

    def on_message(self, request):
        return None
```

As we can see, if the `IrisOperation` receives a message of the type `msg.TrainingRequest`, the information held by the message is transformed into an SQL query and executed by the `iris.sql.exec` IrisPython function. This method will save the message in the IRIS local database.
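The dispatch-by-signature behaviour described above can be sketched in plain Python. This is a simplified illustration of the idea, not the actual `grongier.pex` implementation: a toy base class inspects each handler's type annotation and routes the request accordingly, falling back to `on_message` when no handler matches.

```python
import inspect

class TrainingRequest:
    """Stand-in for the real msg.TrainingRequest."""

class OtherRequest:
    """A message type no dedicated handler accepts."""

class ToyBusinessOperation:
    """Toy dispatcher: route a request to the first handler whose
    annotated parameter type matches the request type, else on_message."""

    def handle(self, request):
        for name, method in inspect.getmembers(self, inspect.ismethod):
            if name in ("handle", "on_message"):
                continue
            params = list(inspect.signature(method).parameters.values())
            if params and params[0].annotation is type(request):
                return method(request)
        return self.on_message(request)

class ToyIrisOperation(ToyBusinessOperation):
    def insert_training(self, request: TrainingRequest):
        return "insert_training"

    def on_message(self, request):
        return "on_message"

op = ToyIrisOperation()
print(op.handle(TrainingRequest()))  # routed by the annotation
print(op.handle(OtherRequest()))     # falls back to on_message
```

The same principle explains why, in the real framework, annotating `request:TrainingRequest` is all it takes to have such messages delivered to `insert_training`.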
As you can see, we gathered the name and the room from the request by getting the training object, and then the name and room strings from it.

It is now time to write that data to a .csv file.

**Your turn to create your own operation**

In the same way as for `IrisOperation`, you have to fill in the `FileOperation` class.

First of all, write the `put_line` function inside the `FileOperation` class:

```python
def put_line(self,filename,string):
    """
    It opens a file, appends a string to it, and closes the file

    :param filename: The name of the file to write to
    :param string: The string to be written to the file
    """
    try:
        with open(filename, "a",encoding="utf-8",newline="") as outfile:
            outfile.write(string)
    except Exception as error:
        raise error
```

Now you can try to create the `write_training` function, which will call the `put_line` function once. It will gather the name and the room from the request by getting the training object, and then the name and room strings from it. Then it will call the `put_line` function with the name of the file of your choice and the string to be written to the file.

Solution:

In the `src/python/bo.py` file we have, for the imports:

```python
from grongier.pex import BusinessOperation

import os

import iris

from msg import TrainingRequest,FormationRequest
```

and for the code of the class `FileOperation`:

```python
class FileOperation(BusinessOperation):
    """
    It is an operation that writes a training or a patient in a file
    """

    def on_init(self):
        """
        It changes the current working directory to the one specified in the
        path attribute of the object, or to /tmp if no path attribute is
        specified. It also sets the filename attribute to toto.csv if it is
        not already set

        :return: None
        """
        if hasattr(self,'path'):
            os.chdir(self.path)
        else:
            os.chdir("/tmp")
        return None

    def write_training(self, request:TrainingRequest):
        """
        It writes a training to a file

        :param request: The request message
        :type request: TrainingRequest
        :return: None
        """
        room = name = ""
        if request.training is not None:
            room = request.training.room
            name = request.training.name
        line = room+" : "+name+"\n"
        filename = 'toto.csv'
        self.put_line(filename, line)
        return None

    def on_message(self, request):
        return None

    def put_line(self,filename,string):
        """
        It opens a file, appends a string to it, and closes the file

        :param filename: The name of the file to write to
        :param string: The string to be written to the file
        """
        try:
            with open(filename, "a",encoding="utf-8",newline="") as outfile:
                outfile.write(string)
        except Exception as error:
            raise error
```

As we can see, if the `FileOperation` receives a message of the type `msg.TrainingRequest`, it dispatches it to the `write_training` function, since that function's signature on `request` is `TrainingRequest`. In this function, the information held by the message is written to the `toto.csv` file.

Note that `path` is already a parameter of the operation, and you could make `filename` a variable with a default value of `toto.csv` that can be changed directly in the Management Portal. To do so, we need to edit the `on_init` function like this:

```python
def on_init(self):
    if hasattr(self,'path'):
        os.chdir(self.path)
    else:
        os.chdir("/tmp")
    if not hasattr(self,'filename'):
        self.filename = 'toto.csv'
    return None
```

Then, we would call `self.filename` instead of hard-coding it inside the operation with `filename = 'toto.csv'`.
Then, the `write_training` function would look like this:

```python
def write_training(self, request:TrainingRequest):
    room = name = ""
    if request.training is not None:
        room = request.training.room
        name = request.training.name
    line = room+" : "+name+"\n"
    self.put_line(self.filename, line)
    return None
```

See the Testing part in [7.5](#75-testing) for further information on how to choose our own `filename`.

These components were **already registered** with the production in advance. For information, the steps to register your components are: following [5.4.](#54-register-components) and using:

```
register_component("bo","FileOperation","/irisdev/app/src/python/",1,"Python.FileOperation")
```

and:

```
register_component("bo","IrisOperation","/irisdev/app/src/python/",1,"Python.IrisOperation")
```

## 7.4. Adding the operations to the production

Our operations are already in our production, since we have added them for you in advance. However, if you create a new operation from scratch, you will need to add it manually. If needed later, or just for information, here are the steps to add an operation.

For this, we use the Management Portal. By pressing the [+] sign next to [Operations], we have access to the [Business Operation Wizard]. There, we choose the operation class we just created in the scrolling menu.

![OperationCreation](https://user-images.githubusercontent.com/77791586/164474068-49c7799c-c6a2-4e1e-8489-3788c50acb86.png)

Don't forget to do it with all your new operations!

## 7.5. Testing

Double clicking on the operation will enable us to activate it, or restart it to save our changes.

**IMPORTANT**: Note that this step of deactivating and reactivating the operation is crucial to save our changes.
**IMPORTANT**: After that, by selecting the `Python.IrisOperation` **operation** and going to the [Actions] tab in the right sidebar menu, we should be able to **test** the **operation** (if it doesn't work, [activate testing](#6-productions), check that the production is started, and reload the operation by double-clicking it and clicking restart).

**Testing on IrisOperation**

For `IrisOperation`, note that the table was created automatically. For information, the steps to create it are:

Access the IRIS database using the Management Portal by going to [System Explorer], then [SQL], then [Go]. Now you can enter in the [Execute Query]:

```
CREATE TABLE iris.training (
    name varchar(50) NULL,
    room varchar(50) NULL
)
```

By using the test function of our Management Portal, we will send the operation a message of the type we declared earlier. If all goes well, the visual trace will show us what happened between the processes, services and operations.

Using as `Request Type`: `Grongier.PEX.Message` in the scrolling menu.

Using as `%classname`:
```
msg.TrainingRequest
```

Using as `%json`:
```
{
    "training":{
        "name": "name1",
        "room": "room1"
    }
}
```

Then click `Call test service`.

Here, we can see the message being sent to the operation by the process, and the operation sending back a response (it will say "no response", since the code uses `return None`; we will see later how to return messages).
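Under the hood, the `%json` payload typed into the test dialog is simply the message dataclass tree serialized to JSON. A rough plain-Python sketch of that mapping, with hypothetical stand-in dataclasses mirroring `obj.Training` and `msg.TrainingRequest` (the real ones live in `obj.py` and `msg.py`):

```python
import json
from dataclasses import dataclass

# Hypothetical stand-ins for illustration; not the framework classes.
@dataclass
class Training:
    name: str = None
    room: str = None

@dataclass
class TrainingRequest:
    training: Training = None

# The %json payload from the test dialog:
payload = '{"training": {"name": "name1", "room": "room1"}}'

data = json.loads(payload)
msg = TrainingRequest(training=Training(**data["training"]))
print(msg.training.name, msg.training.room)  # name1 room1
```

This only illustrates the mapping idea; the framework performs the equivalent deserialization for you when the test dialog sends the message.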
You should get a result like this:

![IrisOperation](https://user-images.githubusercontent.com/77791586/166424497-0267af70-1fd5-40aa-bc71-1b0b0954e67a.png)

**Testing on FileOperation**

For `FileOperation`, note that you can fill in the `path` in the `%settings` available in the Management Portal as follows (and you can add the `filename` in the settings if you have followed the `filename` note from [7.3.](#73-creating-our-operations)), using:

```
path=/tmp/
```

or this:

```
path=/tmp/
filename=tata.csv
```

You should get a result like this:

![Settings for FileOperation](https://user-images.githubusercontent.com/77791586/165781963-34027c47-0188-44df-aedc-20537fc0ee32.png)

Again, by selecting the `Python.FileOperation` **operation** and going to the [Actions] tab in the right sidebar menu, we should be able to **test** the **operation** (if it doesn't work, [activate testing](#6-productions) and check that the production is started).

Using as `Request Type`: `Grongier.PEX.Message` in the scrolling menu.

Using as `%classname`:
```
msg.TrainingRequest
```

Using as `%json`:
```
{
    "training":{
        "name": "name1",
        "room": "room1"
    }
}
```

Then click `Call test service`.

You should get a result like this:

![FileOperation](https://user-images.githubusercontent.com/77791586/166424216-8c250477-4337-4fee-97c9-28e0cca2f406.png)

In order to see whether our operations worked, we need to access the toto.csv file (or tata.csv if you have followed the `filename` note from [7.3.](#73-creating-our-operations)) and the IRIS database to see the changes.

You need to be inside the container for the next step; if [5.2.](#52-management-portal-and-vscode) and [5.3.](#53-having-the-folder-open-inside-the-container) were followed, you should be fine.

To access toto.csv, open a terminal and type:

```
bash
cd /tmp
cat toto.csv
```

or use `cat tata.csv` if needed.
**IMPORTANT**: If the file doesn't exist, you may not have restarted the operation in the Management Portal, so nothing happened! To do that, double-click the operation and select restart (or deactivate it, then double-click again and activate it). You may then need to [test](#75-testing) again.

To access the IRIS database, open the Management Portal and go to [System Explorer], then [SQL], then [Go]. Now you can enter in the [Execute Query]:

```
SELECT * FROM iris.training
```

# 8. Business Processes

**Business Processes** (BP) are the business logic of our production. They are used to process requests or relay those requests to other components of the production.

A BP has an `on_request` function that is called every time the instance receives a request from any source; this allows us to receive information, process it in any way, and dispatch it to the right BO.

We will create these processes locally in VSCode, that is, in the `src/python/bp.py` file. Saving this file will compile them in IRIS.

## 8.1. Simple BP

We now have to create a **Business Process** to process the information coming from our future services and dispatch it accordingly. We are going to create a simple BP that will call our operations.
Since our BP will only redirect information, we will call it `Router`, and it will be in the file `src/python/bp.py` like this, for the imports:

```python
from grongier.pex import BusinessProcess

from msg import FormationRequest, TrainingRequest

from obj import Training
```

for the code:

```python
class Router(BusinessProcess):
    def on_request(self, request):
        """
        It receives a request, checks if it is a formation request, and if it
        is, it sends a TrainingRequest request to FileOperation and to
        IrisOperation

        :param request: The request object that was received
        :return: None
        """
        if isinstance(request,FormationRequest):
            msg = TrainingRequest()
            msg.training = Training()
            msg.training.name = request.formation.nom
            msg.training.room = request.formation.salle

            self.send_request_sync('Python.FileOperation',msg)
            self.send_request_sync('Python.IrisOperation',msg)
        return None
```

The Router will receive a request of the type `FormationRequest` and will create and send a message of the type `TrainingRequest` to the `IrisOperation` and `FileOperation` operations. If the message/request is not an instance of the type we are looking for, we simply do nothing and do not dispatch it.

Those components were **already registered** to the production in advance. For information, the steps to register your components are:

Following [5.4.](#54-register-components) and using:

```
register_component("bp","Router","/irisdev/app/src/python/",1,"Python.Router")
```

## 8.2. Adding the process to the production

Our process is already in our production, since we have added it for you in advance. However, if you create a new process from scratch, you will need to add it manually. For later use, or just for information, here are the steps to register a process.

For this, we use the Management Portal. By pressing the [+] sign next to [Processes], we have access to the [Business Process Wizard]. There, we choose the process class we just created in the scrolling menu.

## 8.3.
Testing

Double-clicking on the process will enable us to activate it or restart it to save our changes.

**IMPORTANT**: Note that this step of deactivating and reactivating the process is crucial to save our changes.

**IMPORTANT**: After that, by selecting the **process** and going to the [Actions] tab in the right sidebar menu, we should be able to **test** the **process** (if it doesn't work, [activate testing](#6-productions), check that the production is started, and reload the process by double-clicking it and clicking restart).

By doing so, we will send the process a message of the type `msg.FormationRequest`.

Using as `Request Type`: `Grongier.PEX.Message` in the scrolling menu.

Using as `%classname`:
```
msg.FormationRequest
```

Using as `%json`:
```
{
    "formation":{
        "id_formation": 1,
        "nom": "nom1",
        "salle": "salle1"
    }
}
```

Then click `Call test service`.

![RouterTest](https://user-images.githubusercontent.com/77791586/164474368-838fd740-0548-44e6-9bc0-4c6c056f0cd7.png)

If all goes well, the visual trace will show us what happened between the processes, services and operations. Here, we can see the messages being sent to the operations by the process, and the operations sending back a response.

![RouterResults](https://user-images.githubusercontent.com/77791586/164474411-efdae647-5b8b-4790-8828-5e926c597fd1.png)

# 9. Business Service

**Business Services** (BS) are the entry points of our production. They are used to gather information and send it to our routers.

A BS has an `on_process_input` function that usually gathers information in our framework; it can be called in multiple ways, such as by a REST API or another service, or by the service itself to execute its code again.

A BS also has a `get_adapter_type` function that allows us to allocate an adapter to the class, for example `Ens.InboundAdapter`, which will make the service call its own `on_process_input` every 5 seconds.
We will create these services locally in VSCode, that is, in the `src/python/bs.py` file. Saving this file will compile them in IRIS.

## 9.1. Simple BS

We now have to create a Business Service to read a CSV and send each line as a `msg.FormationRequest` to the router. Since our BS will read a CSV, we will call it `ServiceCSV`, and it will be in the file `src/python/bs.py` like this, for the imports:

```python
from grongier.pex import BusinessService

from dataclass_csv import DataclassReader

from obj import Formation

from msg import FormationRequest
```

for the code:

```python
class ServiceCSV(BusinessService):
    """
    It reads a csv file every 5 seconds, and sends each line as a message to
    the Python Router process.
    """
    def get_adapter_type():
        """
        Name of the registered adapter
        """
        return "Ens.InboundAdapter"

    def on_init(self):
        """
        It changes the current path to the file to the one specified in the
        path attribute of the object, or to '/irisdev/app/misc/' if no path
        attribute is specified

        :return: None
        """
        if not hasattr(self,'path'):
            self.path = '/irisdev/app/misc/'
        return None

    def on_process_input(self,request):
        """
        It reads the formation.csv file, creates a FormationRequest message
        for each row, and sends it to the Python.Router process

        :param request: the request object
        :return: None
        """
        filename='formation.csv'
        with open(self.path+filename,encoding="utf-8") as formation_csv:
            reader = DataclassReader(formation_csv, Formation,delimiter=";")
            for row in reader:
                msg = FormationRequest()
                msg.formation = row
                self.send_request_sync('Python.Router',msg)
        return None
```

It is advised to keep the `FlaskService` as it is and just fill in the `ServiceCSV`.
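For the curious, what `DataclassReader` does can be approximated with the stdlib `csv` module alone. A hedged sketch (the in-memory sample, its header row, and the stand-in `Formation` dataclass are assumptions for illustration; the real file lives under `misc/`):

```python
import csv
import io
from dataclasses import dataclass

# Stand-in for obj.Formation (illustration only).
@dataclass
class Formation:
    id_formation: int = None
    nom: str = None
    salle: str = None

# In the service this would be open(self.path + 'formation.csv');
# here, an in-memory sample with the same ';' delimiter.
sample = "id_formation;nom;salle\n1;nom1;salle1\n2;nom2;salle2\n"

reader = csv.DictReader(io.StringIO(sample), delimiter=";")
rows = [Formation(int(r["id_formation"]), r["nom"], r["salle"]) for r in reader]
print(rows[0].nom, rows[1].salle)  # nom1 salle2
```

Each row becomes one dataclass instance, which is exactly what the service wraps in a `msg.FormationRequest`.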
As we can see, the ServiceCSV gets an InboundAdapter that allows it to run on its own and call `on_process_input` every 5 seconds (a parameter that can be changed in the basic settings of the service in the Management Portal).

Every 5 seconds, the service will open `formation.csv`, read each line, and create a `msg.FormationRequest` that will be sent to `Python.Router`.

Those components were **already registered** to the production in advance. For information, the steps to register your components are:

Following [5.4.](#54-register-components) and using:

```
register_component("bs","ServiceCSV","/irisdev/app/src/python/",1,"Python.ServiceCSV")
```

## 9.2. Adding the service to the production

Our service is already in our production, since we have added it for you in advance. However, if you create a new service from scratch, you will need to add it manually. For later use, or just for information, here are the steps to register a service.

For this, we use the Management Portal. By pressing the [+] sign next to [Services], we have access to the [Business Service Wizard]. There, we choose the service class we just created in the scrolling menu.

## 9.3. Testing

Double-clicking on the service will enable us to activate it or restart it to save our changes.

**IMPORTANT**: Note that this step of deactivating and reactivating the service is crucial to save our changes.

As explained before, nothing more has to be done here, since the service will start on its own every 5 seconds.

If all goes well, the visual trace will show us what happened between the processes, services and operations. Here, we can see the messages being sent to the process by the service, the messages sent to the operations by the process, and the operations sending back a response.

![ServiceCSVResults](https://user-images.githubusercontent.com/77791586/164474470-c77c4a06-0d8f-4ba9-972c-ce09b20fa54a.png)

# 10.
Getting access to an external database using a DB-API

In this section, we will create an operation to save our objects in an external database. We will be using the DB-API, as well as the other Docker container that we set up, with PostgreSQL on it.

## 10.1. Prerequisites

In order to use PostgreSQL, we need psycopg2, a Python module that allows us to connect to the PostgreSQL database with a simple command.

It was already done automatically, but for information, the steps are: get inside the Docker container and install psycopg2 using pip3. Once you are in the terminal, enter:

```
pip3 install psycopg2-binary
```

Or add the module to requirements.txt and rebuild the container.

## 10.2. Creating our new operation

Our new operation needs to be added after the two other ones in the file `src/python/bo.py`.

Our new operation and the imports are as follows, for the imports:

```python
import psycopg2
```

for the code:

```python
class PostgresOperation(BusinessOperation):
    """
    It is an operation that writes trainings to the Postgres database
    """
    def on_init(self):
        """
        It connects to the Postgres database and inits a connection object

        :return: None
        """
        self.conn = psycopg2.connect(
            host="db",
            database="DemoData",
            user="DemoData",
            password="DemoData",
            port="5432")
        self.conn.autocommit = True
        return None

    def on_tear_down(self):
        """
        It closes the connection to the database

        :return: None
        """
        self.conn.close()
        return None

    def insert_training(self,request:TrainingRequest):
        """
        It inserts a training into the Postgres database

        :param request: The request object that will be passed to the function
        :type request: TrainingRequest
        :return: None
        """
        cursor = self.conn.cursor()
        sql = "INSERT INTO public.formation ( name,room ) VALUES ( %s , %s )"
        cursor.execute(sql,(request.training.name,request.training.room))
        return None

    def on_message(self,request):
        return None
```

This operation is similar to the first one we created.
When it receives a message of the type `msg.TrainingRequest`, it will use the psycopg2 module to execute SQL requests. Those requests will be sent to our PostgreSQL database.

As you can see, the connection details are written directly into the code. To improve our code, we could do as before for the other operations and make `host`, `database` and the other connection parameters variables with default values of `db`, `DemoData`, etc., that can be changed directly in the Management Portal. To do this, we can change our `on_init` function to:

```python
def on_init(self):
    if not hasattr(self,'host'):
        self.host = 'db'
    if not hasattr(self,'database'):
        self.database = 'DemoData'
    if not hasattr(self,'user'):
        self.user = 'DemoData'
    if not hasattr(self,'password'):
        self.password = 'DemoData'
    if not hasattr(self,'port'):
        self.port = '5432'

    self.conn = psycopg2.connect(
        host=self.host,
        database=self.database,
        user=self.user,
        password=self.password,
        port=self.port)
    self.conn.autocommit = True
    return None
```

Those components were **already registered** to the production in advance. For information, the steps to register your components are:

Following [5.4.](#54-register-components) and using:

```
register_component("bo","PostgresOperation","/irisdev/app/src/python/",1,"Python.PostgresOperation")
```

## 10.3. Configuring the production

Our operation is already in our production, since we have added it for you in advance. However, if you create a new operation from scratch, you will need to add it manually. For later use, or just for information, here are the steps to register an operation.

For this, we use the Management Portal. By pressing the [+] sign next to [Operations], we have access to the [Business Operation Wizard]. There, we choose the operation classes we just created in the scrolling menu.

Afterward, if you wish to change the connection parameters, you can simply add the parameter you wish to change to the `%settings` in [Python] in the [parameter] window of the operation.
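The parameterized-insert pattern used by `insert_training` can be tried offline with the stdlib `sqlite3` module. This is only an illustration of the DB-API idea (sqlite3 uses `?` placeholders where psycopg2 uses `%s`), not the production code:

```python
import sqlite3

# In-memory stand-in for the Postgres table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE formation (name TEXT, room TEXT)")

def insert_training(conn, name, room):
    # Never build SQL by string concatenation; let the driver bind parameters.
    conn.execute("INSERT INTO formation (name, room) VALUES (?, ?)", (name, room))

insert_training(conn, "name1", "room1")
print(conn.execute("SELECT name, room FROM formation").fetchone())  # ('name1', 'room1')
```

Binding values through the driver, rather than interpolating them into the SQL string, is what keeps the operation safe from SQL injection in both libraries.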
See the second image of [7.5. Testing](#75-testing) for more details.

## 10.4. Testing

Double-clicking on the operation will enable us to activate it or restart it to save our changes.

**IMPORTANT**: Note that this step of deactivating and reactivating the operation is crucial to save our changes.

**IMPORTANT**: After that, by selecting the **operation** and going to the [Actions] tab in the right sidebar menu, we should be able to **test** the **operation** (if it doesn't work, [activate testing](#6-productions), check that the production is started, and reload the operation by double-clicking it and clicking restart).

For `PostgresOperation`, note that the table was created automatically.

By doing so, we will send the operation a message of the type `msg.TrainingRequest`.

Using as `Request Type`: `Grongier.PEX.Message` in the scrolling menu.

Using as `%classname`:
```
msg.TrainingRequest
```

Using as `%json`:
```
{
    "training":{
        "name": "name1",
        "room": "room1"
    }
}
```

Then click `Call test service`.

Like this:

![testpostgres](https://user-images.githubusercontent.com/77791586/166425212-de16bfa0-6b6a-48a8-b333-d4d5cb3770f2.png)

When testing, the visual trace should show a success. We have successfully connected to an external database.

If you have followed this formation so far, you should have understood that, for now, no process or service calls our new `PostgresOperation`, meaning that without using the test function of the Management Portal, it will not be called.

## 10.5. Exercise

As an exercise, it could be interesting to modify `bo.IrisOperation` so that it returns a boolean that tells `bp.Router` whether to call `bo.PostgresOperation`, depending on the value of that boolean. That way, our new operation will be called.

**Hint**: This can be done by changing the type of response `bo.IrisOperation` returns, adding a new boolean property to that new message/response type, and using an `if` in our `bp.Router`.

## 10.6.
Solution

First, we need a response from our `bo.IrisOperation`. We are going to create a new message after the other two in `src/python/msg.py`, for the code:

```python
@dataclass
class TrainingResponse(Message):
    decision:int = None
```

Then, we change the response of `bo.IrisOperation` to that response, and set the value of its `decision` to 1 or 0 randomly. In `src/python/bo.py`, you need to add two imports and change the `IrisOperation` class, for the imports:

```python
import random

from msg import TrainingResponse
```

for the code:

```python
class IrisOperation(BusinessOperation):
    """
    It is an operation that writes trainings to the iris database
    """
    def insert_training(self, request:TrainingRequest):
        """
        It takes a `TrainingRequest` object, inserts a new row into the
        `iris.training` table, and returns a `TrainingResponse` object

        :param request: The request object that will be passed to the function
        :type request: TrainingRequest
        :return: A TrainingResponse message
        """
        resp = TrainingResponse()
        resp.decision = round(random.random())

        sql = """
        INSERT INTO iris.training
        ( name, room )
        VALUES( ?, ? )
        """
        iris.sql.exec(sql,request.training.name,request.training.room)
        return resp

    def on_message(self, request):
        return None
```

We will now change our process `bp.Router` in `src/python/bp.py`, making it call the PostgresOperation if the response from the IrisOperation is 1. Here is the new code:

````python
class Router(BusinessProcess):
    def on_request(self, request):
        """
        It receives a request, checks if it is a formation request, and if it
        is, it sends a TrainingRequest request to FileOperation and to
        IrisOperation, which in turn sends it to the PostgresOperation if
        IrisOperation returned a 1.
        :param request: The request object that was received
        :return: None
        """
        if isinstance(request,FormationRequest):
            msg = TrainingRequest()
            msg.training = Training()
            msg.training.name = request.formation.nom
            msg.training.room = request.formation.salle

            self.send_request_sync('Python.FileOperation',msg)
            form_iris_resp = self.send_request_sync('Python.IrisOperation',msg)

            if form_iris_resp.decision == 1:
                self.send_request_sync('Python.PostgresOperation',msg)
        return None
````

**VERY IMPORTANT**: we need to make sure we use **send_request_sync** and not **send_request_async** in the calls to our operations; otherwise the `if` will fire before the boolean response is received.

Before testing, don't forget to double-click on every modified service/process/operation to restart them, or your changes won't take effect.

In the visual trace, after testing, approximately half of the objects read from the CSV should also be saved in the remote database.

Note that to test this, you can just start `bs.ServiceCSV`, and it will automatically send requests to the router, which will then dispatch them properly.

Also note that you **must** double-click on a service/operation/process and press reload or restart if you want the changes saved in VSCode to apply.

# 11. REST service

In this part, we will create and use a REST service.

## 11.1. Prerequisites

In order to build the REST service, we will need to install Flask, a Python module that allows us to easily create a REST API.

**It was already done automatically**, but for information, the steps are: get inside the Docker container and install Flask on IRIS Python. Once you are in the terminal, enter:

```
pip3 install flask
```

Or add the module to requirements.txt and rebuild the container.

## 11.2. Creating the service

To create a REST service, we will need a service that links our API to our production. For this, we create a new simple service in `src/python/bs.py`, just after the `ServiceCSV` class.
```python
class FlaskService(BusinessService):
    def on_init(self):
        """
        It changes the current target of our API to the one specified in the
        target attribute of the object, or to 'Python.Router' if no target
        attribute is specified

        :return: None
        """
        if not hasattr(self,'target'):
            self.target = "Python.Router"
        return None

    def on_process_input(self,request):
        """
        It is called to transmit information from the API directly to the
        Python.Router process

        :return: None
        """
        return self.send_request_sync(self.target,request)
```

In `on_process_input`, this service will simply forward the request to the Router.

Those components were **already registered** to the production in advance. For information, the steps to register your components are:

Following [5.4.](#54-register-components) and using:

```
register_component("bs","FlaskService","/irisdev/app/src/python/",1,"Python.FlaskService")
```

To create a REST service, we will need Flask to create an API that manages the `get` and `post` functions. We need to create a new file, `python/app.py`:

```python
from flask import Flask, jsonify, request, make_response
from grongier.pex import Director

import iris

from obj import Formation
from msg import FormationRequest

app = Flask(__name__)

# GET infos
@app.route("/", methods=["GET"])
def get_info():
    info = {'version':'1.0.6'}
    return jsonify(info)

# GET all the formations
@app.route("/training/", methods=["GET"])
def get_all_training():
    payload = {}
    return jsonify(payload)

# POST a formation
@app.route("/training/", methods=["POST"])
def post_formation():
    payload = {}
    formation = Formation()
    formation.nom = request.get_json()['nom']
    formation.salle = request.get_json()['salle']
    msg = FormationRequest(formation=formation)
    service = Director.CreateBusinessService("Python.FlaskService")
    response = service.dispatchProcessInput(msg)
    return jsonify(payload)

# GET formation with id
@app.route("/training/<int:id>", methods=["GET"])
def get_formation(id):
    payload = {}
    return jsonify(payload)

# PUT to
update formation with id
@app.route("/training/<int:id>", methods=["PUT"])
def update_person(id):
    payload = {}
    return jsonify(payload)

# DELETE formation with id
@app.route("/training/<int:id>", methods=["DELETE"])
def delete_person(id):
    payload = {}
    return jsonify(payload)

if __name__ == '__main__':
    app.run('0.0.0.0', port = "8081")
```

Note that the Flask API uses a Director to create an instance of our FlaskService from earlier and then send the right request.

We made the POST formation functional in the code above. If you wish, you can implement the other functions to get/post the right information using everything we have learned so far; note, however, that no solution will be provided for them.

## 11.3. Testing

We now need to start our Flask app using Python Flask:

![How to start our flask app.py](https://user-images.githubusercontent.com/77791586/165757717-d62131d7-039a-4ed5-835f-ffe32ebd2547.mov)

Finally, we can test our service with any kind of REST client, after having reloaded the Router service.

Using any REST client (such as RESTer for Mozilla Firefox), fill in the headers like this:

```
Content-Type : application/json
```

![RESTHeaders](https://user-images.githubusercontent.com/77791586/165522396-154a4ef4-535b-44d7-bcdd-a4bfd2f574d3.png)

The body like this:

```
{
    "nom":"testN",
    "salle":"testS"
}
```

![RESTBody](https://user-images.githubusercontent.com/77791586/166432001-0cca76a8-bd90-4d3b-9dcb-80b309786bc0.png)

The authorization like this:

Username:
```
SuperUser
```
Password:
```
SYS
```

![RESTAuthorization](https://user-images.githubusercontent.com/77791586/165522730-bb89797a-0dd1-4691-b1e8-b7c491b53a6a.png)

Finally, the results should look something like this:

![RESTResults](https://user-images.githubusercontent.com/77791586/165522839-feec14c0-07fa-4d3f-a435-c9a06a544785.png)

# 12.
Global exercise

Now that we are familiar with all the important concepts of the IRIS Data Platform and its [Framework](#2-framework), it is time to try ourselves on a global exercise in which we will create a new BS and BP, heavily modify our BO, and also explore new concepts in Python.

## 12.1. Instructions

Using this **endpoint**: `https://lucasenard.github.io/Data/patients.json`, we have to automatically **get** information about `patients and their number of steps`.

Then, we must calculate the average number of steps per patient before writing it down to a local CSV file.

If needed, it is advised to seek guidance by rereading the whole formation (or the relevant parts), or by using the [hints](#122-hints) below.

Don't forget to [register your components](#54-register-components) to access them in the Management Portal.

When everything is done and tested, or if the hints aren't enough to complete the exercise, the step-by-step [solution](#123-solutions) is there to walk us through the whole procedure.

## 12.2. Hints

In this part you can find hints for the exercise. The more of a part you read, the more hints you get; it is advised to read only what you need, and not the whole part every time.

For example, you can read [Get information](#12211-get-information) and [Get information with requests](#12212-get-information-with-requests) in the [bs](#1221-bs) part and not read the rest.

### 12.2.1. bs

#### 12.2.1.1. Get information

To get the information from the endpoint, it is advised to look into the `requests` module of Python, and to use `json` and `json.dumps` to turn the data into a str to send it to the bp.

#### 12.2.1.2. Get information with requests

An online Python site, or any local Python file, can be used to play with `requests` and print the output and its type, to go further and understand what we get.

#### 12.2.1.3.
Get information with requests and using it

It is advised to create a new message type and object type to hold the information and send it to a process that will calculate the average.

#### 12.2.1.4. Get information solution

Solution on how to use `requests` to get the data and, in our case, part of what to do with it:

```python
r = requests.get("https://lucasenard.github.io/Data/patients.json")
data = r.json()
for key,val in data.items():
    ...
```

Again, in an online Python site or any local Python file, it is possible to print `key`, `val` and their types to understand what can be done with them.

It is advised to store `val` using `json.dumps(val)` and then, after the send request, when you are in the process, use `json.loads(request.patient.infos)` to get it back (if you have stored the information of `val` into `patient.infos`).

### 12.2.2. bp

#### 12.2.2.1. Average number of steps and dict

`statistics` is a native library that can be used to do math.

#### 12.2.2.2. Average number of steps and dict : hint

The native `map` function in Python allows you to extract one piece of information from each item of a list or a dict, for example. Don't forget to turn the result of `map` back into a list using the native `list` function.

#### 12.2.2.3.
Average number of steps and dict : with map

Using an online Python site or any local Python file, it is possible to calculate the average of a list of lists or a list of dicts by doing:

```python
import statistics

l1 = [[0,5],[8,9],[5,10],[3,25]]
l2 = [["info",12],["bidule",9],[3,3],["patient1",90]]
l3 = [{"info1":"7","info2":0},{"info1":"15","info2":0},{"info1":"27","info2":0},{"info1":"7","info2":0}]

#avg of the first column of the first list (0/8/5/3)
avg_l1_0 = statistics.mean(list(map(lambda x: x[0], l1)))

#avg of the second column of the first list (5/9/10/25)
avg_l1_1 = statistics.mean(list(map(lambda x: x[1], l1)))

#avg of 12/9/3/90
avg_l2_1 = statistics.mean(list(map(lambda x: x[1], l2)))

#avg of 7/15/27/7
avg_l3_info1 = statistics.mean(list(map(lambda x: int(x["info1"]), l3)))

print(avg_l1_0)
print(avg_l1_1)
print(avg_l2_1)
print(avg_l3_info1)
```

#### 12.2.2.4. Average number of steps and dict : the answer

If your request holds a patient which has an attribute `infos` which is a `json.dumps` of a dict of dates and numbers of steps, you can calculate its average number of steps using:

```python
statistics.mean(list(map(lambda x: int(x['steps']),json.loads(request.patient.infos))))
```

### 12.2.3. bo

It is advised to use something really similar to `bo.FileOperation.write_training`; something like `bo.FileOperation.write_patient`.

## 12.3. Solutions

### 12.3.1. obj & msg

In our `obj.py` we can add:

```python
@dataclass
class Patient:
    name:str = None
    avg:int = None
    infos:str = None
```

In our `msg.py` we can add, for the imports:

```python
from obj import Formation,Training,Patient
```

for the code:

```python
@dataclass
class PatientRequest(Message):
    patient:Patient = None
```

We will hold the information in a single obj, and we will put the str of the dict from the get request directly into the `infos` attribute. The average will be calculated in the process.

### 12.3.2.
bs

In our `bs.py` we can add, for the imports:

```python
import json

import requests

from obj import Patient
from msg import PatientRequest
```

for the code:

```python
class PatientService(BusinessService):
    def get_adapter_type():
        """
        Name of the registered adapter
        """
        return "Ens.InboundAdapter"

    def on_init(self):
        """
        It changes the current target of our API to the one specified in the
        target attribute of the object, or to 'Python.PatientProcess' if no
        target attribute is specified.
        It changes the current api_url of our API to the one specified in the
        api_url attribute of the object, or to
        'https://lucasenard.github.io/Data/patients.json' if no api_url
        attribute is specified.

        :return: None
        """
        if not hasattr(self,'target'):
            self.target = 'Python.PatientProcess'
        if not hasattr(self,'api_url'):
            self.api_url = "https://lucasenard.github.io/Data/patients.json"
        return None

    def on_process_input(self,request):
        """
        It makes a request to the API, and for each patient it finds, it
        creates a Patient object and sends it to the target

        :param request: The request object that was sent to the service
        :return: None
        """
        req = requests.get(self.api_url)
        if req.status_code == 200:
            dat = req.json()
            for key,val in dat.items():
                patient = Patient()
                patient.name = key
                patient.infos = json.dumps(val)
                msg = PatientRequest()
                msg.patient = patient
                self.send_request_sync(self.target,msg)
        return None
```

It is advised to make the target and the API URL variables (see `on_init`).

After `requests.get` puts the response in the `req` variable, we extract the information as JSON, which makes `dat` a dict. Using `dat.items()`, it is possible to iterate over each patient and its info directly.

We then create our Patient object and put `val`, as a string, into the `patient.infos` variable, using `json.dumps`, which turns any JSON data into a string.

Then, we create the request `msg`, which is a `msg.PatientRequest`, to call our process.
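The `json.dumps`/`json.loads` round trip described above, together with the average computed later in the process, can be sketched on a hand-made sample (the list-of-dicts shape with a `'steps'` key is an assumption based on the exercise endpoint):

```python
import json
import statistics

# Service side: what would be stored in patient.infos for one patient.
val = [{"date": "2022-01-01", "steps": "1000"},
       {"date": "2022-01-02", "steps": "2000"}]
infos = json.dumps(val)   # dict/list -> str, ready to travel in a message

# Process side: str -> data, then average the steps.
avg = statistics.mean(int(x["steps"]) for x in json.loads(infos))
print(avg)  # 1500
```

Serializing to a plain string on the service side means the message carries only simple types; the process deserializes and does the math.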
Don't forget to register your component, following [5.4.](#54-register-components) and using:

```
register_component("bs","PatientService","/irisdev/app/src/python/",1,"Python.PatientService")
```

### 12.3.3. bp

In our `bp.py` we can add, for the imports:

```python
import statistics
```

for the code:

```python
class PatientProcess(BusinessProcess):

    def on_request(self, request):
        """
        It takes a request, checks if it's a PatientRequest, and if it is,
        it calculates the average number of steps for the patient and sends
        the request to the Python.FileOperation operation.
        :param request: The request object that was sent to the service
        :return: None
        """
        if isinstance(request,PatientRequest):
            request.patient.avg = statistics.mean(list(map(lambda x: int(x['steps']), json.loads(request.patient.infos))))
            self.send_request_sync('Python.FileOperation',request)
        return None
```

We take the request we just got, and if it is a `PatientRequest`, we calculate the mean of the steps and send it to our FileOperation. This fills the `avg` variable of our patient with the right information (see the hint on the bp for more information).

Don't forget to register your component, following [5.4.](#54-register-components) and using:

```
register_component("bp","PatientProcess","/irisdev/app/src/python/",1,"Python.PatientProcess")
```

### 12.3.4. bo

In our `bo.py` we can add, inside the class `FileOperation`:

```python
    def write_patient(self, request:PatientRequest):
        """
        It writes the name and average number of steps of a patient to a file.
        :param request: The request message
        :type request: PatientRequest
        :return: None
        """
        name = ""
        avg = 0
        if request.patient is not None:
            name = request.patient.name
            avg = request.patient.avg
        line = name + " avg nb steps : " + str(avg) + "\n"
        filename = 'Patients.csv'
        self.put_line(filename, line)
        return None
```

As explained before, there is no need to register `FileOperation` again, since we already did it.

## 12.4.
Testing

See [7.4.](#74-adding-the-operations-to-the-production) to add our operation.
See [9.2.](#92-adding-the-service-to-the-production) to add our service.

Now we can head to the management portal and proceed as before. Remember that our new service will execute automatically, since we added an InboundAdapter to it.

The same way we checked `toto.csv`, we can check `Patients.csv`.

## 12.5. Conclusion of the global exercise

Through this exercise it is possible to learn and understand the creation of messages, services, processes and operations. We discovered how to fetch information in Python and how to execute simple tasks on our data.

In the GitHub repository, a [`solution` branch](https://github.com/LucasEnard/formation-template-python/tree/solution) is available with everything already completed.

# 13. Conclusion

Through this formation, we have created a fully functional production using only IrisPython that is able to read lines from a csv file and save the read data into a local txt file, the IRIS database and an external database using a db-api. We also added a REST service in order to use the POST verb to save new objects.

We have discovered the main elements of InterSystems' interoperability framework.

We have done so using Docker, VSCode and InterSystems' IRIS Management Portal.

💡 This article is considered as InterSystems Data Platform Best Practice.
Announcement
Larry Finlayson · May 4, 2022

InterSystems Virtual Classroom Training and Certification Exam News

Learning Services has posted the calendar for virtual classroom training through September! All classes are held 9am to 5pm US Eastern time as live, instructor-led virtual classrooms with hands-on exercises and interactive discussions. Go to classroom.intersystems.com to view the schedule, register for a class, or request a private training for 5-15 people at your company. Taking our classes is a great way to prepare for our professional certification exams! Of note, September 7-9 we are offering the InterSystems Change Control: Tier 1 Basics training, which is part of the recommended preparation for the new InterSystems CCR Technical Implementation Specialist exam. Go to certification.intersystems.com to read about the available exams, requirements, and scheduling instructions. Also, InterSystems Certification will be proctoring free certification exams ($150 value) at Global Summit 2022. This is a great opportunity for people to prep for Professional Certification as a CCR Technical Implementation Specialist!
Article
Evgeny Shvarov · May 4, 2022

The Update for a Default Dockerfile Template for development with InterSystems IRIS

Hi developer folks! Thanks to all of you who start the development with InterSystems IRIS from the basic development template! Recently, thanks to @Dmitry.Maslennikov's contributions, I've updated the Dockerfile to make development simpler, images lighter, and the building process faster. And it looks more beautiful too ;) Let's go through the changes:

1. Lines 11-15 were replaced with a single line 7, which sets the WORKDIR: `WORKDIR /home/irisowner/irisbuild`. Indeed, if the WORKDIR points to the home folder of the user that runs iris, then there is no need to perform security adjustments. Of course, I had to change the line in iris.script to load the zpm module from the new workdir folder.

2. Lines 17-21 were COPY commands that I used to copy different files from the source folder of the repo into the image, for use by iris or the environment. Now all of these were replaced with one line: `RUN --mount=type=bind,src=.,dst=. \`. This syntax uses the BuildKit feature of Docker that lets you mount the files you need into the image without actually copying them. This saves space and speeds up the building process. So with this you can forget about COPY commands altogether: everything from the repo folder will be mounted and available in the WORKDIR during the build phase. Beautiful, isn't it? Make sure you have this option turned on in your Docker.

3. What's added is the line with the TESTS parameter: `ARG TESTS=0`. If TESTS=1, the Dockerfile calls the unit tests for the ZPM module you develop in the project. This is controlled with the following line: `([ $TESTS -eq 0 ] || iris session iris "##class(%ZPM.PackageManager).Shell(\"test $MODULE -v -only\",1,1)") && \`. This TESTS parameter also helps to perform unit tests automatically during CI. This deserves an additional topic to discuss, and I'll continue with this in the next article.
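Put together, the changes above can be sketched as a minimal Dockerfile fragment. This is an illustrative sketch, not the template's exact file: the base image, the `MODULE` argument, and the start/load/stop chain wrapped around the quoted lines are assumptions.

```dockerfile
# Assumed base image; the actual template may pin a specific tag.
FROM intersystemsdc/iris-community:latest

# One WORKDIR in the home folder of the user that runs iris,
# so no extra security adjustments are needed.
WORKDIR /home/irisowner/irisbuild

# Opt-in unit testing during the build (TESTS=1 enables it).
ARG TESTS=0
ARG MODULE="my-module"    # hypothetical ZPM module name

# BuildKit bind mount: the repo content is visible in WORKDIR
# during the build, without any COPY commands.
RUN --mount=type=bind,src=.,dst=. \
    iris start IRIS && \
    iris session IRIS < iris.script && \
    ([ $TESTS -eq 0 ] || iris session iris "##class(%ZPM.PackageManager).Shell(\"test $MODULE -v -only\",1,1)") && \
    iris stop IRIS quietly
```

Building with the tests enabled would then look like `docker build --build-arg TESTS=1 .` (with BuildKit turned on).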
Or you can check it in Dmitry's code if you don't want to wait )

Hope you liked it, but friendly feedback, concerns and pull requests are very welcome! Happy coding!

Great initiative, I will try to apply this to most of my repositories. BTW, there is an easy way to enable BuildKit without editing the config file of docker:

Unix:

```sh
DOCKER_BUILDKIT=1 docker-compose build
```

Windows:

```sh
set "DOCKER_BUILDKIT=1" & docker-compose build
```

Finding this quoting on Windows kept me busy for quite a while 2 months ago, as it wasn't part of the README.md! GRAND MERCI !

Thanks, @Guillaume.Rongier7183! Nice finding on the BUILDKIT!

Direct source code mounting in the image is a must-have, thank you!
Announcement
Evgeny Shvarov · May 7, 2022

InterSystems Grand Prix Contest 2022 Technical Bonuses

Hi developers! InterSystems Grand Prix unites all the key features of InterSystems data platforms! Thus we invite you to use the following features and collect additional technical bonuses that will help you to win the prize! Here we go!

- InterSystems FHIR - 5
- IntegratedML - 4
- Native API - 3
- Interoperability - 3
- Production EXtension (PEX) - 4
- Embedded Python - 5
- Adaptive Analytics (AtScale) Cubes usage - 4
- Tableau, PowerBI, Logi usage - 3
- InterSystems IRIS BI - 3
- Docker container usage - 2
- ZPM Package deployment - 2
- Online Demo - 2
- Unit Testing - 2
- First Article on Developer Community - 2
- Second Article on DC - 1
- Code Quality pass - 1
- Video on YouTube - 3

InterSystems FHIR as a Service and IRIS For Health - 5 points

We invite all developers to build new or test existing applications using InterSystems FHIR Server (FHIRaaS). Sign in to the portal, make the deployment, and start using your InterSystems FHIR server on AWS in your application for the programming contest. You can also build a FHIR application using InterSystems IRIS for Health, docker version. You can take the IRIS-FHIR-Template, which prepares the FHIR server during the docker image building. The documentation for FHIR API 4.0.1 can be found here. Learn more in InterSystems IRIS for Health documentation.

IntegratedML usage - 4 points

1. Use InterSystems IntegratedML in your AI/ML solution. Here is the template that uses it: InterSystems IntegratedML template
2. Data import tools: Data Import Wizard, CSVGEN - CSV import util, CSVGEN-UI - the web UI for CSVGEN
3. Documentation: Using IntegratedML
4.
Online courses & videos: Learn IntegratedML in InterSystems IRIS, Preparing Your Data for Machine Learning, Predictive Modeling with the Machine Learning Toolkit, IntegratedML Resource Guide, Getting Started with IntegratedML, Machine Learning with IntegratedML & Data Robot

InterSystems Native API usage - 3 points

You get this bonus if you access the data in your Full-Stack application using any of the InterSystems Native API options: .NET, Java, Python, Node.js. Learn more here.

Interoperability Productions with BPL or DTL - 3 points

One of the key features of IRIS Interoperability Productions is the business process, which can be described with BPL (Business Process Language). Learn more about Business Processes in the documentation. A Business Rule is a no-code/low-code approach to managing the processing logic of an interoperability production. In InterSystems IRIS you can create a business rule either visually or via its ObjectScript representation. You can collect the Business Process/Business Rule bonus if you create and use a business process or business rule in your interoperability production. Business Rule Example. Learn more about Business Rules in the documentation.

Production EXtension (PEX) Usage - 4 points

PEX is a Python, Java or .NET extension of Interoperability productions. You get this bonus if you use PEX with Python, Java or .NET in your interoperability production. PEX Demo. Learn more about PEX in the documentation. InterSystems IRIS has a Python PEX module that provides the option to develop InterSystems Interoperability productions from Python. Use it and collect 3 extra points for your application. It's also OK to use the alternative python.pex wheel introduced by Guillaume Rongier.

Embedded Python - 4 points

Use Embedded Python in your application and collect 4 extra points. You'll need at least InterSystems IRIS 2021.2 for it.
Adaptive Analytics (AtScale) Cubes usage - 4 points

InterSystems Adaptive Analytics provides the option to create and use AtScale cubes for analytics solutions. You can use the AtScale server we set up for the contest (URL and credentials can be collected in the Discord channel) to use cubes, or create a new one and connect it to your IRIS server via JDBC. The visualization layer for your analytics solution with AtScale can be crafted with Tableau, PowerBI, Excel, or Logi. Documentation, AtScale documentation, Training

Tableau, PowerBI, Logi usage - 3 points

Collect 3 points for a visualization made with Tableau, PowerBI, or Logi - 3 points each. The visualization can be made against a direct IRIS BI server or via the connection with AtScale. Logi is available as part of the InterSystems Reports solution - you can download the composer on InterSystems WRC. A temporary license can be collected in the Discord channel. Documentation, Training

InterSystems IRIS BI - 3 points

InterSystems IRIS Business Intelligence is a feature of IRIS which gives you the option to create BI cubes and pivots against persistent data in IRIS and then deliver this information to users via interactive dashboards. Learn more. The basic iris-analytics-template contains examples of an IRIS BI cube, pivot, and dashboard. Here is a set of examples of IRIS BI solutions: Samples BI, Covid19 analytics, Analyze This, Game of Throne Analytics, Pivot Subscriptions, Error Globals Analytics, Creating InterSystems IRIS BI Solutions Using Docker & VSCode (video), The Freedom of Visualization Choice: InterSystems BI (video), InterSystems BI (DeepSee) Overview (online course), InterSystems BI (DeepSee) Analyzer Basics (online course)

Docker container usage - 2 points

The application gets a 'Docker container' bonus if it uses InterSystems IRIS running in a docker container. Here is the simplest template to start from.
ZPM Package deployment - 2 points

You can collect this bonus if you build and publish a ZPM (InterSystems Package Manager) package for your Full-Stack application so it can be deployed with the `zpm "install your-multi-model-solution"` command on IRIS with the ZPM client installed. ZPM client. Documentation.

Online Demo of your project - 2 points

Collect 2 more bonus points if you provision your project to the cloud as an online demo. You can do it on your own, or you can use this template - here is an example. Here is the video on how to use it.

Unit Testing - 2 points

Applications that have Unit Testing for the InterSystems IRIS code will collect the bonus. Learn more about ObjectScript Unit Testing in the documentation and on the Developer Community.

Article on Developer Community - 2 points

Post an article on the Developer Community that describes the features of your project and collect 2 points for the article.

The second article on Developer Community - 1 point

You can collect one more bonus point for a second article or a translation regarding the application. The 3rd and further articles will not bring more points, but the attention will all be yours.

Code quality pass with zero bugs - 1 point

Include the code quality GitHub action for static code analysis and make it show 0 bugs for ObjectScript.

Video on YouTube - 3 points

Make a YouTube video that demonstrates your product in action and collect 3 bonus points for each.

The list of bonuses is subject to change. Stay tuned!
Question
Muhammad Waseem · May 16, 2022

InterSystems FHIR Server (FHIRaaS) API Error

Hi All, Since yesterday I have been getting the below error while calling the InterSystems FHIR Server (FHIRaaS) API.

Looking forward. Thanks
Announcement
Anastasia Dyubaylo · Sep 10, 2022

[Video] Overview of Basic Components for InterSystems Integration Solutions

Hey Community, In this demonstration you will see the building blocks of an integration in InterSystems IRIS for Health and HealthShare and see how messages are received, processed, and sent—including messages in the HL7 format: ⏯ Overview of Basic Components for InterSystems Integration Solutions Don't miss the latest videos for InterSystems developers on our DC YouTube!