Announcement
Sarah Henkel · Aug 5, 2024

Tell us about your InterSystems IRIS for Health experience!

Hi Community! Calling all InterSystems IRIS for Health users! We’re looking to gather reviews on G2 about your experience with the platform. The first reviewers will receive a $25* gift card, so get your review in now! To write a review, please follow the link. G2 is an independent review site where over a million professionals share opinions on software. The gift card is from G2, so your eligibility to receive it is not affected by the content of your review. *25 USD or equivalent
Article
sween · Mar 4, 2024

InterSystems IRIS® CloudSQL Metrics to Google Cloud Monitoring

If you are a customer of the new InterSystems IRIS® Cloud SQL and InterSystems IRIS® Cloud IntegratedML® cloud offerings and want to access the metrics of your deployments and send them to your own observability platform, here is a quick and dirty way to get it done by sending the metrics to Google Cloud Platform Monitoring (formerly StackDriver). The Cloud portal does contain a representation of some top-level, at-a-glance metrics, powered by a metrics endpoint that is exposed to you, but without some inspection you would not know it was there.

🚩 This approach is most likely taking advantage of a "to be named feature", so with that being said, it is not future-proof and definitely not supported by InterSystems.

So what if you wanted a more comprehensive set exported? This technical article/example shows a technique to scrape and forward metrics to observability; it can be modified to suit your needs to scrape ANY metrics target and send it to ANY observability platform using the OpenTelemetry Collector. The mechanics leading up to the above result can be accomplished in many ways, but here we are standing up a Kubernetes pod that runs a Python script in one container and the OTel collector in another to pull and push the metrics... definitely a choose-your-own-adventure, but for this example and article, Kubernetes is the actor pulling this off with Python.

Steps:
- Prerequisites
- Python
- Container
- Kubernetes
- Google Cloud Monitoring

Prerequisites:
- An active subscription to IRIS® Cloud SQL
- One deployment, running, optionally with IntegratedML
- Secrets to supply to your environment

Environment Variables / Obtain Secrets

I dropped this in as a teaser as it is a bit involved and somewhat off target of the point, but these are the values you will need to generate the secrets.

ENV IRIS_CLOUDSQL_USER 'user'
ENV IRIS_CLOUDSQL_PASS 'pass'
☝ These are your credentials for https://portal.live.isccloud.io

ENV IRIS_CLOUDSQL_USERPOOLID 'userpoolid'
ENV IRIS_CLOUDSQL_CLIENTID 'clientid'
ENV IRIS_CLOUDSQL_API 'api'
☝ These you have to dig out of the development tools for your browser.
`aud` = clientid
`userpoolid` = iss
`api` = request url

ENV IRIS_CLOUDSQL_DEPLOYMENTID 'deploymentid'
☝ This can be derived from the Cloud Service Portal

Python:

Here is the Python hackery to pull the metrics from the Cloud Portal and export them locally as metrics for the OTel collector to scrape:

iris_cloudsql_exporter.py

import time
import os
import requests
import json

from warrant import Cognito
from prometheus_client.core import GaugeMetricFamily, REGISTRY, CounterMetricFamily
from prometheus_client import start_http_server
from prometheus_client.parser import text_string_to_metric_families


class IRISCloudSQLExporter(object):

    def __init__(self):
        self.access_token = self.get_access_token()
        self.portal_api = os.environ['IRIS_CLOUDSQL_API']
        self.portal_deploymentid = os.environ['IRIS_CLOUDSQL_DEPLOYMENTID']

    def collect(self):
        # Requests fodder
        url = self.portal_api
        deploymentid = self.portal_deploymentid
        print(url)
        print(deploymentid)
        headers = {
            'Authorization': self.access_token,  # needs to be refresh_token, eventually
            'Content-Type': 'application/json'
        }
        metrics_response = requests.request("GET", url + '/metrics/' + deploymentid, headers=headers)
        metrics = metrics_response.content.decode("utf-8")
        for iris_metrics in text_string_to_metric_families(metrics):
            for sample in iris_metrics.samples:
                labels_string = "{1}".format(*sample).replace('\'', "\"")
                labels_dict = json.loads(labels_string)
                labels = []
                for d in labels_dict:
                    labels.extend(labels_dict)
                if len(labels) > 0:
                    g = GaugeMetricFamily("{0}".format(*sample), 'Help text', labels=labels)
                    g.add_metric(list(labels_dict.values()), "{2}".format(*sample))
                else:
                    g = GaugeMetricFamily("{0}".format(*sample), 'Help text', labels=labels)
                    g.add_metric([""], "{2}".format(*sample))
                yield g

    def get_access_token(self):
        try:
            user_pool_id = os.environ['IRIS_CLOUDSQL_USERPOOLID']  # isc iss
            username = os.environ['IRIS_CLOUDSQL_USER']
            password = os.environ['IRIS_CLOUDSQL_PASS']
            clientid = os.environ['IRIS_CLOUDSQL_CLIENTID']  # isc aud
            print(user_pool_id)
            print(username)
            print(password)
            print(clientid)
            try:
                u = Cognito(
                    user_pool_id=user_pool_id,
                    client_id=clientid,
                    user_pool_region="us-east-2",  # needed by warrant, should be derived from poolid doh
                    username=username
                )
                u.authenticate(password=password)
            except Exception as p:
                print(p)
        except Exception as e:
            print(e)
        return u.id_token


if __name__ == '__main__':
    start_http_server(8000)
    REGISTRY.register(IRISCloudSQLExporter())
    while True:
        REGISTRY.collect()
        print("Polling IRIS CloudSQL API for metrics data....")  # looped e loop
        time.sleep(120)

Docker:

Dockerfile

FROM python:3.8
ADD src /src
RUN pip install prometheus_client
RUN pip install requests
WORKDIR /src
ENV PYTHONPATH '/src/'
ENV PYTHONUNBUFFERED=1
ENV IRIS_CLOUDSQL_USERPOOLID 'userpoolid'
ENV IRIS_CLOUDSQL_CLIENTID 'clientid'
ENV IRIS_CLOUDSQL_USER 'user'
ENV IRIS_CLOUDSQL_PASS 'pass'
ENV IRIS_CLOUDSQL_API 'api'
ENV IRIS_CLOUDSQL_DEPLOYMENTID 'deploymentid'
RUN pip install -r requirements.txt
CMD ["python", "/src/iris_cloudsql_exporter.py"]

docker build -t iris-cloudsql-exporter .
docker image tag iris-cloudsql-exporter sween/iris-cloudsql-exporter:latest
docker push sween/iris-cloudsql-exporter:latest

Deployment:

k8s; create a namespace:

kubectl create ns iris

k8s; add the secret:

kubectl create secret generic iris-cloudsql -n iris \
  --from-literal=user=$IRIS_CLOUDSQL_USER \
  --from-literal=pass=$IRIS_CLOUDSQL_PASS \
  --from-literal=clientid=$IRIS_CLOUDSQL_CLIENTID \
  --from-literal=api=$IRIS_CLOUDSQL_API \
  --from-literal=deploymentid=$IRIS_CLOUDSQL_DEPLOYMENTID \
  --from-literal=userpoolid=$IRIS_CLOUDSQL_USERPOOLID

otel; create the config:

apiVersion: v1
data:
  config.yaml: |
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: 'IRIS CloudSQL'
              # Override the global default and scrape targets from this job every 5 seconds.
              scrape_interval: 30s
              scrape_timeout: 30s
              static_configs:
                - targets: ['192.168.1.96:5000']
              metrics_path: /
    exporters:
      googlemanagedprometheus:
        project: "pidtoo-fhir"
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [googlemanagedprometheus]
kind: ConfigMap
metadata:
  name: otel-config
  namespace: iris

k8s; load the otel config as a configmap:

kubectl -n iris create configmap otel-config --from-file config.yaml

k8s; deploy a load balancer (definitely optional), MetalLB. I do this to scrape and inspect from outside of the cluster.

cat <<EOF | kubectl apply -n iris -f -
apiVersion: v1
kind: Service
metadata:
  name: iris-cloudsql-exporter-service
spec:
  selector:
    app: iris-cloudsql-exporter
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 8000
EOF

gcp; we need the keys to Google Cloud, and the service account needs to be scoped roles/monitoring.metricWriter:

kubectl -n iris create secret generic gmp-test-sa --from-file=key.json=key.json

k8s; the deployment/pod itself, two containers:

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: iris-cloudsql-exporter
  labels:
    app: iris-cloudsql-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iris-cloudsql-exporter
  template:
    metadata:
      labels:
        app: iris-cloudsql-exporter
    spec:
      containers:
        - name: iris-cloudsql-exporter
          image: sween/iris-cloudsql-exporter:latest
          ports:
            - containerPort: 5000
          env:
            - name: "GOOGLE_APPLICATION_CREDENTIALS"
              value: "/gmp/key.json"
            - name: IRIS_CLOUDSQL_USERPOOLID
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: userpoolid
            - name: IRIS_CLOUDSQL_CLIENTID
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: clientid
            - name: IRIS_CLOUDSQL_USER
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: user
            - name: IRIS_CLOUDSQL_PASS
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: pass
            - name: IRIS_CLOUDSQL_API
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: api
            - name: IRIS_CLOUDSQL_DEPLOYMENTID
              valueFrom:
                secretKeyRef:
                  name: iris-cloudsql
                  key: deploymentid
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:0.92.0
          args:
            - --config
            - /etc/otel/config.yaml
          volumeMounts:
            - mountPath: /etc/otel/
              name: otel-config
            - name: gmp-sa
              mountPath: /gmp
              readOnly: true
          env:
            - name: "GOOGLE_APPLICATION_CREDENTIALS"
              value: "/gmp/key.json"
      volumes:
        - name: gmp-sa
          secret:
            secretName: gmp-test-sa
        - name: otel-config
          configMap:
            name: otel-config

kubectl -n iris apply -f deployment.yaml

Running

Assuming nothing is amiss, let's peruse the namespace and see how we are doing.

✔ 2 config maps, one for GCP, one for otel
✔ 1 load balancer
✔ 1 pod, 2 containers
✔ successful scrapes

Google Cloud Monitoring

Inspect observability to see if the metrics are arriving OK and be awesome in observability!

💡 This article is considered InterSystems Data Platform Best Practice.
Announcement
Olga Zavrazhnova · May 30, 2024

Recalling the Best Moments from the Past InterSystems Global Summits

The first InterSystems Global Summit was held in the summer of 1993. Many warm memories are associated with this event! We hope you had the opportunity to visit one or more Global Summits! Let's share photos of the best moments you had at any InterSystems Global Summit in the comments below. InterSystems Global Summit 2024 will take place on June 9-12, 2024, in National Harbor, Maryland. Learn more about this event and register here. Hi @Olga.Zavrazhnova2637, thank you for taking the initiative. I had the pleasure of attending the last 2 years' InterSystems Global Summits, and it was a fantastic experience! The sessions were insightful, and I had the chance to connect with many industry experts. Here are a few of my best moments from the event. Thanks! See you in Maryland! Can't wait to see everyone again this year!!!🤩
Article
Hiroshi Sato · Jun 6, 2024

How to perform specific actions when starting an InterSystems product

InterSystems FAQ rubric

If you want to run an OS executable file, a command, or a program created within an InterSystems product when the InterSystems product starts, write the processing in the SYSTEM^%ZSTART routine. (The %ZSTART routine is created in the %SYS namespace.) Before you write any code in SYSTEM^%ZSTART, make sure that it works properly under all conditions. If the ^%ZSTART routine is written incorrectly, or if it is written correctly but a command does not return a response or an error occurs during processing, InterSystems products may not be able to start. For more information, please refer to the following document: About writing %ZSTART and %ZSTOP routines [IRIS]. One other thing to note is that you should not put any long-running processes in %ZSTART, as they will hold up the startup process. If you have to, then consider JOBbing them off to a separate process, so they can run in the background and allow the rest of the startup steps to run in parallel.
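For readers who have not written one before, here is a minimal sketch (not taken from the FAQ above) of what a SYSTEM^%ZSTART entry might look like when it follows this advice: trap every error and JOB off anything long-running. The global ^MyApp.StartupLog and the routine WarmUp^MyAppStartup are hypothetical placeholders for your own code.

%ZSTART ; user startup hooks; this routine lives in the %SYS namespace
SYSTEM  ; invoked once while the instance is starting
    Try {
        ; record the start time in a log global (hypothetical name)
        Set ^MyApp.StartupLog($ZDateTime($Horolog, 3)) = "instance started"
        ; JOB off a long-running warm-up task (hypothetical routine) so it
        ; runs in the background and does not delay the rest of startup
        Job WarmUp^MyAppStartup
    } Catch ex {
        ; swallow errors: a failure here must never block instance startup
    }
    Quit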
Announcement
Anastasia Dyubaylo · Sep 17, 2024

[Video] Chatbots are all the RAGe: Generative AI with InterSystems IRIS

Hey Developers, Play the new video on InterSystems Developers YouTube: ⏯ Chatbots are all the RAGe: Generative AI with InterSystems IRIS @ Global Summit 2024 Discover the capabilities of InterSystems IRIS Vector Search, including integrating dense vector storage and search with InterSystems IRIS SQL for advanced semantic search and other GenAI application patterns. This video will illustrate how Vector Search powers Retrieval-Augmented Generation (RAG) for chatbots, presenting practical use cases that transform your data and applications with AI innovations. Presenters:🗣 @tomd, Product Manager, Machine Learning, InterSystems🗣 @Alvin.Ryanputra, Systems Developer, InterSystems Enjoy watching, and get ready for more videos! 👍
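For those who have not tried IRIS Vector Search yet, the sketch below gives a rough idea of the SQL shape involved; it is a hypothetical illustration, not material from the video. It assumes a table Demo.DocChunk with a VECTOR-typed column Embedding and a variable queryVec holding a comma-separated embedding produced by your model; in a RAG flow the retrieved rows would then be handed to the LLM as context.

    // Hypothetical sketch: rank stored chunks by cosine similarity to a query embedding
    Set sql = "SELECT TOP 3 ID, TextChunk FROM Demo.DocChunk "_
              "ORDER BY VECTOR_COSINE(Embedding, TO_VECTOR(?, double)) DESC"
    Set stmt = ##class(%SQL.Statement).%New()
    Set sc = stmt.%Prepare(sql)
    If $System.Status.IsOK(sc) {
        Set rs = stmt.%Execute(queryVec)
        While rs.%Next() { Write rs.%Get("ID"), ": ", rs.%Get("TextChunk"), ! }
    }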
Article
Muhammad Waseem · May 22, 2024

Overview of InterSystems IRIS® SQL usage options - Part 1

Hi Community,

In this series of articles, we will explore the following InterSystems SQL usage options:

- Embedded SQL
- Dynamic SQL
- Class Queries

SQL Overview

InterSystems SQL provides a full set of standard relational features, including the ability to define table schemas, execute queries, and define and execute stored procedures. You can execute InterSystems SQL interactively from the Management Portal or programmatically using a SQL shell interface. Embedded SQL enables you to embed SQL statements in your ObjectScript code, while Dynamic SQL enables you to execute dynamic SQL statements from ObjectScript at runtime.

1. Embedded SQL

Within ObjectScript, InterSystems SQL supports Embedded SQL: the ability to place an SQL statement within the body of a method (or other code). Using Embedded SQL, you can query a single record, or define a cursor and use that to query multiple records. Embedded SQL is compiled. By default, it is compiled the first time it is executed (runtime), not when the routine that contains it is compiled. Embedded SQL is quite powerful when used in conjunction with the object access capability of InterSystems IRIS.

2. Dynamic SQL

Dynamic SQL refers to SQL statements that are prepared and executed at runtime. In Dynamic SQL, preparing and executing an SQL command are separate operations. Dynamic SQL lets you program within InterSystems IRIS in a manner similar to an ODBC or JDBC application (except that you are executing the SQL statement within the same process context as the database engine). Dynamic SQL is invoked from an ObjectScript program. Dynamic SQL queries are prepared at program execution time, not compilation time.

3. Class Queries

A class query is a tool, contained in a class and meant for use with Dynamic SQL, to look up records that meet specified criteria. With class queries, you can create predefined lookups for your application. For example, you can look up records by name, or provide a list of records that meet a particular set of conditions, such as all the flights from Paris to Madrid.

Before moving to the first option, let us create a persistent class Demo.Person, which also extends the %Populate class so we can populate some data.

Class Demo.Person Extends (%Persistent, %Populate)
{

/// Person's name.
Property Name As %String(POPSPEC = "Name()") [ Required ];

/// Person's Social Security number. This is validated using pattern match.
Property SSN As %String(PATTERN = "3N1""-""2N1""-""4N") [ Required ];

/// Person's Date of Birth.
Property DOB As %Date(POPSPEC = "Date()");

/// Person's City
Property CITY As %String;

}

Run the following command to check the table data after compiling the above class:

SELECT ID, CITY, DOB, Name, SSN FROM Demo.Person

Now run the following command to populate 20 records:

do ##class(Demo.Person).Populate(20)

Run the select query again. We have created the table and populated it with some data. In the upcoming article, we will review Embedded SQL. Thanks
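To make the Dynamic SQL option a little more concrete before the follow-up articles, here is a small illustrative sketch, assuming the Demo.Person class above has been compiled and populated, that prepares and executes a parameterized query with %SQL.Statement from inside a class method or routine:

    // Dynamic SQL: prepare and execute at runtime, then iterate the result set
    Set stmt = ##class(%SQL.Statement).%New()
    Set sc = stmt.%Prepare("SELECT Name, SSN, DOB FROM Demo.Person WHERE Name %STARTSWITH ?")
    If $System.Status.IsError(sc) { Do $System.Status.DisplayError(sc) Quit }
    Set rs = stmt.%Execute("A")    // the parameter is bound at execution time
    While rs.%Next() {
        Write rs.%Get("Name"), "  ", rs.%Get("SSN"), "  ", $ZDate(rs.%Get("DOB"), 3), !
    }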
Announcement
Anastasia Dyubaylo · Jun 2

Winners of the InterSystems FHIR and Digital Health Interoperability Contest 2025

Hi Community, It's time to announce the winners of the InterSystems FHIR and Digital Health Interoperability Contest! Thanks to all our amazing participants who submitted 11 applications 🔥 Now it's time to announce the winners! Experts Nomination 🥇 1st place and $5,000 go to the FHIRInsight app by @José.Pereira, @henry, @Henrique 🥈 2nd place and $2,500 go to the iris-fhir-bridge app by @Muhammad.Waseem 🥉 3rd place and $1,000 go to the health-gforms app by @Yuri.Gomes 🏅 4th place and $500 go to the fhir-craft app by @Laura.BlázquezGarcía 🏅 5th place and $300 go to the CCD Data Profiler app by @Landon.Minor 🌟 $100 go to the IRIS Interop DevTools app by @Chi.Nguyen-Rettig 🌟 $100 go to the hc-export-editor app by @Eric.Fortenberry 🌟 $100 go to the iris-medbot-guide app by @shan.yue 🌟 $100 go to the Langchain4jFhir app by @ErickKamii 🌟 $100 go to the ollama-ai-iris app by @Oliver.Wilms Community Nomination 🥇 1st place and $1,000 go to the iris-medbot-guide app by @shan.yue 🥈 2nd place and $600 go to the FHIRInsight app by @José.Pereira, @henry, @Henrique 🥉 3rd place and $300 go to the FhirReportGeneration app by @XININGMA 🏅 4th place and $200 go to the iris-fhir-bridge app by @Muhammad.Waseem 🏅 5th place and $100 go to the fhir-craft app by @Laura.BlázquezGarcía Our sincerest congratulations to all the winners! Join the fun next time ;) Thanks for the experts nomination award! That's amazing!!!! ![wow](https://media0.giphy.com/media/v1.Y2lkPTc5MGI3NjExejlrd3l6c2xvcm5xdDFkY281aGtjMmJwazEzc2lmazRmc24yNGl3aCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/5yLgocczwbnOzByTsoU/giphy.gif) Congratulations to AAAALLLLL!! 👏👏👏👏👏 Congratulations to all the winners! 👏 Kudos to all the winners! 👏 Congratulations to all participants! It's been a very tough competition!
Question
Rick Rowbotham · Feb 20

Need to query the rule data from the tables in the InterSystems database

I need read-only access using a JDBC query to the tables that contain the rules data for a particular interface. I'm having difficulty locating the tables that house the rule data, and I'm wondering if someone could help me with that information and any sample queries if possible. Thanks in advance!

Just to clarify a bit: are you talking about, for example, HL7 routing rules that are used with the standard business process EnsLib.HL7.MsgRouter.RoutingEngine? What information do you want to retrieve for the rules?

Yes, I would like to query the rule structure.

Would you mind specifying what kind of rule? Edit to add: note that rules are defined as XML; there is no table available to directly query a rule.

I would like to query all rules. Rules of any kind.

As Enrico mentioned, the rule logic is stored as XML in a rule class, so you can't query the logic directly via SQL. You can find the names of all rule classes using SQL:

SELECT ID FROM %Dictionary.ClassDefinition where Super='Ens.Rule.Definition'

To then view the rule logic you would need to open a class and view the "RuleDefinition" XData block:

Class ORU.RouterRoutingRule Extends Ens.Rule.Definition
{

Parameter RuleAssistClass = "EnsLib.HL7.MsgRouter.RuleAssist";

XData RuleDefinition [ XMLNamespace = "http://www.intersystems.com/rule" ]
{
<ruleDefinition alias="" context="EnsLib.HL7.MsgRouter.RoutingEngine" production="ADTPKG.FoundationProduction">
  <ruleSet name="" effectiveBegin="" effectiveEnd="">
    <rule name="">
      <when condition="1" comment="">
        <send transform="Demo.ORUTransformation" target="ORU.Out"></send>
        <send transform="Demo.ORUTransformation" target="ORU.Two"></send>
      </when>
    </rule>
  </ruleSet>
</ruleDefinition>
}

}

To get the XML rule definition from SQL, you can write/define a stored procedure that returns the XML rule definition, then parse the XML. Something like:

Class Community.Rule.XDATA
{

ClassMethod GetXML(RuleName As %String) As %String(MAXLEN="") [ SqlProc ]
{
    Set xdataOBJ = ##class(%Dictionary.XDataDefinition).IDKEYOpen(RuleName, "RuleDefinition")
    Quit xdataOBJ.Data.Read($$$MaxLocalLength)
}

}

Then from SQL:

select Community_Rule.XDATA_GetXML('Your.Rule.Name')
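Putting the two answers together, a small illustrative sketch (untested, simply combining the SQL query and the %Dictionary.XDataDefinition call shown above) could loop over every rule class and print its XML definition in one go, for example from a class method:

    // List every rule class and dump its RuleDefinition XData block
    Set stmt = ##class(%SQL.Statement).%New()
    Set sc = stmt.%Prepare("SELECT ID FROM %Dictionary.ClassDefinition WHERE Super = 'Ens.Rule.Definition'")
    If $System.Status.IsOK(sc) {
        Set rs = stmt.%Execute()
        While rs.%Next() {
            Set ruleClass = rs.%Get("ID")
            Set xdata = ##class(%Dictionary.XDataDefinition).IDKEYOpen(ruleClass, "RuleDefinition")
            Continue:'$IsObject(xdata)
            Write "=== ", ruleClass, " ===", !
            Write xdata.Data.Read($$$MaxLocalLength), !!
        }
    }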
Question
Sebastian Thiele · Feb 27

Searching InterSystems FHIR Server for specific Encounter (based on period)

Hi guys, I am looking for a way to search for FHIR Encounter resources from an InterSystems FHIR server where the period.start is before or after a certain time. I can't get my head around which would be the correct way to do this, since the docs and the FHIR spec are not clear to me about which fields can be used for searching with which prefixes. In my local InterSystems FHIR server I have a set of Encounter resources, each with a period.start and (possibly) a period.end. I'd like to retrieve all Encounters with a start datetime prior to a given datetime. I did a little testing with URL parameters:

// retrieve encounters with a start prior to 2024-10-25T15:30:00Z
http://localhost:54107/csp/healthshare/ukerhirservernew/fhir/r4/Encounter?&date=lt2024-10-25T15:30:00Z
http://localhost:54107/csp/healthshare/ukerhirservernew/fhir/r4/Encounter?period.start=lt2024-10-25T15:30:00Z

But none of the calls returned the set of encounters I expected. I'm not sure if I get the whole idea of searching for periods wrong, or if it has something to do with the way period (datatype) information is stored and indexed within the InterSystems FHIR server. Any idea or hint would be highly appreciated.

Best regards,
Sebastian

Hello @Sebastian.Thiele, as per the documentation, date is a fully supported search parameter, so this should work with the lt prefix on dates. I have 2 Encounter resources in my FHIR server and have tested with different dates against the period, e.g. &date=lt2024-10-27T15:29:00Z, and it works for me.

Hi Ashok, thank you for the response. This URL is the exact same syntax that I used above (possibly my links were contracted by the UI and are therefore not fully visible). Anyway, in my case there is no filtering applied at all and simply all encounters are returned regardless of the date. Could you post the Encounter resources you used for testing, so I can check whether it is a FHIR server version-specific issue? Best regards, Sebastian

Hi @Sebastian.Thiele, here is the Encounter resource. I'm using IRIS for Windows (x86-64) 2024.1.1.
url : http://localhost:52773/csp/healthshare/learning/fhir/r4/Encounter?&date=lt2024-10-27T15:29:00Z

Spoiler {"resourceType":"Encounter","id":"example-01","meta":{"lastUpdated":"2024-10-27T15:30:00Z","profile":["https://nrces.in/ndhm/fhir/r4/StructureDefinition/Encounter"]},"text":{"status":"generated","div":"<div xmlns=\"http://www.w3.org/1999/xhtml\"> Admitted to Cardiac Unit,UVW Hospital between June 28 and July 9 2020</div>"},"identifier":[{"system":"https://ndhm.in","value":"S100"}],"status":"finished","class":{"system":"http://terminology.hl7.org/CodeSystem/v3-ActCode","code":"IMP","display":"inpatient encounter"},"subject":{"reference":"Patient/example-01"},"period":{"start":"2024-10-29T15:30:00Z","end":"2024-10-29T19:30:00Z"},"hospitalization":{"dischargeDisposition":{"coding":[{"system":"http://terminology.hl7.org/CodeSystem/discharge-disposition","code":"home","display":"Home"}],"text":"Discharged to Home Care"}}}

Hi Ashok, it seems there is no Encounter resource in your post. Did you miss that? Best regards, Sebastian

Can you click the Spoiler to view the message?

Hi Ashok, sorry for the late response. That got me a little further. The approach of ordering by date seems to work only as long as all of the resources have a period.start and a period.end provided. In my scenario the most recent encounter doesn't have a period.end set, since it has not yet ended. The period.start of one encounter is the period.end of its predecessor. Can you confirm that behaviour of the FHIR server if no period.end is set?
If so, what would be your approach? Best regards, Sebastian

I think what you are looking for is... (I have not tried this on the InterSystems FHIR server, as I use the Microsoft Azure FHIR server)

Search where the start date is after a time:
/Encounter?date=sa2022-08-15T09:00:00

Search where the start date is before a time:
/Encounter?date=lt2022-08-15T09:00:00

For me the times I search by are UTC, so that might be catching you out?
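For completeness, the same search can also be issued from ObjectScript. The sketch below is untested and ignores authentication; the host, port, and path are placeholders modelled on the URLs earlier in the thread, and it simply asks the FHIR R4 endpoint for Encounters whose date is before a given instant using the lt prefix:

    // Query the FHIR R4 endpoint for Encounters dated before a given instant
    Set req = ##class(%Net.HttpRequest).%New()
    Set req.Server = "localhost"
    Set req.Port = 52773                               // adjust to your instance
    Do req.SetParam("date", "lt2024-10-25T15:30:00Z")  // lt = "before" prefix
    Set sc = req.Get("/csp/healthshare/learning/fhir/r4/Encounter")
    If $System.Status.IsOK(sc) {
        Write req.HttpResponse.Data.Read()             // searchset Bundle (JSON)
    }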
Announcement
AYUSH Shetty · May 18

Do you have any openings for an InterSystems developer job?

I am writing to express my interest in the "IRIS Ensemble Integration" role. I have 2 years of experience as an Ensemble/IRIS developer, working with Ensemble and IRIS for integration, server management, and application development. I am looking for more opportunities to work with IRIS, Caché, and ObjectScript. Changed to an announcement and added the Job Wanted tag. Also take a look at the Job Opportunity tag offerings. Thank you @Evgeny.Shvarov, much appreciated.
Announcement
Dmitry Maslennikov · Oct 28, 2022

InterSystems Package Manager ZPM 0.5.0 Release

A new release of ZPM has been published: 0.5.0.

New in this release:
- Added support for Python's requirements.txt file
- Using tokens for publishing packages
- Fixed various issues

Python's requirements.txt

Now, if your project uses Embedded Python and requires some Python dependencies, you can add a requirements.txt file to the project, as usual for any Python project; the file has to be in the root of the project, next to module.xml. With the load or install command, ZPM will install the dependencies from that file using pip.

USER>zpm "install python-faker"
[USER|python-faker] Reload START (/usr/irissys/mgr/.modules/USER/python-faker/0.0.2/)
[USER|python-faker] requirements.txt START
[USER|python-faker] requirements.txt SUCCESS
[USER|python-faker] Reload SUCCESS
[python-faker] Module object refreshed.
[USER|python-faker] Validate START
[USER|python-faker] Validate SUCCESS
[USER|python-faker] Compile START
[USER|python-faker] Compile SUCCESS
[USER|python-faker] Activate START
[USER|python-faker] Configure START
[USER|python-faker] Configure SUCCESS
[USER|python-faker] Activate SUCCESS

Great feature, @Dmitry.Maslennikov! Thank you!

[USER|python-faker] Reload START (/usr/irissys/mgr/.modules/USER/python-faker/0.0.2/)
[USER|python-faker] requirements.txt START
[USER|python-faker] requirements.txt SUCCESS

Is it possible with the -v flag to see what packages were installed?

Yeah, sure, -v will show the actual output from pip.
Discussion
Ben Spead · Dec 9, 2022

Can ChatGPT be helpful to a developer using InterSystems technology?

After seeing several articles raving about how ground-breaking the recent release of ChatGPT is, I thought I would try asking it to help with a Caché newbie question: How do you find the version of InterSystems Caché? To be honest, I was quite surprised at what the chat bot told me: Not going to lie - I was impressed! I tried several other searches with regard to the InterSystems IRIS version number and was told to use $zv. I did a Google search for part of the answer it gave me and came up with zero results - this is synthesized information and not just copied and pasted from InterSystems docs (I couldn't find the string in our docs either). What do you want to try asking ChatGPT about technological know-how?

12/11/22 UPDATE: As is clear from the screenshot, I was playing with this on my phone and didn't properly vet the answer on an actual Caché instance. I fell prey to the observation made in the article linked in the comments w.r.t. ChatGPT answers being banned from StackOverflow: "But Open AI also notes that ChatGPT sometimes writes "plausible-sounding but incorrect or nonsensical answers." " So my respect for its technical answers has diminished a bit, but overall I am still rather impressed with the system as a whole and think it is a pretty big leap forward technologically. Thanks to those who chimed in!!

To be honest, I was also impressed, but not by ChatGPT - rather by the suggested solution! I have no idea who at ISC wrote this recommendation, but the five clicks on my IRIS-2021.1 end up in NOTHING. On the right-side panel I see 'System Information', but this can't be clicked; below is 'View System Dashboard', where you see everything except version info. So dear ChatGPT, if someone asks you again, the (better) answer is:
- log in to the Management Portal
- click on 'About' (here you see the version and much, much more)
This works for all IRIS versions (until now) and for all Caché versions which have a Management Portal. For the older (over 20 years old) Caché versions, right-click on the cube and select 'Config Manager' (on Windows, of course; in those times I didn't work on other platforms, hence no advice).

For business scenarios it will be very useful; for developers it is better to use the docs, learning, Discord and community tools.

Not a very convincing example ?????

You can't access the class name alone:

write $system.Version // is just a class name, something like
write ##class(SYSTEM.Version) // (hence both lines give you a syntax error)

You have to specify a property or a method. Because $System.Version is an abstract class, you can specify a method only:

write $system.Version.GetVersion()

Hope this clarifies things

"Hope this clarifies things" Indeed: ChatGPT has not qualified for me. I'll stay with the standard docs and learning.

Seems to be kind of a controversial topic: Stack Overflow temporarily bans answers from OpenAI's ChatGPT chatbot

yup - I saw that this morning as well

"I have no idea who at ISC wrote this recommendation" - this is the whole point ... this wasn't written by anyone at ISC, but is inferred / synthesized / arrived at by the ChatGPT algorithm. I did several Google searches on strings from the result I was given and couldn't find this anywhere. So ChatGPT isn't copying content, it is generating it ... albeit incorrectly in this case :)

But the way that this AI understands and creates text is impressive, no doubt. I think this is something we'll learn how to deal with in our daily tasks.
As the zdnet article says, the Stack Overflow ban is only **temporary**, so it may be a matter of time until we get a hand from AI in our development tasks, with services like GitHub Copilot. So thank you for bringing this topic to discussion!

Oh, this is a misunderstanding. I thought the screenshot was coming from a (not shown) link, and anticipated that the link points to some ISC site. Anyway, ChatGPT and other chatbots (nowadays, they often pop up on various sites, mainly on those of big companies) try to mimic a human and often end up only with a reference to FAQs or with a wrong (inappropriate) answer. They are all based upon AI (some say: AI = artificial intelligence, others say: AI = absent intelligence). My bottom line is, AI is for some areas already "usable" and for others, it still "will require some more time".
Announcement
Anastasia Dyubaylo · Jan 24, 2023

Webinar in Spanish: "Validating FHIR profiles with InterSystems IRIS for Health"

Hi Community, We're pleased to invite you to the upcoming webinar in Spanish called "Validating FHIR profiles with InterSystems IRIS for Health". Date & time: February 2, 3:00 PM CET Speaker: @Ariel.Arias, Sales Engineer, InterSystems Chile The webinar is aimed at developers and entrepreneurs. During the webinar, we will build a FHIR server and repository. We will also add a local profile with its extensions, to validate resources upon that guide. We will do it by using InterSystems IRIS, the IRIS validator (Java), and SUSHI. With all of this, we will have all we need to validate profiles before sending them to a central repository and test the FHIR applications by consuming those resources stored on the InterSystems IRIS for Health's FHIR Repository. ➡️ Register today and enjoy! >> Is the video of the webinar available? calling @Esther.Sanchez ;) Hi @Evgeny.Shvarov! The recording of the webinar is on the Spanish DC YouTube: https://www.youtube.com/watch?v=tCWoOfNcaQ4&t=270s
Announcement
Bob Kuszewski · Apr 14, 2023

IKO (InterSystems Kubernetes Operator) 3.5 Release Announcement

InterSystems Kubernetes Operator (IKO) 3.5 is now Generally Available. IKO 3.5 adds significant new functionality along with numerous bug fixes. Highlights include:

- Simplified setup of TLS across the Web Gateway, ECP, Mirroring, Super Server, and IAM
- The ability to run container sidecars along with compute or data nodes – perfect for scaling web gateways with your compute nodes
- Changes to the CPF configmap and IRIS key secret are automatically processed by the IRIS instances when using IKO 3.5 with IRIS 2023.1 and up
- The initContainer is now configurable with both the UID/GID and image
- IKO supports topologySpreadConstraints to let you more easily control scheduling of pods
- Compatibility Version to support a wider breadth of IRIS instances
- Autoscale of compute nodes (Experimental)
- IKO is now available for ARM

Follow the Installation Guide for guidance on how to download, install, and get started with IKO. The complete IKO 3.5 documentation gives you more information about IKO and using it with InterSystems IRIS and InterSystems IRIS for Health. IKO can be downloaded from the WRC download page (search for Kubernetes). The container is available from the InterSystems Container Registry. IKO simplifies working with InterSystems IRIS or InterSystems IRIS for Health in Kubernetes by providing an easy-to-use irisCluster resource definition. See the documentation for a full list of features, including easy sharding, mirroring, and configuration of ECP.
Article
Evgeniy Potapov · Nov 2, 2022

How to develop an InterSystems Adaptive Analytics (AtScale) cube

Today we will talk about Adaptive Analytics. This is a system that allows you to receive data from various sources with a relational data structure and create OLAP cubes based on this data. This system also provides the ability to filter and aggregate data and has mechanisms to speed up analytical queries. Let's take a look at the path that data takes from input to output in Adaptive Analytics.

We will start by connecting to a data source - our instance of IRIS. In order to create a connection to the source, you need to go to the Settings tab of the top menu and select the Data Warehouses section. Here we click the “Create Data Warehouse” button and pick “InterSystems IRIS” as the source. Next, we will need to fill in the Name and External Connection ID fields (use the name of our connection to do that), and the Namespace (corresponds to the desired Namespace in IRIS). Since we will talk about the Aggregate Schema and Custom Function Installation Mode fields later, we will leave them at their defaults for now. When Adaptive Analytics creates our Data Warehouse, we need to establish a connection with IRIS for it. To do this, open the Data Warehouse with the white arrow and click the “Create Connection” button. Here we should fill in the data of our IRIS server (host, port, username, and password) as well as the name of the connection. Please note that the Namespace is filled in automatically from the Data Warehouse and cannot be changed in the connection settings.

After the data has entered our system, it must be processed somewhere. To make that happen, we will create a project. A project processes data from only one connection. However, one connection can be involved in several projects. If you have multiple data sources for a report, you will need to create a project for each of them. All entity names in a project must be unique. The cubes in the project (more on them later) are interconnected not only by links explicitly configured by the user, but also if they use the same table from the data source. To create a project, go to the Projects tab and click the “New Project” button. Now you can create OLAP cubes in the project. To do that, we will need to use the “New Cube” button, fill in the name of the cube, and proceed to its development.

Let's dwell on the rest of the project's functionality. Under the name of the project, we can see a menu of tabs, of which it is worth elaborating on the Update, Export, and Snapshots tabs. On the Export tab, we can save the project structure as an XML file. In this way, you can migrate projects from one Adaptive Analytics server to another or clone projects to connect to multiple data sources with the same structure. On the Update tab, we can insert text from the XML document and bring the cube to the structure that is described in this document. On the Snapshots tab, we can do version control of the project, switching between different versions if desired.

Now let's talk about what the Adaptive Analytics cube contains. Upon entering the cube, we are greeted by a description of its contents, which shows us the type and number of entities that are present in it. To view its structure, press the “Enter model” button. It will bring you to the Cube Canvas tab, which contains all the data tables added to the cube, dimensions, and relationships between them. In order to get data into the cube, we need to go to the Data Sources tab on the right control panel. The icon of this tab looks like a tablet.
Here we should click on the “hamburger” icon and select Remap Data Source. We select the data source we need by name. Congratulations, the data has arrived in the project and is now available in all its cubes. You can see the namespace of the IRIS structure on this tab and what the data looks like in the tables.

Now it’s time to talk about each entity that makes up the structure of the cube. We will start with separate tables with data from the namespace of IRIS, which we can add to our cube using the same Data Sources tab. Drag the table from this tab to the project workspace. Now we can see a table with all the fields that are in the data source. We can enter the query editing window by clicking on the “hamburger” icon in the upper right corner of the table and then going to the “Edit dataset” item. In this window, you can see that the default option is loading the entire table. In this mode, we can add calculated columns to the table. Adaptive Analytics has its own syntax for creating them. Another way to get data into a table is to write an SQL query against the database in Query mode. In this query, we must write a single SELECT statement, where we can use almost any language construct. Query mode gives us a more flexible way to get data from a source into a cube.

Based on columns from data tables, we can create measures. Measures are an aggregation of data in a column: the count of records, the sum of the numbers in a column, the maximum, minimum, and average values, etc. Measures are created with the help of the Measures tab on the right menu. We should select from which table and which of its columns we will use the data to create the measure, as well as the aggregation function applied to those columns. Each measure has 2 names. The first one is displayed in the Adaptive Analytics interface. The second name is generated automatically from the column name and aggregation type and is what BI systems see. We can change the second name of a measure to one of our own choice, and it is a good idea to take this opportunity.

Using the same principle, we can also build dimensions with non-aggregated data from one column. Adaptive Analytics has two types of dimensions - normal and degenerate ones. Degenerate dimensions include all records from the columns bound to them, while not linking the tables to each other. Normal dimensions are based on one column of one table, which is why they allow us to select only unique values from the column. However, other tables can be linked to this dimension too. When the data for records has no key in the dimension, it is simply ignored. For example, if the main table does not have a specific date, then data from related tables for this date will be skipped in calculations, since there is no such member in the dimension. From a usability point of view, degenerate dimensions are more convenient compared to normal ones. This is because they make it impossible to lose data or establish unintended relationships between cubes in a project. However, from a performance point of view, the use of normal dimensions is preferable.

Dimensions are created in the corresponding tab on the right panel. We should specify the table and its column from which we will get all unique values to fill the dimension. At the same time, we can use one column as the source of keys for the dimension, while the data from another one will fall into the actual dimension. For example, we can use the user's ID as the key while the dimension members show the user's name.
Therefore, users with the same name will be different entities for the measure. Degenerate dimensions are created by dragging a column from a table in the workspace to the Dimensions tab. After that, the corresponding dimension is automatically assembled in the workspace. All dimensions are organized in a hierarchical structure, even if there is only one of them. The structure has three levels. The first one is the name of the structure itself. The second one is the name of the hierarchy. The third level is the actual dimension in the hierarchy. A structure can have multiple hierarchies.

Using the created measures and dimensions, we can develop calculated measures. These are measures made with the help of a trimmed-down MDX language. They can do simple transformations with data in an OLAP structure, which is sometimes a practical feature.

Once you have assembled the data structure, you can test it using a simple built-in previewer. To do this, go to the Cube Data Preview tab on the top menu of the workspace. Enter measures in Rows and dimensions in Columns or vice versa. This viewer is similar to the Analyzer in IRIS but with less functionality.

Knowing that our data structure works, we can set up our project to return data. To do this, click the “Publish” button on the main screen of the project. After that, the project immediately becomes available via the generated link. To get this link, we need to go to the published version of any of the cubes. To do that, open the cube in the Published section on the left menu. Go to the Connect tab and copy the link for the JDBC connection from the cube. It will be different for each project but the same for all the cubes in a project.

When you finish editing cubes and want to save the changes, go to the Export tab of the project and download the XML representation of your cube. Then put this file in the “/atscale-server/src/cubes/” folder of the repository (the file name doesn't matter) and delete the existing XML file of the project. If you don't delete the original file, Adaptive Analytics will not publish the updated project with the same name and ID. At the next build, the new version of the project will be automatically passed to Adaptive Analytics and will be ready for use as the default project.

We have figured out the basic functionality of Adaptive Analytics for now, so let's talk about optimizing the execution time of analytical queries using UDAF. I will explain what benefits it gives us and what problems might arise in this case. UDAF stands for User-Defined Aggregate Functions. UDAF gives AtScale 2 main advantages. The first one is the ability to store a query cache (they call it Aggregate Tables). It allows the next query to take already pre-calculated results from the database, using aggregation of data. The second one is the ability to use additional functions (the actual User-Defined Aggregate Functions) and data processing algorithms that Adaptive Analytics has to store in the data source. They are kept in the database in a separate table, and Adaptive Analytics can call them by name in auto-generated queries. When Adaptive Analytics can use these functions, the performance of analytical queries increases dramatically.

The UDAF component must be installed in IRIS. It can be done manually (check the documentation about UDAF at https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=AADAN#AADAN_config) or by installing the UDAF package from IPM (InterSystems Package Manager).
Then, in the Adaptive Analytics Data Warehouse settings, change Custom Function Installation Mode to the Custom Managed value.

The problem that appears when using aggregates is that such tables store information that is outdated at the time of the request. After the aggregate table is built, new values that come into the data source are not added to the aggregation tables. In order for aggregate tables to contain the freshest data possible, the queries for them must be re-run and the new results written to the tables. Adaptive Analytics has internal logic for updating aggregate tables, but it is much more convenient to control this process yourself. You can configure updates on a per-cube basis in the web interface of Adaptive Analytics and then use scripts from the DC-analytics repository (https://github.com/teccod/Public-InterSystems-Developer-Community-analytics/tree/main/iris/src/aggregate_tables_update_shedule_scripts) to export schedules and import them to another instance, or use the exported schedule file as a backup. You will also find a script to set all cubes to the same update schedule if you do not want to configure each one individually.

To set the schedule for updating aggregates in the Adaptive Analytics interface, we need to get into the published cube of the project (the procedure was described earlier). In the cube, go to the Build tab and find the window for managing the aggregation update schedule for this cube using the “Edit schedules” link. An easy-to-use editor will open up. Use it to set up a schedule for periodically updating the data in the aggregate tables.

Thus, we have considered all the main aspects of working with Adaptive Analytics. Of course, there are quite a lot of features and settings that we have not reviewed in this article. However, I am sure that if you need to use some of the options we haven't examined, it will not be difficult for you to figure things out on your own.