Announcement
Anastasia Dyubaylo · Mar 3, 2023

[Video] The OWASP Top 10 & InterSystems IRIS Application Development

Hi Community, Watch this video to explore common security pitfalls within the industry and how to avoid them when building applications on InterSystems IRIS: ⏯ The OWASP Top 10 & InterSystems IRIS Application Development @ Global Summit 2022 Presenters: 🗣 @Timothy.Leavitt, Application Services Development Manager 🗣 @Pravin.Barton, Developer, Application Services 🗣 @Wangyi.Huang, Technical Specialist, Application Services Subscribe to our YouTube channel InterSystems Developers to stay up to date!
Announcement
Anastasia Dyubaylo · Feb 19, 2023

[Video] Understanding your InterSystems Login Account & Where to Use It

Hey Developers, Enjoy watching the new video on InterSystems Developers YouTube: ⏯ Understanding your InterSystems Login Account & Where to Use It @ Global Summit 2022. Learn about your InterSystems Login Account and how to use it to get access to InterSystems services like the Developer Community, Evaluation Service, Open Exchange, Online Learning, WRC, and others. This also covers the new features for controlling your personal communication preferences. Presenters: 🗣 @Timothy.Leavitt, AppServices Development Manager, InterSystems 🗣 @Pravin.Barton, Internal Application Developer, InterSystems Hope you like it and stay tuned! 👍

Correction on this post - I was originally supposed to present but unfortunately was unable to attend Global Summit due to testing positive for COVID the day before :( A call out to @Timothy.Leavitt, who stepped in, presented in my place, and did a great job. Watch the video!

I got way too much air time last Summit. Thanks for noticing, guys! Fixed ;)
Article
Eduard Lebedyuk · Feb 10, 2023

Encrypted JDBC connection between Tableau Desktop and InterSystems IRIS

In this article, we will establish an encrypted JDBC connection between Tableau Desktop and an InterSystems IRIS database using a JDBC driver. While the [documentation on configuring TLS with Java clients](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GTLS_javacli) covers everything you need to establish an encrypted JDBC connection, configuring it with Tableau can be a little tricky, so I decided to write it down.

# Securing SuperServer

Before we start with client connections, you need to configure the SuperServer, which by default runs on port `1972` and is responsible for xDBC traffic, to accept encrypted connections. This topic is described in the [documentation](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GTLS_superserver), but to summarise it, you need to do three things:

1. Generate a CA and a server cert/key. I use [easyRSA](https://github.com/OpenVPN/easy-rsa):

```bash
mkdir /tls
cd /easyrsa3
./easyrsa init-pki
# Prep a vars file https://www.sbarjatiya.com/notes_wiki/index.php/Easy-rsa#Initialize_pki_infrastructure
# cp /tls/vars /opt/install/easy-rsa/easyrsa3/pki/vars
./easyrsa build-ca
./easyrsa gen-req IRIS nopass
./easyrsa sign-req server IRIS
cp pki/issued/* /tls/
cp pki/private/* /tls/
cp pki/ca.crt /tls/
sudo chown irisusr:irissys /tls/*
sudo chmod 440 /tls/*
```

Optionally, generate a client cert/key for mutual verification. I recommend doing that after establishing an initial encrypted connection.

2. Create a `%SuperServer` SSL configuration, which uses the server cert/key from (1):

```objectscript
set p("CertificateFile")="/tls/IRIS.crt"
set p("Ciphersuites")="TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256"
set p("Description")="Autogenerated SuperServer Configuration"
set p("Enabled")=1
set p("PrivateKeyFile")="/tls/IRIS.key"
set p("PrivateKeyPassword")=""
set p("PrivateKeyType")=2
set p("TLSMaxVersion")=32
set p("TLSMinVersion")=16 // Set TLSMinVersion to 32 to stick with TLSv1.3
set p("Type")=1
set sc = ##class(Security.SSLConfigs).Create("%SuperServer", .p)
kill p
```

3. Enable (or require) SuperServer SSL/TLS support:

```objectscript
set p("SSLSuperServer")=1
set sc = ##class(Security.System).Modify("SYSTEM", .p)
kill p
```

Where: 0 - disabled, 1 - enabled, 2 - required. Before you require SSL/TLS connections, remember to enable SSL/TLS connections to the SuperServer for the [WebGateway](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GTLS_WEBGATEWAY) and [Studio](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GTLS_studio). I recommend first enabling SSL/TLS connections, then verifying that all clients (xDBC, WebGateway, Studio, NativeAPI, etc.) use encrypted SSL/TLS connections, and only requiring SSL/TLS connections after that.
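To sanity-check the server side before moving on to Tableau, you can read the settings back. A minimal sketch along these lines (assuming the `%SuperServer` configuration above was created; run it in the `%SYS` namespace, where the Security classes live):

```objectscript
// Read back the %SuperServer SSL configuration and the system-wide flag
if ##class(Security.SSLConfigs).Exists("%SuperServer") {
    do ##class(Security.SSLConfigs).Get("%SuperServer", .cfg)
    write "%SuperServer SSL config enabled: ", $get(cfg("Enabled")), !
}
do ##class(Security.System).Get("SYSTEM", .sys)
// SSLSuperServer: 0 - disabled, 1 - enabled, 2 - required
write "SSLSuperServer: ", $get(sys("SSLSuperServer")), !
```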
Now we are ready to establish an encrypted JDBC connection from Tableau Desktop.

# Configuring an encrypted JDBC connection from Tableau Desktop

1. [Download the JDBC driver](https://intersystems-community.github.io/iris-driver-distribution/).

2. Place the JDBC driver into the correct folder:
- Windows: `C:\Program Files\Tableau\Drivers`
- Mac: `~/Library/Tableau/Drivers`
- Linux: `/opt/tableau/tableau_driver/jdbc`

3. Create `truststore.jks` from the SuperServer CA certificate:

```bash
keytool -import -file CA.cer -alias CA -keystore truststore.jks -storepass 123456 -noprompt
```

4. Create an `SSLConfig.properties` file:

```
debug = false
logFile = javatls.log
protocol = TLSv1.3
cipherSuites = TLS_AES_256_GCM_SHA384
trustStoreType = JKS
trustStore = truststore.jks
trustStorePassword = 123456
trustStoreRecoveryPassword = 123456
```

5. Place `SSLConfig.properties` and `truststore.jks` in the working directory of the Java process. Note that Tableau spawns several processes; you need the working directory of the Java process (which is not the root process or one of the QT processes). Here's how to find it on Windows with [ProcessExplorer](https://learn.microsoft.com/en-us/sysinternals/downloads/process-explorer): ![image](https://user-images.githubusercontent.com/5127457/218109251-a77bfc11-5a5c-42a6-a7e7-c6d153434cb1.png) Go to the java process properties and get the current directory: ![image](https://user-images.githubusercontent.com/5127457/218109489-a8ff2bc4-3354-4988-8e89-fd98b2aa10a5.png) On Windows, for me it's `C:\Users\elebedyu\AppData\Local\Temp\TableauTemp`, or in general `%LOCALAPPDATA%\Temp\TableauTemp`. On Mac and Linux, you can figure it out using `lsof -d cwd`. Please write in the comments if you determined the directory for Mac/Linux.

6. Create a `.properties` file. This file contains JDBC customizations. Name it `IRIS.properties` and place it:
- On Windows: `C:\Users\<username>\Documents\My Tableau Repository`
- On Mac and Linux: `~/Documents/My Tableau Repository`

If you are using a Tableau version before 2020.3, you should use Java 8 for JDBC connections. Java 8 properties files require ISO-8859-1 encoding. As long as your properties file contains only ASCII characters, you can also save it in UTF-8; however, ensure that you don't save it with a BOM (byte order mark). Starting with Tableau version 2020.3, you can use UTF-8 encoded properties files, which are standard with Java 9+. In this file, specify your JDBC connection properties:

```
host=<host>
port=<port>
user=<user>
password=<password>
connection\ security\ level=10
```

Note that Tableau uses `JDBCDriverManager`; all properties are [listed here](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=BJAVA_refapi#BJAVA_refapi_connparms). As per the `.properties` [file spec](https://en.wikipedia.org/wiki/.properties), you must escape whitespace in keys with a `\`.

7. Open Tableau and create a new JDBC connection, specifying the properties file. It should connect. ![image](https://user-images.githubusercontent.com/5127457/218100160-df31cb02-da86-40bd-8082-946d39546f3b.png)
# Debugging connections

First, try establishing an unencrypted connection. If something does not work, go to `My Tableau Repository\Logs` and open `jprotocolserver.log`. In there, search for:

```
Connecting to jdbc:IRIS://host:port/DATABASE
Connection properties {password=*******, connection security level=*******, port=*******, host=*******, user=_SYSTEM}
Connected using driver {com.intersystems.jdbc.IRISDriver} from isolatedDriver.
```

If the logged connection properties do not contain the properties you expect, Tableau is not seeing your `IRIS.properties` file.

# Verifying that the connection is encrypted

1. Close everything except Tableau that might communicate with IRIS.
2. Open WireShark and start capturing on the correct network interface. Set the Capture Filter to: `host <IRIS server address>`. ![image](https://user-images.githubusercontent.com/5127457/218099263-ee159182-103d-4ec2-9622-fed1c61b00ba.png)
3. In Tableau, try to connect to IRIS, change the schema, or perform any other action requiring server communication.
4. If everything is properly configured, you should see packets with protocol `TLSv1.3`, and following a TCP stream should leave you looking at ciphertext. ![image](https://user-images.githubusercontent.com/5127457/218099637-534f5251-9df2-411a-9f9c-24f6287d0080.png)

# Conclusion

Use the best security practices for xDBC connections.

# Links

- [SuperServer TLS](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GTLS_superserver)
- [WebGateway TLS](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GTLS_WEBGATEWAY)
- [Studio TLS](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GTLS_studio)
- [Java TLS](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GTLS_javacli)
- [Java Connection Properties](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=BJAVA_refapi#BJAVA_refapi_connparms)
- [easyRSA](https://github.com/OpenVPN/easy-rsa)

This helped me a lot, thanks!! I just want to note that in the section "Securing SuperServer" there is a typo where it says: `set sc = ##class(Security.SSLConfig).Create("%SuperServer", .p)` should say: `set sc = ##class(Security.SSLConfigs).Create("%SuperServer", .p)` because it threw me a "Class Does Not Exist" error. Regards, Mauro

Thanks! Fixed. Are you on Mac by any chance?

No, Windows as the client and Linux as the server
Announcement
Laurel James (GJS) · Feb 15, 2023

[Webinar] Deltanji demo: source control tailored for InterSystems IRIS

How source control integrates with your system is critical to ensuring it works seamlessly behind the scenes without interruption. Deltanji source control understands the internal workings of InterSystems IRIS and provides a solution that can seamlessly handle its unique needs, with client integrations for VS Code, Studio, Management Portal, and Management Portal Productions. You can find out more about the benefits of using source control tailored for InterSystems IRIS at this webinar. This demo will show how Deltanji goes beyond the traditional CI/CD pipeline, automating the project lifecycle from development through to deployment, making it the perfect source control companion for organizations with continually evolving systems. 🗓 Thursday, February 23rd ⏰ 4 pm GMT | 5 pm CET | 11 am ET Sign up here > http://bit.ly/40JOaxo
Announcement
Anastasia Dyubaylo · Mar 24, 2023

Extra Bonuses for Tech Article Contest: InterSystems IRIS Tutorials

Hey Community! Here are the bonuses for the participants' articles taking part in the Tech Article Contest: InterSystems IRIS Tutorials (bonus types: Topic, Video, Discussion, Translation, New member; each "+" marks one earned bonus):

| No | Article | Bonuses | Total points |
|----|---------|---------|--------------|
| 1 | Quick sample database tutorial | + + | 4 |
| 2 | Tutorial - Working with %Query #1 | + + + | 9 |
| 3 | Tutorial - Working with %Query #2 | + + | 8 |
| 4 | Tutorial - Working with %Query #3 | + + | 8 |
| 5 | Tutorial - Streams in Pieces | + + | 8 |
| 6 | SQLAlchemy - the easiest way to use Python and SQL with IRIS's databases | + + + | 9 |
| 7 | Creating an ODBC connection - Step to Step | + + + | 9 |
| 8 | Tutorial - Develop IRIS using SSH | + + + | 9 |
| 9 | InterSystems Embedded Python in glance | + | 5 |
| 10 | Query as %Query or Query based on ObjectScript | + + + + | 10 |
| 11 | Setting up VS Code to work with InterSystems technologies | + + | 4 |
| 12 | Tutorial: Improving code quality with the visual debug tool's color-coded logs | + | 3 |
| 13 | Kinds of properties in IRIS | | 0 |
| 14 | Backup and rebuilding procedure for the IRIS server | + + | 4 |
| 15 | Stored Procedures the Swiss army knife of SQL | + + | 4 |
| 16 | Tutorial how to analyze requests and responses received and processed in webgateway pods | | 0 |
| 17 | InterSystems's Embedded Python with Pandas | + + | 8 |
| 18 | Tutorial for Middle/Senior Level Developer: General Query Solution | + + + | 9 |
| 19 | Tutorial - Creating a HL7 TCP Operation for Granular Error Handling | | 0 |
| 20 | Tutorial from Real Practice in China Hosipital Infomatics Construction: How to autobackup your code/ auto excute code when you are not allowed to use Git? | + + | 4 |
| 21 | SQL IRIS Editor and IRIS JAVA CONNECTION | + + | 8 |
| 22 | Perceived gaps to GPT assisted COS development automation | + + | 4 |
| 23 | Set up an IRIS docker image on a Raspberry Pi 4 | + | 3 |
| 24 | Using JSON in IRIS | + | 5 |

Bonuses are subject to change. Please claim your bonuses here in the comments below!

We've updated the bonuses! This time our expert decided that 3 articles will collect our "Discussion Bonus" for the most useful discussion in the post. P.S. Only one day left to enter the competition! And collect all our bonuses. Good luck to all participants!
Announcement
Bob Kuszewski · Apr 14, 2023

IKO (InterSystems Kubernetes Operator) 3.5 Release Announcement

InterSystems Kubernetes Operator (IKO) 3.5 is now Generally Available. IKO 3.5 adds significant new functionality along with numerous bug fixes. Highlights include:

- Simplified setup of TLS across the Web Gateway, ECP, Mirroring, Super Server, and IAM
- The ability to run container sidecars along with compute or data nodes - perfect for scaling web gateways with your compute nodes
- Changes to the CPF configmap and IRIS key secret are automatically processed by the IRIS instances when using IKO 3.5 with IRIS 2023.1 and up
- The initContainer is now configurable with both the UID/GID and image
- Support for topologySpreadConstraints to let you more easily control scheduling of pods
- Compatibility Version to support a wider breadth of IRIS instances
- Autoscale of compute nodes (experimental)
- IKO is now available for ARM

Follow the Installation Guide for guidance on how to download, install, and get started with IKO. The complete IKO 3.5 documentation gives you more information about IKO and using it with InterSystems IRIS and InterSystems IRIS for Health.

IKO can be downloaded from the WRC download page (search for Kubernetes). The container is available from the InterSystems Container Registry.

IKO simplifies working with InterSystems IRIS or InterSystems IRIS for Health in Kubernetes by providing an easy-to-use irisCluster resource definition. See the documentation for a full list of features, including easy sharding, mirroring, and configuration of ECP.
Announcement
Anastasia Dyubaylo · Aug 25, 2022

Check out InterSystems Ideas - Our Official Feedback Portal

Hey Community! We've long had an idea on the back burner: improving the process of collecting, analyzing, and responding to product enhancement requests from our members. We knew we needed a good user experience and even better internal processes to make sure the best ideas were gathered, heard, and responded to. And finally, this thought has come to fruition! So in case you missed it, let me introduce to you the official InterSystems feedback portal: 💡 >> InterSystems Ideas << 💡 InterSystems Ideas is a new and improved way for you to submit product enhancement requests and ideas related to our services (Documentation, Dev Community, Global Masters, etc.), see what others have submitted, vote on your favorites, and get feedback from InterSystems. We are starting active development and promotion of both the Ideas portal and your ideas, so that you have a public way to get feedback from our product managers and members of the Community. ✅ The ideas with the most votes will be sent to the Product Management Department for review. Share your ideas with the community, and contribute by voting and commenting on other ideas - the more votes, the more influence! See you on the InterSystems Ideas portal!
Announcement
Anastasia Dyubaylo · Nov 2, 2022

[Video] InterSystems IRIS FHIR SQL Builder: Sneak Peek

Hi Community, In this video, you will learn about exciting new ways to perform analytics using data in your HL7® FHIR® repository: ⏯ InterSystems IRIS FHIR SQL Builder: Sneak Peek @ Global Summit 2022 🗣 Presenter: @Patrick.Jamieson3621, Product Manager, InterSystems Subscribe to InterSystems Developers YouTube to get updates about new videos!
Question
prashanth ponugoti · Oct 6, 2022

How to create a desktop icon for an InterSystems terminal command

Hi Friends, I have created an ObjectScript class method to anonymize live HL7 messages with some info masking. To anonymize files, I need to place the live messages in the D:\Input folder and execute the command below in the InterSystems Terminal:

do ##class(prashanth.tool.HL7Annonymiser).processFilesInDir("D:\Input\")

The anonymized files are then generated in the D:\Output\ folder. Everything is working fine. Right now, when I need to anonymize some files: 1) I need to open Terminal (IRIS) 2) connect as a user 3) change to my namespace 4) run the class method command. I am looking for a solution where I place files in the input folder and then just click an icon on the desktop. Could you please guide me on how to create a Windows icon for the above 4 steps? (I probably need to create a .bat file.) I used to create desktop icons to run specific Java programs. Is it possible to execute InterSystems terminal commands from the Windows CMD? Thanks in advance, Prashanth

Refer to this documentation: https://docs.intersystems.com/iris20201/csp/docbook/DocBook.UI.Page.cls?KEY=GTER_batch#GTER_example_command_Line

Thanks, Deepak, it works for me
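In case it helps others landing here: the page linked above documents running Terminal in batch mode from a script, and the shortcut side stays trivial if steps 2-4 are wrapped in a single entry point. A minimal sketch (the `Runner` class name and the `USER` namespace are hypothetical; adjust to your setup):

```objectscript
/// Hypothetical wrapper class so a desktop shortcut / .bat file only needs one call
Class prashanth.tool.Runner
{

/// Switches to the target namespace, runs the anonymizer, and ends the session
ClassMethod Run()
{
    new $namespace
    set $namespace = "USER"  // assumption: replace with your actual namespace
    do ##class(prashanth.tool.HL7Annonymiser).processFilesInDir("D:\Input\")
    halt  // exit so the Terminal window closes when launched from an icon
}

}
```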
Announcement
Laurel James (GJS) · Oct 10, 2022

Managing InterSystems Interoperability Productions using Deltanji Source Control

Introducing a new component driver for Deltanji source control, which enables highly granular management of InterSystems Interoperability Productions with tight integration into the Management Portal.

InterSystems Interoperability Productions are defined in a single monolithic class definition. A production can contain many hundreds or thousands of configuration items. This presents a problem if multiple developers are working on different Business Processes within the same production simultaneously. Simultaneous development is almost inevitable for large productions containing many configuration items.

Deltanji source control now addresses this problem by increasing the granularity with which items can be managed. Instead of versioning a single class definition containing all the configuration items, this new component driver allows individual configuration items to be managed separately. Each configuration item has its own versioning and revision history and can be checked out, checked in, and deployed independently of any other items in the same production. Because each configuration item within a production class is managed by Deltanji as a first-class component in its own right, Deltanji provides all the source control, versioning, and workflow capabilities that it provides for any other component.

We'll be showing a demo of this feature during our partner presentation at the InterSystems UKI Summit next week on Day 2 (19th Oct) @ 1.15pm. If you can't make it, you can also see it in action at our User Group Session on November 3rd at 3pm (GMT). Register your attendance on Eventbrite here - https://bit.ly/3yqzfvS

To find out more about Deltanji, visit our website georgejames.com or drop us an email at info@georgejames.com
Announcement
Evgeny Shvarov · Aug 29, 2022

Technology Bonuses for InterSystems "Sustainability" InterOperability Contest 2022

Hi Developers! Here are the technology bonuses for the InterSystems "Sustainability" Interoperability Contest 2022 that will give you extra points in the voting:

- Sustainability Topic
- Sustainability Dataset
- Business Process BPL or Business Rule DTL Usage
- Custom Interoperability Adapter
- Production EXtension (PEX) Python, Java, or .NET usage
- Embedded Python usage
- Docker container usage
- ZPM Package Deployment
- Online Demo
- Code Quality pass
- Article on Developer Community
- The second article on Developer Community
- Video on YouTube

See the details below.

Sustainability Solution - 5 points

Collect the bonus of 5 points if your solution is related to solving the global challenge of Sustainability. Check the contest announcement for more details on the problem.

Sustainability Dataset - 3 points each

Get 3 bonus points for each sustainability dataset (for the first 3 datasets) submitted to Open Exchange and used in a solution. The dataset should be published as a ZPM package; check the Datasets Contest for dataset publishing examples.

Business Process BPL or Business Rules Usage - 3 points

One of the key features of IRIS Interoperability Productions is business processes, which can be described in BPL (Business Process Language). Learn more about Business Processes in the documentation. Business Rules are a no-code/low-code approach to managing the processing logic of an interoperability production. In InterSystems IRIS you can create a business rule either visually or via its ObjectScript representation. You can collect the Business Process/Business Rule bonus if you create and use a business process or a business rule in your interoperability production. Business Rule Example. Learn more about Business Rules in the documentation.

Custom Interoperability Adapter Usage - 2 points

An InterSystems Interoperability production can contain inbound or outbound adapters, which are used by business services and operations of the production to communicate with external systems. You can use out-of-the-box adapters (like File or Email) or develop your own. You get the bonus if you develop your own custom inbound or outbound adapter and use it in your production. Example of an adapter. Learn more on adapters.

Production EXtension (PEX) Usage - 3 points

PEX is a Python, Java, or .NET extension of Interoperability productions - see the documentation. You get this bonus if you use PEX with Java or .NET in your interoperability production. PEX Demo. Learn more on PEX in the documentation.

Embedded Python - 3 points

Use Embedded Python in your application and collect 3 extra points. You'll need at least InterSystems IRIS 2021.2 for it.

Docker container usage - 2 points

The application gets the 'Docker container' bonus if it uses InterSystems IRIS running in a docker container. Here is the simplest template to start from.

ZPM Package deployment - 2 points

You can collect the bonus if you build and publish a ZPM (ObjectScript Package Manager) package for your application so it can be deployed with the zpm "install your-multi-model-solution" command on any IRIS with the ZPM client installed. ZPM client. Documentation.

Online Demo of your project - 2 points

Collect 2 more bonus points if you provision your project to the cloud as an online demo. You can do it on your own, or you can use this template - here is an Example. Here is the video on how to use it.

Code quality pass with zero bugs - 1 point

Include the code quality GitHub action for static code control and make it show 0 bugs for ObjectScript.
Article on Developer Community - 2 points

Post an article on Developer Community that describes the features of your project and collect 2 points for each article. Translations to different languages work too.

The second article on Developer Community - 1 point

You can collect one more bonus point for the second article or a translation regarding the application. The 3rd and subsequent ones will not bring more points, but the attention will all be yours.

Video on YouTube - 3 points

Make a YouTube video that demonstrates your product in action and collect 3 bonus points for each. Example.

The list of bonuses is subject to change. Stay tuned! Good luck with the competition!

The Sustainability Dataset bonus has been introduced - 3 points per dataset, up to 3 datasets per project.
Announcement
Olga Zavrazhnova · Sep 19, 2022

InterSystems at Global DevSlam in Dubai, Oct 10-13

Hi Community! Are you in Dubai on October 10-13? Join us at the Global DevSlam conference for developers, with 15,000 participants expected in person! 📍 Let's meet here: Hall 9, Stand № H9-E30 🌟 We will host the event "The hands-on workshop on InterSystems IRIS Data Platform". Speaker: @Guillaume.Rongier7183, Sales Engineer at InterSystems. October 10, 2pm - 3:30pm. Register here ❕ We have free promo passes for our Community Members, partners, and customers. Please drop me a line if you would like to attend! I will be there! Looking forward to meeting everyone! me too! ;) I'll be there! Thanks all, now I'm feeling pressured ;)
Article
Dmitry Maslennikov · Sep 15, 2022

n8n workflow platform with InterSystems Cloud SQL

At the latest Global Summit 2022, InterSystems introduced Cloud SQL. So, you may have a lightweight InterSystems IRIS with access to SQL only. Well, what if you still need some Interoperability features in the cloud as well? There are various solutions on the market nowadays which offer a bunch of integration adapters out of the box and can be extended with support from the community. Some time ago, I implemented an adapter for the Node-RED project, which can be deployed manually anywhere you want. Now I would like to introduce a new integration with my recent discovery, n8n.io

n8n.io is a workflow automation platform that supports over 200 different integrations out of the box and from the community, now including InterSystems IRIS.

Let's install it. n8n is a NodeJS application and can be installed directly on a host machine with npm:

```bash
$ npm install -g n8n
$ n8n s
```

Or just use the docker image. The InterSystems IRIS node package requires Debian, so we need the Debian version of n8n:

```bash
$ docker run -it --rm \
 --name n8n \
 -p 5678:5678 \
 -v ~/.n8n:/home/node/.n8n \
 n8nio/n8n:latest-debian
```

Now we can open http://localhost:5678/ in a browser. On the first start it will offer to register the owner of this instance; you may skip it. This is the main window, where you can quite simply add new nodes to a workflow; all nodes are accessible through the Plus buttons. By default, only out-of-the-box integrations are available there; the InterSystems IRIS integration is not in the list yet. Let's install it: Settings -> Community Nodes -> Install, enter n8n-nodes-iris, check the consent checkbox, and press Install again. Now this node is installed and can be added to a workflow.

InterSystems Cloud SQL

For the demo I will use Cloud SQL, but you may use any IRIS instance. n8n-nodes-iris does not support encrypted connections yet, so we need to use an unencrypted connection in this case. I have a Cloud SQL instance up and running; I have the hostname, port, and the password which I set manually. Back in the n8n instance, let's add this server there: Credentials on the left panel, New, find IRIS, and Continue. You may create as many credentials as you need; for IRIS, you need one per server and namespace you want to connect to. There are not many fields here; fill them in and Save.

Let's query

The idea of n8n nodes is quite simple. There are two types of nodes: triggers, which can start the workflow manually or on some event, and regular nodes, which just do some work; they get input from the previous node, represented as JSON in the UI, and may or may not return some other object after the work. If a node returns nothing, the workflow stops after that node. The IRIS node is a regular one: it may accept data from the previous node but can't trigger any actions (though it may get this functionality in the future). Add a new IRIS node and select previously created credentials from the list, or create new ones right from here. The only supported operations at the moment are Execute Query and Insert, and this can be extended in the future. Execute Query allows executing any SQL query; for parameter placement it allows $1, $2 and so on, values for the parameters can be added too, and with help from n8n itself it can easily insert the needed values from the input data. Insert allows inserting data into a selected table. It shows the list of available tables, so there is no need to retype them, as well as the available columns for the selected table.

For the demo, I decided to request new topics from reddit and place them in an IRIS SQL table. To simplify the process, I've created a table Sample.reddit with three fields: subreddit, title, and created.
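For reference, here is roughly what such a table looks like when defined as a persistent class (a sketch: the article doesn't show the actual definition, so the property types and MAXLEN are assumptions):

```objectscript
/// Sketch of the demo storage class; it projects the SQL table Sample.reddit
Class Sample.reddit Extends %Persistent
{

/// Subreddit the post belongs to
Property subreddit As %String;

/// Post title; reddit titles can be long, so allow a generous length
Property title As %String(MAXLEN = 500);

/// Post creation timestamp
Property created As %TimeStamp;

}
```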
I've added an HTTP Request node and configured it to request reddit's new feed in JSON format. I can test the node with the Execute node button and see the real output right away. It's the whole output as one item, so I need to split the posts here with an Items list node: 1 item in the input, 20 items in the output. And finally, it's time to insert the data into the SQL table. When the node executes, it runs the query as many times as there are items in the input and automatically places the data as configured. And let's check what's in our table now from the Cloud SQL UI.

So, as you can see, even with the simple version of IRIS, we still may get some integration, and it is possible to do more with this. Check the video with a demo of this project in action. Do not hesitate to vote for my application in the contest, and of course you may support my work here.

Hi Dmitry, Your video is now on InterSystems Developers YouTube: ⏯ InterSystems IRIS with n8n.io Enjoy!

Great initiative! I love this approach, building plug-ins for third-party software to facilitate integration with IRIS. Have you done it for Node-RED too? Next one, I vote for make.com

Yeah, Node-RED was my first attempt. And so far I have only found n8n; I did not find Zapier and Make earlier, which look a bit better. And doing it for Make will be even more challenging for me, because it requires .NET for plugins, while Node-RED, n8n, and Zapier are with NodeJS
Announcement
Anastasia Dyubaylo · Nov 3, 2022

[Webinar] What’s New in InterSystems IRIS 2022.2

Hello Community, We're happy to announce that InterSystems IRIS, IRIS for Health, HealthShare Health Connect, & InterSystems IRIS Studio 2022.2 are out now! And to discuss all their new and enhanced features, we'd like to invite you to our webinar What’s New in InterSystems IRIS 2022.2. In this webinar, we’ll highlight some of the new capabilities of InterSystems IRIS® and InterSystems IRIS for Health™, version 2022.2, including:

- Columnar storage - beta release (see the sketch below)
- New rule editor
- SQL enhancements
- Support for new and updated platforms
- Cloud storage connectors for Microsoft Azure and Google Cloud

Speakers:
🗣 @Robert.Kuszewski, Product Manager, Developer Experience
🗣 @Benjamin.DeBoe, Product Manager

This time around, we've decided to host these webinars on different days for different timezones, so that everyone is able to choose a time to their liking. Here they are:

📅 Date: Tuesday, November 15, 2022 ⏱ Time: 1:00 PM Australian Eastern Daylight Time (New South Wales) 🔗 Register here

📅 Date: Thursday, November 17, 2022 ⏱ Time: 1:00 PM Eastern Standard Time 🔗 Register here

📅 Date: Tuesday, November 22, 2022 ⏱ Time: 10:00 AM Greenwich Mean Time 🔗 Register here

Don't forget to register via the links above and see you there!
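As a taste of the first item above, columnar storage (beta in 2022.2), here is a minimal sketch of what a columnar-layout class can look like (the class and property names are made up for illustration; consult the 2022.2 documentation and the webinar for the authoritative syntax):

```objectscript
/// Hypothetical example: a fact table stored column-by-column,
/// which favors analytical scans and aggregations over row-based access
Class Demo.Trades Extends %Persistent
{

/// Store the fields of this class in the columnar layout (2022.2 beta feature)
Parameter STORAGEDEFAULT = "columnar";

Property Symbol As %String;

Property Price As %Numeric;

Property Volume As %Integer;

}
```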
Article
Evgeniy Potapov · Nov 2, 2022

How to develop an InterSystems Adaptive Analytics (AtScale) cube

Today we will talk about Adaptive Analytics. This is a system that allows you to receive data from various sources with a relational data structure and create OLAP cubes based on this data. It also provides the ability to filter and aggregate data and has mechanisms to speed up analytical queries. Let's take a look at the path that data takes from input to output in Adaptive Analytics.

We will start by connecting to a data source - our instance of IRIS. In order to create a connection to the source, you need to go to the Settings tab of the top menu and select the Data Warehouses section. Here we click the “Create Data Warehouse” button and pick “InterSystems IRIS” as the source. Next, we need to fill in the Name and External Connection ID fields (use the name of our connection to do that) and the Namespace (corresponding to the desired namespace in IRIS). Since we will talk about the Aggregate Schema and Custom Function Installation Mode fields later, we will leave them at their defaults for now.

When Adaptive Analytics creates our Data Warehouse, we need to establish a connection with IRIS for it. To do this, open the Data Warehouse with the white arrow and click the “Create Connection” button. Here we should fill in the data of our IRIS server (host, port, username, and password) as well as the name of the connection. Please note that the Namespace is filled in automatically from the Data Warehouse and cannot be changed in the connection settings.

After the data has entered our system, it must be processed somewhere. To make that happen, we will create a project. A project processes data from only one connection; however, one connection can be involved in several projects. If you have multiple data sources for a report, you will need to create a project for each of them. All entity names in a project must be unique. The cubes in a project (more on them later) are interconnected not only by links explicitly configured by the user, but also if they use the same table from the data source. To create a project, go to the Projects tab and click the “New Project” button. Now you can create OLAP cubes in the project. To do that, use the “New Cube” button, fill in the name of the cube, and proceed to its development.

Let's dwell on the rest of the project's functionality. Under the name of the project, we can see a menu of tabs, of which the Update, Export, and Snapshots tabs are worth elaborating on. On the Export tab, we can save the project structure as an XML file. In this way, you can migrate projects from one Adaptive Analytics server to another or clone projects to connect to multiple data sources with the same structure. On the Update tab, we can paste text from an XML document and bring the cube to the structure described in that document. On the Snapshots tab, we can do version control of the project, switching between different versions if desired.

Now let's talk about what an Adaptive Analytics cube contains. Upon entering the cube, we are greeted by a description of its contents, which shows us the type and number of entities present in it. To view its structure, press the “Enter model” button. It brings you to the Cube Canvas tab, which contains all the data tables added to the cube, the dimensions, and the relationships between them. In order to get data into the cube, we need to go to the Data Sources tab on the right control panel. The icon of this tab looks like a tablet.
Here we should click on the “hamburger” icon and select Remap Data Source, then select the data source we need by name. Congratulations, the data has arrived in the project and is now available in all its cubes. On this tab, you can see the structure of the IRIS namespace and what the data in the tables looks like.

Now it's time to talk about each entity that makes up the structure of the cube. We will start with individual tables with data from the IRIS namespace, which we can add to our cube using the same Data Sources tab. Drag a table from this tab to the project workspace. Now we can see a table with all the fields that are in the data source. We can enter the query editing window by clicking on the “hamburger” icon in the upper right corner of the table and then going to the “Edit dataset” item. In this window, you can see that the default option is loading the entire table. In this mode, we can add calculated columns to the table; Adaptive Analytics has its own syntax for creating them. Another way to get data into a table is to write an SQL query against the database in Query mode. In this query, we must write a single SELECT statement, where we can use almost any language construct. Query mode gives us a more flexible way to get data from a source into a cube.

Based on columns from data tables, we can create measures. A measure is an aggregation of data in a column, such as the count of records, the sum of numbers in a column, the maximum, minimum, or average value, etc. Measures are created with the help of the Measures tab in the right menu. We should select the table and column whose data will be used to create the measure, as well as the aggregation function applied to that column. Each measure has two names. The first one is displayed in the Adaptive Analytics interface. The second name is generated automatically from the column name and aggregation type and is shown in BI systems. We can change the second name of a measure to our own choice, and it is a good idea to take this opportunity.

Using the same principle, we can also build dimensions with non-aggregated data from one column. Adaptive Analytics has two types of dimensions - normal and degenerate ones. Degenerate dimensions include all records from the columns bound to them, without linking the tables to each other. Normal dimensions are based on one column of one table, which is why they allow us to select only unique values from the column; however, other tables can be linked to this dimension too. When the data for a record has no key in the dimension, it is simply ignored. For example, if the main table does not have a specific date, then data from related tables for this date will be skipped in calculations, since there is no such member in the dimension. From a usability point of view, degenerate dimensions are more convenient than normal ones, because they make it impossible to lose data or establish unintended relationships between cubes in a project. However, from a performance point of view, the use of normal dimensions is preferable.

Dimensions are created on the corresponding tab of the right panel. We should specify the table and the column from which we will get all unique values to fill the dimension. At the same time, we can use one column as the source of keys for the dimension, whereas the data from another one goes into the actual dimension. For example, we can use the user's ID as the key while showing the user's name as the dimension member.
Therefore, users with the same name will be different entities for the measure. Degenerate dimensions are created by dragging a column from a table in the workspace to the Dimensions tab. After that, the corresponding dimension is automatically assembled in the workspace.

All dimensions are organized into a hierarchical structure, even if there is only one of them. The structure has three levels. The first one is the name of the structure itself. The second one is the name of the hierarchy. The third level is the actual dimension in the hierarchy. A structure can have multiple hierarchies.

Using the created measures and dimensions, we can develop calculated measures. These are measures written in a restricted subset of the MDX language. They can do simple transformations with data in an OLAP structure, which is sometimes a practical feature.

Once you have assembled the data structure, you can test it using a simple built-in previewer. To do this, go to the Cube Data Preview tab in the top menu of the workspace and enter measures in Rows and dimensions in Columns, or vice versa. This viewer is similar to the Analyzer in IRIS but with less functionality.

Knowing that our data structure works, we can set up our project to return data. To do this, click the “Publish” button on the main screen of the project. After that, the project immediately becomes available via the generated link. To get this link, we need to go to the published version of any of the cubes. To do that, open the cube in the Published section of the left menu, go to the Connect tab, and copy the link for the JDBC connection from the cube. It will be different for each project but the same for all the cubes in a project.

When you finish editing cubes and want to save the changes, go to the Export tab of the project and download the XML representation of your cube. Then put this file in the “/atscale-server/src/cubes/” folder of the repository (the file name doesn't matter) and delete the existing XML file of the project. If you don't delete the original file, Adaptive Analytics will not publish the updated project with the same name and ID. At the next build, the new version of the project will automatically be passed to Adaptive Analytics and will be ready for use as the default project.

We have figured out the basic functionality of Adaptive Analytics, so now let's talk about optimizing the execution time of analytical queries using UDAF. I will explain what benefits it gives us and what problems might arise along the way. UDAF stands for User-Defined Aggregate Functions. UDAF gives AtScale two main advantages. The first one is the ability to store a query cache (they call it Aggregate Tables), which allows subsequent queries to take already pre-calculated results from the database. The second one is the ability to use additional functions (the actual user-defined aggregate functions) and data processing algorithms that Adaptive Analytics stores in the data source. They are kept in the database in a separate table, and Adaptive Analytics can call them by name in auto-generated queries. When Adaptive Analytics can use these functions, the performance of analytical queries increases dramatically.

The UDAF component must be installed in IRIS. This can be done manually (check the documentation on UDAF at https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=AADAN#AADAN_config) or by installing the UDAF package from IPM (InterSystems Package Manager).
In the Adaptive Analytics Data Warehouse settings, change Custom Function Installation Mode to the Custom Managed value.

The problem that appears when using aggregates is that such tables store information that is outdated at the time of the request. After an aggregate table is built, new values that arrive in the data source are not added to the aggregation tables. For aggregate tables to contain the freshest data possible, the queries behind them must be re-run and the new results written to the tables. Adaptive Analytics has internal logic for updating aggregate tables, but it is much more convenient to control this process yourself. You can configure updates on a per-cube basis in the web interface of Adaptive Analytics and then use the scripts from the DC-analytics repository (https://github.com/teccod/Public-InterSystems-Developer-Community-analytics/tree/main/iris/src/aggregate_tables_update_shedule_scripts) to export schedules and import them into another instance, or use an exported schedule file as a backup. You will also find a script there to set all cubes to the same update schedule if you do not want to configure each one individually.

To set the schedule for updating aggregates in the Adaptive Analytics interface, go to the published cube of the project (the procedure was described earlier). In the cube, go to the Build tab and find the window for managing the aggregation update schedule for this cube via the “Edit schedules” link. An easy-to-use editor will open; use it to set up a schedule for periodically updating the data in the aggregate tables.

Thus, we have considered all the main aspects of working with Adaptive Analytics. Of course, there are quite a lot of features and settings that we have not reviewed in this article. However, I am sure that if you need to use some of the options we haven't examined, it will not be difficult for you to figure things out on your own.