Announcement
Evgeny Shvarov · Jul 6, 2023

Technological Bonuses Results for InterSystems Grand Prix Contest 2023

Hi Developers! Here are the bonus results for the applications in the InterSystems Grand Prix Programming Contest 2023.

Bonus categories and their nominal values (maximum 62 points): LLM AI or LangChain 6, FHIR SQL Builder 5, FHIR 3, IntegratedML 4, Native API 3, Embedded Python 4, Interoperability 3, PEX 2, Adaptive Analytics 3, Tableau/PowerBI/Logi 3, IRIS BI 3, Columnar Index 1, Docker 2, ZPM 2, Online Demo 2, Unit Testing 2, Community Idea Implementation 4, First Article on DC 2, Second Article on DC 1, Code Quality 1, First Time Contribution 3, Video on YouTube 3.

Project scores (individual bonus points earned, then total):
- oex-mapping: 4 + 3 + 2 + 2 + 2 + 2 + 2 + 1 + 1 + 3 = 22
- appmsw-warm-home: 2 + 2 + 2 + 2 + 1 = 9
- RDUH Interface Analyst HL7v2 Browser Extension: 3 + 3 = 6
- irisapitester: 4 + 2 + 2 + 2 + 1 + 1 + 3 = 15
- oex-vscode-snippets-template: 2 + 2 + 4 + 1 = 9
- IRIS FHIR Transcribe Summarize Export: 6 + 3 + 4 + 2 + 2 + 2 + 2 + 1 + 1 + 3 + 3 = 29
- IntegratedMLandDashboardSample: 4 + 3 + 2 + 2 + 1 = 12
- iris-user-manager: 2 + 2 + 1 = 5
- irisChatGPT: 6 + 5 + 4 + 2 + 2 + 2 + 2 + 1 + 1 + 3 = 28
- fhir-chatGPT: 6 + 3 + 4 + 2 + 1 = 16
- iris-fhir-generative-ai: 6 + 3 + 4 + 3 + 2 + 2 + 2 + 2 + 1 + 1 + 3 = 29
- IRIS Data Migration Manager: 0
- password-app-iris-db: 3 + 2 + 2 + 2 + 3 + 3 = 15
- interoperability_GPT: 6 + 4 + 3 + 2 + 1 = 16
- FHIR Editor: 3 + 2 = 5
- Recycler: 3
- ZProfile: 2 + 2 + 2 + 2 + 3 = 11
- DevBox: 6 + 2 + 3 = 11
- FHIR - AI and OpenAPI Chain: 6 + 3 + 2 + 2 + 2 + 2 + 1 + 1 + 3 + 3 = 25
- IntegratedML-IRIS-PlatformEntryPrediction: 4 + 3 + 3 = 10

Please post any new implementations or corrections to be made here in the comments or in Discord.

Hi @Evgeny.Shvarov! I used Java to connect to IRIS in my application and associated an article with it, but I did not see it in the bonus points. Can they be added?

Hi Zhang! We don't have points for using Java. Which bonus do you mean? If you mean Native API, you haven't used it — you used only JDBC in your project, without the Native SDK.

Hi @Evgeny.Shvarov, thanks for publishing the bonuses. Please note that I have added FHIR SQL Builder functionality in the new release of my irisChatGPT application, so please consider it. Thanks!

Hi Muhammad! Your points were added to the table! Thank you!
Hi @Semion.Makarov, I added a BI dashboard for analytics on the app logs of iris-fhir-generative-ai in release 1.0.9, and a second article explaining that analytics. So I'd like to ask for the IRIS BI and Second Article bonuses. PS: Sorry for publishing this so late, but I only had the idea late on Sunday. 😄 Thanks!

Hi Jose! I've applied these bonuses to your app.
Announcement
Evgeny Shvarov · Jun 10, 2023

Technology Bonuses for InterSystems Grand Prix 23 Programming Contest

Hi colleagues! InterSystems Grand Prix 2023 unites all the key features of the InterSystems IRIS Data Platform! We invite you to use the following features and collect additional technology bonuses that will help you win a prize! Here we go!

- LLM AI or LangChain usage: ChatGPT, Bard and others - 6
- InterSystems FHIR SQL Builder - 5
- InterSystems FHIR - 3
- IntegratedML - 4
- Native API - 3
- Embedded Python - 4
- Interoperability - 3
- Production EXtension (PEX) - 2
- Adaptive Analytics (AtScale) Cubes usage - 3
- Tableau, PowerBI, Logi usage - 3
- InterSystems IRIS BI - 3
- Columnar Index usage - 1
- Docker container usage - 2
- ZPM Package deployment - 2
- Online Demo - 2
- Unit Testing - 2
- Implement InterSystems Community Idea - 4
- First Article on Developer Community - 2
- Second Article on DC - 1
- Code Quality pass - 1
- First Time Contribution - 3
- Video on YouTube - 3

LLM AI or LangChain usage: ChatGPT, Bard and others - 6 points
Collect 6 bonus expert points for building a solution that uses LangChain libs or Large Language Models (LLMs) such as ChatGPT, Bard, or other AI engines like PaLM, LLaMA, and more. AutoGPT usage counts too. A few examples can already be found on Open Exchange: iris-openai, chatGPT telegram bot. Here is an article with a langchain usage example.

InterSystems FHIR SQL Builder - 5 points
InterSystems FHIR SQL Builder is a feature of InterSystems IRIS for Health that helps to map FHIR resources to SQL tables and consume them via SQL queries in your application. Learn more in the documentation and the online course. Here is an example on Open Exchange. NB: If you implement InterSystems FHIR SQL Builder, the 3-point bonus for InterSystems FHIR as a Service and IRIS for Health is not included.

InterSystems FHIR as a Service and IRIS for Health - 3 points
We invite all developers to build new or test existing applications using InterSystems FHIR Server (FHIRaaS).
Sign in to the portal, make the deployment, and start using your InterSystems FHIR server on AWS in your application for the programming contest. You can also build an FHIR application using InterSystems IRIS for Health, docker version. You can take the IRIS-FHIR-Template, which prepares the FHIR server during the docker image build. The documentation for FHIR API 4.0.1 can be found here. Learn more in the InterSystems IRIS for Health documentation.

IntegratedML usage - 4 points
1. Use InterSystems IntegratedML in your AI/ML solution. Here is the template that uses it: InterSystems IntegratedML template.
2. Data import tools: Data Import Wizard; CSVGEN - CSV import util; CSVGEN-UI - the web UI for CSVGEN.
3. Documentation: Using IntegratedML.
4. Online courses & videos: Learn IntegratedML in InterSystems IRIS; Preparing Your Data for Machine Learning; Predictive Modeling with the Machine Learning Toolkit; IntegratedML Resource Guide; Getting Started with IntegratedML; Machine Learning with IntegratedML & Data Robot.

InterSystems Native API usage - 3 points
You get this bonus if you access the data in your full-stack application using any of the InterSystems Native API options: .NET, Java, Python, Node.js. Learn more here.

Embedded Python - 4 points
Use Embedded Python in your application and collect 4 extra points. You'll need at least InterSystems IRIS 2021.2 for it. NB: If you also use the Native API for Python, only the Embedded Python bonus counts.

Interoperability Productions with BPL or DTL - 3 points
One of the key features of IRIS Interoperability Productions is the business process, which can be described with BPL (Business Process Language). Learn more about Business Processes in the documentation. A Business Rule is a no-code/low-code approach to managing the processing logic of an interoperability production. In InterSystems IRIS you can create a business rule either visually or via its ObjectScript representation.
You can collect the Business Process/Business Rule bonus if you create and use a business process or business rule in your interoperability production. Business Rule example. Learn more about Business Rules in the documentation.

Production EXtension (PEX) usage - 2 points
PEX is a Python, Java, or .NET extension of Interoperability productions. You get this bonus if you use PEX with Python, Java, or .NET in your interoperability production. PEX Demo. Learn more about PEX in the documentation. InterSystems IRIS has a Python PEX module that provides the option to develop InterSystems Interoperability productions from Python. It's also OK to use the alternative python.pex wheel introduced by @Guillaume.Rongier7183, a PEX add-on module for InterSystems IRIS that gives you the opportunity to develop InterSystems IRIS interoperability solutions in pure Python. Article on using PEX for Hugging Face; example.

Adaptive Analytics (AtScale) Cubes usage - 3 points
InterSystems Adaptive Analytics provides the option to create and use AtScale cubes for analytics solutions. You can use the AtScale server we set up for the contest (URL and credentials can be collected in the Discord channel), or create a new cube and connect it to your IRIS server via JDBC. The visualization layer for your analytics solution with AtScale can be crafted with Tableau, PowerBI, Excel, or Logi. Documentation. AtScale documentation. Training.

Tableau, PowerBI, Logi usage - 3 points
Collect 3 points for a visualization made with Tableau, PowerBI, or Logi - 3 points each. The visualization can be made against a direct IRIS BI server or via a connection with AtScale. Logi is available as part of the InterSystems Reports solution - you can download the composer from InterSystems WRC. A temporary license can be collected in the Discord channel.
Documentation. Training.

InterSystems IRIS BI - 3 points
InterSystems IRIS Business Intelligence is a feature of IRIS that gives you the option to create BI cubes and pivots against persistent data in IRIS and deliver this information to users via interactive dashboards. Learn more. The basic iris-analytics-template contains examples of an IRIS BI cube, pivot, and dashboard. Here is a set of examples of IRIS BI solutions: Samples BI; Covid19 analytics; Analyze This; Game of Thrones Analytics; Pivot Subscriptions; Error Globals Analytics; Creating InterSystems IRIS BI Solutions Using Docker & VSCode (video); The Freedom of Visualization Choice: InterSystems BI (video); InterSystems BI (DeepSee) Overview (online course); InterSystems BI (DeepSee) Analyzer Basics (online course).

Columnar Index usage - 1 point
The Columnar Index feature can significantly improve the performance of analytics queries. Use columnar indexes in your solution's persistent data model and collect 1 extra bonus point. Learn more about Columnar Indexes.

Docker container usage - 2 points
The application gets a 'Docker container' bonus if it uses InterSystems IRIS running in a docker container. Here is the simplest template to start from.

ZPM Package deployment - 2 points
You can collect this bonus if you build and publish a ZPM (InterSystems Package Manager) package for your full-stack application so it can be deployed with the zpm "install your-multi-model-solution" command on any IRIS instance with the ZPM client installed. ZPM client. Documentation.

Online Demo of your project - 2 points
Collect 2 more bonus points if you provision your project to the cloud as an online demo. You can do it on your own, or you can use this template - here is an example, and here is a video on how to use it.

Unit Testing - 2 points
Applications that have unit testing for their InterSystems IRIS code will collect the bonus. Learn more about ObjectScript unit testing in the documentation and on the Developer Community.
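For intuition, the xUnit pattern that the ObjectScript %UnitTest framework follows (subclass a test case, assert on expected values) looks like this in plain Python — the `add` function is a hypothetical stand-in for the IRIS class method you would actually test:

```python
import unittest

# Hypothetical function under test. In ObjectScript, the equivalent would be a
# class method exercised from a %UnitTest.TestCase subclass.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_add(self):
        # Same idea as $$$AssertEquals in the %UnitTest framework.
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(add(-1, 1), 0)
```

In IRIS, the analogous run is `do ##class(%UnitTest.Manager).RunTest(...)`; in Python you would run this file with `python -m unittest`.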
Implement Community Opportunity Idea - 4 points
Implement any idea from the InterSystems Community Ideas portal that has the "Community Opportunity" status. This will give you 4 additional bonus points.

Article on Developer Community - 2 points
Post an article on the Developer Community that describes the features of your project and collect 2 points for the article.

Second article on Developer Community - 1 point
You can collect one more bonus point for a second article or a translation regarding the application. The third and beyond will not bring more points, but the attention will all be yours.

Code quality pass with zero bugs - 1 point
Include the code quality GitHub action for static code analysis and make it show 0 bugs for ObjectScript.

First Time Contribution - 3 points
Collect 3 bonus points if you participate in InterSystems Open Exchange contests for the first time!

Video on YouTube - 3 points
Make a YouTube video that demonstrates your product in action and collect 3 bonus points.

The list of bonuses is subject to change. Stay tuned!

Update: two bonuses have been added — 4 points for a Community Opportunity implementation and 1 point for columnar index usage.

@Evgeny.Shvarov - I submitted my application for the contest and I'm really excited! Could you please let me know how I can claim the bonus points? Thanks in advance.

Hi @Ikram.Shah3431! Tomorrow we'll publish the bonus table for all the applications. If something is not accurate, comment here or in Discord.

May I ask whether this score is an expert score or a community score?
Announcement
Anastasia Dyubaylo · Jan 10, 2023

[Video] Modern(izing) Full Stack Development on InterSystems IRIS

Hi Developers, Enjoy watching the new video on InterSystems Developers YouTube: ⏯ Modern(izing) Full Stack Development on InterSystems IRIS @ Global Summit 2022. Want to move on from CSP/Zen, but not sure how? Leverage InterSystems-backed tools available on the Open Exchange for rapid REST API development for new and existing data models, along with unified packaging via the InterSystems Package Manager. We'll cover Open Exchange packages such as isc-json, isc-rest, and isc-ipm-js, plus one or two demo applications to tie it all together. 🗣 Presenter: @Timothy.Leavitt, Application Services Development Manager, InterSystems. Enjoy watching and stay tuned! 👍

This is very good. A glimpse into the future! Highly recommended if you have time.

Yes... @Timothy.Leavitt did a great job!!!
Announcement
Bob Kuszewski · Aug 11, 2023

IKO (InterSystems Kubernetes Operator) 3.6 Release Announcement

InterSystems Kubernetes Operator (IKO) 3.6 is now Generally Available. IKO 3.6 adds significant new functionality along with numerous bug fixes. Highlights include:

- Easily include Web Gateway sidecars for compute and data nodes
- Kubernetes secret for Web Gateway authentication
- Define databases in the data section
- Define namespaces in the data section
- Ephemeral Web Gateways for cases where everything you need is in the container image
- Upgraded Horizontal Pod Autoscaler support to version 2 of the HPA spec

Follow the Installation Guide for guidance on how to download, install, and get started with IKO. The complete IKO 3.6 documentation gives you more information about IKO and using it with InterSystems IRIS and InterSystems IRIS for Health. IKO can be downloaded from the WRC download page (search for Kubernetes). The container is available from the InterSystems Container Registry. IKO simplifies working with InterSystems IRIS or InterSystems IRIS for Health in Kubernetes by providing an easy-to-use irisCluster resource definition. See the documentation for a full list of features, including easy sharding, mirroring, and configuration of ECP.
Announcement
Benjamin De Boe · Sep 21, 2023

Deprecation of InterSystems IRIS NLP, formerly known as iKnow

InterSystems has decided to stop further development of the InterSystems IRIS Natural Language Processing (formerly known as iKnow) technology and label it as deprecated as of the 2023.3 release of InterSystems IRIS. InterSystems will continue to support existing customers using the technology, but does not recommend starting new development projects outside of the core text exploration use cases it was originally designed for. Other use cases involving natural language are increasingly well served by novel techniques based on Large Language Models, an area InterSystems is also investigating in the context of specific applications. Customers with questions about their current or planned use of InterSystems IRIS NLP are invited to reach out to their account team or get in touch with @Benjamin.DeBoe. The open-source version of the core iKnow engine, packaged as a Python module, can be used independently of InterSystems IRIS and will continue to be available.

Please note that the InterSystems IRIS SQL Search feature, also known as iFind, is only partially affected. Only the Semantic and Analytic index types make use of the iKnow engine and are therefore deprecated. All other functionality and index types are not affected by this decision and continue to be the recommended choice for applications requiring a flexible and high-performance full-text search capability.

I think it is worth mentioning that an alternative is possible: the latest technologies based on vectors may help to replace it, with even more capabilities.

Can you please share some more details about it?

Well, I wrote some notes about vectors in my article about a project I tried to implement. Basically, using neural-network-based algorithms, it's possible to calculate vectors for any text, index them in the database, and search with vector search for any text query. The results in this case will not be texts that exactly match the search query but, using similarity, the ones closest to the query.
And it can be used with almost any language and with most types of texts and files — even pictures or videos.

FYI - we plan a native datatype for vector content and fast similarity functions in 2024.1, with deeper integration planned for the next few releases. Stay tuned...

Does this deprecation include iFind text search? Enrico

If I'm not mistaken, iFind is part of 'Basic' text search, and according to the documentation it is indeed not deprecated.

Please see the bottom paragraph of the post above, and feel free to reach out to me directly if you have any specific questions about your use of the technology.
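The vector-search idea described above can be sketched in a few lines of plain Python. The tiny 3-dimensional "embeddings" below are made up for illustration; a real system would produce vectors with hundreds of dimensions using a neural encoder model:

```python
import math

# Toy hand-made "embeddings" (hypothetical values). A real system would
# compute these with a neural network encoder.
embeddings = {
    "the cat sat on the mat": [0.9, 0.1, 0.0],
    "a kitten rests on a rug": [0.8, 0.2, 0.1],
    "quarterly revenue grew 5%": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec, k=1):
    # Rank stored texts by similarity to the query vector; the top results
    # are the "closest" texts, not necessarily exact matches.
    ranked = sorted(embeddings, key=lambda t: cosine(query_vec, embeddings[t]),
                    reverse=True)
    return ranked[:k]

# A query vector close to the "cat" sentences retrieves them,
# not the finance sentence.
print(nearest([0.85, 0.15, 0.05], k=2))
```

This is exactly the kind of similarity computation that a native vector datatype with fast similarity functions would accelerate inside the database.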
Announcement
Dmitry Maslennikov · Oct 28, 2022

InterSystems Package Manager ZPM 0.5.0 Release

A new release of ZPM has been published: 0.5.0.

New in this release:
- Added support for Python's requirements.txt file
- Tokens for publishing packages
- Fixed various issues

Python's requirements.txt

Now, if your project uses Embedded Python and requires some Python dependencies, you can add a requirements.txt file to the project, as usual for any Python project. The file has to be in the root of the project, next to module.xml. With the load or install command, ZPM will install the dependencies from that file using pip.

USER>zpm "install python-faker"
[USER|python-faker] Reload START (/usr/irissys/mgr/.modules/USER/python-faker/0.0.2/)
[USER|python-faker] requirements.txt START
[USER|python-faker] requirements.txt SUCCESS
[USER|python-faker] Reload SUCCESS
[python-faker] Module object refreshed.
[USER|python-faker] Validate START
[USER|python-faker] Validate SUCCESS
[USER|python-faker] Compile START
[USER|python-faker] Compile SUCCESS
[USER|python-faker] Activate START
[USER|python-faker] Configure START
[USER|python-faker] Configure SUCCESS
[USER|python-faker] Activate SUCCESS

Great feature, @Dmitry.Maslennikov! Thank you!

Is it possible with the -v flag to see what packages were installed?

Yeah, sure, -v will show the actual output from pip.
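For illustration, a minimal requirements.txt as described above — placed in the project root next to module.xml. The package names and version pins here are only an example, not taken from the python-faker project:

```
# requirements.txt — lives in the project root, next to module.xml.
# ZPM runs pip against this file during "zpm load" / "zpm install".
faker==19.3.0
requests>=2.28
```

Standard pip syntax applies, so exact pins (`==`), minimum versions (`>=`), and comments all work as in any Python project.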
Discussion
Ben Spead · Dec 9, 2022

Can ChatGPT be helpful to a developer using InterSystems technology?

After seeing several articles raving about how ground-breaking the recent release of ChatGPT is, I thought I would try asking it to help with a Caché newbie question: How do you find the version of InterSystems Caché? To be honest, I was quite surprised at what the chat bot told me: Not going to lie - I was impressed! I tried several other searches with regard to the InterSystems IRIS version number and was told to use $zv. I did a google search for part of the answer it gave me and came up with zero results - this is synthesized information, not just copied and pasted from the InterSystems docs (I couldn't find the string in our docs either). What do you want to try asking ChatGPT about technological know-how?

12/11/22 UPDATE: As is clear from the screenshot, I was playing with this on my phone and didn't properly vet the answer on an actual Caché instance. I fell prey to the observation made in the article linked in the comments w.r.t. ChatGPT answers being banned from StackOverflow: "But OpenAI also notes that ChatGPT sometimes writes 'plausible-sounding but incorrect or nonsensical answers.'" So my respect for its technical answers has diminished a bit, but overall I am still rather impressed with the system as a whole and think it is a pretty big leap forward technologically. Thanks to those who chimed in!!

To be honest, I was also impressed — not by ChatGPT, but by the suggested solution! I have no idea who at ISC wrote this recommendation, but the five clicks on my IRIS 2021.1 end up in NOTHING. On the right-side panel I see 'System Information', but it can't be clicked; below it is 'View System Dashboard', where you see everything except version info. So dear ChatGPT, if someone asks you again, the (better) answer is: log in to the Management Portal and click 'About' (there you see the version and much, much more). This works for all IRIS versions (so far) and for all Caché versions that have a Management Portal.
For older (20+ years old) Caché versions, right-click on the cube and select 'Config Manager' (on Windows, of course — in those days I didn't work on other platforms, hence no advice there).

For business scenarios it will be very useful; for developers it is better to use the docs, learning, Discord, and community tools.

Not a convincing example? You can't access the class name alone:

write $system.Version            // is just a class name, something like
write ##class(SYSTEM.Version)    // (hence both lines give you a syntax error)

You have to specify a property or a method. Because $system.Version is an abstract class, you can specify a method only:

write $system.Version.GetVersion()

Hope this clarifies things.

"Hope this clarifies things" — indeed: ChatGPT has not qualified for me. I'll stay with the standard docs and learning.

Seems to be kind of a controversial topic: Stack Overflow temporarily bans answers from OpenAI's ChatGPT chatbot.

Yup - I saw that this morning as well. "I have no idea who at ISC wrote this recommendation" - this is the whole point... this wasn't written by anyone at ISC, but was inferred / synthesized / arrived at by the ChatGPT algorithm. I did several google searches on strings from the result I was given and couldn't find them anywhere. So ChatGPT isn't copying content, it is generating it... albeit incorrectly in this case :)

But the way this AI understands and creates text is impressive, no doubt. I think this is something we'll learn how to deal with in our daily tasks. As the ZDNet article says, Stack Overflow removed the answers **temporarily**, so it may be a matter of time until AI gives us a hand in our development tasks, with services like GitHub Copilot. So thank you for bringing this topic to discussion!

Oh, this is a misunderstanding.
I thought the screenshot came from a (not shown) link, and assumed the link pointed to some ISC site. Anyway, ChatGPT and other chatbots (nowadays they often pop up on various sites, mainly those of big companies) try to mimic a human and often end up only with a reference to an FAQ or with a wrong (inappropriate) answer. They are all based on AI (some say AI = artificial intelligence, others say AI = absent intelligence). My bottom line: for some areas AI is already "usable", and for others it still "will require some more time".
Announcement
Anastasia Dyubaylo · Jan 24, 2023

Webinar in Spanish: "Validating FHIR profiles with InterSystems IRIS for Health"

Hi Community, We're pleased to invite you to the upcoming webinar in Spanish called "Validating FHIR profiles with InterSystems IRIS for Health". Date & time: February 2, 3:00 PM CET. Speaker: @Ariel.Arias, Sales Engineer, InterSystems Chile. The webinar is aimed at developers and entrepreneurs. During the webinar, we will build a FHIR server and repository. We will also add a local profile with its extensions, to validate resources against that guide. We will do it using InterSystems IRIS, the IRIS validator (Java), and SUSHI. With all of this, we will have everything we need to validate profiles before sending them to a central repository, and to test FHIR applications by consuming the resources stored in the InterSystems IRIS for Health FHIR repository. ➡️ Register today and enjoy!

Is the video of the webinar available? Calling @Esther.Sanchez ;)

Hi @Evgeny.Shvarov! The recording of the webinar is on the Spanish DC YouTube: https://www.youtube.com/watch?v=tCWoOfNcaQ4&t=270s
Announcement
Bob Kuszewski · Apr 14, 2023

IKO (InterSystems Kubernetes Operator) 3.5 Release Announcement

InterSystems Kubernetes Operator (IKO) 3.5 is now Generally Available. IKO 3.5 adds significant new functionality along with numerous bug fixes. Highlights include:

- Simplified setup of TLS across the Web Gateway, ECP, Mirroring, Super Server, and IAM
- The ability to run container sidecars along with compute or data nodes — perfect for scaling web gateways with your compute nodes
- Changes to the CPF configmap and IRIS key secret are automatically processed by the IRIS instances when using IKO 3.5 with IRIS 2023.1 and up
- The initContainer is now configurable with both the UID/GID and image
- IKO supports topologySpreadConstraints to let you more easily control scheduling of pods
- Compatibility Version to support a wider breadth of IRIS instances
- Autoscaling of compute nodes (experimental)
- IKO is now available for ARM

Follow the Installation Guide for guidance on how to download, install, and get started with IKO. The complete IKO 3.5 documentation gives you more information about IKO and using it with InterSystems IRIS and InterSystems IRIS for Health. IKO can be downloaded from the WRC download page (search for Kubernetes). The container is available from the InterSystems Container Registry. IKO simplifies working with InterSystems IRIS or InterSystems IRIS for Health in Kubernetes by providing an easy-to-use irisCluster resource definition. See the documentation for a full list of features, including easy sharding, mirroring, and configuration of ECP.
Article
Evgeniy Potapov · Nov 2, 2022

How to develop an InterSystems Adaptive Analytics (AtScale) cube

Today we will talk about Adaptive Analytics. It is a system that lets you receive data from various sources with a relational data structure and create OLAP cubes based on this data. It also provides the ability to filter and aggregate data and has mechanisms to speed up analytical queries. Let's take a look at the path data takes from input to output in Adaptive Analytics.

We start by connecting to a data source - our instance of IRIS. To create a connection to the source, go to the Settings tab of the top menu and select the Data Warehouses section. Click the “Create Data Warehouse” button and pick “InterSystems IRIS” as the source. Next, fill in the Name and External Connection ID fields (use the name of your connection) and the Namespace (corresponding to the desired namespace in IRIS). Since we will talk about the Aggregate Schema and Custom Function Installation Mode fields later, leave them at their defaults for now. Once Adaptive Analytics has created the Data Warehouse, we need to establish a connection with IRIS for it. To do this, open the Data Warehouse with the white arrow and click the “Create Connection” button. Fill in the details of your IRIS server (host, port, username, and password) as well as the name of the connection. Note that the Namespace is filled in automatically from the Data Warehouse and cannot be changed in the connection settings.

After the data has entered the system, it must be processed somewhere, so we create a project. A project processes data from only one connection, but one connection can be used in several projects. If you have multiple data sources for a report, you will need to create a project for each of them. All entity names in a project must be unique.
The cubes in a project (more on them later) are interconnected not only by links explicitly configured by the user, but also when they use the same table from the data source. To create a project, go to the Projects tab and click the “New Project” button. Now you can create OLAP cubes in the project: use the “New Cube” button, fill in the name of the cube, and proceed to its development.

Let's dwell on the rest of the project's functionality. Under the project name there is a menu of tabs, of which the Update, Export, and Snapshots tabs are worth elaborating on. On the Export tab, we can save the project structure as an XML file. This way you can migrate projects from one Adaptive Analytics server to another, or clone projects to connect to multiple data sources with the same structure. On the Update tab, we can paste text from an XML document and bring the cube to the structure described in that document. On the Snapshots tab, we can do version control of the project, switching between versions if desired.

Now let's talk about what an Adaptive Analytics cube contains. Upon entering the cube, we are greeted by a description of its contents, which shows the type and number of entities present in it. To view its structure, press the “Enter model” button. It brings you to the Cube Canvas tab, which contains all the data tables added to the cube, the dimensions, and the relationships between them. To get data into the cube, go to the Data Sources tab on the right control panel (its icon looks like a tablet). Click on the “hamburger” icon and select Remap Data Source, then select the data source you need by name. Congratulations - the data has arrived in the project and is now available in all its cubes. On this tab you can see the structure of the IRIS namespace and what the data in the tables looks like.
Now it's time to talk about each entity that makes up the structure of the cube. We start with individual tables of data from the IRIS namespace, which we can add to the cube using the same Data Sources tab. Drag a table from this tab to the project workspace; you will see a table with all the fields that exist in the data source. You can open the query editing window by clicking on the “hamburger” icon in the upper right corner of the table and then choosing the “Edit dataset” item. In this window, the default option is loading the entire table. In this mode we can add calculated columns to the table; Adaptive Analytics has its own syntax for creating them. Another way to get data into a table is to write an SQL query against the database in Query mode. The query must be a single SELECT statement, in which we can use almost any language construct. Query mode gives us a more flexible way to get data from a source into a cube.

Based on columns from data tables, we can create measures. A measure is an aggregation of data in a column, such as a count of records, the sum of the numbers in a column, the maximum, minimum, or average value, etc. Measures are created on the Measures tab of the right menu. Select the table and column whose data will be used to create the measure, as well as the aggregation function applied to that column. Each measure has two names. The first is displayed in the Adaptive Analytics interface. The second is generated automatically from the column name and aggregation type and is the one shown in BI systems. We can change the second name of a measure, and it is a good idea to take this opportunity. Using the same principle, we can also build dimensions with non-aggregated data from a single column. Adaptive Analytics has two types of dimensions - normal and degenerate ones.
Degenerate dimensions include all records from the columns bound to them, without linking the tables to each other. Normal dimensions are based on one column of one table, which is why they contain only the unique values from that column; other tables can be linked to such a dimension too. When a record's data has no matching key in the dimension, it is simply ignored. For example, if the main table does not contain a specific date, then data from related tables for that date will be skipped in calculations, since there is no such member in the dimension. From a usability point of view, degenerate dimensions are more convenient than normal ones, because they make it impossible to lose data or establish unintended relationships between cubes in a project. From a performance point of view, however, normal dimensions are preferable.

Dimensions are created on the corresponding tab of the right panel. Specify the table and the column from which all unique values will be taken to fill the dimension. You can use one column as the source of keys for the dimension while the data from another column becomes the dimension's members. For example, we can use the user's ID as the key and the user's name as the member; users with the same name then remain distinct entities for the measures. Degenerate dimensions are created by dragging a column from a table in the workspace to the Dimensions tab; the corresponding dimension is then automatically assembled in the workspace.

All dimensions are organized in a hierarchical structure, even if there is only one of them. The structure has three levels: the first is the name of the structure itself, the second is the name of the hierarchy, and the third is the actual dimension in the hierarchy. A structure can have multiple hierarchies. Using the created measures and dimensions, we can develop calculated measures.
Calculated measures are written in a restricted subset of the MDX language. They can perform simple transformations on data in an OLAP structure, which is sometimes a practical feature.

Once you have assembled the data structure, you can test it using a simple built-in previewer. To do this, go to the Cube Data Preview tab in the top menu of the workspace and enter measures in Rows and dimensions in Columns, or vice versa. This viewer is similar to the Analyzer in IRIS BI but with less functionality.

Knowing that our data structure works, we can set up the project to return data. To do this, click the “Publish” button on the main screen of the project. After that, the project immediately becomes available via a generated link. To get this link, we need to go to the published version of any of the cubes: open the cube in the Published section of the left menu, go to the Connect tab, and copy the JDBC connection link from the cube. The link is different for each project but the same for all the cubes within a project.

When you finish editing cubes and want to save changes, go to the export tab of the project and download the XML representation of your cube. Then put this file in the “/atscale-server/src/cubes/” folder of the repository (the file name doesn't matter) and delete the existing XML file of the project. If you don't delete the original file, Adaptive Analytics will not publish the updated project with the same name and ID. At the next build, the new version of the project will automatically be passed to Adaptive Analytics and will be ready for use as the default project.

Now that we have figured out the basic functionality of Adaptive Analytics, let's talk about optimizing the execution time of analytical queries using UDAF. I will explain what benefits it gives us and what problems might arise in this case. UDAF stands for User-Defined Aggregate Functions. UDAF gives AtScale two main advantages.
The first one is the ability to store a query cache (they call it Aggregate Tables), which allows subsequent queries to take already pre-calculated aggregation results from the database. The second one is the ability to use additional functions (the actual User-Defined Aggregate Functions) and data processing algorithms, which Adaptive Analytics has to store in the data source. They are kept in a separate table in the database, and Adaptive Analytics can call them by name in auto-generated queries. When Adaptive Analytics can use these functions, the performance of analytical queries increases dramatically.

The UDAF component must be installed in IRIS. This can be done manually (see the UDAF documentation at https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=AADAN#AADAN_config) or by installing the UDAF package from IPM (InterSystems Package Manager). Then, in the Adaptive Analytics Data Warehouse settings, set Custom Function Installation Mode to the Custom Managed value.

The problem that appears when using aggregates is that such tables store information that is out of date at the time of the request. After an aggregate table is built, new values that arrive in the data source are not added to it. For aggregate tables to contain the freshest data possible, the queries behind them must be rerun and the new results written into the tables. Adaptive Analytics has internal logic for updating aggregate tables, but it is much more convenient to control this process yourself. You can configure updates on a per-cube basis in the web interface of Adaptive Analytics and then use scripts from the DC-analytics repository (https://github.com/teccod/Public-InterSystems-Developer-Community-analytics/tree/main/iris/src/aggregate_tables_update_shedule_scripts) to export schedules and import them into another instance, or use the exported schedule file as a backup.
You will also find a script there that sets all cubes to the same update schedule if you do not want to configure each one individually. To set the schedule for updating aggregates in the Adaptive Analytics interface, we need to open the published cube of the project (the procedure was described earlier). In the cube, go to the Build tab and open the window for managing the aggregation update schedule for this cube via the “Edit schedules” link. An easy-to-use editor will open; use it to set up a schedule for periodically updating the data in the aggregate tables.

Thus, we have considered all the main aspects of working with Adaptive Analytics. Of course, there are quite a few features and settings that we have not reviewed in this article. However, I am sure that if you need to use some of the options we haven't examined, it will not be difficult for you to figure things out on your own.
Announcement
Evgeny Shvarov · Feb 9, 2023

Technical Bonuses Results for InterSystems Developer Tools Contest 2023

Hi Developers! Here is the score of technical bonuses for participants' applications in the InterSystems Developer Tools Contest 2023! Project Idea Implementation Python Docker ZPM Online Demo Code Quality First Article on DC Second Article on DC Video on YouTube First Time Contribution Total Bonus Nominal 3 3 2 2 2 1 2 1 3 3 22 gateway-sql 3 2 2 2 1 10 xml-to-udl 2 1 2 3 3 11 iris-persistent-class-audit 2 2 1 2 3 3 13 GlobalStreams-to-SQL 2 2 2 1 2 1 3 13 DX Jetpack for VS Code 2 2 2 1 3 10 JSONfile-to-Global 2 2 2 1 2 1 3 13 apptools-admin 3 2 5 irissqlcli 3 2 2 7 OpenAPI-Suite 3 2 2 2 1 2 1 13 iris-connections 2 2 1 2 1 3 11 Intersystems IRIS platform queue trend monitoring component 2 2 3 7 message_key_query 2 2 3 7 iris-log-viewer 3 2 2 2 1 2 1 3 16 iris-tripleslash 3 2 2 1 2 1 3 14 iris_log_analytics 3 3 iris-deploy-tools 2 2 2 6 blockchain - [ IRIS python ] 3 3 cos-url-shortener 3 2 2 1 2 3 3 16 iris-geo-map 3 2 2 2 1 2 1 3 16 ISC DEV 0 IRIS Data Migration Manager 3 3 Bonuses are subject to change upon the update. Please claim here in the comments below or in the Discord chat. Thanks @Evgeny.Shvarov for the sharing. Please note that iris-geo-map also has a video on YouTube. Thank you @Evgeny.Shvarov ! Could you please consider that there are two articles for OpenAPI-Suite, and the Idea implementation is DPI-I-226. I claim the 2nd article for both GlobalStreams-to-SQL (https://community.intersystems.com/post/global-streams-sql-2) and JSONfile-to-Global (https://community.intersystems.com/post/jsonfile-global-2). THX Yes, you achieved the Video bonus. Thank you for help! I've added your bonuses. Thank you! Thank you! Now these bonuses have been applied. Thanks to you @Semion.Makarov ! Thanks @Semion.Makarov Hello, I tried to add the video link to iris-log-viewer within Open Exchange, but I am not sure why it is not showing. Here is the link: https://youtu.be/VSTVZaC-fp8 Thanks, Oliver Yes, I see the video on the video tab. I've added bonus points for this. Thank you!
Second article related to DX Jetpack is at https://community.intersystems.com/post/gj-codespex-now-supports-exclusions, and it includes a link to a YouTube video. Also, there's a containerized demo of DX Jetpack at https://github.com/gjsjohnmurray/dx-jetpack-demo (link in the README of the extension too), which I hope will count for the Docker bonus. Hi John! I've added these bonuses to your app. Could you please add a link to the container repo in your OpenExchange app description? Hi there, for iris-persistent-class-audit, I uploaded a video to YouTube a week ago and linked it on OEX. Hi Stefan! There is a special field in the app description on OEX regarding the video. If you fill it in, the video appears on the app's page in the Videos section, e.g. like here. Otherwise it's not clear where to look for it. I updated it like that last week. Apparently, I did not send it for approval, which I thought I did. Hi Stefan! I've updated your points. irissqlcli has a Docker version of the tool (even two versions, including one for the web); is it acceptable? And a demo docker-compose-example.yml, which can be used to test it locally with IRIS. @Evgeny.Shvarov I would like to request 1 point for our second article: https://community.intersystems.com/post/elevate-your-unit-tests-next-level Hi Henrique! Your points have been updated. Hi Dmitry! I have checked and updated the bonuses. Hello, please add the `Idea Implementation` bonus to the `gateway-sql` project: https://ideas.intersystems.com/ideas/DPI-I-224 Hello, please add bonuses for the second article (a translation) and the YouTube video to the cos-url-shortener project. I added embedded Python too (but maybe too late) xD Thanks! Thanks Semion. I added that, and I have now set up an online demo and linked to it from OpenExchange. Please add this bonus. Hi Sergey! Your points were updated. Hi Daniel! Sorry, but to receive a bonus for the article, it must be an original piece and not a translation. All other bonuses have been added to your app. Thanks!! Thanks!!
Hi @Semion.Makarov Sorry to bother you, but we added the YouTube video for our app. Here is the link for the announcement: https://community.intersystems.com/post/iris-tripleslash-video-announcement Hi Henrique! I've updated the bonuses on your app. Could you please add the video to the app in OEX? Thanks @Evgeny Shvarov for the sharing. Message-key-query has an online demo, https://ddmer547.github.io/message_key_query/#/iris, and the program is packaged in the form of ZPM. Hi! You have received points for the online demo. However, you are unable to receive points for the ZPM bonus because you need to publish your package to the public registry. Hi @wang.zhe! Message key query has the online demo bonus in place. But for the IPM (ZPM) bonus, you need to publish the app's IPM module to make it available. See the documentation: https://docs.openexchange.intersystems.com/solutions/submit/#package-manager-checkbox Also, please follow the naming convention for ObjectScript classes and IPM modules. Sorry, I didn't see your message. I've already added points for this bonus.
Announcement
Evgeny Shvarov · Apr 19, 2023

Technical Bonuses Results for InterSystems IRIS Cloud SQL Contest

Hi Developers! Here is the score of technical bonuses for participants' applications in the InterSystems IRIS Cloud SQL Contest! Project IntegratedML usage Online Demo First Article on DC Second Article on DC Video on YouTube First Time Contribution Community Idea Implementation IRIS Cloud SQL Survey Total Bonus Nominal 5 2 2 1 3 3 3 2 21 IntegratedML-IRIS-Cloud-Height-prediction 5 2 3 10 audit-consolidator 2 2 3 2 9 Tokenizator 2 3 2 7 Sheep’s Galaxy 5 2 2 3 2 14 superset-iris 2 2 3 2 9 iris-mlm-explainer 5 2 2 9 Customer churn predictor 5 2 2 2 11 AI text detection 5 2 2 3 2 14 Bonuses are subject to change upon the update. Please claim here in the comments below or in the Discord chat. Apache Superset in ideas - https://ideas.intersystems.com/ideas/DPI-I-288 I've added the idea implementation bonus to your app. Thank you for your help! I do not see any bonus for audit-consolidator. I wrote 3 articles. I created an online demo. I am trying to upload my video. I uploaded a YouTube video about audit-consolidator: https://www.youtube.com/watch?v=KYen4hEZR9c Hi @Oliver.Wilms ! I answered you in DM. Of the three articles, one received the bonus.
Announcement
Anastasia Dyubaylo · Nov 3, 2022

[Webinar] What’s New in InterSystems IRIS 2022.2

Hello Community, We're happy to announce that InterSystems IRIS, IRIS for Health, HealthShare Health Connect, and InterSystems IRIS Studio 2022.2 are out now! To discuss all the new and enhanced features, we'd like to invite you to our webinar What’s New in InterSystems IRIS 2022.2. In this webinar, we’ll highlight some of the new capabilities of InterSystems IRIS® and InterSystems IRIS for Health™, version 2022.2, including: Columnar storage - beta release New rule editor SQL enhancements Support for new and updated platforms Cloud storage connectors for Microsoft Azure and Google Cloud Speakers:🗣 @Robert.Kuszewski, Product Manager, Developer Experience🗣 @Benjamin.DeBoe, Product Manager This time around, we've decided to host these webinars on different days for different timezones, so that everyone is able to choose a time to their liking. Here they are: 📅 Date: Tuesday, November 15, 2022⏱ Time: 1:00 PM Australian Eastern Daylight Time (New South Wales)🔗 Register here 📅 Date: Thursday, November 17, 2022⏱ Time: 1:00 PM Eastern Standard Time🔗 Register here 📅 Date: Tuesday, November 22, 2022⏱ Time: 10:00 AM Greenwich Mean Time🔗 Register here Don't forget to register via the links above and see you there!
Announcement
Olga Zavrazhnova · Nov 18, 2022

InterSystems at the European Healthcare Hackathon in Prague Nov 25-27

The InterSystems team is heading to our next hackathon stop: the European Healthcare Hackathon in Prague, Nov 25-27. Registration closes on November 20, so don't hesitate to register. You can participate online or in person! InterSystems will introduce the "Innovate with FHIR" challenge with prizes for the best use of InterSystems FHIR services.
Article
Anastasia Dyubaylo · Mar 16, 2023

How to embed video into your post on InterSystems Developer Community

Hey Community, Here is a short article about how to embed a video into your post. There are two approaches. 1️⃣ With the recent update to the UI/UX of the Developer Community, we implemented a new approach. Now, to insert a video you need to click the YouTube button and fill in the form: either provide a URL to the video, or paste an embed code from step 3 of the next approach. 2️⃣ Another approach is to do it manually; just follow the steps: 1. Open the video you wish to embed on YouTube. 2. Click Share and choose Embed. 3. Copy the contents of the upper right textbox or just click the Copy button in the bottom right corner. 4. In your post on the Community, switch to Source view. 5. Insert the copied content from step 3 exactly where you want it to be. 6. Click the Source button again to return to the WYSIWYG view and continue writing your post. This is it - this is how you embed a YouTube video into your community post. Hope it answers one of your questions on how to write on the Community ;) Leave your thoughts on the subject in the comments section or propose another topic for an article on how to write posts on the Developer Community. Great instruction, Anastasia! Maybe we could add a new(sic!) button to the editor that will expect a YouTube URL and will transform it into the embedded form and insert the fragment into the post? Sounds easier than several operations? yeah, it's already in our development plan ;) stay tuned for the updates! great to know - thank you :) We've updated the tutorial on how to add videos to your posts with the new way to do it, thanks to the new functionality introduced with the release here. Just click the YouTube button and fill in the form.
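For reference, the embed code copied in step 3 of the manual approach is a standard YouTube iframe snippet along these lines (the video ID and the width/height values here are placeholders; YouTube generates the exact markup for you):

```html
<!-- Standard YouTube embed snippet; VIDEO_ID is a placeholder -->
<iframe width="560" height="315"
        src="https://www.youtube.com/embed/VIDEO_ID"
        title="YouTube video player" frameborder="0"
        allowfullscreen></iframe>
```

Pasting this into the Source view of your post (or into the YouTube button's form) is all the first approach automates for you.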