Announcement
Anastasia Dyubaylo · Aug 25, 2022

Check out InterSystems Ideas - Our Official Feedback Portal

Hey Community! We've always had this idea on the back burner about improving the process of collecting, analyzing, and responding to product enhancement requests from our members. We knew we needed a good user experience and even better internal processes to make sure the best ideas were gathered, heard, and responded to. And finally, this thought has come to fruition! So in case you missed it, let me introduce to you the Official InterSystems feedback portal: 💡 >> InterSystems Ideas << 💡 InterSystems Ideas is a new and improved method for you to submit product enhancement requests and ideas related to our services (Documentation, Dev Community, Global Masters, etc.), see what others have submitted, vote on your favorite ones, and get feedback from InterSystems. We are starting active development and promotion of both the Ideas portal and your ideas. We wish for you to have a public way to get feedback from our product managers and members of the Community. ✅ The ideas with the most votes will be sent to the Product Management Department for review. Share your ideas with the community, and contribute by voting and commenting on other ideas – the more votes the more influence! See you on the InterSystems Ideas portal!
Announcement
Anastasia Dyubaylo · Nov 2, 2022

[Video] InterSystems IRIS FHIR SQL Builder: Sneak Peek

Hi Community, In this video, you will learn about exciting new ways to perform analytics using data in your HL7® FHIR® repository: ⏯ InterSystems IRIS FHIR SQL Builder: Sneak Peek @ Global Summit 2022 🗣 Presenter: @Patrick.Jamieson3621, Product Manager, InterSystems Subscribe to InterSystems Developers YouTube to get updates about new videos!
Question
prashanth ponugoti · Oct 6, 2022

How to create a desktop icon for an InterSystems terminal command

Hi Friends, I have created an ObjectScript class method to anonymize live HL7 messages with some info masking. To anonymize files, I need to place live messages in the D:\Input folder and execute the command below in the InterSystems terminal: do ##class(prashanth.tool.HL7Annonymiser).processFilesInDir("D:\Input\") The anonymized files are generated in the D:\Output\ folder. Everything is working fine. Currently, when I need to anonymize some files, 1) I need to open the terminal (iris), 2) connect to the user, 3) change to my namespace, and 4) run the class method. I am looking for a solution where I place the files in the input folder and then just click an icon on the desktop. Could you please guide me on how to create a Windows icon for the above 4 steps? (I probably need to create some .bat file.) I used to create desktop icons to run specific Java programs. Is it possible to execute InterSystems terminal commands from the Windows command prompt? Thanks in advance, Prashanth Refer to this documentation: https://docs.intersystems.com/iris20201/csp/docbook/DocBook.UI.Page.cls?KEY=GTER_batch#GTER_example_command_Line thanks, Deepak, it works for me
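Building on the batch-mode documentation linked above, a desktop shortcut can simply point at a small .bat file. Below is a minimal sketch; the instance name ("IRIS"), the namespace ("MYNS"), and the exact terminal invocation are assumptions, so adjust them to match your installation and the GTER_batch examples:

@echo off
REM anonymize.bat - minimal sketch; instance name, namespace and temp script path are assumptions
set SCR=%TEMP%\anonymize.scr
REM build a tiny terminal script: switch namespace, run the class method, then halt
echo zn "MYNS" > "%SCR%"
echo do ##class(prashanth.tool.HL7Annonymiser).processFilesInDir("D:\Input\") >> "%SCR%"
echo halt >> "%SCR%"
REM feed the script to a command-line session of the instance
REM (see the GTER_batch documentation above for the exact syntax on your version)
iris session IRIS < "%SCR%"

Create a desktop shortcut that points to this .bat file, and clicking it will run all four steps in one go.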
Announcement
Laurel James (GJS) · Oct 10, 2022

Managing InterSystems Interoperability Productions using Deltanji Source Control

Introducing a new component driver for Deltanji source control, which enables highly granular management of InterSystems Interoperability Productions with tight integration into the management portal InterSystems Interoperability Productions are defined in a single monolithic class definition. A production can contain many hundreds or thousands of configuration items. This presents a problem if multiple developers are working on different Business Processes within the same production simultaneously. Simultaneous development is almost inevitable for large productions containing many configuration items. Deltanji source control now addresses this problem by increasing the granularity with which items can be managed. Instead of versioning a single class definition containing all the configuration items, this new component driver allows individual configuration items to be managed separately. Each configuration item has its own versioning and revision history and can be checked-out, checked-in, and deployed independently of any other items in the same Production. Because each configuration item within a production class is managed by Deltanji as a first-class component in its own right, Deltanji provides all the source control, versioning, and workflow capabilities that it provides for any other component. We'll be showing a demo of this feature during our partner presentation at the InterSystems UKI Summit next week on Day 2 (19th Oct) @ 1.15pm. If you can't make it, you can also see it in action at our User Group Session on November 3rd at 3pm (GMT). Register your attendance on Eventbrite here - https://bit.ly/3yqzfvS To find out more about Deltanji, visit our website georgejames.com or drop us an email info@georgejames.com
Announcement
Evgeny Shvarov · Aug 29, 2022

Technology Bonuses for InterSystems "Sustainability" InterOperability Contest 2022

Hi Developers! Here are the technology bonuses for the InterSystems "Sustainability" Interoperability Contest 2022 that will give you extra points in the voting:

Sustainability Topic
Sustainability Dataset
Business Process BPL or Business Rule DTL Usage
Custom Interoperability Adapter
Production EXtension (PEX) Python, Java, or .NET usage
Embedded Python usage
Docker container usage
ZPM Package Deployment
Online Demo
Code Quality pass
Article on Developer Community
The second article on Developer Community
Video on YouTube

See the details below.

Sustainability Solution - 5 points Collect the bonus of 5 points if your solution is related to solving the global challenge of Sustainability. Check the contest announcement for more details on the problem.

Sustainability Dataset - 3 points each Get 3 bonus points for each sustainability dataset (for the first 3 datasets) submitted to Open Exchange and used in a solution. The dataset should be published as a ZPM package; check the Datasets Contest for dataset publishing examples.

Business Process BPL or Business Rules Usage - 3 points One of the key features of IRIS Interoperability Productions is business processes, which can be described with BPL (Business Process Language). Learn more about Business Processes in the documentation. Business Rules are a no-code/low-code approach to managing the processing logic of an interoperability production. In InterSystems IRIS you can create a business rule either visually or via its ObjectScript representation. You can collect the Business Process/Business Rule bonus if you create and use a business process or business rule in your interoperability production. Business Rule Example. Learn more about Business Rules in the documentation.

Custom Interoperability Adapter Usage - 2 points An InterSystems interoperability production can contain inbound or outbound adapters, which are used by the business services and operations of the production to communicate with external systems. You can use out-of-the-box adapters (like File or Email) or develop your own. You get the bonus if you develop your own custom inbound or outbound adapter and use it in your production. Example of an adapter. Learn more on adapters.

Production EXtension (PEX) Usage - 3 points PEX is a Python, Java, or .NET extension of interoperability productions - see the documentation. You get this bonus if you use PEX with Java or .NET in your interoperability production. PEX Demo. Learn more on PEX in the documentation.

Embedded Python - 3 points Use Embedded Python in your application and collect 3 extra points. You'll need at least InterSystems IRIS 2021.2 for it.

Docker container usage - 2 points The application gets the 'Docker container' bonus if it uses InterSystems IRIS running in a Docker container. Here is the simplest template to start from.

ZPM Package deployment - 2 points You can collect the bonus if you build and publish a ZPM (ObjectScript Package Manager) package for your application so it can be deployed with the zpm "install your-multi-model-solution" command on any IRIS with the ZPM client installed. ZPM client. Documentation.

Online Demo of your project - 2 points Collect the bonus points if you provision your project to the cloud as an online demo. You can do it on your own or you can use this template - here is an Example. Here is the video on how to use it.

Code quality pass with zero bugs - 1 point Include the code quality GitHub action for static code analysis and make it show 0 bugs for ObjectScript.

Article on Developer Community - 2 points Post an article on the Developer Community that describes the features of your project and collect 2 points for each article. Translations into different languages work too.

The second article on Developer Community - 1 point You can collect one more bonus point for a second article or a translation regarding the application. The 3rd and further articles will not bring more points, but the attention will all be yours.

Video on YouTube - 3 points Make a YouTube video that demonstrates your product in action and collect 3 bonus points for each. Example.

The list of bonuses is subject to change. Stay tuned! Good luck with the competition!

Update: the Sustainability Datasets bonus is introduced - 3 points per dataset, up to 3 datasets per project.
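As an illustration of the Docker and ZPM bonuses above, this is roughly how an entry could be spun up and installed; a minimal sketch, where the community image tag and the package name are placeholders, not requirements of the contest:

# run InterSystems IRIS Community Edition in a container (image tag is an assumption)
docker run --name my-contest-app -d -p 1972:1972 -p 52773:52773 intersystemsdc/iris-community:latest
# open a terminal session inside the container...
docker exec -it my-contest-app iris session IRIS
# ...and install your published package with the ZPM client (package name is a placeholder)
USER> zpm "install your-multi-model-solution"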
Announcement
Olga Zavrazhnova · Sep 19, 2022

InterSystems at Global DevSlam in Dubai, Oct 10-13

Hi Community! Are you in Dubai on October 10-13? Join us at the Global DevSlam conference for developers, with 15,000 participants expected in person! 📍 Let's meet here: Hall 9, Stand № H9-E30 🌟 We will host the event: "The hands-on workshop on InterSystems IRIS Data Platform" Speaker: @Guillaume.Rongier7183, Sales Engineer at InterSystems October 10, 2pm - 3:30pm Register here ❕ We have free promo passes for our Community Members, partners, and customers. Please drop me a line if you are willing to attend! I will be there! Looking forward to meeting everyone! me too! ;) I'll be there! Thanks all, now I'm feeling pressured ;)
Article
Dmitry Maslennikov · Sep 15, 2022

n8n workflow platform with InterSystems Cloud SQL

At the latest Global Summit 2022, InterSystems introduced Cloud SQL. So, you may have a lightweight InterSystems IRIS with access to SQL only. Well, what if you still need some interoperability features in the cloud as well? There are various solutions on the market nowadays which offer a bunch of integration adapters out of the box and can be extended with support from the community. Some time ago, I implemented an adapter for the Node-RED project, which can be deployed manually anywhere you want. Now I would like to introduce a new integration with my recent discovery, n8n.io. n8n.io is a workflow automation platform that supports over 200 different integrations out of the box and from the community, now including InterSystems IRIS. Let's install it. n8n is a NodeJS application and can be installed directly on a host machine with npm:

$ npm install -g n8n
$ n8n start

Or just use the Docker image. The InterSystems IRIS node package requires Debian, so we need the Debian version of n8n:

$ docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n:latest-debian

Now we can open http://localhost:5678/ in a browser. On the first start it will offer to register the owner of this instance; you may skip it. This is the main window, where you can quite simply add new nodes to a workflow; all nodes are accessible through the Plus buttons. By default only the out-of-the-box integrations are available there, and the InterSystems IRIS integration is not in the list yet. Let's install it: Settings -> Community Nodes -> Install, enter n8n-nodes-iris, check the consent checkbox, and press Install again. Now this node is installed and can be added to a workflow. InterSystems Cloud SQL For the demo I will use Cloud SQL, but you may use any IRIS instance. n8n-nodes-iris does not support encrypted connections yet, so we need to use an unencrypted connection in this case. I have a Cloud SQL instance up and running; I have the hostname, port, and password which I set manually. Back to the n8n instance to add this server there: Credentials on the left panel, New, find IRIS and Continue. You may create as many Credentials as you need; for IRIS you need one per server and namespace you want to connect to. There are not many fields here; fill them in and Save. Let's query The idea of n8n nodes is quite simple. There are two types of nodes: triggers, which can start the workflow manually or by some event, and regular nodes, which just do some work - they get input from any previous node, represented as JSON in the UI, and may or may not return some other object after the work. If a node returns nothing, the workflow stops after that node. The IRIS node is a regular one, so it may accept data from the previous node but can't trigger any actions (though it may get this functionality in the future). Add a new IRIS node and select previously created credentials from the list, or create new ones right from here. The only supported features are Execute Query and Insert at the moment, and this can be extended in the future. Execute Query allows executing any SQL query; for parameter placeholders you can use $1, $2 and so on, values for the parameters can be added too, and with help from n8n itself it can easily insert the needed values from the input data. Insert allows inserting data into the selected table. It shows the list of available tables, so there is no need to retype them, as well as the available columns for the selected table. For the demo, I decided to request new topics from Reddit and place them in an IRIS SQL table named Sample.reddit with three fields: subreddit, title, and created (see the sketch below).
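The demo table could be created on the IRIS / Cloud SQL side with a statement like this; a minimal sketch, where the column types are assumptions:

CREATE TABLE Sample.reddit (
    subreddit VARCHAR(255),
    title     VARCHAR(1000),
    created   TIMESTAMP
)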
I've added an HTTP Request node and configured it to request Reddit's new feed in JSON format; I can test the node with the Execute node button and see the real output right away. The whole output comes as a single item, so I need to split the posts here with the Items list node: 1 item in input and 20 items in output. And finally it's time to insert the data into the SQL table. When the node executes, it runs the query as many times as there are items in the input and automatically places the data as configured. And let's check what's in our table now from the Cloud SQL UI. So, as you can see, even with the simple version of IRIS we can still get some integration, and it is possible to do more with this. Check the video with a demo of this project in action. Do not hesitate to vote for my application in the contest, and for sure you may support my work here Hi Dmitry, Your video is now on InterSystems Developers YouTube: ⏯ InterSystems IRIS with n8n.io Enjoy! Great initiative! I love this approach, building plug-ins for third-party software to facilitate integration with IRIS. You have done it too for Node-RED? Next one, I vote for make.com Yeah, Node-RED was my first attempt. And now, I only found n8n somehow, but did not find Zapier and Make, which look a bit better. And doing it for Make will be even more challenging for me, because it requires .NET for plugins, while Node-RED, n8n, and Zapier are NodeJS-based.
Announcement
Anastasia Dyubaylo · Nov 3, 2022

[Webinar] What’s New in InterSystems IRIS 2022.2

Hello Community, We're happy to announce that InterSystems IRIS, IRIS for Health, HealthShare Health Connect, and InterSystems IRIS Studio 2022.2 are out now! And to discuss all their new and enhanced features, we'd like to invite you to our webinar What’s New in InterSystems IRIS 2022.2. In this webinar, we’ll highlight some of the new capabilities of InterSystems IRIS® and InterSystems IRIS for Health™, version 2022.2, including: Columnar storage - beta release New rule editor SQL enhancements Support for new and updated platforms Cloud storage connectors for Microsoft Azure and Google Cloud Speakers: 🗣 @Robert.Kuszewski, Product Manager, Developer Experience 🗣 @Benjamin.DeBoe, Product Manager This time around, we've decided to host these webinars on different days for different timezones, so that everyone is able to choose a time to his or her liking. Here they are: 📅 Date: Tuesday, November 15, 2022 ⏱ Time: 1:00 PM Australian Eastern Daylight Time (New South Wales) 🔗 Register here 📅 Date: Thursday, November 17, 2022 ⏱ Time: 1:00 PM Eastern Standard Time 🔗 Register here 📅 Date: Tuesday, November 22, 2022 ⏱ Time: 10:00 AM Greenwich Mean Time 🔗 Register here Don't forget to register via the links above and see you there!
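If you want to get a feel for the columnar storage beta mentioned above before the webinar, the table-level SQL looks roughly like the following; a minimal sketch with a hypothetical table, using the WITH STORAGETYPE clause described in the 2022.2 documentation:

-- hypothetical table stored column-oriented for analytical queries
CREATE TABLE Demo.Trades (
    Ticker    VARCHAR(10),
    TradeDate DATE,
    Price     NUMERIC(10,2),
    Volume    INTEGER
) WITH STORAGETYPE = COLUMNAR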
Article
Evgeniy Potapov · Nov 2, 2022

How to develop an InterSystems Adaptive Analytics (AtScale) cube

Today we will talk about Adaptive Analytics. This is a system that allows you to receive data from various sources with a relational data structure and create OLAP cubes based on this data. This system also provides the ability to filter and aggregate data and has mechanisms to speed up analytical queries. Let's take a look at the path that data takes from input to output in Adaptive Analytics. We will start by connecting to a data source - our instance of IRIS. In order to create a connection to the source, you need to go to the Settings tab of the top menu and select the Data Warehouses section. Here we click the “Create Data Warehouse” button and pick “InterSystems IRIS” as the source. Next, we will need to fill in the Name and External Connection ID fields (use the name of our connection to do that), and the Namespace (which corresponds to the desired namespace in IRIS). Since we will talk about the Aggregate Schema and Custom Function Installation Mode fields later, we will leave them at their defaults for now. When Adaptive Analytics creates our Data Warehouse, we need to establish a connection with IRIS for it. To do this, open the Data Warehouse with the white arrow and click the “Create Connection” button. Here we should fill in the data of our IRIS server (host, port, username, and password) as well as the name of the connection. Please note that the Namespace is filled in automatically from the Data Warehouse and cannot be changed in the connection settings. After the data has entered our system, it must be processed somewhere. To make that happen, we will create a project. A project processes data from only one connection. However, one connection can be involved in several projects. If you have multiple data sources for a report, you will need to create a project for each of them. All entity names in a project must be unique. The cubes in the project (more on them later) are interconnected not only by links explicitly configured by the user, but also if they use the same table from the data source. To create a project, go to the Projects tab and click the “New Project” button. Now you can create OLAP cubes in the project. To do that, we will need to use the “New Cube” button, fill in the name of the cube, and proceed to its development. Let's dwell on the rest of the project's functionality. Under the name of the project, we can see a menu of tabs, of which it is worth elaborating on the Update, Export, and Snapshots tabs. On the Export tab, we can save the project structure as an XML file. In this way, you can migrate projects from one Adaptive Analytics server to another or clone projects to connect to multiple data sources with the same structure. On the Update tab, we can insert text from the XML document and bring the cube to the structure that is described in this document. On the Snapshots tab, we can do version control of the project, switching between different versions if desired. Now let's talk about what the Adaptive Analytics cube contains. Upon entering the cube, we are greeted by a description of its contents, which shows us the type and number of entities that are present in it. To view its structure, press the “Enter model” button. It will bring you to the Cube Canvas tab, which contains all the data tables added to the cube, dimensions, and relationships between them. In order to get data into the cube, we need to go to the Data Sources tab on the right control panel. The icon of this tab looks like a tablet.
Here we should click on the “hamburger” icon and select Remap Data Source. We select the data source we need by name. Congratulations, the data has arrived in the project and is now available in all its cubes. You can see the namespace structure of IRIS on this tab and what the data looks like in the tables. Now it’s time to talk about each entity that makes up the structure of the cube. We will start with individual tables with data from the IRIS namespace, which we can add to our cube using the same Data Sources tab. Drag a table from this tab to the project workspace. Now we can see a table with all the fields that are in the data source. We can enter the query editing window by clicking on the “hamburger” icon in the upper right corner of the table and then going to the “Edit dataset” item. In this window, you can see that the default option is loading the entire table. In this mode, we can add calculated columns to the table. Adaptive Analytics has its own syntax for creating them. Another way to get data into a table is to write an SQL query against the database in Query mode. In this query, we must write a single SELECT statement, in which we can use almost any language construct. Query mode gives us a more flexible way to get data from a source into a cube. Based on columns from data tables, we can create measures. Measures are an aggregation of data in a column, which includes calculating the number of records, the sum of the numbers in a column, the maximum, minimum, and average values, etc. Measures are created with the help of the Measures tab on the right menu. We should select from which table and which of its columns we will use the data to create the measure, as well as the aggregation function applied to those columns. Each measure has 2 names. The first one is displayed in the Adaptive Analytics interface. The second name is generated automatically from the column name and aggregation type and is what appears in the BI systems. We can change the second name of a measure to our own choice, and it is a good idea to take this opportunity. Using the same principle, we can also build dimensions with non-aggregated data from one column. Adaptive Analytics has two types of dimensions - normal and degenerate ones. Degenerate dimensions include all records from the columns bound to them, while not linking the tables to each other. Normal dimensions are based on one column of one table, which is why they allow us to select only unique values from the column. However, other tables can be linked to this dimension too. When the data for a record has no key in the dimension, it is simply ignored. For example, if the main table does not have a specific date, then data from related tables for this date will be skipped in calculations since there is no such member in the dimension. From a usability point of view, degenerate dimensions are more convenient compared to normal ones. This is because they make it impossible to lose data or establish unintended relationships between cubes in a project. However, from a performance point of view, the use of normal dimensions is preferable. Dimensions are created in the corresponding tab on the right panel. We should specify the table and its column, from where we will get all the unique values to fill the dimension. At the same time, we can use one column as a source of keys for the dimension, whereas the data from another one becomes the values that actually appear in the dimension. For example, we can use the user's ID as the key while displaying the user's name.
Therefore, users with the same name will still be distinct members for the measure. Degenerate dimensions are created by dragging a column from a table in the workspace to the Dimensions tab. After that, the corresponding dimension is automatically assembled in the workspace. All dimensions are organized in a hierarchical structure, even if there is only one of them. The structure has three levels. The first one is the name of the structure itself. The second one is the name of the hierarchy. The third level is the actual dimension in the hierarchy. A structure can have multiple hierarchies. Using the created measures and dimensions, we can develop calculated measures. These are measures made with the help of a limited subset of the MDX language. They can do simple transformations with data in an OLAP structure, which is sometimes a practical feature. Once you have assembled the data structure, you can test it using a simple built-in previewer. To do this, go to the Cube Data Preview tab on the top menu of the workspace. Enter measures in Rows and dimensions in Columns or vice versa. This viewer is similar to the Analyzer in IRIS but with less functionality. Knowing that our data structure works, we can set up our project to return data. To do this, click the “Publish” button on the main screen of the project. After that, the project immediately becomes available via the generated link. To get this link, we need to go to the published version of any of the cubes. To do that, open the cube in the Published section of the left menu. Go to the Connect tab and copy the link for the JDBC connection from the cube. It will be different for each project but the same for all the cubes in a project. When you finish editing cubes and want to save changes, go to the Export tab of the project and download the XML representation of your cube. Then put this file in the “/atscale-server/src/cubes/” folder of the repository (the file name doesn't matter) and delete the existing XML file of the project. If you don't delete the original file, Adaptive Analytics will not publish the updated project with the same name and ID. At the next build, a new version of the project will automatically be passed to Adaptive Analytics and will be ready for use as the default project. We have figured out the basic functionality of Adaptive Analytics for now, so let's talk about optimizing the execution time of analytical queries using UDAFs. I will explain what benefits they give us and what problems might arise in this case. UDAF stands for User-Defined Aggregate Functions. UDAFs give AtScale 2 main advantages. The first one is the ability to store a query cache (they call it Aggregate Tables). It allows the next query to take already pre-calculated results from the database, using aggregation of data. The second one is the ability to use additional functions (the actual User-Defined Aggregate Functions) and data processing algorithms that Adaptive Analytics has to store in the data source. They are kept in the database in a separate table, and Adaptive Analytics can call them by name in auto-generated queries. When Adaptive Analytics can use these functions, the performance of analytical queries increases dramatically. The UDAF component must be installed in IRIS. It can be done manually (check the documentation about UDAF at https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=AADAN#AADAN_config) or by installing a UDAF package from IPM (InterSystems Package Manager).
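If you go the IPM route, the installation is done from the IRIS terminal; a minimal sketch - the exact package name is a placeholder here, so search the registry first and install whatever UDAF package you find there into the namespace used by Adaptive Analytics:

USER> zpm "search udaf"
USER> zpm "install <udaf-package-name>"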
In the Adaptive Analytics Data Warehouse settings for UDAF, change Custom Function Installation Mode to the Custom Managed value. The problem that appears when using aggregates is that such tables store information that is outdated at the time of the request. After an aggregate table is built, new values that arrive in the data source are not added to the aggregation tables. In order for aggregate tables to contain the freshest data possible, the queries for them must be rerun and the new results written to the table. Adaptive Analytics has internal logic for updating aggregate tables, but it is much more convenient to control this process yourself. You can configure updates on a per-cube basis in the web interface of Adaptive Analytics and then use scripts from the DC-analytics repository (https://github.com/teccod/Public-InterSystems-Developer-Community-analytics/tree/main/iris/src/aggregate_tables_update_shedule_scripts) to export schedules and import them into another instance, or use the exported schedule file as a backup. You will also find a script to set all cubes to the same update schedule if you do not want to configure each one individually. To set the schedule for updating aggregates in the Adaptive Analytics interface, we need to get into the published cube of the project (the procedure was described earlier). In the cube, go to the Build tab and find the window for managing the aggregation update schedule for this cube using the “Edit schedules” link. An easy-to-use editor will open up. Use it to set up a schedule for periodically updating the data in the aggregate tables. Thus, we have covered all the main aspects of working with Adaptive Analytics. Of course, there are quite a lot of features and settings that we have not reviewed in this article. However, I am sure that if you need to use some of the options we haven't examined, it will not be difficult for you to figure things out on your own.
Article
Anastasia Dyubaylo · Mar 16, 2023

How to embed video into your post on InterSystems Developer Community

Hey Community, Here is a short article about how to embed a video into your post. It's actually pretty simple. You just need to follow the following steps. 1. Open a video you wish to embed in YouTube: 2. Click Share and choose Embed: 3. Copy the contents of the upper right textbox or just click on the Copy button in the bottom right corner: 4. In your post on Community switch to Source view: 5. Insert the copied content from step 3 exactly where you want it to be: 6. Click on the Source button again to return to the WYSIWYG view and continue writing your post. This is it - this is how you embed a YouTube video into your community post. Hope it answers one of your questions on how to write on Community ;) Leave your thoughts on the subject in the comments section or propose another topic for an article on how to write posts on the Developer Community. Great instruction, Anastasia! Maybe we could add a new(sic!) button into editor that will expect Youtube URL and will transform it into the embedded form and insert the fragment into the post? Sounds easier than several operations? yeah, it's already in our development plan ;) stay tuned for the updates! great to know - thank you :)
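For reference, the snippet you copy in step 3 and paste into the Source view is a standard YouTube iframe; it typically looks like the sketch below, where VIDEO_ID is a placeholder for the actual video:

<!-- standard YouTube embed code; VIDEO_ID is a placeholder -->
<iframe width="560" height="315" src="https://www.youtube.com/embed/VIDEO_ID"
        title="YouTube video player" frameborder="0"
        allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
        allowfullscreen></iframe>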
Question
Ben Spead · Dec 9, 2022

Can ChatGPT be helpful to a developer using InterSystems technology?

After seeing several articles raving about how ground-breaking the recent release of ChatGPT is, I thought I would try asking it to help with a Caché newbie question: How do you find the version of InterSystems Caché? To be honest, I was quite surprised at what the chat bot told me: Not going to lie - I was impressed! I tried several other searches with regard to the InterSystems IRIS version number and was told to use $zv. I did a google search for part of the answer it gave me and I came up with zero results - this is synthesized information and not just copied and pasted from InterSystems docs (I couldn't find the string in our docs either). What do you want to try asking ChatGPT about technological know-how? 12/11/22 UPDATE: As is clear from the screenshot, I was playing with this on my phone and didn't properly vet the answer on an actual Caché instance. I fell prey to the observation made in the article linked in the comments w.r.t. ChatGPT answers being banned from StackOverflow: "But Open AI also notes that ChatGPT sometimes writes "plausible-sounding but incorrect or nonsensical answers." " So my respect for its technical answers has diminished a bit, but overall I am still rather impressed with the system as a whole and think it is a pretty big leap forward technologically. Thanks for those who chimed in!! To be honest, I was also impressed, but not by ChatGPT - rather by the suggested solution! I have no idea who at ISC wrote this recommendation, but the five clicks on my IRIS 2021.1 end up in NOTHING. On the right-side panel I see 'System Information', but this can't be clicked; below is 'View System Dashboard', where you see everything except version info. So dear ChatGPT, if someone asks you again, the (better) answer is: - log in to the Management Portal - click on 'About' (here you see the version and much, much more). This works for all IRIS versions (until now) and for all Caché versions which have a Management Portal. For the older (over 20 years) Caché versions, right-click on the cube and select 'Config Manager' (on Windows, of course; in those times I didn't work on other platforms, hence no advice). For business scenarios it will be very useful; for developers it is better to use docs, learning, Discord and community tools. Minor convincing example ????? You can't access the class name alone:

write $system.Version          // is just a class name, something like
write ##class(SYSTEM.Version)  // (hence both lines give you a syntax error)

You have to specify a property or a method. Because $System.Version is an abstract class, you can specify a method only:

write $system.Version.GetVersion()

Hope this clarifies things
As the zdnet article says, Stack Overflow removes them **temporarily**, so it may be a matter of time until we get a hand from AI in our development tasks, with services like GitHub Copilot. So thank you for bringing this topic to discussion! Oh, this is a misunderstanding. I thought the screenshot was coming from a (not shown) link, and anticipated the link pointed to some ISC site. Anyway, ChatGPT and other chatbots (nowadays, they often pop up on various sites, mainly those of big companies) try to mimic a human and often end up only with a reference to FAQs or with a wrong (inappropriate) answer. They are all based upon AI (some say AI = artificial intelligence, others say AI = absent intelligence). My bottom line is: AI is already "usable" for some areas, and for others it still "will require some more time".
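To sum up the correct approaches mentioned in this thread, either of the following works in a Caché or IRIS terminal session (in addition to the About page of the Management Portal):

USER> write $zversion
USER> write $system.Version.GetVersion()

Both print the version string of the running instance; $zv is simply the short form of $zversion.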
Announcement
Elena E · Mar 6, 2023

InterSystems Open Exchange Survey 2022 - 10 questions that matter!

Greetings, Community! We are grateful for your participation in InterSystems Open Exchange. Your feedback is essential to us. Please take a moment to complete this brief survey and let us know what you like about Open Exchange and how we can make it better in 2023. ➡️ Open Exchange Survey 2022 (3 minutes, 10 questions)
Question
Tom Philippi · Feb 25, 2017

How do I change the language of my InterSystems Studio

I did a clean install of InterSystems Ensemble on a new computer. However, even though my OS and my browser are set to English, the Ensemble installation is in Dutch. Does anyone know how I can change the language of my InterSystems Studio so that it is in English? In the old way you could choose any language to work in. Studio uses the language from the Regional Settings. Even when the system's language is English, you may still be using your own regional settings (clock, number formats, etc.). And you can change Studio to use the default language from inside Studio. Nice, I didn't know that option. In the "old days" I simply renamed or deleted the CStudioXXX.dll (where XXX denotes the language) in the bin directory. Then the default English configuration was used. Thanks
Article
Mark Bolinsky · Mar 21, 2017

InterSystems IRIS and Caché Application Consistent Backups with Azure Backup

Database systems have very specific backup requirements that in enterprise deployments require forethought and planning. For database systems, the operational goal of a backup solution is to create a copy of the data in a state that is equivalent to when the application is shut down gracefully. Application-consistent backups meet these requirements, and Caché provides a set of APIs that facilitate integration with external solutions to achieve this level of backup consistency. These APIs are ExternalFreeze and ExternalThaw. ExternalFreeze temporarily pauses writes to disk, and during this period Caché commits the changes in memory. During this period the backup operation must complete and be followed by a call to ExternalThaw. This call engages the write daemons to write the cached updates in the global buffer pool (database cache) to disk and resumes normal Caché database write daemon operations. This process is transparent to user processes in Caché. The specific API class methods are: ##Class(Backup.General).ExternalFreeze() ##Class(Backup.General).ExternalThaw() These APIs, in conjunction with the new capability of Azure Backup to execute a script before and after the execution of a snapshot operation, provide a comprehensive backup solution for deployments of Caché on Azure. The pre/post scripting capability of Azure Backup is currently available only on Linux VMs. Prerequisites At a high level, there are three steps that you need to perform before you can back up a VM using Azure Backup: Create a Recovery Services vault. Install the latest version of the VM Agent. Check network access to the Azure services from your VM. The Recovery Services vault manages the backup goals, policies and the items to protect. You can create a Recovery Services vault via the Azure Portal or via scripting using PowerShell. Azure Backup requires an extension that runs in your VM and is controlled by the Linux VM agent, and the latest version of the agent is also required. The extension interacts with the external-facing HTTPS endpoints of Azure Storage and the Recovery Services vault. Secure access to those services from the VM can be configured using a proxy and network rules in an Azure Network Security Group. For more information about these steps visit Prepare your environment to back up Resource Manager-deployed virtual machines. Pre and Post Scripting Configuration The ability to call a script prior to the backup operation and after it is included in the latest version of the Azure Backup Extension (Microsoft.Azure.RecoveryServices.VMSnapshotLinux). For information about how to install the extension please check the detailed feature documentation. By default, the extension includes sample pre and post scripts located in your Linux VM at: /var/lib/waagent/Microsoft.Azure.RecoveryServices.VMSnapshotLinux-1.0.9110.0/main/tempPlugin They need to be copied to the following locations respectively: /etc/azure/prescript.sh /etc/azure/postScript.sh You can also download the script template from GitHub. For Caché, the prescript.sh script is where the call to the ExternalFreeze API can be implemented, and postScript.sh should contain the implementation that executes ExternalThaw. The following is a sample prescript.sh implementation for Caché.
#!/bin/bash
# variables used for returning the status of the script
success=0
error=1
warning=2
status=$success
log_path="/etc/preScript.log"   # path of log file
printf "Logs:\n" > $log_path
# TODO: Replace <CACHE INSTANCE> with the name of the running instance
csession <CACHE INSTANCE> -U%SYS "##Class(Backup.General).ExternalFreeze()" >> $log_path
status=$?
if [ $status -eq 5 ]; then
    echo "SYSTEM IS FROZEN"
    printf "SYSTEM IS FROZEN\n" >> $log_path
elif [ $status -eq 3 ]; then
    echo "SYSTEM FREEZE FAILED"
    printf "SYSTEM FREEZE FAILED\n" >> $log_path
    status=$error
    csession <CACHE INSTANCE> -U%SYS "##Class(Backup.General).ExternalThaw()"
fi
exit $status

The following is a sample postScript.sh implementation for Caché.

#!/bin/bash
# variables used for returning the status of the script
success=0
error=1
warning=2
status=$success
log_path="/etc/postScript.log"   # path of log file
printf "Logs:\n" > $log_path
# TODO: Replace <CACHE INSTANCE> with the name of the running instance
csession <CACHE INSTANCE> -U%SYS "##class(Backup.General).ExternalThaw()"
status=$?
if [ $status -eq 5 ]; then
    echo "SYSTEM IS UNFROZEN"
    printf "SYSTEM IS UNFROZEN\n" >> $log_path
elif [ $status -eq 3 ]; then
    echo "SYSTEM UNFREEZE FAILED"
    printf "SYSTEM UNFREEZE FAILED\n" >> $log_path
    status=$error
fi
exit $status

Executing a Backup In the Azure Portal, you can trigger the first backup by navigating to the Recovery Services vault. Please consider that the VM snapshot time should be a few seconds irrespective of whether it is the first backup or a subsequent one. Data transfer of the first backup will take longer, but the data transfer starts after executing the post-script to thaw the database and should not have any impact on the time between the pre and post scripts. It is highly recommended to regularly restore your backup in a non-production setting and perform database integrity checks to ensure your data protection operations are effective. For more information about how to trigger the backup and other topics such as backup scheduling, please check Back up Azure virtual machines to a Recovery Services vault. I see this was written in March 2017. By chance has this ability to Freeze / Thaw Caché on Windows VMs in Azure been implemented yet? Can a brief description of why this cannot be performed on Windows VMs in Azure be given? Thanks for the excellent research and information, always appreciated. Hi Dean - thanks for the comment. There are no changes required from a Caché standpoint, however Microsoft would need to add similar functionality to Windows to allow Azure Backup to call a script within the target Windows VM, similar to how it is done with Linux. The scripting from Caché would be exactly the same on Windows except for using .BAT syntax rather than Linux shell scripting once Microsoft provides that capability. Microsoft may already have this capability?
I'll have to look to see if they have extended it to Windows as well. Regards, Mark B- Microsoft only added this functionality to Linux VMs to get around the lack of a VSS-equivalent technology in Linux. They expect Windows applications to be compatible with VSS. We have previously opened a request for InterSystems to add VSS support to Caché but I don't believe progress has been made on it. Am I right in understanding that IF we are happy with crash-consistent backups, as long as a backup solution is a point-in-time snapshot of the whole disk system (including journals and database files), then said backup solution should be safe to use with Caché? Obviously application-consistent is better than crash-consistent, but with the WIJ in there we should be safe. We are receiving more and more requests for VSS integration, so there may be some movement on it, however no guarantees or commitments at this time. In regards to the alternative of a crash-consistent backup, yes it would be safe as long as the databases, WIJ, and journals are all included and have a consistent point-in-time snapshot. The databases in the backup archive may be "corrupt", and not until after starting Caché, when the WIJ and journals are applied, will they be physically accurate. Just like you said - a crash-consistent backup and the WIJ recovery are key to a successful recovery. I will post back if I hear of changes coming with VSS integration. Thanks for the reply Mark, that confirms our understanding. Glad we're not the only people asking for VSS support! For those watching this thread: we have introduced VSS integration starting with version 2018.1. Here is a link to our VSS support announcement. Hi all, Please note that these scripts are also usable with IRIS. In each of the 'pre' and 'post' scripts you only need to change each of the "csession <CACHE INSTANCE> ..." references to "iris <IRIS INSTANCE> ..." Regards, Mark B-
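For example, the freeze call in prescript.sh would become something like the line below; a minimal sketch, assuming the iris session form of the command and keeping the instance name as a placeholder:

# IRIS equivalent of the Caché freeze call (instance name is a placeholder)
iris session <IRIS INSTANCE> -U%SYS "##Class(Backup.General).ExternalFreeze()" >> $log_path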
Article
Luca Ravazzolo · Sep 21, 2017

InterSystems Cloud Manager and Containers at GS2017 XP-Lab

Last week saw the launch of the InterSystems IRIS Data Platform in sunny California. For the engaging eXPerience Labs (XP-Labs) training sessions, my first customer and favourite department (Learning Services) was working hard assisting and supporting us all behind the scenes. Before the event, Learning Services set up the most complicated part of public cloud :) "credentials-for-free" for a smooth and fast experience for all our customers at the summit. They did extensive testing before the event so that we could all spin up cloud infrastructures to test the various new features of the new InterSystems IRIS data platform without glitches. The reason why they were so agile, nimble & fast in setting up all those complex environments is that they used technologies we provided straight out of our development furnace. OK, I'll be honest, our Online Education Manager, Douglas Foster, and his team worked hard too and deserve a special mention. :-) Last week, at our Global Summit 2017, we had nine XP-Labs over three days. More than 180 people had the opportunity to test-drive new products & features. The labs were repeated each day of the summit and customers had the chance to follow the training courses with a BYOD approach, as everything worked (and works in the online training courses that will be provided at https://learning.intersystems.com/) inside a browser. Here is the list of the XP-Labs given and some facts: 1) Build your own cloud Cloud is about taking advantage of the on-demand resources available and the scalability, flexibility, and agility that they offer. The XP-Lab focused on the process of quickly defining and creating a multi-node infrastructure on GCP. Using InterSystems Cloud Manager, students provisioned a multi-node infrastructure which had a dynamically configured InterSystems IRIS data platform cluster that they could test by running a few commands. They also had the opportunity to unprovision it all with one single command without having to click all over a time-consuming web portal. I think it is important to understand that each student was actually creating her/his own virtual private cloud (VPC) with her or his dedicated resources and her/his dedicated InterSystems IRIS instances. Everybody was independent of each other. Every student had her or his own cloud solution. There was no sharing of resources. Numbers: we had more than a dozen students per session. Each student had his own VPC with 3 compute-nodes. With the largest class of 15 people we ended up with 15 individual clusters. There was then a total of 45 compute-nodes provisioned during the class, with 45 InterSystems IRIS instances running & configured in small shard clusters. There were a total of 225 storage volumes. Respecting our best practices, we provide default volumes for a sharded DB, the JRN & the WIJ files and the Durable %SYS feature (more on this in another post later) + the default boot OS volume. 2) Hands-On with Spark Apache Spark is an open-source cluster-computing framework that is gaining popularity for analytics, particularly predictive analytics and machine learning. In this XP-Lab students used InterSystems' connector for Apache Spark to analyze data that was spread over a multi-node sharded architecture of the new InterSystems IRIS data platform. Numbers: 42 Spark clusters were pre-provisioned by 1 person (thank you, Douglas, again). Each cluster consisted of 3 compute-nodes for a total of 126 node instances.
There were 630 storage volumes for a total of 6.3TB of storage used. The InterSystems person that pre-provisioned the clusters ran multiple InterSystems Cloud Manager instances in parallel to pre-provision all 42 clusters. The same Cloud Manager tool was also used to reset the InterSystems IRIS containers (stop/start/drop_table) and, at the end of the summit, to unprovision/destroy all clusters so as to avoid unnecessary charges. 3) RESTful FHIR & Messaging in Health Connect. Students used Health Connect messaging and FHIR data models to transform and search for clinical data. Various transformations were applied to various messages. Numbers: two paired containers per student were used for this class. On one container we provided the web-based Eclipse Orion editor and on the other the actual Health Connect instance. Containers were running over 6 different nodes managed by the orchestrator Docker Swarm. Q&A So how did our team achieve all the above? How were they able to run all those training labs on the Google Cloud Platform? Did you know there was a backup plan (you never know in the cloud) to run on AWS? And did you know we could just as easily run on Microsoft Azure? How could all those infrastructures & instances run and be configured so quickly over a practical lab session of no more than 20 minutes? Furthermore, how can we quickly and efficiently remove hundreds or thousands of resources without wasting hours clicking on web portals? As you must have gathered by now, our Online Education team used the new InterSystems Cloud Manager to define, create, provision, deploy and unprovision the cloud infrastructures and the services running on top of them. Secondly, everything customers saw, touched & experienced ran in containers. What else these days? :-) Summary InterSystems Cloud Manager is a public, private and on-premises cloud tool that allows you to provision the infrastructure + configure + run InterSystems IRIS data platform instances. Out of the box, Cloud Manager supports the top three public IaaS providers - AWS, GCP and Azure - but it can also assist you with a private and/or on-premises solution, as it supports the VMware vSphere API and pre-existing server nodes (either virtual or physical). When I said "out of the box" above, I did not lie :) InterSystems Cloud Manager comes packaged in a container so that you do not have to install anything and don't have to configure any software or set any variable in your environment. You just run the container, and you're ready to provision your cloud. Don't forget your credentials, though ;-) The InterSystems Cloud Manager, although in the infancy of its MVP (minimum viable product) version, has already proven itself. It allows us to run on and test various IaaS providers quickly, provision a solution on-premises or just carve out a cloud infrastructure according to our definition. I like to define it as a "batteries included but swappable" solution. If you already have your installation and configuration solution developed with configuration management (CM) tools (Ansible, Puppet, Chef, Salt or others) and perhaps you want to test an alternative cloud provider, Cloud Manager allows you to create just the cloud infrastructure, while you can still build your systems with your CM tool.
Just be careful of the unavoidable system drift over time. On the other hand, if you want to start embracing a more DevOps-type approach, appreciate the difference between the build phase and the run phase of your artefact, become more agile, and support multiple deliveries and possibly deployments per day, you can use InterSystems' containers together with the Cloud Manager. The tool can provision and configure both the new InterSystems IRIS data platform sharded cluster and traditional architectures (ECP client-servers + data server, with or without InterSystems Mirroring). At the summit, we also had several technical sessions on Docker containers and two on the Cloud Manager tool itself. All sessions registered a full house. I also heard that many other sessions were packed. I was particularly impressed with the Docker container introductory session on Sunday afternoon, where I counted 75 people. I don't think we could have fitted anybody else in the room. I thought people would have gone to the swimming pool :) instead, I think we had a clear sign telling us that our customers like innovation and are keen to learn. Below is a picture depicting how our Learning Services department allowed us to test-drive the Cloud Manager at the XP-Lab. They ran a container based on the InterSystems Cloud Manager; they added an nginx web server so that we can http-connect to it. The web server delivers a simple single page where they load a browser-based editor (Eclipse Orion) and, at the bottom of the screen, the student is connected directly to the shell (GoTTY via websocket) of the same container so that she or he can run the provisioning & deployment commands. This training container, with all these goodies :), runs on a cloud - of course - and thanks to the pre-installed InterSystems Cloud Manager, students can provision and deploy a cluster solution on any cloud (just provide credentials). To learn more about InterSystems Cloud Manager, here is an introductory video https://learning.intersystems.com/course/view.php?id=756 and the global summit session https://learning.intersystems.com/mod/page/view.php?id=2864 and for InterSystems & Containers here are some of the sessions from GS2017: https://learning.intersystems.com/course/view.php?id=685 https://learning.intersystems.com/course/view.php?id=696 https://learning.intersystems.com/course/view.php?id=737 -- When would Experience Labs be available on learning.intersystems.com? We are currently working to convert the experience labs into online experiences. The FHIR Experience will be ready in the next two weeks, closely followed by the Spark Experience. We have additional details to work out for the Build Your Own Cloud experience as it runs by building in our InterSystems cloud and can consume a lot of resources, but we expect to get that worked out in the next 4 - 6 weeks. Thanks Luca for the mention above, but it was a large team effort with several people from the Online Learning team as well as product managers, sales engineers, etc. @Eduard Lebedyuk & all: You can find a "getting started" course at https://learning.intersystems.com/ HTH
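For reference, the typical Cloud Manager workflow described above boils down to a handful of commands run from the ICM container; a minimal sketch, assuming your infrastructure is already described in the defaults.json and definitions.json configuration files:

# provision the cloud infrastructure described in the configuration files
icm provision
# deploy and start the InterSystems containers on the provisioned nodes
icm run
# check the status of the deployed containers across the cluster
icm ps
# tear everything down when the lab is over to avoid unnecessary charges
icm unprovision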