Article
Yuri Marx · Jan 4, 2021

Big Data 5V with InterSystems IRIS

Big Data 5V with InterSystems IRIS. See the list below:

Velocity: elastic velocity delivered with horizontal and vertical node scaling. Enablers: distributed memory cache, distributed processing, sharding, and multimodel architecture. https://www.intersystems.com/isc-resources/wp-content/uploads/sites/24/ESG_Technical_Review-InterSystems-IRIS.pdf and https://learning.intersystems.com/course/view.php?id=1254&ssoPass=1

Value: exponential data value produced by analytics and AI. Enablers: BI, NLP, ML, AutoML, and multimodel architecture. https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=SETAnalytics and https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GIML_AUTOML

Veracity: a single source of truth for data, unified at the corporate level. Enablers: connectors, data bus, BPL for data integration, and API management. https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=PAGE_interoperability and https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=AFL_iam

Volume: many-terabyte/petabyte data repositories with great performance. Enablers: distributed memory cache, distributed processing, sharding, and multimodel architecture. https://www.intersystems.com/isc-resources/wp-content/uploads/sites/24/ESG_Technical_Review-InterSystems-IRIS.pdf and https://learning.intersystems.com/course/view.php?id=1254&ssoPass=1

Variety: many data formats in the same place (XML, JSON, SQL, Object). Enablers: multimodel repository and architecture. https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=PAGE_multimodel
Announcement
Anastasia Dyubaylo · Dec 29, 2020

New Video: Validating Your InterSystems Productions

Hi Community, Learn how to use validation tools in InterSystems IRIS to assess and validate the behavior of a production: ⏯ Validating Your InterSystems Productions Subscribe to InterSystems Developers YouTube and stay tuned!
Article
Yuri Marx · Feb 23, 2021

Day 1: Developing with InterSystems Objects and SQL

I'm participating in the Developing with InterSystems Objects and SQL course with Joel Solon. The course is very nice, and I will share with you some tips I got during the training. Tips presented on day 1:

InterSystems IRIS unifies: InterSystems IRIS Database (Caché), IRIS Interoperability (Ensemble), IRIS Business Intelligence (DeepSee), and IRIS Text Analytics (iKnow).
IRIS is multimodel: object, relational, document, and multidimensional.
Interoperable: native access from Java, .NET, and other languages, like ObjectScript; ODBC and JDBC data access; SOAP/REST service access; message-driven with data routing, transformations, and workflows; SOA architecture with ESB.
IRIS handles transactions and analytics together.
IRIS scales horizontally with ECP (distributed cache for user volume) and sharding (for data volume).
Deploy containers in public or private clouds with Cloud Manager.
3 IDE options to develop: VSCode (most popular), Studio (Windows only), Atelier (deprecated). The Terminal tool is for CLI commands; the Management Portal is for browser-based administration.
IRIS is multiplatform (UNIX, Linux, Windows), with a Docker option for Linux.
It has yearly releases 20##.1 (EM, Extended Maintenance) and quarterly releases (CD, Continuous Delivery).
IRIS is case sensitive, and camel case notation is a good practice.
Classes are containers for methods and properties. Methods perform specific tasks, and method overloading (two methods with the same name in a class) is not allowed.
There are 2 types of methods: ClassMethod (action not associated with an object instance) and Method (action associated with an object instance). Use ##class() to run class methods, and create an instance (with %New or %OpenId) to execute Methods.
The default type for method arguments is %String. The notation ... indicates variable arguments. Example: Method Sample(a As %String, b... As %String) As %Status.
When you pass arguments as the method caller: if you use a period (.) you pass by reference; arguments are optional, and you can use $data() to test whether the caller passed the argument; string is the default type for variables.
ObjectScript supports dynamic types. In ObjectScript, 0 is false and other values are true.
Packages allow you to organize classes into folders. If you use import in a class or method, you don't need to reference the fully qualified name of a class.
Persistent classes (stored on disk) extend %Persistent. Persistent classes have properties to persist class attributes/values. Each persistent class has a unique, immutable ID number.

PS 1: I redeemed this 5-day course (normally $2,800) with 40,000 points (https://globalmasters.intersystems.com/rewards/34/reward_redemptions/new)
PS 2: Joel Solon is an excellent instructor (great tips for IRIS certification)
PS 3: the course material is excellent, and the course resources, tools, and support are fantastic.

Tomorrow I will post the day 2 summary.

Good to hear that! I was thinking about this reward on Global Masters. Maybe I'll redeem my 40k on this. 😃 Good to know. I'll be taking the course before I try for my certification; I'll just need to save up the $2,800 since my points game isn't quite up to snuff! Thanks Yuri, really great information! Yuri, this is great information; however, on your point "PS 2:" I'd be surprised to hear if your instructor's name wasn't actually @Joel.Solon -- but you're right, he's a great instructor, and one heckuva nice guy to boot! That looks like a fun course! One correction on point 20: the default datatype for variables is string, not %String. And there's more to know about what that means, but come and take the course to find out... A great teaser for the course, thanks for sharing, @Yuri.Gomes! Thanks Joel! The course is great, the exercises are great for learning more, I loved it! I fixed Joel's name, thanks for the alert, and sorry, Joel. Really excellent course! And Joel is a great teacher!!!
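The tips on variable arguments and pass-by-reference can be sketched roughly like this (the class dc.Demo and its methods are my own illustration, not from the course):

```objectscript
Class dc.Demo Extends %RegisteredObject
{

/// Variable arguments: b holds the argument count, b(1)..b(n) hold the values
ClassMethod Sum(a As %Integer, b... As %Integer) As %Integer
{
    Set total = a
    For i=1:1:$get(b, 0) {
        // $data() tests whether the caller actually passed this argument
        If $data(b(i)) Set total = total + b(i)
    }
    Quit total
}

/// Pass-by-reference: the caller prefixes the argument with a period
ClassMethod Double(ByRef x As %Integer)
{
    Set x = x * 2
}

}
```

In the Terminal, write ##class(dc.Demo).Sum(1, 2, 3) prints 6, and after set x = 5 do ##class(dc.Demo).Double(.x) the variable x holds 10.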
Article
Yuri Marx · Feb 25, 2021

Day 3: Developing with InterSystems Objects and SQL

I'm participating in the Developing with InterSystems Objects and SQL course with Joel Solon. The course is very nice, and I will share with you some tips I got during the training. Tips presented on day 3:

You can see your class catalog using the %Dictionary classes, and see your SQL objects in the INFORMATION_SCHEMA tables.
It's possible to use SQL inside your ObjectScript methods using Dynamic SQL or Embedded SQL. You pass parameters into a Dynamic SQL string using ? (e.g.: where country = ?) and pass parameters to Embedded SQL using a colon (e.g.: where country = :variable).

Dynamic SQL sample (from the InterSystems documentation):

SET tStatement = ##class(%SQL.Statement).%New(,"Sample")
SET myquery = 3
SET myquery(1) = "SELECT TOP ? Name,DOB,Home_State"
SET myquery(2) = "FROM Person"
SET myquery(3) = "WHERE Age > 60 AND Age < 65"
SET qStatus = tStatement.%Prepare(.myquery)
IF qStatus'=1 {WRITE "%Prepare failed:" DO $System.Status.DisplayError(qStatus) QUIT}
DO tStatement.%Display()
WRITE !,"End of %Prepare display"

Embedded SQL sample (from the InterSystems documentation):

#SQLCompile Select=Display
&sql(SELECT DOB INTO :a FROM Sample.Person)
IF SQLCODE<0 {WRITE "SQLCODE error ",SQLCODE," ",%msg QUIT}
ELSEIF SQLCODE=100 {WRITE "Query returns no results" QUIT}
WRITE "1st date of birth is ",a,!
DO $SYSTEM.SQL.Util.SetOption("SelectMode",1)
WRITE "changed select mode to: ",$SYSTEM.SQL.Util.GetOption("SelectMode"),!
&sql(SELECT DOB INTO :b FROM Sample.Person)
WRITE "2nd date of birth is ",b

Embedded SQL sample - insert:

&sql(INSERT INTO Sample.Person (Name, Age, Phone) VALUES (:name, :age, :phone))

If you need to process data in batch, use SQL; if you process a single record, use the persistent object API.
You can create SQLQuery methods, and if you use [SqlProc] on the method, a SQL procedure will be created on the SQL side.
From the terminal it's possible to go to the SQL Shell, a terminal for SQL commands: execute do $system.SQL.Shell().
Persistent classes have a system-generated ID; if you need an ID controlled by you, use an IDKEY index with one or more properties. E.g.: Index Key On SocialNumber [IdKey, PrimaryKey, Unique].
There are two strategies to control concurrency when two or more processes try to process the same data at the same time: pessimistic and optimistic. To acquire pessimistic control, lock the object with %OpenId(ID, 4), where 4 locks the object for exclusive access. After processing, the lock can be released. To do optimistic control (indicated for web apps), create in your persistent class Parameter VERSIONPROPERTY = "Version"; Property Version As %Integer [ InitialExpression = 1 ]. IRIS will increment the Version property on each instance update, allowing the order of updates to be coordinated instead of locking the table.
When you have methods that update, insert, or delete data, use transactions to keep the data consistent. Example:

Transfer(from,to,amount) // Transfer funds from one account to another
{
    TSTART
    &SQL(UPDATE A.Account SET A.Account.Balance = A.Account.Balance - :amount
         WHERE A.Account.AccountNum = :from)
    If SQLCODE TROLLBACK Quit "Cannot withdraw, SQLCODE = "_SQLCODE
    &SQL(UPDATE A.Account SET A.Account.Balance = A.Account.Balance + :amount
         WHERE A.Account.AccountNum = :to)
    If SQLCODE TROLLBACK QUIT "Cannot deposit, SQLCODE = "_SQLCODE
    TCOMMIT
    QUIT "Transfer succeeded"
}

InterSystems IRIS has an architecture based on namespaces (logical groups of databases) and databases. Databases can hold data (globals), code (source code - routines and classes), or both.
You can do horizontal processing scaling for your databases using ECP (Enterprise Cache Protocol), allowing databases on several different servers to be seen in the same namespace. You can do horizontal data volume scaling (distributed database partitions) using sharding (IRIS only), partitioning data into distributed nodes (like MongoDB). The maximum size of a database is 32 TB.
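The pessimistic strategy above can be sketched like this (assuming a persistent class dc.Person; the property and variable names are my own illustration):

```objectscript
// Hypothetical sketch of pessimistic concurrency control with %OpenId
Set person = ##class(dc.Person).%OpenId(id, 4, .sc)   // concurrency 4 = exclusive lock on this object
If '$isobject(person) {
    Do $System.Status.DisplayError(sc)                // open failed or the lock could not be acquired
} Else {
    Set person.Name = "Updated Name"
    Set sc = person.%Save()
    If $$$ISERR(sc) Do $System.Status.DisplayError(sc)
    Set person = ""                                   // releasing the oref releases the lock
}
```

While the lock is held, another process calling %OpenId with a nonzero concurrency value on the same ID will wait or fail, which is exactly the behavior the pessimistic approach trades for throughput.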
To change from one namespace to another, do zn "Namespace" or set $namespace = "Namespace". PS 1: the course shows in detail how to do transaction control, which is very important. Tomorrow I will post the day 4 summary. Point #16: There is only one type of database. Databases can hold code only, data only, or code and data together. Fixed, thanks!
Article
Yuri Marx · Feb 24, 2021

Day 2: Developing with InterSystems Objects and SQL

I'm participating in the Developing with InterSystems Objects and SQL course with Joel Solon. The course is very nice, and I will share with you some tips I got during the training. Tips presented on day 2:

You can create persistent classes (classes with a corresponding table in the database to persist class properties). A persistent class example:

Class dc.Person Extends (%Persistent)
{
Property Name As %String;
Property BirthDate As %Date;
}

When you extend %Persistent, you get %New() to create a new instance in memory, %Save() to save it in the database, %Id() to get the unique ID of the instance in the database, and %OpenId() to load the instance with database values.
Persistent classes allow you to call %DeleteId() to delete an instance from the database, %DeleteExtent() to delete all saved objects (a delete without a where clause!), and %ValidateObject() to validate data before saving (required values, sizes, etc.).
Persistent classes have %IsModified() to check whether the data changed in memory (see Joel's tip in the comments) and %Reload() to get these changes.
To get possible errors when you try to %Save() or %Delete(): set status = person.%Save(), write status. If the save succeeds, it returns 1. We can use do $system.Status.DisplayError(status) to see error details.
To call persistent class methods, do: ##class(dc.Person).%Save(). To call a persistent instance method, do: ..Method(). The same goes for referring to properties: write ..Name.
To remove an object or variable from program or terminal memory, use kill person or set person = "". If you use kill alone, all references will be removed from memory (not from the database; for the database, use %KillExtent).
If you want a utility method to populate test data, extend your persistent class with %Populate and call Populate(number of rows).
You can create embedded classes for a persistent class by extending %SerialObject (a persistent class without an ID, because it is linked to a persistent class).
Example:

Class PackageName.ClassName Extends %SerialObject
{
Property Phone As %String;
Property Email As %String;
}

This serial class will be a property in your persistent class:

Class dc.Person Extends (%Persistent)
{
Property Name As %String;
Property BirthDate As %Date;
Property Contact As dc.Contact;
}

In the IRIS database, only one table will be created, with the Person and Contact properties.
You can create indexes for uniqueness or to tune queries. Example: Index NameIndex On Name [Unique]. If you create an index and the table is not empty, you need to build the index in the Management Portal.
To create a constructor method, override %OnNew(). This is a callback method invoked when you call %New(). There are other callback methods.
IRIS has great support for JSON. You can load JSON into an object by calling set name = {}.%FromJSON("{""Name"":""Yuri""}"). You can write JSON from the object by executing: name.%ToJSON(). JSON arrays exist in both IRIS and Caché (thanks @Robertcc.Cemper for the alert), but only in IRIS do we have a formatter and zwrite for JSON.

Tomorrow I will publish day 3. PS: this is a summary; there is more content in the course.

#20 is slightly exaggerated; the official docs present it differently. https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=ITECHREF_json https://docs.intersystems.com/latest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=%25Library.DynamicArray Fixed! Thanks. On #5: %IsModified() and propertynameIsModified() methods tell you whether the object in memory has been changed in memory (not on disk). Thanks for the revision, fixed!
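The serial-class tips can be exercised like this (a sketch assuming the dc.Person class embeds a dc.Contact serial class as shown above; the sample values are my own):

```objectscript
// Hypothetical sketch: saving a person with an embedded serial contact
Set person = ##class(dc.Person).%New()
Set person.Name = "Yuri"
Set person.BirthDate = $zdateh("1980-01-01", 3)   // %Date values use $HOROLOG format internally
Set person.Contact.Phone = "+55 11 99999-9999"    // the serial property is instantiated automatically
Set person.Contact.Email = "yuri@example.com"
Set status = person.%Save()
If status = 1 {
    Write "Saved with ID ", person.%Id()
} Else {
    Do $System.Status.DisplayError(status)
}
```

Because dc.Contact is serial, both objects land in the single Person table; there is no separate Contact table or ID.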
Article
Bob Binstock · Apr 26, 2021

Scaling Cloud Hosts and Reconfiguring InterSystems IRIS

Like hardware hosts, virtual hosts in public and private clouds can develop resource bottlenecks as workloads increase. If you are using and managing InterSystems IRIS instances deployed in public or private clouds, you may have encountered a situation in which addressing performance or other issues requires increasing the capacity of an instance's host (that is, vertically scaling). One common reason to scale is insufficient memory. As described in Memory Management and Scaling for InterSystems IRIS in the Scalability Guide, providing enough memory for all of the entities running on the host of an InterSystems IRIS instance under all normal operating circumstances is a critical factor in both performance and availability. In one common scenario, as the workload of an InterSystems IRIS instance increases, its working set becomes too large to be held by the instance's allocated database cache. This forces some queries to fall back to disk, greatly increasing the number of disk reads required and creating a major performance problem. Increasing the size of the cache solves that problem, but if doing so would leave insufficient memory remaining for other purposes, you also need to increase the total physical memory on the host to avoid pushing the bottleneck to another part of the system. Fortunately, scaling a virtual host is typically a lot easier than scaling hardware. This post discusses the two stages of the process:

1. Scaling the cloud host's resources. You can change the resource specification of a virtual host on AWS, GCP, and Azure using the platform's command line, API, or portal. VMware vSphere allows you to easily change a number of resource parameters for a VM through its vSphere Client interface.

2. Reconfiguring InterSystems IRIS to take advantage of the scaled resources. There are a number of ways to reconfigure InterSystems IRIS to take advantage of scaled host resources.
This document describes the use of the configuration merge feature, which merges new parameter values, specified in a merge file, into an instance's CPF. Configuration merge is an easy and effective method because it lets you address only the configuration settings you want to modify, make multiple changes to an instance's configuration in one operation, and easily make the same set of changes to multiple instances. The procedures described here are manual, but in production they would very likely be automated, for example using a script that would apply a specific merge file in an accessible location to a list of instances.

Scaling Cloud Host Resources

Public cloud platforms provide a range of resource templates to choose from that specify CPU, memory, network interfaces, and other resources for virtual hosts (storage is provisioned and sized separately). To resize a host, you change the template selected when the host was provisioned to one that specifies more of the resources you want to increase. On Amazon Web Services, the resource template is called an instance type; for example, the t3.large instance type specifies 2 CPUs and 8 GB of memory. On Google Cloud Platform it's a machine type, such as the e2-standard-2 (which also includes 2 CPUs and 8 GB), and on Microsoft Azure it's a size (the Standard_B2ms calls for the same 2 CPUs and 8 GB). By redefining the instance type, machine type, or size of an existing public cloud host, you can scale its resource specifications. In a VMware vSphere private cloud, you can use the vSphere Client interface to the vCenter Server management console to directly modify one or more individual resource settings of an existing virtual machine. (You can also simultaneously scale groups of hosts on each platform.) The following sections provide brief examples of resizing individual virtual hosts on the various platforms, with links to the documentation for all available methods.
Please note that these methods (APIs, command line interfaces, and portal interfaces) are offered and maintained by the cloud vendors; the examples included here are for informational purposes, to illustrate how easily you can adapt InterSystems IRIS to take advantage of increased resources.

AWS

To modify the instance type of an AWS host (called an instance, not to be confused with an InterSystems IRIS instance), you can use the modify-instance-attribute CLI command, as shown in the following example:

$ aws ec2 describe-instances --instance-ids i-1234567890abcdef0
{ "Instances": [ { "AmiLaunchIndex": 0, "ImageId": "ami-0abcdef1234567890", "InstanceId": "i-1234567890abcdef0", "InstanceType": "m5n.large", ...

$ aws ec2 stop-instances --instance-ids i-1234567890abcdef0
{ "StoppingInstances": [ { "InstanceId": "i-1234567890abcdef0", ...

$ aws ec2 describe-instances --instance-ids i-1234567890abcdef0
{ "Instances": [ { ... "State": { "Code": 80, "Name": "stopped" } ...

$ aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 \
  --instance-type "{\"Value\": \"m5n.xlarge\"}"

$ aws ec2 start-instances --instance-ids i-1234567890abcdef0
{ "StartingInstances": [ { "InstanceId": "i-1234567890abcdef0", "CurrentState": { "Code": 0, "Name": "pending" }, "PreviousState": { "Code": 80, "Name": "stopped" ...

$ aws ec2 describe-instances --instance-ids i-1234567890abcdef0
{ "Instances": [ { "AmiLaunchIndex": 0, "ImageId": "ami-0abcdef1234567890", "InstanceId": "i-1234567890abcdef0", "InstanceType": "m5n.xlarge", ...

You can also make this change using the ModifyInstanceAttribute AWS API call or the AWS EC2 console.

GCP

To modify the machine type of a GCP host (also known as an instance), you can use the gcloud CLI to stop, modify, and restart the instance.
For example, you would use the following commands to change the machine type of an instance named scalingTest to n1-highmem-32:

$ gcloud compute instances stop scalingTest
$ gcloud compute instances set-machine-type scalingTest --machine-type n1-highmem-32
$ gcloud compute instances start scalingTest

You can also make this change using the Google Cloud Console or the GCP API.

Azure

When you use the Azure CLI to modify the size of a Linux VM, you can view a list of the sizes available on the hardware cluster where the VM is hosted using the list-vm-resize-options command, for example:

az vm list-vm-resize-options --resource-group testingGroup --name scalingTest --output table

You can then use the resize command to change the VM's size to one of the listed options, as shown. This command restarts the VM automatically.

az vm resize --resource-group testingGroup --name scalingTest --size Standard_E32d_v4

If the size you want to change the VM to is not available, you can deallocate the VM, which can then be resized to any size supported by the region and restarted; the commands involved are illustrated below:

az vm deallocate --resource-group testingGroup --name scalingTest
az vm resize --resource-group testingGroup --name scalingTest --size Standard_M128s
az vm start --resource-group testingGroup --name scalingTest

You can resize a Windows VM on Azure using either the Azure portal or PowerShell commands.

vSphere

To resize a VMware vSphere VM, do the following:

1. Open vSphere Client or Web Client and display the VM inventory.
2. Right-click the VM you want to modify and select Edit Settings.
3. On the Virtual Hardware tab, expand Memory and change the amount of RAM configured for the VM.
4. Expand CPU and change the number of cores and, optionally, the number of cores per socket.
5. Make any other desired changes to the hardware resources allocated to the VM.
Reconfiguring InterSystems IRIS for Scaled Resources

Once you have scaled the host, the next step is to reconfigure InterSystems IRIS to take advantage of the increased resources by changing one or more parameters in the instance's configuration parameter file (CPF). For example, to continue with the common scenario mentioned at the start of this post, now that you have increased the host's memory resources, you will want to take advantage of this by increasing the size of the InterSystems IRIS instance's database cache (which is done by changing the value of the globals parameter) so it can keep more data in memory. An easy way to make such a change, and by far the easiest and most repeatable way to make multiple changes to an instance's configuration in one operation or to make the same changes to multiple instances, is to use the configuration merge feature, which is available on UNIX® and Linux systems. As described in Using Configuration Merge to Deploy Customized InterSystems IRIS Instances in Running InterSystems Products in Containers and Using the Configuration Merge Feature in the Configuration Parameter File Reference, configuration merge lets you specify a merge file containing the settings you want merged into an instance's CPF immediately prior to a restart. (In release 2021.1 you'll be able to do this on a running instance without restarting it.) Not only is this more convenient than editing an instance's CPF directly, but it is highly repeatable across multiple instances and supports reliable change management by enabling you to keep an accurate record of changes simply by versioning the configuration merge files you apply. To execute a configuration merge, you need to do the following: Create the merge file with the parameters you want to modify. Stage the merge file in a location accessible to the instance.
If the instance you are modifying is in a container (which is likely on a cloud host), you can stage the file in the instance's durable %SYS directory (see Durable %SYS for Persistent Instance Data in Running InterSystems Products in Containers). Specify the merge file's location using the ISC_CPF_MERGE_FILE environment variable before restarting the instance. For example, continuing with the case of the database cache that needs updating, suppose you wanted a containerized instance's database cache size increased to 100 GB. The setting for this, in the [config] section of the CPF, would be globals=102400, which sets the database cache for 8-kilobyte blocks to 102,400 MB, or 100 GB. (As explained in the globals description in the Configuration Parameter File Reference, the parameter sets the size of the cache for multiple block sizes; if only one value is provided, however, it is applied to the 8-kilobyte block size, and 0 [zero] is assumed for the other sizes; globals=102400 is therefore the equivalent of globals=0,0,102400,0,0,0.) To make this change, you might do the following on the cloud host:

1. Create a configuration merge file, called for example mergefile2021.06.30.cpf, containing these lines:

[config]
globals=102400

2. Stage the merge file in the durable %SYS directory on the host's file system, which, if you mounted the external volume /data as /external in the container and used the ISC_DATA_DIRECTORY variable to specify /external/iris_durable as the durable %SYS directory for the instance, would be /data/iris_durable.

3. Use the docker exec command on the host's command line to set the variable and restart the instance with the iris command; if the instance's container is named iris and the instance is named IRIS, for example, the command would look like this:

docker exec -e ISC_CPF_MERGE_FILE=/data/iris_durable/mergefile2021.06.30.cpf iris iris stop IRIS restart

4. When the instance is restarted, you can confirm the new globals setting with this command:

docker exec iris grep globals /data/iris_durable/iris.cpf

💡 This article is considered an InterSystems Data Platform Best Practice.
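The staging steps above can be sketched as a shell script. This uses /tmp to stand in for the durable %SYS volume so it can run anywhere; the container name (iris), instance name (IRIS), and /data/iris_durable path follow the article's example and are shown only as comments, since they require a live cloud host:

```shell
# Create and stage the merge file (using /tmp in place of the durable %SYS volume)
DURABLE=/tmp/iris_durable
mkdir -p "$DURABLE"
cat > "$DURABLE/mergefile2021.06.30.cpf" <<'EOF'
[config]
globals=102400
EOF

# Sanity-check the staged file before applying it
grep -q '^globals=102400$' "$DURABLE/mergefile2021.06.30.cpf" && echo "merge file staged"

# On the real cloud host you would then restart the instance with the merge applied:
# docker exec -e ISC_CPF_MERGE_FILE=/data/iris_durable/mergefile2021.06.30.cpf iris iris stop IRIS restart
# ...and confirm the new setting afterwards:
# docker exec iris grep globals /data/iris_durable/iris.cpf
```

Versioning merge files like this one (note the date in the filename) is what gives you the change-management record the article describes.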
Announcement
Evgeny Shvarov · Aug 11, 2020

Proposed Topics for InterSystems IRIS FHIR Contest

Hi Developers! The InterSystems FHIR contest has started, and in this post I want to introduce several topics you may find interesting and use to build FHIR applications with InterSystems IRIS for Health. Amongst them are: a frontend mobile or web app for a FHIR server via the FHIR REST API or SQL; backend apps to perform complex queries and maintain rules; and work with different HL7 data standards. See the details below.

Mobile or Web application. Mobile and web frontend applications are perhaps the most typical use cases for working with a FHIR server. If you use InterSystems IRIS for Health, e.g. with the template we provide for the contest, you get a FHIR server which exposes a REST API ready to service API calls to any FHIR resources according to the FHIR R4 documentation. The related task. Tagging @Patrick.Jamieson3621 to provide more information. Also, you can consume FHIR data using SQL, working with the HSFHIR_I0001_R schema for full FHIR resources and the HSFHIR_I0001_S schema to run SQL queries over certain entities of resources. E.g. you can visualize the change in a patient's blood pressure over time, the related task request. Another idea for an application is to maintain an electronic immunization history, the request; tagging @Yuri.Gomes to provide more info.

Examples of complex queries. Medical information is very complex, and thus implementations and examples of queries over FHIR data to support different real-life scenarios seem very helpful and in demand. E.g. the example of a query which looks for patients that have diabetes medications but no diabetes diagnosis. Tagging @Qi.Li to provide more information.

Backend app upon rules. You could build a backend application which analyzes the FHIR data and sends alerts or performs different business operations. For example, consider a rule where the patient meets criteria (new lab result and new diagnosis) and send notifications, e.g. COVID-19 positive results. Related task requests: One, two.
Tagging @Rasha.Sadiq and @Qi.Li for more details.

Data transformation apps. As you know, there are numerous legacy and comprehensive data formats for health data, and all of them, even the oldest, are still in use or may be encountered. Thus an application which can deal with them and transform them into the FHIR R4 format and back will always be in demand. InterSystems IRIS for Health supports the majority of health standards and their data transformations, so use it and introduce apps that can work with it handily and robustly. The related task request; tagging @Udo.Leimberger3230 for more details.

CDA to FHIR data transformation. iPhone can export health data to CDA, see the details. Transform this data into the FHIR format using IRIS for Health and submit it to the FHIR server. Here is an example of how you can transform CDA to FHIR data using IRIS for Health. Tagging @Guillaume.Rongier7183 for more details.

The current set of proposed topics can be found here. Great topics! Thanks, @Evgeny.Shvarov for organizing everything. Here is the new topic for the FHIR contest: CDA to FHIR data transformation. iPhone can export health data to CDA, see the details. Transform this data into the FHIR format using IRIS for Health and submit it to the FHIR server. Here is an example of how you can transform CDA to FHIR data using IRIS for Health. Tagging @Guillaume.Rongier7183 for more details. The main announcement is updated too. Hey @Evgeny.Shvarov, I had an idea about creating a heat map using Google or Bing maps based on FHIR data that shows a geographic representation of certain observation results. Looks like a great idea, please go ahead, @Rehman.Masood!
Announcement
Anastasia Dyubaylo · Oct 26, 2020

Virtual Summit 2020: InterSystems Developer Ecosystem

Hi Developers, If you are planning to attend the Focus Sessions of InterSystems Virtual Summit 2020, please do not miss the session dedicated to the InterSystems Developer Community and Open Exchange! ⚡️ "Developer Ecosystem: Developer Community forum and Open Exchange applications gallery" session ⚡️ Speakers: 🗣 @Anastasia.Dyubaylo, Community Manager, InterSystems 🗣 @Evgeny.Shvarov, Startups and Community Manager, InterSystems What awaits you? We will talk about the key features of the InterSystems Developer Ecosystem. We'll focus on the online forum for InterSystems developers and what activities, events, and opportunities await partners and customers there. Then we'll talk about the benefits for developers of the Open Exchange apps gallery.

Date & Time: ➡️ Day 1: Tuesday, October 27 (Boston starts Monday, October 26)

Developer Ecosystem: Developer Community and Open Exchange (APAC): 3:15 AM UTC (11:15 PM Boston time)
Developer Ecosystem: Developer Community and Open Exchange (NA/LATAM/EMEA): 4:15 PM UTC (12:15 PM Boston time)

So! We will be happy to answer your questions in a virtual chat on the conference platform – please join! Our session will begin in half an hour! Please join! 📍 https://intersystems.6connex.com/event/virtual-summit/en-us/contents/433631/share?rid=FocusSessions&nid=850273 We'll start in half an hour! Join us here: 👉🏼 https://intersystems.6connex.com/event/virtual-summit/en-us/contents/433662/share?rid=FocusSessions&nid=850273
Announcement
Anastasia Dyubaylo · Oct 29, 2020

InterSystems Interoperability Contest Kick-off Webinar

Hi Community! We are pleased to invite all developers to the upcoming InterSystems Interoperability Contest Kick-off Webinar! This webinar is dedicated to the Interoperability Contest. In this webinar, we will talk about the interoperability capabilities of InterSystems IRIS, do a demo of building a basic IRIS interoperability solution, and demo how to use PEX. Also, we'll discuss and answer questions on how to build interoperability solutions using InterSystems IRIS and IRIS for Health. Date & Time: Monday, November 2 — 10:00 AM EDT Speakers: 🗣 @Stefan.Wittmann, InterSystems Product Manager 🗣 @Eduard.Lebedyuk, InterSystems Sales Engineer 🗣 @Evgeny.Shvarov, InterSystems Developer Ecosystem Manager So! We will be happy to talk to you at our webinar! ✅ JOIN THE KICK-OFF WEBINAR! TODAY! Don't miss the kick-off! ➡️ JOIN US HERE ⬅️ Please check the time of the event: 🗓 Today at 10:00 AM EDT! Hey Developers! The recording of this webinar is available on InterSystems Developers YouTube! Please welcome: ⏯ InterSystems Interoperability Contest Kick-off Webinar Big applause to our speakers! 👏🏼 And thanks to everyone for joining our webinar!
Announcement
Daniel Kutac · Aug 27, 2020

InterSystems CZ & SK Webinar - August 2020

A webinar was held today for our Czech and Slovak partners and end users. This webinar was an online version of what we originally planned to present earlier this year as a workshop at the Fabrika hotel in Humpolec. Due to the current epidemiological situation, a decision was made to move the workshop into the virtual space.

The webinar took about 2.5 hours and we covered the following areas of interest:

- Good news from market analyst firms and how it can help our partners make selling easier
- New features and functionality available with InterSystems IRIS 2020.1 and later
- InterSystems API Manager, InterSystems System Alerting and Monitoring
- Artificial Intelligence and Machine Learning with InterSystems IRIS
- Cloud deployments
- Transition from Caché/Ensemble to InterSystems IRIS

The webinar used PowerPoint slides combined with live demos. We had an audience of almost 50 online participants; considering the short notice and vacation time, this is a nice number for our region. The audience actively asked questions and made proposals that will likely lead to follow-up webinar(s) targeted at the specific topics indicated by our audience.

This was our (Prague office) first virtual event, so we learned new procedures and tools, but everything worked very well. This, together with the interest from our partners, is promising, and we look forward to organizing other webinars in the near future.

Presentation slides are available for download. Please be aware that the slides are in Czech only!

On behalf of the InterSystems Prague team,
Dan Kutac

Nice! Is there any video recording, Dan?

It's great to see the ZPM slide! Like it! BTW, there is a URL on OEX which shows only the applications that can be deployed via ZPM.

Hi Evgeny, yes, we do have a video recording available. We just need to make sure it's good to publish. It's in Czech, though.

Hey Developers! Now this webinar recording is available on the InterSystems Developers YouTube channel: Enjoy watching this video!
Announcement
Anastasia Dyubaylo · Oct 3, 2023

[Video] Migrating to InterSystems IRIS Made Easy

Hey Developers, Watch this video to learn how an InterSystems partner based in Ostfildern, Germany, made a simple switch to InterSystems IRIS and rolled it out to 2,500 end-users by using the in-place conversion: ⏯ Migrating to InterSystems IRIS Made Easy @ Global Summit 2023 🗣 Presenter: Michael Brhel, CEO, Simba Computer Systeme Subscribe to our YouTube channel InterSystems Developers to get the latest updates!
Article
Muhammad Waseem · Sep 18, 2023

InterSystems IRIS Flask Generative AI application

Hi Community,

In this article, I will introduce my application IRIS-GenLab. IRIS-GenLab is a generative AI application that leverages the Flask web framework, the SQLAlchemy ORM, and InterSystems IRIS to demonstrate Machine Learning, LLM, NLP, generative AI API, Google AI LLM, Flan-T5-XXL model, Flask-Login, and OpenAI ChatGPT use cases.

Application Features

- User registration and authentication
- Chatbot functionality with the help of Torch (a Python machine learning library)
- Named entity recognition (NER), a natural language processing (NLP) method for extracting information from text
- Sentiment analysis, an NLP approach that identifies the emotional tone of a message
- Hugging Face text generation with the help of the GPT-2 LLM (Large Language Model) and the Hugging Face pipeline
- Google PaLM API, to access the advanced capabilities of Google's large language models (LLMs) such as PaLM 2
- Google Flan-T5 XXL, fine-tuned on a large corpus of text data that was not filtered for explicit content
- OpenAI, a private research laboratory that aims to develop and direct artificial intelligence (AI)

Application Flow

Python app.py file:

```python
#import genlab application
from genlab import create_app
from genlab.myconfig import *
from flaskext.markdown import Markdown

if __name__ == "__main__":
    # get db info from config file
    database_uri = f'iris://{DB_USER}:{DB_PASS}@{DB_URL}:{DB_PORT}/{DB_NAMESPACE}'
    # Invokes create_app function
    app = create_app(database_uri)
    Markdown(app)
    #Run flask application on 4040 port
    app.run('0.0.0.0', port="4040", debug=False)
```

The above code invokes the create_app() function and then runs the application on port 4040. create_app() is defined in the __init__.py file, which creates/modifies the database and initializes the views:

```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager
from .myconfig import *

#init SQLAlchemy reference
db = SQLAlchemy()

def create_app(database_uri):
    app = Flask(__name__)
    app.config['SECRET_KEY'] = "iris-genlab"
    # Getting DB parameters from myconfig.py file
    app.config['SQLALCHEMY_DATABASE_URI'] = database_uri
    app.app_context().push()

    from .views import views
    from .auth import auth
    from .models import User

    #register blueprints
    app.register_blueprint(views, url_prefix="/")
    app.register_blueprint(auth, url_prefix="/")

    #init database
    db.init_app(app)
    with app.app_context():
        db.create_all()

    # Assign Login View
    login_manager = LoginManager()
    login_manager.login_view = "auth.login"
    login_manager.init_app(app)

    @login_manager.user_loader
    def load_user(id):
        return User.query.get(int(id))

    return app
```

The above code creates the database by invoking the SQLAlchemy create_all() function, which creates the user table based on the structure defined in the models.py file:

```python
from . import db
from flask_login import UserMixin
from sqlalchemy.sql import func

#User table
class User(db.Model, UserMixin):
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(150), unique=True)
    username = db.Column(db.String(150), unique=True)
    password = db.Column(db.String(150))
    date_created = db.Column(db.DateTime(timezone=True), default=func.now())

    def __repr__(self):
        return f'{self.username}'
```

Named entity recognition (NER)

Named entity recognition with spaCy, an open-source library for Natural Language Processing (NLP) in Python. Navigate to http://localhost:4040/ner, enter text, and click the Submit button to view the results.

The above URL invokes the ner() method from the views.py file:

```python
from flask import Blueprint, render_template, request
from flask_login import login_required, current_user
from spacy import displacy
import spacy

HTML_WRAPPER = """<div style="overflow-x: auto; border: 1px solid #e6e9ef; border-radius: 0.25rem; padding: 1rem">{}</div>"""

views = Blueprint("views", __name__)

#Named Entity Recognition
@views.route('/ner', methods=["GET", "POST"])
@login_required
def ner():
    if request.method == 'POST':
        raw_text = request.form['rawtext']
        result = ''
        if len(raw_text.strip()) > 0:
            # Load English tokenizer, tagger, parser and NER
            nlp = spacy.load('en_core_web_sm')
            docx = nlp(raw_text)
            html = displacy.render(docx, style="ent")
            html = html.replace("\n\n", "\n")
            result = HTML_WRAPPER.format(html)
        return render_template('ner.html', user=current_user, result=result, rawtext=raw_text, pst=True)
    return render_template('ner.html', user=current_user, pst=False)
```

Below is the ner.html template file, which inherits from base.html:

```html
{% extends "base.html" %}
{% block title %}Home{% endblock %}
{% block head %}
<h2 class="display-4">Named entity recognition</h2>
<p>with spaCy, an open-source library for Natural Language Processing (NLP) in Python</p>
{% endblock %}
{% block content %}
<form method="POST">
  <textarea rows="7" required="true" name="rawtext" class="form-control txtarea-main">
  {{ rawtext }}
  </textarea>
  <button type="submit" class="btn btn-info"><i class="fa fa-database"></i> Submit</button>
  <a class="btn btn-primary waves-effect" href="/" role="button"> <i class="fa fa-eraser"></i> Refresh</a>
</form>
{% if pst %}
  {% filter markdown %}
  {% endfilter %}
  <hr/>
  <div class="card shadow-sm" id="custom_card2">
    <h4>Result</h4>
    <p>{{ result|markdown }}</p>
  </div>
{% endif %}
{% endblock %}
```

Application Database

SQLAlchemy will create the table below:

- user: to store user information

To view table details, navigate to http://localhost:52775/csp/sys/exp/%25CSP.UI.Portal.SQL.Home.zen?$NAMESPACE=USER#

For more details, please visit the IRIS-GenLab Open Exchange application page.

Thanks
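As a quick sanity check of the connection-string pattern app.py passes to create_app(), here is a minimal sketch; the credential and host values below are hypothetical placeholders, not the application's real myconfig.py settings:

```python
# Hypothetical stand-ins for the DB_* settings from myconfig.py
DB_USER, DB_PASS = "SuperUser", "SYS"
DB_URL, DB_PORT, DB_NAMESPACE = "localhost", 1972, "USER"

# Same f-string pattern app.py uses to build the SQLAlchemy URI
database_uri = f'iris://{DB_USER}:{DB_PASS}@{DB_URL}:{DB_PORT}/{DB_NAMESPACE}'
print(database_uri)  # iris://SuperUser:SYS@localhost:1972/USER
```

SQLAlchemy resolves the iris:// scheme through an IRIS dialect package and uses this URI to open connections against the given namespace.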
Discussion
Evgeny Shvarov · Sep 18, 2023

Inbound Requests as a part of InterSystems Interoperability production

Hi Interoperability experts!

I recently noticed an interesting conceptual discussion in our Interoperability Discord channel to which I want to give more exposure.

We all know that a typical InterSystems Interoperability production consists of the following chain: Inbound Adapter -> Business Service -> Business Process -> Business Operation -> Outbound Adapter. And the Business Service (BS) here is always considered a passive "listener", waiting on a port/folder/REST API for incoming data. But often the trigger for a production can be data that has to be retrieved by an active request to some port/REST API/S3 bucket. And the documentation says that if a developer wants to make an HTTP request in a production, it should be implemented via a Business Operation and Outbound Adapter pair that retrieves the data and sends it to a Business Service. So the diagram looks like that:

So it is kind of reverse logic here, which is not something we teach on learning.intersystems.com or in the documentation. Should we introduce the idea of an Inbound Request Adapter? How do you manage this in real-life productions? Any other thoughts?

Also submitted an idea.

No discussion: a Business Operation and Outbound Adapter is a combination you should not break. But to trigger a second Business Operation you just need a Business Service that you kick; no need for a Business Process in between. The old ENSDEMO shows such examples, e.g. DemoRecordMapper, where the FileService is the driving part. Another example uses a service that triggers itself: DemoDashboard. It just lives on its timeout setting. Here it does nothing but update some properties, but it could be anything, e.g. kicking another Business Operation.

A bunch of services with their inbound adapters, such as FTP, Email, SQL, Kafka, and so on, connect to an external server using the inbound adapter directly, collect data, and use it in the Service. And only for TCP, HTTP, SOAP, and REST was it for some reason decided that the inbound adapter should start our own server, so external services have to connect to us. It's useful for sure, but why can't we use it the other way too? Is it somehow completely different? The logic in the workflow is still the same: it's input data which has to start the workflow.

The whole question is: why "Business Operation and Outbound Adapter" and not "Inbound Adapter and Business Service" for incoming data obtained from a REST API?
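To make the pull-versus-push distinction concrete, here is a minimal, hypothetical Python sketch (the class and parameter names are invented for illustration and are not the InterSystems API) of a service that actively polls an external source on a schedule instead of waiting to be called:

```python
import time

class InboundRequestService:
    """Hypothetical sketch of a 'business service' that actively pulls data
    (e.g. an HTTP GET or an S3 listing) on an interval, instead of passively
    listening for pushed requests."""

    def __init__(self, fetch, process, interval=5.0):
        self.fetch = fetch        # callable that retrieves new messages from outside
        self.process = process    # callable that hands each message to the production
        self.interval = interval  # polling period in seconds

    def poll_once(self):
        # One polling cycle: pull whatever is available, feed it downstream.
        messages = self.fetch()
        for m in messages:
            self.process(m)
        return len(messages)

    def run(self, cycles):
        # Simplified polling loop, akin to a service's call-interval setting.
        for _ in range(cycles):
            self.poll_once()
            time.sleep(self.interval)

# Usage: simulate pulling two messages from an external REST API
received = []
svc = InboundRequestService(fetch=lambda: ["msg1", "msg2"], process=received.append)
svc.poll_once()
```

The point of the sketch is that the fetch step (outbound in direction) logically belongs at the *start* of the workflow, which is exactly the tension the discussion above describes.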
Question
Rathinakumar S · Nov 7, 2022

How InterSystems cache calculate the license count

Hi Team,

How does InterSystems Caché calculate the license count based on processes? And how do we determine the license count required for the product?

Thanks, Rathinakumar

License counting depends on how you access Caché/IRIS (i.e. web interface or some kind of client). There is quite a wide selection of licenses. For details, it's best to contact your local InterSystems sales rep to find your optimal solution.
Question
Tyffany Coimbra · Nov 10, 2022

download InterSystems Caché ODBC Data Source

I need to download InterSystems Caché ODBC Data Source 2018 and I can't. I want to know where I can download it.

Have you looked in ftp://ftp.intersystems.com/pub/ ?

Try here: ftp://ftp.intersystems.com/pub/cache/odbc/

Hello guys! I can't download from these links.

Hi. If you mean the ODBC driver, then it gets installed when you install Caché. So, any Caché install file for that version has it. I don't know if you can select to only install the driver and nothing else, as I always want the full lot on my PC. (... just tried, and a "custom" setup allows you to remove everything but the ODBC driver, but it's fiddly.)

ODBC drivers are available for download from the InterSystems Components Distribution page here: https://wrc.intersystems.com/wrc/coDistGen.csp

Howdy all, Iain is right that you can get the ODBC driver from the WRC site directly if you are a customer of ours, but the new spot where InterSystems hosts drivers for ODBC and so on is here: https://intersystems-community.github.io/iris-driver-distribution/

edit: I realized this was asking for Caché, not IRIS drivers, so my answer doesn't address it.

How are you trying to access the FTP link? I tested it, and it should be publicly available. Try pasting the link into your file explorer on Windows, for example.

When I go to this website, it lists the components, including the component I am looking for (Caché ODBC Driver). However, there is no button or link to download the component.

Hi Wayne, please try scrolling to the right. If it still doesn't work, I'd contact InterSystems support to see what the issue is. There may be a network security issue preventing parts of the web page from loading.

Vic Sun, are you suggesting the IRIS drivers can be used to connect to Caché? That would be helpful if so. The original post was specifically asking about Caché.
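For readers who get the driver installed on Linux, the data source itself is usually defined in an odbc.ini file. A minimal hypothetical sketch follows; the DSN name, driver path, host, port, namespace, and user below are placeholders that depend on your installation, so check the InterSystems ODBC documentation for your version:

```ini
[CacheODBC]
; Placeholder values - adjust for your installation
Driver      = /usr/cachesys/bin/libcacheodbc.so
Description = InterSystems Cache ODBC data source
Host        = localhost
Port        = 1972
Namespace   = USER
UID         = _SYSTEM
```

On Windows, the equivalent DSN is created through the ODBC Data Source Administrator rather than a text file.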