Announcement
Anastasia Dyubaylo · Sep 13, 2019
Hi Everyone!
A new video recorded by @Stefan.Wittmann is now available on the InterSystems Developers YouTube channel:
JSON and XML persistent data serialization in InterSystems IRIS
Need to work with JSON or XML data?
InterSystems IRIS supports multiple inheritance and provides several built-in tools to easily convert between XML, JSON, and objects as you go.
Learn more about the multi-model development capabilities of InterSystems IRIS on Learning Services sites.
Enjoy watching the video!

Can confirm that the %JSON.Adaptor tool is extremely useful! This was such a great addition to the product. In Application Services, we've used it to build a framework which allows us to not only expose our persistent classes via REST but also authorize different levels of access for different representations of each class (for example, all the properties vs. just the Name and the Id). The "Mappings and Parameters" feature is especially useful: https://irisdocs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GJSON_adaptor

Also, @Stefan, are you writing backwards while you talk? That's impressive.

Anyone who is doubting multiple inheritance is insane. Although calling this kind of inheritance 'mixin classes' helps, I've noticed: mixing in additional features.

https://hackaday.com/tag/see-through-whiteboard/
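As a quick illustration of what the video and the comment above describe, here is a minimal sketch of a persistent class that mixes in both serialization adaptors; the class and property names are invented for the example:

Class Demo.Person Extends (%Persistent, %JSON.Adaptor, %XML.Adaptor)
{
Property Name As %String;
Property DOB As %Date;
}

With that in place, an instance can serialize itself in either format:

set person = ##class(Demo.Person).%New()
set person.Name = "Jane Doe"
do person.%JSONExport()   // writes the object as JSON to the current device
do person.XMLExport()     // writes the same object as XML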
Announcement
Fabiano Sanches · Apr 26, 2023
Stay in touch with InterSystems and receive alerts, advisories, and product news quickly. The process is really simple:
Click on this link: https://www.intersystems.com/support/product-alerts-advisories/
Fill in the form with your contact information and
You're all set!
As you can see, it takes less than a minute to stay informed!
Announcement
Anastasia Dyubaylo · Jun 14, 2017
Hi Community!
Enjoy the video of the week about InterSystems iKnow Technology:
A Cure for Clinician Frustration
In this video, learn why iKnow capabilities are critical for getting the most out of your investments in electronic health records and improving information access for clinicians.
You are very welcome to watch all the videos about iKnow in a dedicated iKnow playlist on the InterSystems Developers YouTube Channel.
Enjoy!
Article
Константин Ерёмин · Sep 18, 2017
The InterSystems DBMS has a built-in technology for working with unstructured data called iKnow and a full-text search technology called iFind. We decided to take a dive into both and make something useful. The result is DocSearch — a web application for searching the InterSystems documentation using iKnow and iFind.
How Caché Documentation works
Caché documentation is based on the Docbook technology. It has a web interface (which includes a search that uses neither iFind nor iKnow). The articles themselves are stored in Caché classes, which allows us to run queries against this data and, of course, to create our own search tool.
What are iKnow and iFind
InterSystems iKnow is a technology for analyzing unstructured data, which provides access to this data by indexing the sentences and entities in it. To start the analysis, you first need to create a domain (a storage for unstructured data) and load text into it.
The iFind technology is a module of the Caché DBMS for performing full-text search in Caché classes. iFind uses many iKnow classes for intelligent text search. To use iFind in your queries, you need to add a special iFind index to your Caché class.
There are three types of iFind indexes, each offering all the functions of the previous type, plus some additional ones:
The basic index (%iFind.Index.Basic): supports the search for words and word combinations.
The semantic index (%iFind.Index.Semantic): supports the search for iKnow objects.
The analytic index (%iFind.Index.Analytic): supports all iKnow functions of the semantic search, as well as information about paths and word proximity.
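As a minimal sketch of how the first of these is declared and queried (the class, property, and search string here are invented for the example, following the same index-parameter and search_index syntax the article uses below for the analytic index):

Class Demo.Article Extends %Persistent
{
Property Content As %String(MAXLEN = "");

Index ContentBasic On (Content) As %iFind.Index.Basic(LANGUAGE = "en", LOWER = 1);
}

SELECT %ID FROM Demo.Article
WHERE %ID %FIND search_index(ContentBasic, 'unstructured data', 0)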
Since the documentation classes are stored in a separate namespace, the installer also maps the necessary packages and globals so that these classes become available in our namespace.
Installer code for mapping
XData Install [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <!-- Specify the name of the namespace -->
  <IfNotDef Var="Namespace">
    <Var Name="Namespace" Value="DOCSEARCH"/>
    <Log Text="Set namespace to ${Namespace}" Level="0"/>
  </IfNotDef>
  <!-- Check whether the namespace already exists -->
  <If Condition='(##class(Config.Namespaces).Exists("${Namespace}")=1)'>
    <Log Text="Namespace ${Namespace} already exists" Level="0"/>
  </If>
  <!-- Create the namespace -->
  <If Condition='(##class(Config.Namespaces).Exists("${Namespace}")=0)'>
    <Log Text="Creating namespace ${Namespace}" Level="0"/>
    <!-- Create the database -->
    <Namespace Name="${Namespace}" Create="yes" Code="${Namespace}" Ensemble="" Data="${Namespace}">
      <Log Text="Creating database ${Namespace}" Level="0"/>
      <!-- Map the documentation classes and globals to the new namespace -->
      <Configuration>
        <Database Name="${Namespace}" Dir="${MGRDIR}/${Namespace}" Create="yes" MountRequired="false"
                  Resource="%DB_${Namespace}" PublicPermissions="RW" MountAtStartup="false"/>
        <Log Text="Mapping DOCBOOK to ${Namespace}" Level="0"/>
        <GlobalMapping Global="Cache*" From="DOCBOOK" Collation="5"/>
        <GlobalMapping Global="D*" From="DOCBOOK" Collation="5"/>
        <GlobalMapping Global="XML*" From="DOCBOOK" Collation="5"/>
        <ClassMapping Package="DocBook" From="DOCBOOK"/>
        <ClassMapping Package="DocBook.UI" From="DOCBOOK"/>
        <ClassMapping Package="csp" From="DOCBOOK"/>
      </Configuration>
      <Log Text="End creating database ${Namespace}" Level="0"/>
    </Namespace>
    <Log Text="End creating namespace ${Namespace}" Level="0"/>
  </If>
</Manifest>
}
The domain required for iKnow is built upon the table containing the documentation. Since we use a table as the data source, we'll use SQL.Lister. The content field contains the documentation text, so let's specify it as the data field. The rest of the fields will be described in the metadata.
Installer code for creating a domain
ClassMethod Domain(ByRef pVars, pLogLevel As %String, tInstaller As %Installer.Installer) As %Status
{
#Include %IKInclude
#Include %IKPublic
set ns = $Namespace
znspace "DOCSEARCH"
// Create a domain or open it if it exists
set dname="DocSearch"
if (##class(%iKnow.Domain).Exists(dname)=1){
write "The ",dname," domain already exists",!
zn ns
quit
}
else {
write "The ",dname," domain does not exist",!
set domoref=##class(%iKnow.Domain).%New(dname)
do domoref.%Save()
}
set domId=domoref.Id
// Lister is used for searching for sources corresponding to the records in query results
set flister=##class(%iKnow.Source.SQL.Lister).%New(domId)
set myloader=##class(%iKnow.Source.Loader).%New(domId)
// Building a query
set myquery="SELECT id, docKey, title, bookKey, bookTitle, content, textKey FROM SQLUser.DocBook"
set idfld="id"
set grpfld="id"
// Specifying the fields for data and metadata
set dataflds=$LB("content")
set metaflds=$LB("docKey", "title", "bookKey", "bookTitle", "textKey")
// Putting all data into Lister
set stat=flister.AddListToBatch(myquery,idfld,grpfld,dataflds,metaflds)
if stat '= 1 {write "The lister failed: ",$System.Status.DisplayError(stat) quit }
//Starting the analysis process
set stat=myloader.ProcessBatch()
if stat '= 1 {
quit
}
set numSrcD=##class(%iKnow.Queries.SourceQAPI).GetCountByDomain(domId)
write "Done",!
write "Domain cointains ",numSrcD," source(s)",!
zn ns
quit
}
To search in documentation, we use the %iFind.Index.Analytic index:
Index contentInd On (content) As %iFind.Index.Analytic(LANGUAGE = "en",
LOWER = 1, RANKERCLASS = "%iFind.Rank.Analytic");
Here contentInd is the name of the index and content is the name of the field we are creating the index for. The LANGUAGE = "en" parameter sets the language of the text. The LOWER = 1 parameter turns off case sensitivity. The RANKERCLASS = "%iFind.Rank.Analytic" parameter enables the TF-IDF result ranking algorithm.
After adding and building such an index, it can be used in SQL queries, for example. The general syntax for using iFind in SQL:
SELECT * FROM TABLE WHERE %ID %FIND
search_index(indexname,'search_items',search_option)
After creating the %iFind.Index.Analytic index with these parameters, several SQL procedures are generated, named according to the pattern [table_name]_[index name][ProcedureName].
In our project, we use two of them:
DocBook_contentIndRank — returns the result of the TF-IDF ranking algorithm for a request. The procedure has the following syntax:
SELECT DocBook_contentIndRank(%ID, 'SearchString', 'SearchOption') Rank FROM DocBook
WHERE %ID %FIND search_index(contentInd, 'SearchString', 'SearchOption')
DocBook_contentIndHighlight — returns the search results, where the searched words are wrapped into the specified tag:
SELECT DocBook_contentIndHighlight(%ID, 'SearchString', 'SearchOption', 'Tags') Text FROM DocBook
WHERE %ID %FIND search_index(contentInd, 'SearchString', 'SearchOption')
I will go into more detail later in the article.
What do we have in the end:
Autocomplete in the search field
As you start entering text into the search field, the system will suggest possible query variants to help you find the necessary information quicker. These suggestions are generated on the basis of the word (or its beginning) that you type. The system shows the ten best-matching words or phrases. This process uses iKnow, specifically the %iKnow.Queries.Entity.GetSimilar method.
Fuzzy string search
iFind supports fuzzy search for finding words that almost match the search query. This is achieved by measuring the Levenshtein distance between two words. The Levenshtein distance is the minimal number of one-character changes (insertions, removals or replacements) necessary for turning one word into another. It can be used for correcting typos, small variations in writing, and different grammatical forms (plural and singular, for example).
In iFind SQL queries, the search_option parameter is responsible for fuzzy search. search_option = 3 denotes a Levenshtein distance of 2. To set a Levenshtein distance equal to n, you need to set the search_option parameter to '3:n'. The documentation search uses a Levenshtein distance of 1, so let's demonstrate how it works. Let's type "ifind" in the search field:
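For example, with the index defined above, a query that tolerates one edit per search term (the search string is just an illustration) would look like this:

SELECT * FROM DocBook
WHERE %ID %FIND search_index(contentInd, 'ifind', '3:1')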
Let's try a fuzzy search by intentionally making a typo. As we can see, the search corrected the typo and found the necessary articles.
Complex searches
Because iFind supports complex queries with brackets and AND/OR/NOT operators, we were able to implement complex search functionality. Here's what you can specify in your query: a word, a word combination, one of several words, or exceptions. The fields can be filled in one by one, or all at once.
For example, let's find articles containing the word “iknow”, the combination “rest api” and those that contain either “domain” or “UI”.
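A sketch of the SQL such a request maps onto (the exact operator syntax inside the search string is assumed from the iFind operator support described above, not taken from the project source):

SELECT * FROM DocBook
WHERE %ID %FIND search_index(contentInd, 'iknow AND "rest api" AND (domain OR UI)', 0)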
We can see that there are two such articles:
Please note that the second one mentions Swagger UI, so we can modify the query to exclude articles that contain the word Swagger.
As a result, we will only find one article.
Search results highlighting
As stated above, the use of an iFind index creates the DocBook_contentIndHighlight procedure. Let's use the following:
SELECT DocBook_contentIndHighlight(%ID, 'search_items', '0', '<span class="Illumination">', 0) Text FROM DocBook
to get the resulting text wrapped in the tag
<span class="Illumination">
This helps you visually mark search results on the front end.
Search results ranking
iFind is capable of ranking results using the TF-IDF algorithm. TF-IDF is often used in text analysis and data search tasks – for example, as a criterion of the relevance of a document to a search query.
In the result of the SQL query, the Rank field will contain a weight that is proportional to the number of times the word is used in an article, and inversely proportional to the frequency of the word's occurrence in other articles.
SELECT DocBook_contentIndRank(%ID, 'SearchString', 'SearchOption') Rank FROM DocBook
WHERE %ID %FIND search_index(contentInd, 'SearchString', 'SearchOption')
Integration with the official documentation search
After installation, a “Search using iFind” button will be added to the official documentation search.
If the “Search words” field is filled, you will be taken to the search results page after clicking the “Search using iFind” button. If the field is empty, you will be taken to the new search page.
Installation
Download the Installer.xml file from the latest release available on the corresponding page.
Import the downloaded Installer.xml file into the %SYS namespace and compile it.
Enter the following command in the terminal in the %SYS namespace:
do ##class(Docsearch.Installer).setup(.pVars)
After that, the search will be available at the following address: localhost:[port]/csp/docsearch/index.html
Demo
An online demo of the search is available here.
Conclusion
This project demonstrates interesting and useful capabilities of the iFind and iKnow technologies that make data search more relevant. Any comments or suggestions will be highly appreciated. The entire source code with the installer and the deployment guide is available on GitHub.

Hi Konstantin, thanks for sharing your work, a nice application of iFind technology! If I can add a few ideas to make this more lightweight:
Rather than creating a domain programmatically, the recommended approach for a few versions now has been to use Domain Definitions. They allow you to declare a domain in an XML format (not much unlike the %Installer approach) and avoid a number of inconveniences in managing your domain in a reproducible way.
From reading the article, I believe you're just using the iKnow domain for that one EntityAPI:GetSimilar() call to generate search suggestions. iFind has a similar feature, also exposed through SQL, through %iFind.FindEntities() and %iFind.FindWords(), depending on what kind of results you're looking for. See also this iFind demo. With that in place, you may even be able to skip those domains altogether :-)
thanks,
benjamin

Thank you, Benjamin. I will keep your ideas in mind.
Thanks, Konstantin Eremin.

Thanks for posting this Konstantin. For a long time I have been wondering why InterSystems hadn't done this already. I've had something simple running on my laptop already a long time ago, but the internal discussion on how to package it proved a little more complicated. Among other things, an iFind index requires an iKnow-enabled license (and more space!), which meant you couldn't simply include it in every kit.
Also, for the ranking of docbook results, applying proper weights based on the type of content (title / paragraph / sample / ...) was at least as important as the text search capabilities themselves. That latter piece has been well-addressed in 2017.1, so docbook search is in pretty good shape now. Blending in an easily-deployable iFind option as Konstantin published can only add to this!
Thanks,
benjamin

Hi, Konstantin! I tried to search for the word $Case. It finds it, but it shows a strange option in the dropdown list of the search field. See the screenshot. What does it mean?

Hi, Evgeny! I used iKnow entities as the words in the dropdown list of the search field. iKnow thinks "$case( $extract( units, 1" is an entity, because it looks somewhat strange. But I would like to use %iFind.FindEntities() (an idea from Benjamin DeBoe's first comment) for the words in the dropdown list of the search field after a short time. I think that will fix this.

iKnow was written to analyze English rather than ObjectScript, so you may see a few odd results coming out of code blocks. I believe you can add a where clause excluding those records from the block table to avoid them.

Now I use %iFind.FindEntities to get the words in the dropdown list of the search field. Installation has become faster than before, because I don't use the domain building process anymore.

Hi, Konstantin! The problem with strange suggestions is fixed, but it doesn't suggest anything for $CASE now ) Did you put $CASE in a blacklist? ) I think suggestions on all COS commands and functions would be a good option for the search field (if possible, of course).

Hi, Evgeny! Yes, I agree with you about COS commands in the dropdown list of the search field. I had some problems with COS commands and functions, but now I have fixed it.

Hi Konstantin, can we install this project on Caché 2016.2 or does it need 2017? I tried to install offline (because my server cannot get through to GitHub (443)) and the installation failed with several errors. Maybe I need more specific instructions for an offline install?
Uri

Hi Uri! You need Caché 2017.
Konstantin

Hi, Konstantin! When I search the documentation with your online tool, which version of the documentation does it work with? Would you please add the version of the product to the results or somewhere? Thanks in advance!

Hi, Evgeny! I will add the version of the product to the results in the near future.

Hi, Evgeny! I have added the version of the documentation to the results.
Konstantin

Thanks, Konstantin! And here is the link to the demo. Do you want to add an option to share the search? E.g. introduce a share-results button in the UI which would provide a URL with the search option added to it? It would be very handy if you want to share search results with a colleague.

Good day, I would very much like to install this example on my local instance. However, I cannot find Installer.xml on the "corresponding page". Which is the "corresponding page", please? I downloaded the solution from GitHub, but there is no Installer.xml there either. I would appreciate it if you could point me to the "corresponding page" where the Installer.xml is. Thank you in advance.

Hi Elize! It's in the releases.
Announcement
Evgeny Shvarov · Sep 15, 2017
Hi, Community!
We are pleased to invite you to the InterSystems UK Developer Community Meetup on the 17th of October!
The UK Developer Community Meetup is an informal meeting of developers, engineers, and devops to discuss successes and lessons learnt from those building and supporting solutions with InterSystems products. An excellent opportunity to meet and discuss new solutions with like-minded peers and to find out what's new in InterSystems technology.
The Meetup will take place on the 17th of October from 5pm to 8pm at The Belfry, Sutton Coldfield, with food and beverages supplied.
Your stories are very welcome! Here is the current agenda for the event:

Time | Session | Presenter | Site
5:00 pm | Dependencies and Complexity | @John.Murray | georgejames.com
5:30 pm | Developing modern web applications with Caché, Web Components & JSON-RPC | @Sean.Connelly | memcog.com
6:00 pm | Networking Coffee break | |
6:30 pm | Up Arrow Redux: Persistence as a Language Feature | @Rob.Tweed | mgateway.com
7:00 pm | First class citizens of the container world | @Luca.Ravazzolo | InterSystems Product Manager

If you want to be a presenter, please comment on this post below and we'll contact you. All sessions are now filled.
Attendees are also invited to join us the following day for the UK Technology Summit - which is the annual gathering of the InterSystems community to discuss the technologies, strategies, and methodologies that will leverage what matters – competitive advantage and business growth.
Register for the Meetup here (link to http://www3.intersystems.com/its2017/registration) and select UK Developer Community Meet Up.

The topic from @rob.tweed has been added. We have one free slot available! And we will have a session about containers from @Luca.Ravazzolo, InterSystems Product Manager. Come to the InterSystems Data Platform UK Meetup and InterSystems UK Summit!

We will have a live stream in two hours. Join!

We are live now! If you have any questions for a presenter, you can ask them online.

To accompany the YouTube video I have posted the slide deck for my talk (the first one) here.

Slides from @Luca.Ravazzolo's session are available here.

The slide deck for my presentation on "data persistence as a language feature" is here: https://www.slideshare.net/robtweed/data-persistence-as-a-language-feature

Slide #7.
You touched on a very sore subject. How well I understand you!

My presentation made reference to a Google V8 API bottleneck issue. Here's the link to the bug tracker report: https://bugs.chromium.org/p/v8/issues/detail?id=5144#c1 and the detailed benchmark tests that illustrate the problem: https://bugs.chromium.org/p/v8/issues/attachmentText?aid=240024

Here are the slides from the DeepSee Web session.
Article
Developer Community Admin · Oct 21, 2015
Introduction
This document is intended to provide a survey of various High Availability (HA) strategies that can be used in conjunction with InterSystems Caché, Ensemble, and HealthShare Foundation. This document also provides an overview of the various types of system outages that can occur, as well as how each strategy would handle a given outage, with the goal of helping you choose the right strategy for your specific deployment.
The strategies surveyed in this document are based on three different HA technologies:
Operating System Failover Clusters
Virtualization-Based HA
Caché Database Mirroring
Article
Developer Community Admin · Oct 21, 2015
Abstract
The European Space Agency (ESA) has chosen InterSystems Caché as the database technology for the AGIS astrometric solution that will be used to analyze the celestial data captured by the Gaia satellite.
The Gaia mission is to create an accurate phase-map of about a billion celestial objects. During the mission, the AGIS solution will iteratively refine the accuracy of Gaia's spatial observations, ultimately achieving accuracies that are on the order of 20 microarcseconds.
In preparation for the extreme data requirements of this project, InterSystems recently engaged in a proof-of-concept project which required 5 billion discrete Java objects of about 600 bytes each to be inserted into the Caché database within a span of 24 hours. Running on one 8-core Intel 64-bit processor with Red Hat Enterprise Linux 5.5, Caché successfully ingested all the data in 12 hours and 18 minutes, at an average insertion rate of 112,000 objects/second.
Question
Tom Philippi · Jan 31, 2018
I am running InterSystems Ensemble 2016.2 on Ubuntu and trying to connect to a remote MS SQL Server database. So far, I have successfully configured my Ubuntu machine to connect to the remote MS SQL Server database using unixODBC. That is:

The Telnet connection works.
The tsql (test SQL) connection works.
The isql command successfully connects to SQL Server, and I am able to execute queries on Ubuntu.

The DSNs for the isql command are defined in /etc/odbc.ini and /etc/odbcinst.ini and should be available system-wide. The DSN in odbcinst.ini uses the Microsoft ODBC Driver 13 for SQL Server for Linux. However, when I access the SQL Gateway in the Management Portal, the DSN configured in /etc/odbc.ini does not show up. Does anyone know how I can expose the DSN defined in /etc/odbc.ini to Ensemble? I already tried creating a shortcut in the /intersystems/mgr directory named cacheodbc.ini (as described here: https://groups.google.com/forum/#!topic/intersystems-public-cache/4__XchiaCQU), but so far no success :(.

The first thing I'd check are the permissions on these files. If you created them as root, they might not be readable for other users?
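For reference, the kind of DSN definition the question describes usually looks like this in unixODBC; the server, database, and driver path below are placeholders rather than values from the question:

# /etc/odbcinst.ini
[ODBC Driver 13 for SQL Server]
Description = Microsoft ODBC Driver 13 for SQL Server (Linux)
Driver      = /opt/microsoft/msodbcsql/lib64/libmsodbcsql-13.so

# /etc/odbc.ini
[MSSQLTEST]
Driver   = ODBC Driver 13 for SQL Server
Server   = sqlserver.example.com,1433
Database = TestDB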
Question
Tom Philippi · Apr 20, 2017
We are in the process of enabling SSL on a SOAP web service exposed via InterSystems, but are running into trouble. We have installed our certificates on our web server (Apache 2.4) and enabled SSL over the default port 57772. However, we now get an error when sending a SOAP message to the web service (it used to work over HTTP). Specifically, the CSP Gateway refuses to route the message to the SOAP web service:

<SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:s="http://www.w3.org/2001/XMLSchema">
  <SOAP-ENV:Body>
    <SOAP-ENV:Fault>
      <faultcode>SOAP-ENV:Server</faultcode>
      <faultstring>CSP Gateway Error (version:2016.1.2.209.0 build:1601.1554e)</faultstring>
      <detail>
        <error xmlns="http://tempuri.org">
          <special>Systems Management</special>
          <text>Invalid Request : Cannot identify application path</text>
        </error>
      </detail>
    </SOAP-ENV:Fault>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

Probably either the CSP Gateway or the web server is misconfigured. Does anyone have an idea in which direction we might look? (BTW, accessing the management port now returns the same error, as does using SSL port 443.)
PS: this issue was also submitted to the WRC.

Tom, I presume by now you've had this answered by the WRC, but the issue is most likely that the private Apache web server that ships with Caché/Ensemble does not currently support SSL. In order to configure SSL, you would need to configure a full Apache or IIS web server, which is typically recommended for any public-facing, production-level deployment anyway.
-Steve
Announcement
Steve Brunner · Sep 4, 2018
InterSystems is pleased to announce the availability of the InterSystems IRIS Data Platform 2018.1.2 maintenance release. For information about the corrections in this release, refer to the release notes. This release is supported on the same platforms as InterSystems IRIS 2018.1.1. You can see details, including the cloud platforms and Docker containers supported, in this Supported Platforms document. The build corresponding to this release is 2018.1.2.609.0. If you have not visited our Learning Services site recently, I encourage you to try the InterSystems IRIS sandbox and Experiences.
Article
Vasiliy Bondar · Oct 14, 2018
At first glance, the task of configuring LDAP authentication in Caché is not hard at all – the manual describes this process in just 6 paragraphs. On the other hand, if the LDAP server uses Microsoft Active Directory, there are a few non-obvious things that need to be configured on the LDAP server side. Those who don't do anything like that on a regular basis may get lost in the Caché settings. In this article, we will describe the step-by-step process of setting up LDAP authentication and cover the diagnostic methods that can be used if something doesn't work as expected.

Configuration of the LDAP server

1. Create a user in Active Directory that we will use to connect to Caché and search for information in the LDAP database. This user must be located in the domain's root.
2. Let's create a special unit for users who will be connecting to Caché and call it ldapCacheUsers.
3. Register users there.
4. Let's test the availability of the LDAP database using a tool called ldapAdmin. You can download it here.
5. Configure the connection to the LDAP server:
6. All right, we are connected now. Let's take a look at how it all works:
7. Since users that will be connecting to Caché are in the ldapCacheUsers unit, let's limit our search to this unit only.

Settings on the Caché side

8. The LDAP server is ready, so let's proceed to configuring the settings on the Caché side. Go to Management Portal -> System Administration -> Security -> System Security -> LDAP Options. Let's clear the "User attribute to retrieve default namespace", "User attribute to retrieve default routine" and "User attribute to retrieve roles" fields, since these attributes are not in the LDAP database yet.
9. Enable LDAP authentication in System Administration -> Security -> System Security -> Authentication/CSP Session Settings.
10. Enable LDAP authentication in services. The %Service_CSP service is responsible for connecting web applications; %Service_Console handles connections through the terminal.
11. Configure LDAP authentication in web applications.
12. For the time being, and for testing the connection, let's configure everything so that new users in Caché have full rights. To do this, assign the %All role to the user _PUBLIC. We will address this aspect later.
13. Let's try opening the configured web application; it should open without problems.
14. The terminal also opens.
15. After connecting, LDAP users will appear in the Caché users list.
16. The truth is, this configuration gives all new users complete access to the system. To close this security hole, we need to modify the LDAP database by adding an attribute that we will use to store the name of the role that will be assigned to users after connecting to Caché. Prior to that, we need to make a backup copy of the domain controller to ensure that we don't break the entire network if something goes wrong during the configuration process.
17. To modify the Active Directory schema, let's install the Active Directory snap-in on the server where Active Directory is installed (it is not installed by default). Read the instructions here.
18. Let's create an attribute called intersystems-Roles, OID 1.2.840.113556.1.8000.2448.2.3, a case-sensitive string, a multi-value attribute.
19. Then add this attribute to the class "user".
20. Let's now make it so that when we view the list of unit users, we can see a "Role in InterSystems Cache" column. To do that, click Start -> Run and type "adsiedit.msc". We are connecting to the "Configuration" naming context.
21.
Let's go to the CN=409, CN=DisplaySpecifiers, CN=Configuration container and choose a container type that will show additional user attributes when we view it. Let's choose unit-level display (OU), provided by the organizationalUnit-Display container. We need to find the extraColumns attribute in its properties and change its value to "intersystems-Roles, Role in IntersystemsCache,1,200,0". The rule for composing the attribute is as follows: attribute name, name of the destination column, display by default or not, column width in pixels, reserved value. One more comment: CN=409 denotes a language code (CN=409 for the English version, CN=419 for the Russian version of the console).
22. We can now fill out the name of the role that will be assigned to all users connecting to Caché. If your Active Directory is running on Windows Server 2003, you won't have any built-in tools for editing this field. You can use a tool called ldapAdmin (see item 4) for editing the value of this attribute. If you have a newer version of Windows, this attribute can be edited in the "Additional functions" mode – the user will see an additional tab for editing attributes.
23. After that, let's specify the name of this attribute in the LDAP options on the Caché Management Portal.
24. Let's create an ldapRole with the necessary privileges.
25. Remove the %ALL role from the user _PUBLIC.
26. Everything is set up; let's try connecting to the system.
27. If it doesn't work right away, enable and set up an audit.
28. Audit settings.
29. Look at the error log in the Audit Database.

Conclusion

In reality, it often happens that the configuration of different roles for different users is not required for working in an application. If you only need to assign a particular set of permissions to users logging in to a web application, you can skip steps 16 through 23. All you will need to do is add these roles and remove all types of authentication except for LDAP on the "Application roles" tab in the web application settings. In this case, only users registered on the LDAP server can log in. When such a user logs in, Caché automatically assigns the roles required for working in this application.

I wanted to add that you certainly can create an attribute to list a user's roles as described here, and some sites do, but it's not the only way to configure LDAP authentication.
Many administrators find the group-based behavior enabled by the "Use LDAP Groups for Roles/Routine/Namespace" option easier to configure, so you should consider that option if you're setting up LDAP authentication. If you do use that option, many of the steps here will be different, including at least steps 17-23, where the attribute is created and configured.

Yes, I agree. Thanks for the addition.

Thank you for sharing. Good job.
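As a scripted complement to the ldapAdmin check in step 4 above, the same bind-and-search test can be done from Python with the ldap3 library; the host, service-account DN, password, and OU below are placeholders, not values from the article:

from ldap3 import Server, Connection, ALL

# Bind with the service account created in step 1 (placeholder DN and password)
server = Server('dc.example.com', get_info=ALL)
conn = Connection(server, 'CN=cacheldap,DC=example,DC=com', 'secret', auto_bind=True)

# Search the unit from step 2 and read the custom role attribute from step 18
conn.search('OU=ldapCacheUsers,DC=example,DC=com',
            '(sAMAccountName=jsmith)',
            attributes=['intersystems-Roles'])
for entry in conn.entries:
    print(entry)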
Article
Niyaz Khafizov · Oct 8, 2018
Hi all. We are going to find duplicates in a dataset using Apache Spark Machine Learning algorithms.
Note: I have done the following on Ubuntu 18.04, Python 3.6.5, Zeppelin 0.8.0, Spark 2.1.1
Introduction
In previous articles we have done the following:
The way to launch Jupyter Notebook + Apache Spark + InterSystems IRIS
Load a ML model into InterSystems IRIS
K-Means clustering of the Iris Dataset
The way to launch Apache Spark + Apache Zeppelin + InterSystems IRIS
In this series of articles, we explore Machine Learning and record linkage.
Imagine that we merged the databases of neighboring shops. Most probably there will be records that are very similar to each other. Some records will be of the same person, and we call them duplicates. Our purpose is to find the duplicates.
Why is this necessary? First of all, to combine data from many different operational source systems into one logical data model, which can then be subsequently fed into a business intelligence system for reporting and analytics. Secondly, to reduce data storage costs. There are some additional use cases.
Approach
What data do we have? Each row contains different anonymized information about one person. There are family names, given names, middle names, dates of birth, several documents, etc.
The first step is to look at the number of records, because we are going to make pairs. The number of pairs equals n*(n-1)/2. So, if you have 5,000 records, the number of pairs is 12,497,500. That is not too many, so we can pair every record with every other. But if you have 50,000, 100,000 or more records, the number of pairs is more than a billion. That many pairs are hard to store and work with.
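To get a feel for how quickly the pair count grows, here is a quick check of n*(n-1)/2 in plain Python:

for n in (5000, 50000, 100000):
    print(n, n * (n - 1) // 2)
# 5000   -> 12497500
# 50000  -> 1249975000
# 100000 -> 4999950000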
So, if you have a lot of records, it would be a good idea to reduce this number. We will do it by selecting potential duplicates. A potential duplicate is a pair that might be a duplicate. We will detect them based on several simple conditions. A specific condition might look like:
(record1.familyName == record2.familyName) & (record1.givenName == record2.givenName) & (record1.dateOfBirth == record2.dateOfBirth)
but keep in mind that you can miss duplicates because of overly strict logical conditions. I think the optimal solution is to choose important conditions and use no more than two of them combined with the & operator. But you should convert each feature to a single format beforehand. For example, there are several ways to store dates (1985-10-10, 10/10/1985, etc.): convert them all to 10-10-1985 (month-day-year).
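A minimal sketch of that kind of date normalization in plain Python (the two input formats are just the ones mentioned above; anything unrecognized is left untouched):

from datetime import datetime

def normalize_date(value):
    """Bring '1985-10-10' and '10/10/1985' to a single 'MM-DD-YYYY' shape."""
    for fmt in ('%Y-%m-%d', '%m/%d/%Y'):
        try:
            return datetime.strptime(value, fmt).strftime('%m-%d-%Y')
        except ValueError:
            continue
    return value  # leave unrecognized values untouched

print(normalize_date('1985-10-10'))   # 10-10-1985
print(normalize_date('10/10/1985'))   # 10-10-1985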
The next step is to label part of the dataset. We will randomly choose, for example, 5,000-10,000 pairs (or more, if you are sure you can label all of them). We will save them to IRIS and label these pairs in Jupyter (unfortunately, I didn't find an easier or more convenient way to do it; you can also label them in the PySpark console or wherever you want).
After that, we will make a feature vector for each pair. During the labeling process you probably noticed which features are important and what values they take, so test different approaches to creating feature vectors.
Test different machine learning models. I chose a random forest model based on my tests (accuracy/precision/recall/etc.). You can also try decision trees, Naive Bayes, or another classification model and choose the one that works best.
Test the result. If you are not satisfied with it, try changing the feature vectors or the ML model.
Finally, fit all pairs into the model and look at the result.
Implementation
Load a dataset:
%pyspark
dataFrame = spark.read.format("com.intersystems.spark")\
    .option("url", "IRIS://localhost:51773/******")\
    .option("user", "*******")\
    .option("password", "*********************")\
    .option("dbtable", "**************")\
    .load()
Clean the dataset: for example, drop null values (check every row) or useless columns:
%pyspark
columns_to_drop = ['allIdentityDocuments', 'birthCertificate_docSource', 'birthCertificate_expirationDate', 'identityDocument_expirationDate', 'fullName']
droppedDF = dataFrame.drop(*columns_to_drop)
Prepare the dataset for making pairs:
%pyspark
from pyspark.sql.functions import col

# rename column names
replacements1 = {c : c + '1' for c in droppedDF.columns}
df1 = droppedDF.select([col(c).alias(replacements1.get(c, c)) for c in droppedDF.columns])
replacements2 = {c : c + '2' for c in droppedDF.columns}
df2 = droppedDF.select([col(c).alias(replacements2.get(c, c)) for c in droppedDF.columns])
To make pairs we will use join function with several conditions.
%pyspark
testTable = (df1.join(df2, (df1.ID1 < df2.ID2) & (
    (df1.familyName1 == df2.familyName2) & (df1.givenName1 == df2.givenName2)
    | (df1.familyName1 == df2.familyName2) & (df1.middleName1 == df2.middleName2)
    | (df1.familyName1 == df2.familyName2) & (df1.dob1 == df2.dob2)
    | (df1.familyName1 == df2.familyName2) & (df1.snils1 == df2.snils2)
    | (df1.familyName1 == df2.familyName2) & (df1.addr_addressLine1 == df2.addr_addressLine2)
    | (df1.familyName1 == df2.familyName2) & (df1.addr_okato1 == df2.addr_okato2)
    | (df1.givenName1 == df2.givenName2) & (df1.middleName1 == df2.middleName2)
    | (df1.givenName1 == df2.givenName2) & (df1.dob1 == df2.dob2)
    | (df1.givenName1 == df2.givenName2) & (df1.snils1 == df2.snils2)
    | (df1.givenName1 == df2.givenName2) & (df1.addr_addressLine1 == df2.addr_addressLine2)
    | (df1.givenName1 == df2.givenName2) & (df1.addr_okato1 == df2.addr_okato2)
    | (df1.middleName1 == df2.middleName2) & (df1.dob1 == df2.dob2)
    | (df1.middleName1 == df2.middleName2) & (df1.snils1 == df2.snils2)
    | (df1.middleName1 == df2.middleName2) & (df1.addr_addressLine1 == df2.addr_addressLine2)
    | (df1.middleName1 == df2.middleName2) & (df1.addr_okato1 == df2.addr_okato2)
    | (df1.dob1 == df2.dob2) & (df1.snils1 == df2.snils2)
    | (df1.dob1 == df2.dob2) & (df1.addr_addressLine1 == df2.addr_addressLine2)
    | (df1.dob1 == df2.dob2) & (df1.addr_okato1 == df2.addr_okato2)
    | (df1.snils1 == df2.snils2) & (df1.addr_addressLine1 == df2.addr_addressLine2)
    | (df1.snils1 == df2.snils2) & (df1.addr_okato1 == df2.addr_okato2)
    | (df1.addr_addressLine1 == df2.addr_addressLine2) & (df1.addr_okato1 == df2.addr_okato2)
)))
Check the size of returned dataframe:
%pyspark
droppedColumns = ['prevIdentityDocuments1', 'birthCertificate_docDate1', 'birthCertificate_docNum1', 'birthCertificate_docSer1', 'birthCertificate_docType1', 'identityDocument_docDate1', 'identityDocument_docNum1', 'identityDocument_docSer1', 'identityDocument_docSource1', 'identityDocument_docType1', 'prevIdentityDocuments2', 'birthCertificate_docDate2', 'birthCertificate_docNum2', 'birthCertificate_docSer2', 'birthCertificate_docType2', 'identityDocument_docDate2', 'identityDocument_docNum2', 'identityDocument_docSer2', 'identityDocument_docSource2', 'identityDocument_docType2']
print(testTable.count())
testTable.drop(*droppedColumns).show()  # I dropped several columns just for the show() function
Randomly take a part of the dataframe:
%pyspark
randomDF = testTable.sample(False, 0.33, 0)
randomDF.write.format("com.intersystems.spark").\
    option("url", "IRIS://localhost:51773/DEDUPL").\
    option("user", "*****").option("password", "***********").\
    option("dbtable", "deduplication.unlabeledData").save()
Label pairs in Jupyter
Run the following (it will widen the cells).
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; border-left-width: 1px !important; resize: vertical}</style>"))
Load dataframe:
unlabeledDF = spark.read.format("com.intersystems.spark").option("url", "IRIS://localhost:51773/DEDUPL").option("user", "********").option("password", "**************").option("dbtable", "deduplication.unlabeledData").load()
Return all the elements of the dataset as a list:
rows = unlabeledDF.collect()
The convenient way to display pairs:
from IPython.display import clear_output
from prettytable import PrettyTable
from collections import OrderedDict
def printTable(row):
    row = OrderedDict((k, row.asDict()[k]) for k in newColumns)
    table = PrettyTable()
    column_names = ['Person1', 'Person2']
    column1 = []
    column2 = []
    i = 0
    for key, value in row.items():
        if key != 'ID1' and key != 'ID2' and key != "prevIdentityDocuments1" and key != 'prevIdentityDocuments2' and key != "features":
            if (i < 20):
                column1.append(value)
            else:
                column2.append(value)
            i += 1
    table.add_column(column_names[0], column1)
    table.add_column(column_names[1], column2)
    print(table)
List where we will store rows:
listDF = []
The labeling process:
from pyspark.sql import Row
from IPython.display import clear_output
import time

# 3000 - 4020
for number in range(3000 + len(listDF), len(rows)):
    row = rows[number]
    if (len(listDF) % 10) == 0:
        print(3000 + len(listDF))
    printTable(row)
    result = 0
    label = 123
    while True:
        result = input('duplicate? y|n|stop')
        if (result == 'stop'):
            break
        elif result == 'y':
            label = 1.0
            break
        elif result == 'n':
            label = 0.0
            break
        else:
            print('only y|n|stop')
            continue
    if result == 'stop':
        break
    tmp = row.asDict()
    tmp['label'] = label
    newRow = Row(**tmp)
    listDF.append(newRow)
    time.sleep(0.2)
    clear_output()
Create a dataframe again:
newColumns.append('label')
labeledDF = spark.createDataFrame(listDF).select(*newColumns)
Save it to IRIS:
labeledDF.write.format("com.intersystems.spark").\
    option("url", "IRIS://localhost:51773/DEDUPL").\
    option("user", "***********").option("password", "**********").\
    option("dbtable", "deduplication.labeledData").save()
Feature vector and ML model
Load a dataframe into Zeppelin:
%pyspark
labeledDF = spark.read.format("com.intersystems.spark").option("url", "IRIS://localhost:51773/DEDUPL").option("user", "********").option("password", "***********").option("dbtable", "deduplication.labeledData").load()
Feature vector generation:
%pyspark
from pyspark.sql.functions import udf, struct
import stringdist
from pyspark.sql.types import StructType, StructField, StringType, IntegerType, DateType, ArrayType, FloatType, DoubleType, LongType, NullType
from pyspark.ml.linalg import Vectors, VectorUDT
import roman
translateMap = {'A' : 'А', 'B' : 'В', 'C' : 'С', 'E' : 'Е', 'H' : 'Н', 'K' : 'К', 'M' : 'М', 'O' : 'О', 'P' : 'Р', 'T' : 'Т', 'X' : 'Х', 'Y' : 'У'}
column_names = testTable.drop('ID1').drop('ID2').columns
columnsSize = len(column_names)//2
def isRoman(numeral):
    numeral = numeral.upper()
    validRomanNumerals = ["M", "D", "C", "L", "X", "V", "I", "(", ")"]
    for letters in numeral:
        if letters not in validRomanNumerals:
            return False
    return True
def differenceVector(params):
    differVector = []
    for i in range(0, 3):
        if params[i] == None or params[columnsSize + i] == None:
            differVector.append(0.0)
        elif params[i] == 'НЕТ' or params[columnsSize + i] == 'НЕТ':
            differVector.append(0.0)
        elif params[i][:params[columnsSize + i].find('-')] == params[columnsSize + i][:params[columnsSize + i].find('-')] or params[i][:params[i].find('-')] == params[columnsSize + i][:params[i].find('-')]:
            differVector.append(0.0)
        else:
            differVector.append(stringdist.levenshtein(params[i], params[columnsSize+i]))
    for i in range(3, columnsSize):
        # snils
        if i == 5 or i == columnsSize + 5:
            if params[i] == None or params[columnsSize + i] == None or params[i].find('123-456-789') != -1 or params[i].find('111-111-111') != -1 \
                    or params[columnsSize + i].find('123-456-789') != -1 or params[columnsSize + i].find('111-111-111') != -1:
                differVector.append(0.0)
            else:
                differVector.append(float(params[i] != params[columnsSize + i]))
        # birthCertificate_docNum
        elif i == 10 or i == columnsSize + 10:
            if params[i] == None or params[columnsSize + i] == None or params[i].find('000000') != -1 or params[i].find('000000') != -1 \
                    or params[columnsSize + i].find('000000') != -1 or params[columnsSize + i].find('000000') != -1:
                differVector.append(0.0)
            else:
                differVector.append(float(params[i] != params[columnsSize + i]))
        # birthCertificate_docSer
        elif i == 11 or i == columnsSize + 11:
            if params[i] == None or params[columnsSize + i] == None:
                differVector.append(0.0)
            # check if roman or not, then convert if roman
            else:
                docSer1 = params[i]
                docSer2 = params[columnsSize + i]
                if isRoman(params[i][:params[i].index('-')]):
                    docSer1 = str(roman.fromRoman(params[i][:params[i].index('-')]))
                    secPart1 = '-'
                    for elem in params[i][params[i].index('-') + 1:]:
                        if 65 <= ord(elem) <= 90:
                            secPart1 += translateMap[elem]
                        else:
                            secPart1 = params[i][params[i].index('-'):]
                    docSer1 += secPart1
                if isRoman(params[columnsSize + i][:params[columnsSize + i].index('-')]):
                    docSer2 = str(roman.fromRoman(params[columnsSize + i][:params[columnsSize + i].index('-')]))
                    secPart2 = '-'
                    for elem in params[columnsSize + i][params[columnsSize + i].index('-') + 1:]:
                        if 65 <= ord(elem) <= 90:
                            secPart2 += translateMap[elem]
                        else:
                            secPart2 = params[columnsSize + i][params[columnsSize + i].index('-'):]
                            break
                    docSer2 += secPart2
                differVector.append(float(docSer1 != docSer2))
        elif params[i] == 0 or params[columnsSize + i] == 0:
            differVector.append(0.0)
        elif params[i] == None or params[columnsSize + i] == None:
            differVector.append(0.0)
        else:
            differVector.append(float(params[i] != params[columnsSize + i]))
    return differVector
featuresGenerator = udf(lambda input: Vectors.dense(differenceVector(input)), VectorUDT())
%pyspark
newTestTable = testTable.withColumn('features', featuresGenerator(struct(*column_names)))  # all pairs
df = labeledDF.withColumn('features', featuresGenerator(struct(*column_names)))  # labeled pairs
Split labeled dataframe into training and test dataframes:
%pyspark
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.feature import IndexToString, StringIndexer, VectorIndexer
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# split labeled data into two sets
(trainingData, testData) = df.randomSplit([0.7, 0.3])
Train a RF model:
%pyspark
from pyspark.ml.classification import RandomForestClassifier
rf = RandomForestClassifier(labelCol='label', featuresCol='features')
pipeline = Pipeline(stages=[rf])
model = pipeline.fit(trainingData)
# Make predictions
predictions = model.transform(testData)
# predictions.select("predictedLabel", "label", "features").show(5)
Test the RF model:
%pyspark
TP = int(predictions.select("label", "prediction").where((col("label") == 1) & (col('prediction') == 1)).count())
TN = int(predictions.select("label", "prediction").where((col("label") == 0) & (col('prediction') == 0)).count())
FP = int(predictions.select("label", "prediction").where((col("label") == 0) & (col('prediction') == 1)).count())
FN = int(predictions.select("label", "prediction").where((col("label") == 1) & (col('prediction') == 0)).count())
total = int(predictions.select("label").count())

print("accuracy = %f" % ((TP + TN) / total))
print("precision = %f" % (TP / (TP + FP)))
print("recall = %f" % (TP / (TP + FN)))
How it looks:
Use the RF model on all the pairs:
%pyspark
allData = model.transform(newTestTable)
Check how many duplicates are found:
%pyspark
allData.where(col('prediction') == 1).count()
Or look at the dataframe:
Conclusion
This approach is not ideal. You can make it better by experimenting with the feature vectors, trying another model, or increasing the size of the labeled dataset.
You can also apply the same approach to find duplicates elsewhere, for example in a store database, in historical research, and so on.
Links
Apache Zeppelin
Jupyter Notebook
Apache Spark
Record Linkage
ML models
The way to launch Jupyter Notebook + Apache Spark + InterSystems IRIS
Load a ML model into InterSystems IRIS
K-Means clustering of the Iris Dataset
The way to launch Apache Spark + Apache Zeppelin + InterSystems IRIS
GitHub
Article
Mark Bolinsky · Oct 12, 2018
Google Cloud Platform (GCP) provides a feature rich environment for Infrastructure-as-a-Service (IaaS) as a cloud offering fully capable of supporting all of InterSystems products including the latest InterSystems IRIS Data Platform. Care must be taken, as with any platform or deployment model, to ensure all aspects of an environment are considered such as performance, availability, operations, and management procedures. Specifics of each of those areas will be covered in this article.
The following overview and details are provided by Google and can be found here.
Overview
GCP Resources
GCP consists of a set of physical assets, such as computers and hard disk drives, and virtual resources, such as virtual machines (VMs), that are contained in Google's data centers around the globe. Each data center location is in a global region. Each region is a collection of zones, which are isolated from each other within the region. Each zone is identified by a name that combines a letter identifier with the name of the region.
This distribution of resources provides several benefits, including redundancy in case of failure and reduced latency by locating resources closer to clients. This distribution also introduces some rules about how resources can be used together.
Accessing GCP Resources
In cloud computing, physical hardware and software become services. These services provide access to the underlying resources. When you develop your InterSystems IRIS-based application on GCP, you mix and match these services into combinations that provide the infrastructure you need, and then add your code to enable the scenarios you want to build. Details of the available services can be found here.
Projects
Any GCP resources that you allocate and use must belong to a project. A project is made up of the settings, permissions, and other metadata that describe your applications. Resources within a single project can work together easily, for example by communicating through an internal network, subject to the regions-and-zones rules. The resources that each project contains remain separate across project boundaries; you can only interconnect them through an external network connection.
Interacting with Services
GCP gives you three basic ways to interact with the services and resources.
Console
The Google Cloud Platform Console provides a web-based, graphical user interface that you can use to manage your GCP projects and resources. When you use the GCP Console, you create a new project, or choose an existing project, and use the resources that you create in the context of that project. You can create multiple projects, so you can use projects to separate your work in whatever way makes sense for you. For example, you might start a new project if you want to make sure only certain team members can access the resources in that project, while all team members can continue to access resources in another project.
Command-line Interface
If you prefer to work in a terminal window, the Google Cloud SDK provides the gcloud command-line tool, which gives you access to the commands you need. The gcloud tool can be used to manage both your development workflow and your GCP resources. gcloud reference details can be found here.
GCP also provides Cloud Shell, a browser-based, interactive shell environment for GCP. You can access Cloud Shell from the GCP console. Cloud Shell provides:
A temporary Compute Engine virtual machine instance.
Command-line access to the instance from a web browser.
A built-in code editor.
5 GB of persistent disk storage.
Pre-installed Google Cloud SDK and other tools.
Language support for Java, Go, Python, Node.js, PHP, Ruby and .NET.
Web preview functionality.
Built-in authorization for access to GCP Console projects and resources.
Client Libraries
The Cloud SDK includes client libraries that enable you to easily create and manage resources. GCP client libraries expose APIs for two main purposes:
App APIs provide access to services. App APIs are optimized for supported languages, such as Node.js and Python. The libraries are designed around service metaphors, so you can work with the services more naturally and write less boilerplate code. The libraries also provide helpers for authentication and authorization. Details can be found here.
Admin APIs offer functionality for resource management. For example, you can use admin APIs if you want to build your own automated tools.
You also can use the Google API client libraries to access APIs for products such as Google Maps, Google Drive, and YouTube. Details of GCP client libraries can be found here.
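As a small illustration of the client-library style, here is a minimal sketch using the google-cloud-storage Python package; it lists generic storage buckets and is not tied to the InterSystems IRIS architectures described below:

from google.cloud import storage

# Credentials are picked up from the environment,
# e.g. the GOOGLE_APPLICATION_CREDENTIALS service-account key file.
client = storage.Client()

# List the buckets in the current project.
for bucket in client.list_buckets():
    print(bucket.name)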
InterSystems IRIS Sample Architectures
As part of this article, sample InterSystems IRIS deployments for GCP are provided as a starting point for your application specific deployment. These can be used as a guideline for numerous deployment possibilities. This reference architecture demonstrates highly robust deployment options starting with the smallest deployments to massively scalable workloads for both compute and data requirements.
High availability and disaster recovery options are covered in this document along with other recommended system operations. It is expected these will be modified by the individual to support their organization’s standard practices and security policies.
InterSystems is available for further discussions or questions of GCP-based InterSystems IRIS deployments for your specific application.
Sample Reference Architectures
The following sample architectures provide several different configurations with increasing capacity and capabilities. Consider these examples (small development, production, large production, and production with a sharded cluster), which show the progression from a small, modest configuration for development efforts to massively scalable solutions with proper high availability across zones and multi-region disaster recovery. In addition, an example architecture is included that uses the new sharding capabilities of the InterSystems IRIS Data Platform for hybrid workloads with massively parallel SQL query processing.
Small Development Configuration
In this example, a minimal configuration is used to illustrate a small development environment capable of supporting up to 10 developers and 100GB of data. More developers and data can easily be supported by simply changing the virtual machine instance type and increasing the storage of the persistent disks as appropriate.
This is adequate to support development efforts and become familiar with InterSystems IRIS functionality along with Docker container building and orchestration if desired. High availability with database mirroring is typically not used with a small configuration, however it can be added at any time if high availability is needed.
Small Configuration Sample Diagram
The below sample diagram in Figure 2.1.1-a illustrates the table of resources in Figure 2.1.1-b. The gateways included are just examples, and can be adjusted accordingly to suit your organization’s standard network practices.
The following resources within the GCP VPC Project are provisioned as a minimum small configuration. GCP resources can be added or removed as required.
Small Configuration GCP Resources
A sample of the small configuration GCP resources is provided in the following table.
Proper network security and firewall rules need to be considered to prevent unwanted access into the VPC. Google provides network security best practices for getting started which can be found here.
Note: VM instances require a public IP address to reach GCP services. While this practice might raise some concerns, Google recommends limiting the incoming traffic to these VM instances by using firewall rules.
If your security policy requires truly internal VM instances, you will need to set up a NAT proxy manually on your network and a corresponding route so that the internal instances can reach the Internet. It is important to note that you cannot connect to a fully internal VM instance directly by using SSH. To connect to such internal machines, you must set up a bastion instance that has an external IP address and then tunnel through it. A bastion Host can be provisioned to provide the external facing point of entry into your VPC.
Details of bastion hosts can be found here.
Production Configuration
In this example, a more sizable configuration is provided as an example production configuration that incorporates the InterSystems IRIS database mirroring capability to support high availability and disaster recovery.
Included in this configuration is a synchronous mirror pair of InterSystems IRIS database servers split between two zones within region-1 for automatic failover, and a third DR asynchronous mirror member in region-2 for disaster recovery in the unlikely event an entire GCP region is offline.
The InterSystems Arbiter and ICM server are deployed in a separate third zone for added resiliency. The sample architecture also includes a set of optional load-balanced web servers to support a web-enabled application. These web servers with the InterSystems Gateway can be scaled independently as needed.
Production Configuration Sample Diagram
The below sample diagram in Figure 2.2.1-a illustrates the table of resources found in Figure 2.2.1-b. The gateways included are just examples, and can be adjusted accordingly to suit your organization’s standard network practices.
The following resources within the GCP VPC project are recommended as a minimum to support a production deployment. GCP resources can be added or removed as required.
Production Configuration GCP Resources
A sample of the production configuration GCP resources is provided in the following tables.
Large Production Configuration
In this example, a massively scaled configuration is provided by expanding on the InterSystems IRIS capability to introduce application servers using InterSystems’ Enterprise Cache Protocol (ECP), providing massive horizontal scaling of users. An even higher level of availability is included in this example because ECP clients preserve session details even in the event of a database instance failover. Multiple GCP zones are used, with both ECP-based application servers and database mirror members deployed in multiple regions. This configuration is capable of supporting tens of millions of database accesses per second and multiple terabytes of data.
Large Production Configuration Sample Diagram
The sample diagram in Figure 2.3.1-a illustrates the table of resources in Figure 2.3.1-b. The gateways included are just examples, and can be adjusted accordingly to suit your organization’s standard network practices.
Included in this configuration is a failover mirror pair, four or more ECP clients (application servers), and one or more web servers per application server. The failover database mirror pairs are split between two different GCP zones in the same region for fault domain protection, with the InterSystems Arbiter and ICM server deployed in a separate third zone for added resiliency.
Disaster recovery extends to a second GCP region and zone(s) similar to the earlier example. Multiple DR regions can be used with multiple DR Async mirror member targets if desired.
The following resources within the GCP VPC Project are recommended as a minimum to support a large production deployment. GCP resources can be added or removed as required.
Large Production Configuration GCP Resources
A sample of the large production configuration GCP resources is provided in the following tables.
Production Configuration with InterSystems IRIS Sharded Cluster
This example presents a horizontally scaled configuration for hybrid SQL workloads, using the sharded cluster capabilities of InterSystems IRIS to provide massive horizontal scaling of SQL queries and tables across multiple systems. Details of the InterSystems IRIS sharded cluster and its capabilities are discussed later in this article.
Production with Sharded Cluster Configuration Sample Diagram
The sample diagram in Figure 2.4.1-a illustrates the table of resources in Figure 2.4.1-b. The gateways included are just examples, and can be adjusted accordingly to suit your organization’s standard network practices.
Included in this configuration are four mirror pairs as the data nodes. Each failover database mirror pair is split between two different GCP zones in the same region for fault domain protection, with the InterSystems Arbiter and ICM server deployed in a separate third zone for added resiliency.
This configuration allows for all the database access methods to be available from any data node in the cluster. The large SQL table(s) data is physically partitioned across all data nodes to allow for massive parallelization of both query processing and data volume. Combining all these capabilities provides the ability to support complex hybrid workloads such as large-scale analytical SQL querying with concurrent ingestion of new data, all within a single InterSystems IRIS Data Platform.
Note that in the above diagram and the “resource type” column in the table below, the term “Compute [Engine]” is a GCP term representing a GCP (virtual) server instance as described further in section 3.1 of this document. It does not represent or imply the use of “compute nodes” in the cluster architecture described later in this article.
The following resources within the GCP VPC Project are recommended as a minimum to support a sharded cluster. GCP resources can be added or removed as required.
Production with Sharded Cluster Configuration GCP Resources
A sample of the sharded cluster configuration GCP resources is provided in the following table.
Introduction of Cloud Concepts
Google Cloud Platform (GCP) provides a feature-rich cloud environment for Infrastructure-as-a-Service (IaaS) fully capable of supporting all InterSystems products, including support for container-based DevOps with the new InterSystems IRIS Data Platform. Care must be taken, as with any platform or deployment model, to ensure all aspects of an environment are considered, such as performance, high availability, disaster recovery, system operations, security controls, and other management procedures. This document covers the three major components of all cloud deployments: Compute, Storage, and Networking.
Compute Engines (Virtual Machines)
Within GCP there are several options available for compute engine resources, with numerous virtual CPU and memory specifications and associated storage options. One item to note within GCP: each vCPU in a given machine type equates to one hyper-thread on the physical host at the hypervisor layer.
For the purposes of this document, n1-standard* and n1-highmem* instance types are used because they are widely available in most GCP deployment regions. However, n1-ultramem* instance types are a great option for very large working datasets that keep massive amounts of data cached in memory. Default instance settings, such as the instance availability policy and other advanced features, are used except where noted. Details of the various machine types can be found here.
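For illustration only (the instance name, zone, image, and subnet below are placeholder assumptions, not recommendations), a database server built on one of these machine types could be provisioned with the gcloud CLI:
# Create a database server VM on an n1-highmem machine type
gcloud compute instances create iris-db-001 --zone=us-east1-b --machine-type=n1-highmem-16 --image-family=rhel-7 --image-project=rhel-cloud --subnet=user-space-subnet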
Disk Storage
The storage types most directly related to InterSystems products are the persistent disk types; however, local storage may be used for high levels of performance as long as its data availability restrictions are understood and accommodated. There are several other options, such as Cloud Storage (buckets); however, those are more specific to an individual application’s requirements than to supporting the operation of InterSystems IRIS Data Platform.
Like most other cloud providers, GCP imposes limitations on the amount of persistent storage that can be associated with an individual compute engine. These limits include the maximum size of each disk, the number of persistent disks attached to each compute engine, and the amount of IOPS per persistent disk, with an overall IOPS cap per compute engine instance. In addition, there are imposed IOPS limits per GB of disk space, so at times provisioning more disk capacity is required to achieve the desired IOPS rate.
These limits may change over time and should be confirmed with Google as appropriate.
There are two types of persistent disk volumes: Standard Persistent and SSD Persistent disks. SSD Persistent disks are better suited to production workloads that require predictable low-latency IOPS and higher throughput. Standard Persistent disks are a more economical option for non-production development and test or archive-type workloads.
Details of the various disk types and limitations can be found here.
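A hedged example of creating an SSD persistent disk and attaching it to a database VM (disk and instance names, size, and zone are placeholders):
gcloud compute disks create iris-db-disk01 --size=500GB --type=pd-ssd --zone=us-east1-b
gcloud compute instances attach-disk iris-db-001 --disk=iris-db-disk01 --zone=us-east1-b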
VPC Networking
The virtual private cloud (VPC) network is highly recommended to support the various components of InterSystems IRIS Data Platform along with providing proper network security controls, various gateways, routing, internal IP address assignments, network interface isolation, and access controls. An example VPC will be detailed in the examples provided within this document.
Details of VPC networking and firewalls can be found here.
Virtual Private Cloud (VPC) Overview
GCP VPCs are slightly different from those of other cloud providers, allowing for simplicity and greater flexibility. A comparison of concepts can be found here.
Within a GCP project, several VPCs are allowed (currently a maximum of 5 per project), and there are two options for creating a VPC network: auto mode and custom mode.
Details of each type are provided here.
In most large cloud deployments, multiple VPCs are provisioned to isolate the various gateway types from application-centric VPCs and to leverage VPC peering for inbound and outbound communications. It is highly recommended to consult with your network administrator for details on allowable subnets and any organizational firewall rules of your company. VPC peering is not covered in this document.
In the examples provided in this document, a single VPC with three subnets is used to provide network isolation of the various InterSystems IRIS components, supporting predictable latency and bandwidth along with security isolation.
Network Gateway and Subnet Definitions
Two gateways are provided in the example in this document to support both Internet and secure VPN connectivity. Each ingress access is required to have appropriate firewall and routing rules to provide adequate security for the application. Details on how to use routes can be found here.
Three subnets are used in the provided example architectures dedicated for use with InterSystems IRIS Data Platform. The use of these separate network subnets and network interfaces allows for flexibility in security controls and bandwidth protection and monitoring for each of the three above major components. Details on the various use cases can be found here.
Details for creating virtual machine instances with multiple network interfaces can be found here.
The subnets included in these examples:
User Space Network for Inbound connected users and queries
Shard Network for Inter-shard communications between the shard nodes
Mirroring Network for high availability using synchronous replication and automatic failover of individual data nodes.
Note: Failover synchronous database mirroring is only recommended between multiple zones which have low-latency interconnects within a single GCP region. Latency between regions is typically too high to provide a positive user experience, especially for deployments with a high rate of updates.
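A hedged gcloud sketch of creating a custom-mode VPC with the three subnets listed above (network and subnet names, region, and address ranges are placeholder assumptions to adapt to your own addressing plan):
gcloud compute networks create iris-vpc --subnet-mode=custom
gcloud compute networks subnets create user-space-subnet --network=iris-vpc --region=us-east1 --range=10.0.1.0/24
gcloud compute networks subnets create shard-subnet --network=iris-vpc --region=us-east1 --range=10.0.2.0/24
gcloud compute networks subnets create mirror-subnet --network=iris-vpc --region=us-east1 --range=10.0.3.0/24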
Internal Load Balancers
Most IaaS cloud providers lack the ability to provide a Virtual IP (VIP) address, which is typically used in automatic database failover designs. To address this, several of the most commonly used connectivity methods, specifically ECP clients and Web Gateways, are enhanced within InterSystems IRIS so that they no longer rely on VIP capabilities, making them mirror-aware and automatic.
Connectivity methods such as xDBC, direct TCP/IP sockets, and other direct-connect protocols require the use of a VIP-like address. To support those inbound protocols, InterSystems database mirroring makes it possible to provide automatic failover for them within GCP by using a health check status page called mirror_status.cxw to interact with the load balancer. The load balancer achieves VIP-like functionality by directing traffic only to the active primary mirror member, thus providing a complete and robust high availability design within GCP.
Details of using a load balancer to provide VIP-like functionality are provided here.
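As a hedged sketch (the port and path depend on your web server and Web Gateway configuration and are assumptions here), the health check that polls the mirror status page might be defined as follows; only the acting primary mirror member returns a success response:
gcloud compute health-checks create http iris-mirror-check --port=8080 --request-path=/csp/bin/mirror_status.cxw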
Sample VPC Topology
Combining all the components together, the following illustration in Figure 4.3-a demonstrates the layout of a VPC with the following characteristics:
Leverages multiple zones within a region for high availability
Provides two regions for disaster recovery
Utilizes multiple subnets for network segregation
Includes separate gateways for both Internet and VPN connectivity
Uses cloud load balancer for IP failover for mirror members
Persistent Storage Overview
As discussed in the introduction, the use of GCP persistent disks is recommended, specifically the SSD persistent disk type. SSD persistent disks are recommended due to the higher read and write IOPS rates and low latency required for transactional and analytical database workloads. Local SSDs may be used in certain circumstances; however, be aware that the performance gains of local SSDs come with certain trade-offs in availability, durability, and flexibility.
Details of Local SSD data persistence can be found here, explaining when Local SSD data is preserved and when it is not.
LVM Striping
Like other cloud providers, GCP imposes numerous limits on storage: IOPS, space capacity, and the number of devices per virtual machine instance. Consult GCP documentation for current limits, which can be found here.
With these limits, LVM striping becomes necessary to maximize IOPS beyond that of a single disk device for a database instance. In the example virtual machine instances provided, the following disk layouts are recommended. Performance limits associated with SSD persistent disks can be found here.
Note: There is currently a maximum of 16 persistent disks per virtual machine instance, although GCP lists an increase to 128 as being in beta, which will be a welcome enhancement.
LVM striping spreads random IO workloads across more disk devices and their inherent disk queues. Below is an example of how to use LVM striping with Linux for the database volume group. This example uses four disks in an LVM stripe with a physical extent (PE) size of 4MB. Alternatively, larger PE sizes can be used if needed.
Step 1: Create Standard or SSD Persistent Disks as needed
Step 2: Verify the IO scheduler is NOOP for each of the disk devices using “lsblk -do NAME,SCHED”
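If a device does not show the expected scheduler, it can be changed at runtime; a hedged example follows (the device name sdh is illustrative, and newer multi-queue kernels report “none” rather than “noop”):
example: echo noop > /sys/block/sdh/queue/scheduler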
Step 3: Identify disk devices using “lsblk -do KNAME,TYPE,SIZE,MODEL”
Step 4: Create Volume Group with new disk devices
vgcreate -s 4M <vg name> <list of all disks just created>
example: vgcreate -s 4M vg_iris_db /dev/sd[h-k]
Step 5: Create Logical Volume
lvcreate -n <lv name> -L <size of LV> -i <number of disks in volume group> -I 4M <vg name>
example: lvcreate -n lv_irisdb01 -L 1000G -i 4 -I 4M vg_iris_db
Step 6: Create File System
mkfs.xfs -K <logical volume device>
example: mkfs.xfs -K /dev/vg_iris_db/lv_irisdb01
Step 7: Mount File System
Edit /etc/fstab with the following mount entry:
/dev/mapper/vg_iris_db-lv_irisdb01 /vol-iris/db xfs defaults 0 0
mount /vol-iris/db
Using the above table, each of the InterSystems IRIS servers will have the following configuration: two disks for SYS, four disks for DB, two disks for primary journals, and two disks for alternate journals.
For growth, LVM allows devices and logical volumes to be expanded when needed without interruption. Consult the Linux documentation on best practices for ongoing management and expansion of LVM volumes.
Note: Enabling asynchronous IO for both the database and the write image journal files is highly recommended. See the following community article for details on enabling it on Linux: https://community.intersystems.com/post/lvm-pe-striping-maximize-hyper-converged-storage-throughput
Provisioning
New with InterSystems IRIS is InterSystems Cloud Manager (ICM). ICM carries out many tasks and offers many options for provisioning InterSystems IRIS Data Platform. ICM is provided as a Docker image that includes everything for provisioning a robust GCP cloud-based solution.
ICM currently supports provisioning on the following platforms:
Google Cloud Platform (GCP)
Amazon Web Services including GovCloud (AWS / GovCloud)
Microsoft Azure Resource Manager including Government (ARM / MAG)
VMware vSphere (ESXi)
ICM and Docker can run either from a desktop/laptop workstation or from a centralized, dedicated, modest “provisioning” server with a centralized repository.
The role of ICM in the application lifecycle is Define -> Provision -> Deploy -> Manage
Details for installing and using ICM with Docker can be found here.
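As a rough sketch of that lifecycle (the commands shown are the standard ICM verbs; the contents of the defaults.json and definitions.json configuration files are deployment-specific and not reproduced here):
icm provision      # create the GCP infrastructure described in the configuration files
icm run            # deploy InterSystems IRIS containers onto the provisioned nodes
icm ps             # verify container status across the deployment
icm unprovision    # tear down the infrastructure when it is no longer needed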
NOTE: The use of ICM is not required for any cloud deployment. The traditional method of installation and deployment with tar-ball distributions is fully supported and available. However, ICM is recommended for ease of provisioning and management in cloud deployments.
Container Monitoring
ICM includes a basic monitoring facility using Weave Scope for container-based deployment. It is not deployed by default, and needs to be specified in the defaults file using the Monitor field.
Details for monitoring, orchestration, and scheduling with ICM can be found here.
An overview of Weave Scope and documentation can be found here.
High Availability
InterSystems database mirroring provides the highest level of availability in any cloud environment. There are options to provide some virtual machine resiliency directly at the instance level. Details of the various policies available in GCP can be found here.
Earlier sections discussed how a cloud load balancer provides automatic IP address failover for a Virtual IP (VIP-like) capability with database mirroring. The cloud load balancer uses the mirror_status.cxw health check status page mentioned earlier in the Internal Load Balancers section. There are two modes of database mirroring - synchronous with automatic failover and asynchronous mirroring. In this example, synchronous failover mirroring will be covered. The details of mirroring can be found here.
The most basic mirroring configuration is a pair of failover mirror members in an arbiter-controlled configuration. The arbiter is placed in a third zone within the same region to protect from potential zone outages impacting both the arbiter and one of the mirror members.
There are many ways mirroring can be set up, specifically in the network configuration. In this example, we will use the network subnets defined previously in the Network Gateway and Subnet Definitions section of this document. Example IP address schemes will be provided in a following section; for the purposes of this section, only the network interfaces and designated subnets will be depicted.
Disaster Recovery
InterSystems database mirroring extends the capability of high availability to also support disaster recovery to another GCP geographic region, providing operational resiliency in the unlikely event of an entire GCP region going offline. How an application is to endure such outages depends on the recovery time objective (RTO) and recovery point objective (RPO). These provide the initial framework for the analysis required to design a proper disaster recovery plan. The following links provide a guide for the items to be considered when developing a disaster recovery plan for your application: https://cloud.google.com/solutions/designing-a-disaster-recovery-plan and https://cloud.google.com/solutions/disaster-recovery-cookbook
Asynchronous Database Mirroring
InterSystems IRIS Data Platform’s database mirroring provides robust capabilities for asynchronously replicating data between GCP zones and regions to help support the RTO and RPO goals of your disaster recovery plan. Details of async mirror members can be found here.
Similar to the earlier high availability section, a cloud load balancer provides automatic IP address failover for a Virtual IP (VIP-like) capability for DR asynchronous mirroring as well, using the same mirror_status.cxw health check status page mentioned earlier in the Internal Load Balancers section.
In this example, DR asynchronous failover mirroring will be covered along with the introduction of the GCP Global Load Balancing service to provide upstream systems and client workstations with a single anycast IP address, regardless of which zone or region your InterSystems IRIS deployment is operating in.
One of the advantages of GCP is that the load balancer is a software-defined global resource not bound to a given region. This allows the unique capability of leveraging a single service across regions, since it is not an instance- or device-based solution. Details of GCP Global Load Balancing with Single Anycast IP can be found here.
In the above example, the IP addresses of all three InterSystems IRIS instances are provided to the GCP Global Load Balancer, and it will direct traffic only to whichever mirror member is the acting primary mirror, regardless of the zone or region in which it is located.
Sharded Cluster
InterSystems IRIS includes a comprehensive set of capabilities to scale your applications, which can be applied alone or in combination, depending on the nature of your workload and the specific performance challenges it faces. One of these, sharding, partitions both data and its associated cache across a number of servers, providing flexible, inexpensive performance scaling for queries and data ingestion while maximizing infrastructure value through highly efficient resource utilization. An InterSystems IRIS sharded cluster can provide significant performance benefits for a wide variety of applications, but especially for those with workloads that include one or more of the following:
High-volume or high-speed data ingestion, or a combination.
Relatively large data sets, queries that return large amounts of data, or both.
Complex queries that do large amounts of data processing, such as those that scan a lot of data on disk or involve significant compute work.
Each of these factors on its own influences the potential gain from sharding, but the benefit may be enhanced where they combine. For example, a combination of all three factors — large amounts of data ingested quickly, large data sets, and complex queries that retrieve and process a lot of data — makes many of today’s analytic workloads very good candidates for sharding.
Note that these characteristics all have to do with data; the primary function of InterSystems IRIS sharding is to scale for data volume. However, a sharded cluster can also include features that scale for user volume, when workloads involving some or all of these data-related factors also experience a very high query volume from large numbers of users. Sharding can be combined with vertical scaling as well.
Operational Overview
The heart of the sharded architecture is the partitioning of data and its associated cache across a number of systems. A sharded cluster physically partitions large database tables horizontally — that is, by row — across multiple InterSystems IRIS instances, called data nodes, while allowing applications to transparently access these tables through any node and still see the whole dataset as one logical union. This architecture provides three advantages:
Parallel processing: Queries are run in parallel on the data nodes, with the results merged, combined, and returned to the application as full query results by the node the application connected to, significantly enhancing execution speed in many cases.
Partitioned caching: Each data node has its own cache, dedicated to the sharded table data partition it stores, rather than a single instance’s cache serving the entire data set, which greatly reduces the risk of overflowing the cache and forcing performance-degrading disk reads.
Parallel loading: Data can be loaded onto the data nodes in parallel, reducing cache and disk contention between the ingestion workload and the query workload and improving the performance of both.
Details of InterSystems IRIS sharded cluster can be found here.
Elements of Sharding and Instance Types
A sharded cluster consists of at least one data node and, if needed for specific performance or workload requirements, an optional number of compute nodes. These two node types offer building blocks that present a simple, transparent, and efficient scaling model.
Data Nodes
Data nodes store data. At the physical level, sharded table[1] data is spread across all data nodes in the cluster and non-sharded table data is physically stored on the first data node only. This distinction is transparent to the user with the possible sole exception that the first node might have a slightly higher storage consumption than the others, but this difference is expected to become negligible as sharded table data would typically outweigh non-sharded table data by at least an order of magnitude.
Sharded table data can be rebalanced across the cluster when needed, typically after adding new data nodes. This will move “buckets” of data between nodes to approximate an even distribution of data.
At the logical level, non-sharded table data and the union of all sharded table data are visible from any node, so clients will see the whole dataset regardless of which node they’re connecting to. Metadata and code are also shared across all data nodes.
The basic architecture diagram for a sharded cluster simply consists of data nodes that appear uniform across the cluster. Client applications can connect to any node and will experience the data as if it were local.
[1] For convenience, the term “sharded table data” is used throughout the document to represent “extent” data for any data model supporting sharding that is marked as sharded. The terms “non-sharded table data” and “non-sharded data” are used to represent data that is in a shardable extent not marked as such or for a data model that simply doesn’t support sharding yet.
Compute Nodes
For advanced scenarios where low latencies are required, potentially at odds with a constant influx of data, compute nodes can be added to provide a transparent caching layer for servicing queries.
Compute nodes cache data. Each compute node is associated with a data node for which it caches the corresponding sharded table data and, in addition to that, it also caches non-sharded table data as needed to satisfy queries.
Because compute nodes don’t physically store any data and are meant to support query execution, their hardware profile can be tailored to suit those needs, for example by emphasizing memory and CPU and keeping storage to the bare minimum. Ingestion is forwarded to the data nodes, either directly by the driver (xDBC, Spark) or implicitly by the sharding manager code when “bare” application code runs on a compute node.
Sharded Cluster Illustrations
There are various combinations for deploying a sharded cluster. The following high-level diagrams are provided to illustrate the most common deployment models. These diagrams do not include the networking gateways and details; they focus only on the sharded cluster components.
Basic Sharded Cluster
The following diagram is the simplest sharded cluster with four data nodes deployed in a single region and in a single zone. A GCP Cloud Load Balancer is used to distribute client connections to any of the sharded cluster nodes.
In this basic model, there is no resiliency or high availability provided beyond what GCP provides for a single virtual machine and its attached SSD persistent storage. Two separate network interface adapters are recommended to provide both network security isolation for the inbound client connections and bandwidth isolation between the client traffic and the sharded cluster communications.
Basic Sharded Cluster with High Availability
The following diagram is the simplest sharded cluster with four mirrored data nodes deployed in a single region and splitting each node’s mirror between zones. A GCP Cloud Load Balancer is used to distribute client connections to any of the sharded cluster nodes.
High availability is provided through the use of InterSystems database mirroring which will maintain a synchronously replicated mirror in a secondary zone within the region.
Three separate network interface adapters are recommended to provide both network security isolation for the inbound client connections and bandwidth isolation between the client traffic, the sharded cluster communications, and the synchronous mirror traffic between the node pairs.
This deployment model also introduces the mirror arbiter as described in an earlier section of this document.
Sharded Cluster with Separate Compute Nodes
The following diagram expands the sharded cluster for massive user/query concurrency with separate compute nodes and four data nodes. The Cloud Load Balancer server pool contains only the addresses of the compute nodes. Updates and data ingestion continue to go directly to the data nodes as before, sustaining ultra-low-latency performance and avoiding interference and resource congestion between query/analytical workloads and real-time data ingestion.
With this model, the allocation of resources can be fine-tuned so that compute/query and ingestion scale independently, allowing for optimal resources where needed in a “just-in-time” fashion and maintaining an economical yet simple solution, instead of wasting resources unnecessarily just to scale compute or data.
Compute nodes lend themselves to a very straightforward use of GCP autoscaling, which automatically adds or deletes instances from a managed instance group based on increased or decreased load. Autoscaling works by adding more instances to your instance group when there is more load (upscaling) and deleting instances when the need for instances is lower (downscaling).
Details of GCP Autoscaling can be found here.
Autoscaling helps cloud-based applications gracefully handle increases in traffic and reduces cost when the need for resources is lower. Simply define the autoscaling policy and the autoscaler performs automatic scaling based on the measured load.
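A hedged example of enabling autoscaling on a managed instance group of compute nodes (the group name, zone, and thresholds are placeholder assumptions to tune for your workload):
gcloud compute instance-groups managed set-autoscaling iris-compute-group --zone=us-east1-b --min-num-replicas=2 --max-num-replicas=8 --target-cpu-utilization=0.70 --cool-down-period=120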
Backup Operations
There are multiple options available for backup operations. The following three options are viable for your GCP deployment with InterSystems IRIS.
The first two options, detailed below, incorporate a snapshot-type procedure, which involves suspending database writes to disk prior to creating the snapshot and then resuming updates once the snapshot is successful.
The following high-level steps are taken to create a clean backup using either of the snapshot methods:
Pause writes to the database via the database ExternalFreeze API call.
Create snapshots of the OS + data disks.
Resume database writes via the ExternalThaw API call.
The backup facility archives the snapshots to the backup location.
Details of the External Freeze/Thaw APIs can be found here.
Note: Sample scripts for backups are not included in this document; however, check periodically for examples posted to the InterSystems Developer Community: www.community.intersystems.com
The third option is InterSystems Online backup. This is an entry-level approach for smaller deployments with a very simple use case and interface. However, as databases increase in size, external backups with snapshot technology are recommended as a best practice with advantages including the backup of external files, faster restore times, and an enterprise-wide view of data and management tools.
Additional steps such as integrity checks can be added on a periodic interval to ensure clean and consistent backup.
The decision points on which option to use depends on the operational requirements and policies of your organization. InterSystems is available to discuss the various options in more detail.
GCP Persistent Disk Snapshot Backup
Backup operations can be achieved using the GCP gcloud command-line API along with the InterSystems ExternalFreeze/Thaw API capabilities. This allows for true 24x7 operational resiliency and assurance of clean, regular backups. Details for creating, managing, and automating GCP Persistent Disk Snapshots can be found here.
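A minimal sketch of the sequence is shown below, assuming an instance named IRIS and a data disk named iris-db-disk01 (both placeholders); per the InterSystems documentation, the return status of the ExternalFreeze call should be checked before snapshotting, which is omitted here for brevity:
# Pause database writes, snapshot the data disk(s), then resume writes
iris session IRIS -U %SYS "##Class(Backup.General).ExternalFreeze()"
gcloud compute disks snapshot iris-db-disk01 --zone=us-east1-b --snapshot-names=iris-db-$(date +%Y%m%d%H%M)
iris session IRIS -U %SYS "##Class(Backup.General).ExternalThaw()"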
Logical Volume Manager (LVM) Snapshots
Alternatively, many of the third-party backup tools available on the market can be used by deploying individual backup agents within the VM itself and leveraging file-level backups in conjunction with Logical Volume Manager (LVM) snapshots.
One of the major benefits of this model is the ability to perform file-level restores of either Windows or Linux based VMs. A couple of points to note with this solution: since GCP and most other IaaS cloud providers do not provide tape media, all backup repositories are disk-based for short-term archiving, with the ability to leverage blob or bucket type low-cost storage for long-term retention (LTR). It is highly recommended, if using this method, to use a backup product that supports de-duplication technologies to make the most efficient use of disk-based backup repositories.
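As a hedged illustration (volume names and snapshot size are placeholders, and database writes would typically be frozen and thawed around snapshot creation as described in the previous section):
# Create a copy-on-write snapshot of the database logical volume for the backup agent to read
lvcreate --size 50G --snapshot --name lv_irisdb01_snap /dev/vg_iris_db/lv_irisdb01
# ... run the file-level backup against the snapshot, then remove it ...
lvremove -f /dev/vg_iris_db/lv_irisdb01_snap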
Some examples of these backup products with cloud support include, but are not limited to, Commvault, EMC Networker, HPE Data Protector, and Veritas Netbackup.
Note: InterSystems does not validate or endorse one backup product over another. The choice of backup management software is up to the individual customer.
Online Backup
For small deployments, the built-in Online Backup facility is also a viable option. This InterSystems database online backup utility backs up data in database files by capturing all blocks in the databases and then writes the output to a sequential file. This proprietary backup mechanism is designed to cause no downtime for users of the production system. Details of Online Backup can be found here.
In GCP, after the online backup has finished, the backup output file and all other files in use by the system must be copied to some other storage location outside of that virtual machine instance. Bucket/Object storage is a good destination for this.
There are two options for using a GCP Storage bucket.
Use the gcloud scripting APIs directly to copy and manipulate the newly created online backup (and other non-database) files. Details can be found here.
Mount a storage bucket as a file system and use it similarly to a persistent disk, even though Cloud Storage buckets are object storage.
Details of mounting a Cloud Storage bucket using Cloud Storage FUSE can be found here.
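Hedged examples of both options (bucket names and paths are placeholders):
# Option 1: copy the online backup output file to a Cloud Storage bucket with gsutil
gsutil cp /backup/iris_online_backup.cbk gs://my-iris-backups/
# Option 2: mount the bucket as a file system with Cloud Storage FUSE
gcsfuse my-iris-backups /mnt/iris-backups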
Announcement
Anastasia Dyubaylo · Jul 13, 2023
Hi Community,
We're pleased to invite you to the upcoming webinar in Hebrew:
👉 Introducing VS Code, and Moving from Studio in Hebrew 👈
🗓️ Date & time: July 25th, 3:00 PM IDT
🗣️ Speaker: @Tani.Frankel, Sales Engineer Manager
In this session, we will review using VS Code for InterSystems-based development.
It is aimed at beginners of VS Code, but will also cover some areas that might be beneficial for users who are already using VS Code. We will also cover some topics relevant to people moving from InterSystems Studio to VS Code.
The session is relevant for users of Caché / Ensemble / InterSystems IRIS Data Platform / InterSystems IRIS for Health / HealthShare Health Connect.
➡️ Register today and enjoy! >>
Announcement
Fabiano Sanches · Jun 21, 2023
InterSystems announces its fourth preview, as part of the developer preview program for the 2023.2 release. This release will include InterSystems IRIS and InterSystems IRIS for Health.
Highlights
Many updates and enhancements have been added in 2023.2, and there are also brand-new capabilities, such as Time-Aware Modeling, enhancements to Foreign Tables, and the ability to use Read-Only Federated Tables. Note that some of these features or improvements may not be available in this current developer preview.
Another important topic is the removal of the Private Web Server (PWS) from the installers. This removal was announced last year, and the PWS will be removed from InterSystems installers, but it is still included in this preview. See this note in the documentation.
--> If you are interested to try the installers without the PWS, please enroll in its EAP using this form, selecting the option "NoPWS". Additional information related to this EAP can be found here.
Future preview releases are expected to be updated biweekly and we will add features as they are ready. Please share your feedback through the Developer Community so we can build a better product together.
Initial documentation can be found at these links below. They will be updated over the next few weeks until launch is officially announced (General Availability - GA):
InterSystems IRIS
InterSystems IRIS for Health
Availability and Package Information
As usual, Continuous Delivery (CD) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms document.
Installation packages and preview keys are available from the WRC's preview download site or through the evaluation services website (use the flag "Show Preview Software" to get access to the 2023.2).
Container images for both Enterprise and Community Editions of InterSystems IRIS and IRIS for Health and all corresponding components are available from the new InterSystems Container Registry web interface. For additional information about docker commands, please see this post: Announcing the InterSystems Container Registry web user interface.
The build number for this developer preview is 2023.2.0.204.0.
For a full list of the available images, please refer to the ICR documentation. Alternatively, tarball versions of all container images are available via the WRC's preview download site.