Announcement
Janine Perkins · May 5, 2017
Take this online course to learn the basics of SAML (Security Assertion Markup Language), the ways in which it can be used within Caché security features, and some use cases that can be applied to HealthShare productions.
Audience: This course is for Caché and HealthShare developers who may want to use SAML as part of their security strategy and want to learn SAML basics and the capabilities of Caché and HealthShare for using SAML.
Learn more.
Announcement
Anastasia Dyubaylo · Sep 28, 2017
Hi, Community! This week we have two videos. Check out all the new videos on the InterSystems Developers YouTube Channel:

1. What is InterSystems Cloud Manager? This video provides an introduction to InterSystems Cloud Manager (ICM) and its capabilities.
2. Instant Gratification: Pick Your Cloud. In this video, learn how to quickly define and create a cloud infrastructure on the top three cloud IaaS providers. You can provision a cloud application within any one of those environments. The approach is to use containers and InterSystems new IPD tool with a DevOps approach to define, create, and provision an application. See additional resources to this video here. You can also read this article about InterSystems Cloud Manager.

What's new on the InterSystems Developers YouTube Channel? We have created two dedicated playlists:

- InterSystems Cloud Manager playlist
- InterSystems Data Platform playlist

Enjoy and stay tuned!
Announcement
Evgeny Shvarov · Oct 1, 2018
Hi Community! Here is the link: https://portal.xyvid.com/group/GS18
It will start now! Today is the second InterSystems Global Summit Keynotes day! Don't miss breaking news!
Announcement
Jon Jensen · Jun 5, 2018
Hi Community!

Save the date for InterSystems Global Summit 2018! This year Global Summit is Sept. 30 – Oct. 3, 2018 at the JW Marriott Hill Country Resort & Spa in San Antonio, TX. Registration for Global Summit is now open!

Empowering What Matters

InterSystems Global Summit 2018 is all about empowering you, because you and the applications you build with our technology MATTER. It is an unparalleled opportunity to connect with your peers and with InterSystems’ executives and technical experts.

InterSystems Global Summit is an annual gathering for everyone in the InterSystems community. It is composed of three conferences:

- Solution Developer Conference
- Technology Leadership Conference
- Healthcare Leadership Conference

The super early bird registration rate of $999 is available until August 10, 2018. Register now. See you at InterSystems Global Summit 2018!
Article
Sergey Mikhailenko · Jan 23, 2018
This article was written as an attempt to share the experience of installing the InterSystems Caché DBMS for production environment.
We all know that the development configuration of a DBMS is very different from real-life conditions.
As a rule, development is carried out in “hothouse conditions” with a bare minimum of security measures, but when we publish our project online, we must ensure its reliable and uninterrupted operation in a very aggressive environment.
##The process of installing the InterSystems Caché DBMS with maximum security settings
**OS security settings**
**The first step is the operating system. You need to do the following:**
> * Minimize the rights of the technical account of the Caché DBMS
> * Rename the administrator account of the local computer.
> * Leave the necessary minimum of users in the OS.
> * Timely install security updates for the OS and used services.
> * Use and regularly update anti-virus software
> * Disable or delete unused services.
> * Limit access to database files
> * Limit the rights to Caché data files (leave owner and DB admin rights only)
__For UNIX/Linux systems, create the following group and user types prior to installation:__
> * Owner user ID
> * Management group ID
> * Internal Caché system group ID
> * Internal Caché user ID
__InterSystems Caché installation-time security settings__
__InterSystems, the DBMS developer, strongly recommends deploying applications on Caché 2015.2 and newer versions only.__
__During installation, you need to perform the following actions:__
> * Select the “Locked Down” installation mode.
> * Select the “Custom Setup” option, then select only the bare minimum of components required for the application to work.
> * During installation, choose a SuperServer port different from the standard TCP port 1972.
> * During installation, specify a port for the internal web server different from the standard TCP port 57772.
> * Specify a Caché instance location path different from the standard one (the default for Windows systems is C:\InterSystems\Cache, for UNIX/Linux systems — /usr/Cachesys).

**Post-installation Caché security settings**
**The following actions need to be performed after installation (most of them are already performed in the “Locked down” installation mode):**
> * All services and resources that are not used by the application should be disabled.
> * For services using network access, IP addresses that can be used for remote interaction must be explicitly specified.
> * Unused CSP web applications must be disabled.
> * Access without authentication and authorization must be disabled.
> * Access to the CSP Gateway must be password-protected and restricted.
> * Audit must be enabled.
> * The Data encryption option must be enabled for the configuration file.
To ensure the security of system settings, Security Advisor must be launched from the management portal and its recommendations must be followed. __[Home] > [Security Management] > [Security Advisor]__
[For services (Services section):](http://docs.intersystems.com/ens20172/csp/docbook/DocBook.UI.Page.cls?KEY=GCAS_secmgmt_secadv_svcs)
>**Ability to set % globals should be turned off** — the possibility to modify % globals must be disabled, since such globals are often used for system code and modification of such variables can lead to unpredictable consequences.
>**Unauthenticated should be off** — unauthenticated access must be disabled. Unauthenticated access to the service makes it accessible to all users.
>**Service should be disabled unless required** — if a service is not used, it must be disabled. Access to any service that is not used by an application can provide an unjustifiably high level of access to the entire system.
>**Service should use Kerberos authentication** — access through any other authentication mechanism does not provide the maximum level of security
>**Service should have client IP addresses assigned** — IP addresses of connections to the services must be specified explicitly. Limiting the list of IP addresses that are allowed to connect lets you have greater control over connections to Caché
>**Service is Public** — public services allow all users, including the UnknownUser account that requires no authentication, to get unregulated access to Caché
[Applications (CSP, Privileged Routine, and Client Applications)](http://docs.intersystems.com/ens20172/csp/docbook/DocBook.UI.Page.cls?KEY=GCAS_secmgmt_secadv_apps)
>**Application is Public** — Public applications allow all users, including the UnknownUser account that requires no authentication, to get unregulated access to Caché
>**Application conditionally grants the %All role** — a system cannot be considered secure if an application can potentially delegate all privileges to its users. Applications should not delegate all privileges
>**Application grants the %All role** — the application explicitly delegates all privileges to its users. Applications should not delegate all privileges
#1. Managing users
##1.1 Managing system accounts
You need to make sure that unused system accounts are disabled or deleted, and that passwords are changed for used system accounts.
To identify such accounts, you need to use the Security Advisor component of the management portal. To do it, go to the management portal here: __[Home] > [Security Management] > [Security Advisor]__.
Change corresponding users’ passwords in all records in the Users section where Recommendations = “Password should be changed from default password”.

##1.2 Managing privileged accounts
If the DBMS has several administrators, a personal account should be created for each of them with just a minimum of privileges required for their job.
##1.3 Managing rights and privileges
When delegating access rights, you should use the minimum privileges principle. That is, you should forbid everything and then provide a bare minimum of rights required for this particular role. When granting privileges, you should use a role-based approach – that is, assign rights to a role, not a user, then assign a role to the necessary user.
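As a purely illustrative sketch (not from the original article): in the %SYS namespace, the same role-based approach can be scripted with the Security.Roles and Security.Users classes. The role, resource, and user names below are hypothetical, and the exact method signatures should be verified against your Caché version:

```
 // run in the %SYS namespace; all names here are hypothetical examples
 znspace "%SYS"

 // create a role that carries only read access to the application database resource
 set sc=##class(Security.Roles).Create("AppReportRole","Read-only reporting role","%DB_APPDATA:R")
 if 'sc write $System.Status.GetErrorText(sc),!

 // grant the role to a user (note: Modify replaces the user's current role list)
 set props("Roles")="AppReportRole"
 set sc=##class(Security.Users).Modify("ReportUser",.props)
 if 'sc write $System.Status.GetErrorText(sc),!
```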
##1.4 Delegation of access rights
In order to check security settings in terms of access rights delegation, launch Security Advisor. You need to perform the following actions depending on the recommendations provided by Security Advisor.
For roles:
>**This role holds privileges on the Auditing database** — this role has privileges for accessing the auditing database. Read access makes it possible to use audit data in an inappropriate way, write access makes it possible to compromise the audit data.
>**This role holds the %Admin_Secure privilege** — this role includes the %Admin_Secure resource, which allows the holder to change access privileges for any user.
>**This role holds WRITE privilege on the CACHESYS database** — this role allows users to write to the CACHESYS system database, thus making it possible to change the system code and Caché system settings.
For users:
>**At least 2 and at most 5 users should have the %All role** — at least 2 and no more than 5 users should have the %All role. Too few users with this role may result in problems with access during emergencies; too many users may jeopardize the overall security of the system.
>**This user holds the %All role** — this user has the %All role. You need to verify the necessity of assigning this role to the user.
>**UnknownUser account should not have the %All role** — the system cannot be considered secure if UnknownUser has the %All role.
>**Account has never been used** — this account has never been used. Unused accounts may be used for unauthorized access to the system.
>**Account appears dormant and should be disabled** — the account is inactive and must be disabled. Inactive accounts (ones that haven’t been used for 30 days) may be used for unauthorized access.
>**Password should be changed from default password** — the default password value must be changed.
After deleting a user, make sure that roles and privileges created by this user have been deleted, if they are no longer required.
##1.5 Configuring the password policy
Password case sensitivity is enabled in Caché by default.
The password policy is applied through the following section of the management portal:
__[Home]>[Security Management] > [System Security Settings] > [System-wide Security Parameters]__.
The required password complexity is configured by specifying a password template in the Password Pattern parameter. With maximum security settings, the default is Password Pattern = 8.32ANP, which means that passwords must be 8 – 32 characters long and contain digits, letters, and punctuation marks. The “Password validation routine” parameter is used for invoking specific password validity checking algorithms.
A detailed description is provided in [1], section “Password Strength and Password Policies”.
In addition to using internal mechanisms, authentication in Caché can be delegated to the operating system, Kerberos or LDAP servers.
Just recently, I had to check whether the Caché DBMS complied with the new edition of PCI DSS 3.2, the main security standard of the bank card industry adopted in April 2016.
**Compliance of Caché DBMS security settings with the requirements of the PCI DSS version 3.2 [5] standard**


##1.6 Configuration of terminating an inactive database connection
Database disconnect settings for inactive user sessions depend on the type of connection to Caché.
For SQL and object access via TCP, the parameter is set in the **[Home] > [Configuration] > [SQL Settings] > [General SQL Settings]** section of the management portal. Look for a parameter called TCP Keep Alive interval (in seconds): set it to 900, which will correspond to 15 minutes.
For web access, this parameter is specified in the “No Activity Timeout” for **[Home] > [Configuration] > [CSP Gateway Management]**. Replace the default parameter with 900 seconds and enable the “Apply time-out to all connections” parameter
#2 Event logging
##2.1 General settings
To enable auditing, you need to enable this option for the entire Caché DBMS instance. To do it, open the system management portal, navigate to the auditing management page **[Home] > [Security Management] > [Auditing]** and make sure that the “Disable Auditing” option is available and “Enable Auditing” is unavailable. The opposite would mean that auditing is disabled.
If auditing is disabled, it should be enabled by selecting the “Enable Auditing” command from the menu. You can view the event log through the system management portal: **[Home] > [Security Management] > [Auditing] > [View Audit Database]**

There are also system classes (utilities) for viewing the event log. The log contains the following records, among others:
- Date and time
- Event type
- Account name (user identification)
Access to audit data is managed by the %DB_CACHEAUDIT resource. To disable public access to this resource, you need to make sure that both Read and Write operations are closed for Public access in its properties. Access to the list of resources is provided through the system management portal [Home] > [Security Management] > [Resources] > [Edit Resource]. Select the necessary resource, then click the Edit link.
By default, the %DB_CACHEAUDIT resource has the same-name role %DB_CACHEAUDIT. To limit access to logs, you need to define a list of users with this role, which can be done in the system management portal:
**[Home] > [Security Management] > [Roles] > [Edit Role]**, then use the Edit button in the %DB_CACHEAUDIT role
##2.2 List of logged event types
###2.2.1 Logging of access to tables containing bank card details (PCI DSS 10.2.1)
Logging of access to tables (datasets) containing bank card data is performed with the help of the following mechanisms:
>1. A system auditing mechanism that makes records of the “ResourceChange” type whenever access rights are changed for a resource responsible for storing bank card information (access to the audit log is provided from the system management portal: [Home] > [Security Management] > [Auditing] > [View Audit Database]);
>2. On the application level, it is possible to log access to a particular record by registering an application event in the system and calling it from your application when a corresponding event takes place.
**[System] > [Security Management] > [Configure User Audit Events] > [Edit Audit Event]**
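As an illustration (the event source, type, and name below are hypothetical and must first be registered on the page above), an application could record such an access event with $SYSTEM.Security.Audit; treat this as a sketch and check the call against your Caché version:

```
 // assumes a user-defined audit event MyApp / DataAccess / CardTableRead has been
 // registered under [Configure User Audit Events]; all names here are illustrative
 set eventData = "User "_$USERNAME_" read table App.CardData"
 if '$SYSTEM.Security.Audit("MyApp","DataAccess","CardTableRead",eventData,"Access to bank card data") {
     write "Audit record was not written (event not defined or auditing disabled)",!
 }
```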
###2.2.2 Logging attempts to use administrative privileges (PCI DSS 10.2.2)
The Caché DBMS logs the actions of all users, and the logging method is configured by specifying the events that should be logged: **[Home] > [Security Management] > [Auditing] > [Configure System Events]**
Logging of all system events needs to be enabled.
###2.2.3 Logging of event log changes (PCI DSS 10.2.3)
The Caché DBMS uses a single audit log that cannot be modified; the only changes are the natural growth of its content, log purging, and changes to the set of audited events, which add corresponding AuditChange entries to the log.
The task of logging the AuditChange event is accomplished by enabling the auditing of all events (see 2.2.2).
###2.2.4 Logging of all unsuccessful attempts to obtain logical access (PCI-DSS 10.2.4)
The task of logging an unsuccessful attempt to obtain logical access is accomplished through enabling the auditing of all events (see 2.2.2).
When an attempt to obtain logical access is registered, a LoginFailure event is created in the audit log.
###2.2.5 Logging of attempts to obtain access to the system (PCI DSS 10.2.5)
The task of logging an attempt to access the system is accomplished by enabling the auditing of all events (see 2.2.2).
When an unsuccessful attempt to obtain access is registered, a “LoginFailure” event is created in the audit log. A successful log-in creates a “Login” event in the log.
###2.2.6 Logging of audit parameter changes (PCI DSS 10.2.6)
The task of logging changes in audit parameters is accomplished by enabling the auditing of all events (see 2.2.2).
When audit parameters are changed, an “AuditChange” event is created in the audit log.
###2.2.7 Logging of the creation and deletion of system objects (PCI DSS 10.2.7)
The Caché DBMS logs the creation, modification, and removal of the following system objects: roles, privileges, resources, users.
The task of logging the creation and deletion of system objects is accomplished by enabling the auditing of all events (see 2.2.2).
When a system object is created, changed or removed, the following events are added to the audit log: “ResourceChange”, “RoleChange”, “ServiceChange”, “UserChange”.
##2.3 Protection of event logs
You need to make sure that access to the %DB_CACHEAUDIT resource is restricted. That is, only the admin and those responsible for log monitoring have read and write rights to this resource.
Following the recommendations above, I have managed to install Caché in the maximum security mode. To demonstrate compliance with the requirements of PCI DSS section 8.2.5 “Forbid the use of old passwords”, I created a small application that will be launched by the system when the user attempts to change the password and will validate whether it has been used before.
To install this program, you need to import the source code using Caché Studio, Atelier, or the class import page in the management portal.
ROUTINE PASSWORD
PASSWORD ; password verification program
#include %occInclude
CHECK(Username,Password) PUBLIC {
    if '$match(Password,"(?=.*[0-9])(?=.*[a-zA-Z]).{7,}") quit $$$ERROR($$$GeneralError,"Password does not match the standard PCI_DSS_v3.2")
    set Remember=4 ; the number of most recent passwords that cannot be reused according to PCI DSS
    set GlobRef="^PASSWORDLIST" ; the name of the global reference
    set PasswordHash=$System.Encryption.SHA1Hash(Password)
    if $d(@GlobRef@(Username,"hash",PasswordHash)) {
        quit $$$ERROR($$$GeneralError,"This password has already been used")
    }
    set hor=""
    for i=1:1 {
        ; traverse the nodes chronologically from new to old
        set hor=$order(@GlobRef@(Username,"datetime",hor),-1)
        quit:hor=""
        ; delete the old ones that are over the limit
        if i>(Remember-1) {
            set hash=$g(@GlobRef@(Username,"datetime",hor))
            kill @GlobRef@(Username,"datetime",hor)
            kill:hash'="" @GlobRef@(Username,"hash",hash)
        }
    }
    ; save the current one
    set @GlobRef@(Username,"hash",PasswordHash)=$h
    set @GlobRef@(Username,"datetime",$h)=PasswordHash
    quit $$$OK
}
Let’s save the name of the program in the management portal.
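The same parameter can also be set from code. Below is a minimal sketch that assumes the Security.System class and its PasswordValidationRoutine property work as described in the security documentation; verify the property name on your release before using it:

```
 // run in the %SYS namespace; registers the CHECK entry point of the PASSWORD routine
 znspace "%SYS"
 set sys=##class(Security.System).%OpenId("SYSTEM")
 set sys.PasswordValidationRoutine="CHECK^PASSWORD"
 set sc=sys.%Save()
 if 'sc write $System.Status.GetErrorText(sc),!
```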

It happened so that my product configuration was different from the test one not only in terms of security but also in terms of users. In my case, there were thousands of them, which made it impossible to create a new user by copying settings from an existing one.

DBMS developers limited list output to 1000 elements. After talking to [the InterSystems WRC technical support service](https://login.intersystems.com/login/SSO.UI.Login.cls?referrer=https%253A//wrc.intersystems.com/wrc/login.csp), I learned that the problem could be solved by creating a special global node in the system area using the following command:
%SYS>set ^CacheTemp.MgtPortalSettings($Username,"MaxUsers")=5000
This is how you can increase the number of users shown in the dropdown list. I explored this global a bit and found a number of other useful settings of the current user. However, there is a certain inconvenience here: this global is mapped to the temporary CacheTemp database and will be removed after the system is restarted. This problem can be solved by saving this global before shutting down the system and restoring it after the system is restarted.
To this end, I wrote [two programs](http://docs.intersystems.com/ens20172/csp/docbook/DocBook.UI.Page.cls?KEY=GSTU_customize#GSTU_customize_startstop), ^%ZSTART and ^%ZSTOP, with the required functionality.
The source code of the %ZSTOP program
%ZSTOP() {
    Quit
}
/// save users’ preferences in a non-killable global
SYSTEM() Public {
    merge ^tmpMgtPortalSettings=^CacheTemp.MgtPortalSettings
    quit
}
The source code of the %ZSTART program
%ZSTART() {
    Quit
}
/// restore users’ preferences from a non-killable global
SYSTEM() Public {
    if $data(^tmpMgtPortalSettings) merge ^CacheTemp.MgtPortalSettings=^tmpMgtPortalSettings
    quit
}
Going back to security and the requirements of the standard, we can’t ignore the backup procedure. The PCI DSS standard imposes certain requirements for backing up both data and event logs. In Caché, all logged events are saved to the CACHEAUDIT database that can be included in the list of backed up databases along with other ones.
The Caché DBMS comes with several pre-configured backup jobs, but they didn’t always work for me. Every time I needed something particular for a project, it wasn’t there in “out-of-the-box” jobs. In one project, I had to automate the control over the number of backup copies with an option of automatic purging of the oldest ones. In another project, I had to estimate the size of the future backup file. In the end, I had to write my own backup task.
CustomListBackup.cls
Include %occKeyword
/// Backup task class
Class App.Task.CustomListBackup Extends %SYS.Task.Definition [ LegacyInstanceContext ]
{
/// If ..AllDatabases=1, include all databases into the backup copy ..PrefixIncludeDB and ..IncludeDatabases are ignored
Property AllDatabases As %Integer [ InitialExpression = 0 ];
/// If ..AllDatabases=1, include all databases into the backup copy, excluding from ..IgnoreForAllDatabases (comma-delimited)
Property IgnoreForAllDatabases As %String(MAXLEN = 32000) [ InitialExpression = "Not applied if AllDatabases=0 " ];
/// If ..IgnoreTempDatabases=1, exclude temporary databases
Property IgnoreTempDatabases As %Integer [ InitialExpression = 1 ];
/// If ..IgnorePreparedDatabases=1, exclude pre-installed databases
Property IgnorePreparedDatabases As %Integer [ InitialExpression = 1 ];
/// If ..AllDatabases=0 and PrefixIncludeDB is not empty, we will be backing up all databases starting with ..PrefixIncludeDB
Property PrefixIncludeDB As %String [ SqlComputeCode = {S {*}=..ListNS()}, SqlComputed ];
/// If ..AllDatabases=0, back up all databases from ..IncludeDatabases (comma-delimited)
Property IncludeDatabases As %String(MAXLEN = 32000) [ InitialExpression = {"Not applied if AllDatabases=1"_..ListDB()} ];
/// Name of the task on the general list
Parameter TaskName = "CustomListBackup";
/// Path for the backup file
Property DirBackup As %String(MAXLEN = 1024) [ InitialExpression = {##class(%File).NormalizeDirectory("Backup")} ];
/// Path for the log
Property DirBackupLog As %String(MAXLEN = 1024) [ InitialExpression = {##class(%File).NormalizeDirectory("Backup")} ];
/// Backup type (Full, Incremental, Cumulative)
Property TypeBackup As %String(DISPLAYLIST = ",Full,Incremental,Cumulative", VALUELIST = ",Full,Inc,Cum") [ InitialExpression = "Full", SqlColumnNumber = 4 ];
/// Backup file name prefix
Property PrefixBackUpFile As %String [ InitialExpression = "back" ];
/// The maximum number of backup files, delete the oldest ones
Property MaxBackUpFiles As %Integer [ InitialExpression = 3 ];
ClassMethod DeviceIsValid(Directory As %String) As %Status
{
If '##class(%Library.File).DirectoryExists(Directory) quit $$$ERROR($$$GeneralError,"Directory does not exist")
quit $$$OK
}
ClassMethod CheckBackup(Device, MaxBackUpFiles, del = 0) As %Status
{
set path=##class(%File).NormalizeFilename(Device)
quit:'##class(%File).DirectoryExists(path) $$$ERROR($$$GeneralError,"Folder "_path_" does not exist")
set max=MaxBackUpFiles
set result=##class(%ResultSet).%New("%File:FileSet")
set st=result.Execute(path,"*.cbk",,1)
    while result.Next() {
        if result.GetData(2)="F" {
            continue:result.GetData(3)=0
            set ts=$tr(result.GetData(4),"-: ")
            set ts(ts)=$lb(result.GetData(1),result.GetData(3))
        }
    }
    #; Let’s traverse all the files starting from the newest one
    set i="" for count=1:1 { set i=$order(ts(i),-1) quit:i=""
        #; Get the increase in bytes as a size difference with the previous backup
        if $data(size),'$data(delta) set delta=size-$lg(ts(i),2)
        #; Get the size of the most recent backup file in bytes
        if '$data(size) set size=$lg(ts(i),2)
        #; If the number of backup files is larger than or equal to the upper limit, delete the oldest ones along with their logs
        if count'<max,del {
            set file=$lg(ts(i),1)
            do ##class(%File).Delete(file)
            #; remove the matching log, assuming it sits next to the backup file
            do ##class(%File).Delete($piece(file,".cbk",1)_".log")
        }
    }
    #; Get the free disk space in bytes and check that the estimated new backup file will fit
    do ##class(%File).GetDirectorySpace(path,.free,.total,0)
    if ($g(size)+$g(delta))>$g(free) quit $$$ERROR($$$GeneralError,"Estimated size of the new backup file is larger than the available disk space:("_$g(size)_"+"_$g(delta)_")>"_$g(free))
    quit $$$OK
}
Method OnTask() As %Status
{
do $zu(5,"%SYS")
set list=""
merge oldDBList=^SYS("BACKUPDB")
kill ^SYS("BACKUPDB")
#; Adding new properties for the backup task
set status=$$$OK
try {
##; Check the number of database copies, delete the oldest one, if necessary
##; Check the remaining disk space and estimate the size of the new file
set status=..CheckBackup(..DirBackup,..MaxBackUpFiles,1)
quit:$$$ISERR(status)
#; All databases
if ..AllDatabases {
set vals=""
set disp=""
set rss=##class(%ResultSet).%New("Config.Databases:List")
do rss.Execute()
while rss.Next(.sc) {
if ..IgnoreForAllDatabases'="",(","_..IgnoreForAllDatabases_",")[(","_$zconvert(rss.Data("Name"),"U")_",") continue
if ..IgnoreTempDatabases continue:..IsTempDB(rss.Data("Name"))
if ..IgnorePreparedDatabases continue:..IsPreparedDB(rss.Data("Name"))
set ^SYS("BACKUPDB",rss.Data("Name"))=""
}
}
else {
#; if the PrefixIncludeDB property is not empty, we’ll back up all DB’s with names starting from ..PrefixIncludeDB
if ..PrefixIncludeDB'="" {
set rss=##class(%ResultSet).%New("Config.Databases:List")
do rss.Execute(..PrefixIncludeDB_"*")
while rss.Next(.sc) {
if ..IgnoreTempDatabases continue:..IsTempDB(rss.Data("Name"))
set ^SYS("BACKUPDB",rss.Data("Name"))=""
}
}
#; Include particular databases into the list
if ..IncludeDatabases'="" {
set rss=##class(%ResultSet).%New("Config.Databases:List")
do rss.Execute("*")
while rss.Next(.sc) {
if ..IgnoreTempDatabases continue:..IsTempDB(rss.Data("Name"))
if (","_..IncludeDatabases_",")'[(","_$zconvert(rss.Data("Name"),"U")_",") continue
set ^SYS("BACKUPDB",rss.Data("Name"))=""
}
}
}
do ..GetFileName(.backFile,.logFile)
set typeB=$zconvert($e(..TypeBackup,1),"U")
set:"FIC"'[typeB typeB="F"
set res=$$BACKUP^DBACK("",typeB,"",backFile,"Y",logFile,"NOINPUT","Y","Y","","","")
if 'res set status=$$$ERROR($$$GeneralError,"Error: "_res)
} catch { set status=$$$ERROR($$$GeneralError,"Error: "_$ze)
set $ze=""
}
kill ^SYS("BACKUPDB")
merge ^SYS("BACKUPDB")=oldDBList
quit status
}
/// Get file names
Method GetFileName(aBackupFile, ByRef aLogFile) As %Status
{
set tmpName=..PrefixBackUpFile_"_"_..TypeBackup_"_"_$s(..AllDatabases:"All",1:"List")_"_"_$zd($h,8)_$tr($j($i(cnt),3)," ",0)
do {
s aBackupFile=##class(%File).NormalizeFilename(..DirBackup_"/"_tmpName_".cbk")
} while ##class(%File).Exists(aBackupFile)
set aLogFile=##class(%File).NormalizeFilename(..DirBackupLog_"/"_tmpName_".log")
quit 1
}
/// Check if the database is pre-installed
ClassMethod IsPreparedDB(name)
{
if (",ENSDEMO,ENSEMBLE,ENSEMBLEENSTEMP,ENSEMBLESECONDARY,ENSLIB,CACHESYS,CACHELIB,CACHETEMP,CACHE,CACHEAUDIT,DOCBOOK,USER,SAMPLES,")[(","_$zconvert(name,"U")_",") quit 1
quit 0
}
/// Check if the database is temporary
ClassMethod IsTempDB(name)
{
quit:$zconvert(name,"U")["TEMP" 1
quit:$zconvert(name,"U")["SECONDARY" 1
quit 0
}
/// Get a comma-delimited list of databases
ClassMethod ListDB()
{
set list=""
set rss=##class(%ResultSet).%New("Config.Databases:List")
do rss.Execute()
while rss.Next(.sc) {
set list=list_","_rss.Data("Name")
}
quit list
}
ClassMethod ListNS() [ Private ]
{
set disp=""
set tRS = ##class(%ResultSet).%New("Config.Namespaces:List")
set tSC = tRS.Execute()
While tRS.Next() {
set disp=disp_","_tRS.GetData(1)
}
set %class=..%ClassName(1)
$$$comSubMemberSet(%class,$$$cCLASSproperty,"PrefixIncludeDB",$$$cPROPparameter,"VALUELIST",disp)
quit ""
}
ClassMethod oncompile() [ CodeMode = generator ]
{
$$$defMemberKeySet(%class,$$$cCLASSproperty,"PrefixIncludeDB",$$$cPROPtype,"%String")
set updateClass=##class("%Dictionary.ClassDefinition").%OpenId(%class)
set updateClass.Modified=0
do updateClass.%Save()
do updateClass.%Close()
}
}
All our major concerns are addressed here:
- limitation of the number of copies,
- removal of old copies,
- estimation of the size of the new file,
- different methods of selecting or excluding databases from the list.
Let’s import it into the system and create a new task using the Task manager.
And include the database into the list of copied databases.
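If you prefer creating the task from code instead of the Task Manager UI, a rough sketch could look like the following; it assumes the %SYS.Task properties (Name, NameSpace, TaskClass, TimePeriod, DailyStartTime) behave as documented, so treat it as a starting point rather than a ready-made script:

```
 // run in %SYS; schedules the custom backup task daily at 03:00
 // (property names assumed from the %SYS.Task documentation)
 znspace "%SYS"
 set task=##class(%SYS.Task).%New()
 set task.Name="Custom list backup"
 set task.NameSpace="%SYS"                        // namespace the task runs in
 set task.TaskClass="App.Task.CustomListBackup"   // the task definition class above
 set task.TimePeriod=0                            // 0 = daily
 set task.DailyStartTime=3*3600                   // seconds after midnight
 set sc=task.%Save()
 if 'sc write $System.Status.GetErrorText(sc),!
```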
All of the examples above are provided for Caché 2016.1 and are intended for educational purposes only. They should only be used in a production system after serious testing. I will be happy if this code helps you do your job better or avoid mistakes.
[Github repository](https://github.com/SergeyMi37/ExampleBackupTask)
>The following materials were used for writing this article:
>1. Caché Security Administration Guide (InterSystems)
>2. Caché Installation Guide. Preparing for Caché Security (InterSystems)
>3. Caché System Administration Guide (InterSystems)
>4. Introduction to Caché. Caché Security (InterSystems)
>5. PCI DSS.RU. Requirements and the security audit procedure. Version 3.2
Great article! I found this too:

> The Caché DBMS comes with several pre-configured backup jobs, but they didn’t always work for me. Every time I needed something particular for a project, it wasn’t there in “out-of-the-box” jobs. In one project, I had to automate the control over the number of backup copies with an option of automatic purging of the oldest ones.

I thought I'd share the class we use - it only deletes old backups:
Class App.PurgeOldBackupFiles Extends %SYS.Task.Definition
{
Property BackupsToKeep As %Integer(MAXVAL = 30, MINVAL = 1) [ InitialExpression = 30, Required ];
Property BackupFolder As %String [ Required ];
Property BackupFileType As %String [ Required ];
Method OnTask() As %Status
{
//s BackupsToKeep = 2
//s Folder = "c:\backupfolder"
//s BackupFileType = "FullAllDatabases" // or "FullDBList"
s SortOrder = "DateModified"
If ..BackupsToKeep<1 Quit $$$ERROR($$$GeneralError,"Invalid - Number of Backups to Keep must be greater than or equal to 1")
If ..BackupFolder="" Quit $$$ERROR($$$GeneralError,"Invalid - BackupFolder - not supplied")
if ..BackupFileType = "" Quit $$$ERROR($$$GeneralError,"Invalid - BackupFileType - not supplied")
if (..BackupFileType '= "FullAllDatabases")&&(..BackupFileType '= "FullDBList") Quit $$$ERROR($$$GeneralError,"Invalid - BackupFileType")
s BackupCount=0
k zPurgeOldBackupFiles(..BackupFileType)
Set rs=##class(%ResultSet).%New("%Library.File:FileSet")
w !,"backuplist",!
s BackupFileWildcard = ..BackupFileType _ "*.cbk"
set status=rs.Execute(..BackupFolder, BackupFileWildcard, SortOrder)
WHILE rs.Next() {
Set FullFileName=rs.Data("Name")
Set FName=##class(%File).GetFilename(FullFileName)
Set FDateTime=##class(%File).GetFileDateModified(FullFileName)
w "File "_FName_" "_FDateTime,!
Set FDate=$PIECE(FDateTime,",")
Set CDate=$PIECE($H,",")
s BackupCount=$I(BackupCount)
s zPurgeOldBackupFiles(..BackupFileType, BackupCount)=FullFileName
}
s zPurgeOldBackupFiles(..BackupFileType, "BackupCount")=BackupCount
do rs.Close()
if BackupCount > ..BackupsToKeep {
for i=1:1:BackupCount-..BackupsToKeep {
s FullFileName = zPurgeOldBackupFiles(..BackupFileType, i)
d ##class(%File).Delete(FullFileName)
w "File Purged "_FullFileName_" Purged",!
}
}
q status
}
}
Thanks!
Announcement
David Reche · Jan 22, 2018
Come to Barcelona and join us!

Agenda

| Time | Session |
| --- | --- |
| 09:00 – 09:30 | Registration |
| | **General/Management Sessions** |
| 09:30 – 09:45 | Welcome (Jordi Calvera) |
| 09:45 – 10:15 | Your Success – Our Success (Jordi Calvera) |
| 10:15 – 11:00 | Choosing a DBMS to Build Something that Matters of the Third Platform (IDC, Philip Carnelley) |
| 11:00 – 11:45 | InterSystems IRIS Data Platform (Industries & in Action) (Joe Lichtenberg) |
| 11:45 – 12:15 | Coffee Break |
| 12:15 – 13:00 | InterSystems Guide to the Data Galaxy (Benjamin de Boe) |
| 13:00 – 13:20 | InterSystems Worldwide Support: A Competitive Advantage (Stefan Schulte Strathaus) |
| 13:20 – 13:50 | Developers Community Meet Up (Evgeny Shvarov & Francisco J. López) |
| 13:50 – 14:00 | Morning Sessions Closing (Jordi Calvera) |
| 14:00 – 15:15 | Lunch & Networking |
| | **Technical Sessions** |
| 15:15 – 16:00 | Docker Containers (José Tomás Salvador) |
| 16:00 – 16:30 | ICM – InterSystems Cloud Manager* (Luca Ravazzolo) |
| 16:30 – 17:00 | Scalability & Sharding (Pierre Yves Duquesnoy) |
| 17:00 – 17:15 | Coffee Break |
| 17:15 – 17:45 | Interacting Faster and with More Technologies (David Reche) |
| 17:45 – 18:15 | Atelier: Fast and Intuitive Development Environment (Alberto Fuentes) |
| 18:15 | Q&A Panel (Jordi Calvera), Closing & Build Walkout Video |
David, I edited your post, just to give a bit more information, and to be more than just Twitter.

Don't miss my session ;)

Thanks Dmitry, is a good idea.

Of course!! For sure the best one.

I'll be there, sure.
Announcement
James Schultz · Jun 14, 2018
Hi Community!

Come join us at DeveloperWeek in NYC on June 18-20!

InterSystems has signed on for a high-level sponsorship and exhibitor space at this year's DeveloperWeek, billed as "New York City’s Largest Developer Conference & Expo". This is the first time we have participated in the event, which organizers expect will draw more than 3,000 developers from 18th to 20th June.

“The world is changing rapidly, and our target audience is far more diverse in terms of roles and interests than it used to be... DeveloperWeek NYC is a gathering of people who create applications for a living, and we want developers to see the power and capabilities of InterSystems. We need to know them, and they need to know us, as our software can be a foundation for their success.” – says Marketing Vice President Jim Rose.

The main feature at InterSystems booth 812 is the new sandbox experience for InterSystems IRIS Data Platform™. Meanwhile, Director of Data Platforms Product Management @Iran.Hutchinson is delivering two presentations on the conference agenda. One, "GraalVM: What it is. Why it’s important. Experiments you should try today", will be on the Main Stage on June 19 between 11:00 a.m. and 11:20 a.m. GraalVM is an open source set of projects driven by Oracle Labs, the Institute for Software at Johannes Kepler University Linz, Austria, and a community of contributors.

On the following day, Hutchinson will lead a follow-on presentation to Frictionless Systems Founder Carter Jernigan's productivity-boosting "Power Up with Flow: Get “In the Zone” to Get Things Done", which runs from 11:00 a.m. - 11:45 a.m. on Workshop Stage 2. In "Show and Tell Your Tips and Techniques – and Win in Powers of 2!" he leads an open exchange of productivity ideas, tips, and innovations culminating in prizes to the "Power of 2" for the best ideas. If you are attending, it takes place between 11:45 a.m. and 12:30 p.m., also on Workshop Stage 2.

Don't forget to check these useful links:
- All details about DeveloperWeek NYC
- The original agenda

Register now and see you in New York!

Very cool!

Thanks! We're very excited to be participating!
Article
Gevorg Arutiunian · Sep 4, 2018

I already talked about [GraphQL](https://community.intersystems.com/post/graphql-intersystems-data-platforms) and the ways of using it in this article. Now I am going to tell you about the tasks I was facing and the results that I managed to achieve in the process of implementing GraphQL for InterSystems platforms.
## What this article is about
- Generation of an [AST](https://en.wikipedia.org/wiki/Abstract_syntax_tree) for a GraphQL request and its validation
- Generation of documentation
- Generation of a response in the JSON format
Let’s take a look at the entire process from sending a request to receiving a response using this simple flow as an example:

The client can send requests of two types to the server:
- A schema request.
The server generates a schema and returns it to the client. We’ll cover this process later in this article.
- A request to fetch/modify a particular data set.
In this case, the server generates an AST, validates and returns a response.
## AST generation
The first task that we had to solve was to parse the received GraphQL request. Initially, I wanted to find an external library and use it to parse the response and generate an AST. However, I discarded the idea for a number of reasons. This is yet another black box, and you should keep the issue with long callbacks in mind.
That’s how I ended up with a decision to write my own parser, but where do I get its description? Things got better when I realized that [GraphQL](http://facebook.github.io/graphql/October2016/) was an open-source project with a pretty good description by Facebook. I also found multiple examples of parsers written in other languages.
You can find a description of an AST [here](http://facebook.github.io/graphql/October2016/#Document).
Let’s take a look at a sample request and tree:
```
{
Sample_Company(id: 15) {
Name
}
}
```
**AST**
```
{
"Kind": "Document",
"Location": {
"Start": 1,
"End": 45
},
"Definitions": [
{
"Kind": "OperationDefinition",
"Location": {
"Start": 1,
"End": 45
},
"Directives": [],
"VariableDefinitions": [],
"Name": null,
"Operation": "Query",
"SelectionSet": {
"Kind": "SelectionSet",
"Location": {
"Start": 1,
"End": 45
},
"Selections": [
{
"Kind": "FieldSelection",
"Location": {
"Start": 5,
"End": 44
},
"Name": {
"Kind": "Name",
"Location": {
"Start": 5,
"End": 20
},
"Value": "Sample_Company"
},
"Alias": null,
"Arguments": [
{
"Kind": "Argument",
"Location": {
"Start": 26,
"End": 27
},
"Name": {
"Kind": "Name",
"Location": {
"Start": 20,
"End": 23
},
"Value": "id"
},
"Value": {
"Kind": "ScalarValue",
"Location": {
"Start": 24,
"End": 27
},
"KindField": 11,
"Value": 15
}
}
],
"Directives": [],
"SelectionSet": {
"Kind": "SelectionSet",
"Location": {
"Start": 28,
"End": 44
},
"Selections": [
{
"Kind": "FieldSelection",
"Location": {
"Start": 34,
"End": 42
},
"Name": {
"Kind": "Name",
"Location": {
"Start": 34,
"End": 42
},
"Value": "Name"
},
"Alias": null,
"Arguments": [],
"Directives": [],
"SelectionSet": null
}
]
}
}
]
}
}
]
}
```
## Validation
Once we receive a tree, we’ll need to check if it has classes, properties, arguments and their types on the server – that is, we’ll need to validate it. Let’s traverse the tree recursively and check whether its elements match the ones on the server. [Here’s](https://github.com/intersystems-ru/GraphQL/blob/master/cls/GraphQL/Query/Validation.cls) how a class looks.
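For illustration only (this is not the project's actual validator, and the mapping of GraphQL type names to class names is an assumption), the existence check against the class dictionary could look roughly like this:

```
Class GraphQL.Demo.Validator [ Abstract ]
{

/// A minimal sketch: verify that a GraphQL type maps to an existing persistent class
/// and that the selected field exists as a property of that class.
ClassMethod ValidateField(typeName As %String, fieldName As %String) As %Status
{
    // assumption: GraphQL type names use "_" instead of "." (Sample_Company -> Sample.Company)
    set className = $translate(typeName, "_", ".")
    if '##class(%Dictionary.CompiledClass).%ExistsId(className) {
        quit $$$ERROR($$$GeneralError, "Unknown type: "_typeName)
    }
    if '##class(%Dictionary.CompiledProperty).%ExistsId(className_"||"_fieldName) {
        quit $$$ERROR($$$GeneralError, "Unknown field "_fieldName_" in type "_typeName)
    }
    quit $$$OK
}

}
```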
## Schema generation
A **schema** is a type of documentation for available classes and properties, as well as a description of property types in these classes.
GraphQL implementations in other languages or technologies use resolvers to generate schemas. A resolver is a description of the types of data available on the server.
**Examples of resolvers, requests, and responses**
```
type Query {
human(id: ID!): Human
}
type Human {
name: String
appearsIn: [Episode]
starships: [Starship]
}
enum Episode {
NEWHOPE
EMPIRE
JEDI
}
type Starship {
name: String
}
```
```
{
human(id: 1002) {
name
appearsIn
starships {
name
}
}
}
```
```json
{
"data": {
"human": {
"name": "Han Solo",
"appearsIn": [
"NEWHOPE",
"EMPIRE",
"JEDI"
],
"starships": [
{
"name": "Millenium Falcon"
},
{
"name": "Imperial shuttle"
}
]
}
}
}
```
However, before we generate a schema, we need to understand its structure, find a description or, even better, examples. The first thing I tried was attempting to find one that would help me understand the structure of a schema. Since GitHub has its own [GraphQL API](https://developer.github.com/v4/explorer/), it was easy to get one from there. But the problem was that its server-side was so huge that the schema itself occupied 64 thousand lines. I really hated the idea of delving into all that and started looking for other methods of obtaining a schema.
Since our platforms are based on a DBMS, my plan for the next step was to build and start GraphQL for PostgreSQL and SQLite. With PostgreSQL, I managed to fit the schema into just 22 thousand lines, and SQLite gave me an even better result with 18 thousand lines. It was better than the starting point, but still not enough, so I kept on looking.
I ended up choosing a [NodeJS](https://graphql.org/graphql-js/) implementation, made a build, wrote a simple resolver, and got a solution with just 1800 lines, which was way better!
Once I had wrapped my head around the schema, I decided to generate it automatically without creating resolvers on the server in advance, since getting meta information about classes and their relationships is really easy.
To generate your own schema, you need to understand a few simple things:
- You don’t need to generate it from scratch – take one from NodeJS, remove the unnecessary stuff and add the things that you do need.
- The root of the schema has a queryType type. You need to initialize its “name” field with some value. We are not interested in the other two types since they are still being implemented at this point.
- You need to add all the available classes and their properties to the **types** array.
```
{
"data": {
"__schema": {
"queryType": {
"name": "Query"
},
"mutationType": null,
"subscriptionType": null,
"types":[...
],
"directives":[...
]
}
}
}
```
- First of all, you need to describe the **Query** root element and add all the classes, their arguments, and class types to the **fields** array. This way, they will be accessible from the root element.
**Let’s take a look at two sample classes, Example_City and Example_Country**
```
{
"kind": "OBJECT",
"name": "Query",
"description": "The query root of InterSystems GraphQL interface.",
"fields": [
{
"name": "Example_City",
"description": null,
"args": [
{
"name": "id",
"description": "ID of the object",
"type": {
"kind": "SCALAR",
"name": "ID",
"ofType": null
},
"defaultValue": null
},
{
"name": "Name",
"description": "",
"type": {
"kind": "SCALAR",
"name": "String",
"ofType": null
},
"defaultValue": null
}
],
"type": {
"kind": "LIST",
"name": null,
"ofType": {
"kind": "OBJECT",
"name": "Example_City",
"ofType": null
}
},
"isDeprecated": false,
"deprecationReason": null
},
{
"name": "Example_Country",
"description": null,
"args": [
{
"name": "id",
"description": "ID of the object",
"type": {
"kind": "SCALAR",
"name": "ID",
"ofType": null
},
"defaultValue": null
},
{
"name": "Name",
"description": "",
"type": {
"kind": "SCALAR",
"name": "String",
"ofType": null
},
"defaultValue": null
}
],
"type": {
"kind": "LIST",
"name": null,
"ofType": {
"kind": "OBJECT",
"name": "Example_Country",
"ofType": null
}
},
"isDeprecated": false,
"deprecationReason": null
}
],
"inputFields": null,
"interfaces": [],
"enumValues": null,
"possibleTypes": null
}
```
- Our second step is to go one level higher and extend the **types** array with the classes that have already been described in the **Query** object with all of the properties, types, and relationships with other classes.
**Descriptions of classes**
```
{
"kind": "OBJECT",
"name": "Example_City",
"description": "",
"fields": [
{
"name": "id",
"description": "ID of the object",
"args": [],
"type": {
"kind": "SCALAR",
"name": "ID",
"ofType": null
},
"isDeprecated": false,
"deprecationReason": null
},
{
"name": "Country",
"description": "",
"args": [],
"type": {
"kind": "OBJECT",
"name": "Example_Country",
"ofType": null
},
"isDeprecated": false,
"deprecationReason": null
},
{
"name": "Name",
"description": "",
"args": [],
"type": {
"kind": "SCALAR",
"name": "String",
"ofType": null
},
"isDeprecated": false,
"deprecationReason": null
}
],
"inputFields": null,
"interfaces": [],
"enumValues": null,
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Example_Country",
"description": "",
"fields": [
{
"name": "id",
"description": "ID of the object",
"args": [],
"type": {
"kind": "SCALAR",
"name": "ID",
"ofType": null
},
"isDeprecated": false,
"deprecationReason": null
},
{
"name": "City",
"description": "",
"args": [],
"type": {
"kind": "LIST",
"name": null,
"ofType": {
"kind": "OBJECT",
"name": "Example_City",
"ofType": null
}
},
"isDeprecated": false,
"deprecationReason": null
},
{
"name": "Name",
"description": "",
"args": [],
"type": {
"kind": "SCALAR",
"name": "String",
"ofType": null
},
"isDeprecated": false,
"deprecationReason": null
}
],
"inputFields": null,
"interfaces": [],
"enumValues": null,
"possibleTypes": null
}
```
- The third point is that the “types” array contains descriptions of all popular scalar types, such as int, string, etc. We’ll add our own scalar types there, too.
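For reference, a scalar entry in the “types” array typically looks like this (a minimal illustration based on the standard GraphQL introspection format, not taken from the project):

```json
{
  "kind": "SCALAR",
  "name": "String",
  "description": null,
  "fields": null,
  "inputFields": null,
  "interfaces": null,
  "enumValues": null,
  "possibleTypes": null
}
```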
## Response generation
Here is the most complex and exciting part. A request should generate a response. At the same time, the response should be in the JSON format and match the request structure.
For each new GraphQL request, the server has to generate a class containing the logic for obtaining the requested data. The class is not considered new if the values of arguments changed – that is, if we get a particular dataset for Moscow and then the same set for London, no new class will be generated, it’s just going to be new values. In the end, this class will contain an SQL query and the resulting dataset will be saved in the JSON format with its structure matching the GraphQL request.
**An example of a request and a generated class**
```
{
Sample_Company(id: 15) {
Name
}
}
```
```
Class gqlcq.qsmytrXzYZmD4dvgwVIIA [ Not ProcedureBlock ]
{
ClassMethod Execute(arg1) As %DynamicObject
{
set result = {"data":{}}
set query1 = []
#SQLCOMPILE SELECT=ODBC
&sql(DECLARE C1 CURSOR FOR
SELECT Name
INTO :f1
FROM Sample.Company
WHERE id= :arg1
) &sql(OPEN C1)
&sql(FETCH C1)
While (SQLCODE = 0) {
do query1.%Push({"Name":(f1)})
&sql(FETCH C1)
}
&sql(CLOSE C1)
set result.data."Sample_Company" = query1
quit result
}
ClassMethod IsUpToDate() As %Boolean
{
quit:$$$comClassKeyGet("Sample.Company",$$$cCLASShash)'="3B5DBWmwgoE" $$$NO
quit $$$YES
}
}
```
How this process looks in a scheme:

For now, the response is generated for the following requests:
- Basic
- Embedded objects
- Only the many-to-one relationship
- List of simple types
- List of objects
Below is a scheme containing the types of relationships that are yet to be implemented:

## Summary
- **Response** — at the moment, you can get a set of data for relatively simple requests.
- **Automatically generated schema** — the schema is generated for stored classes accessible to the client, but not for pre-defined resolvers.
- **A fully functional parser** – the parser is fully implemented, you can get a tree by making a request of any complexity.
→ [Link to the project repository](https://github.com/intersystems-community/GraphQL)
→ [Link to the demo server](http://37.139.6.217:57773/graphiql/index.html)
Demo server doesn't work.

Thank you, I fixed this issue!
Announcement
Michelle Spisak · Apr 30, 2018
When you hear people talk about moving their applications to the cloud, are you unsure of what exactly they mean? Do you want a solution for migrating your local, physical servers to a flexible, efficient cloud infrastructure? Join Luca Ravazzolo for Introducing InterSystems Cloud Manager (May 17th, 2:00 p.m. EDT). In this webinar, Luca — Product Manager for InterSystems Cloud Manager — will explain cloud technology and how you can move your InterSystems IRIS infrastructure to the cloud in an operationally agile fashion. He will also be able to answer your questions following the webinar about this great new product from InterSystems!

Thanks Michelle! I'm happy to answer any question anybody may have on the webinar, where I presented InterSystems Cloud Manager, and generally on the improvement an organization can achieve in its software factory with the newly available technologies from InterSystems.

This webinar recording has been posted: https://learning.intersystems.com/course/view.php?name=IntroICMwebinar

And now this webinar recording is available on the InterSystems Developers YouTube Channel. Please welcome!
Announcement
Evgeny Shvarov · Aug 7, 2018
Hi, Community!
Just want to let you know that InterSystems IRIS is available on Google Cloud Marketplace.
Start here to get your InterSystems IRIS VM on GCP.
You can request a license for InterSystems IRIS here.
Learn more in an official press release.

Create an IRIS instance in 5 minutes video... https://youtu.be/f_uVe_Q5X-c
Announcement
Sourabh Sethi · Jul 29, 2019
A SOLID Design in Cache Object
In this session, we will discuss the SOLID principles of programming and implement them in an example. I have used the Caché Object programming language for the examples. We will go step by step to understand the requirement, look at common design mistakes, understand each principle, and then complete the design with its implementation in Caché Objects.
If you have any questions or suggestions, please write to me - sethisourabh.hit@gmail.com
CodeSet - https://github.com/sethisourabh/SolidPrinciplesTraining

Thanks for sharing this knowledge on the ObjectScript language. I hadn't heard of the SOLID principles before; I'll apply them in my next code. BTW: can you share your slides for an easier walkthrough?

Thank you for your response. I don't see any way to attach documents here. You can send your email id and I will send them over there. My email ID - sethisourabh.hit@gmail.com Regards, Sourabh

You could use https://www.slideshare.net/ or add the document to the GitHub repo. There is a way to post documents on InterSystems Community under Edit Post -> Change Additional Settings, which I documented here, but it's not user friendly and I didn't automatically see links to attached documents within the post, so I had to manually add the links. Community feedback suggests they may turn this feature off at some point, so I'd recommend any of the above options instead.

Thanks, @Stephen.Wilson! Yes, we plan to turn off the attachments feature. As you mention, there are a lot of better ways to expose presentations and code. And as you see, @Sourabh.Sethi6829 posted the recent package for his recent video on Open Exchange.

Do I need the code set for this session in Open Exchange?

Would be great - it's even more presence and developers can collaborate.

DONE
Question
Evgeny Shvarov · Jul 23, 2019
Hi guys! What is the IRIS analog for Ensemble.INC? Tried to compile the class in IRIS - it says:
Error compiling routine: Util.LogQueueCounts. Errors: Util.LogQueueCounts.cls : Util.LogQueueCounts.1(7) : MPP5635 : No include file 'Ensemble'
You just have to enable Ensemble in the installer
<Namespace Name="${NAMESPACE}" Code="${DBNAME}-CODE" Data="${DBNAME}-DATA" Create="yes" Ensemble="1">
That helped! Thank you!

What do you mean? There is still Ensemble.inc.
Article
Evgeny Shvarov · Sep 19, 2019
Hi Developers!
Recently we launched InterSystems Package Manager - ZPM. One of the intentions of ZPM is to let you package your solution and submit it into the ZPM registry to make its deployment as simple as an "install your-package" command.
To do that you need to introduce a module.xml file into your repository which describes what your InterSystems IRIS package consists of.
This article describes the different parts of module.xml and will help you to craft your own.
I will start with the samples-objectscript package, which installs the Sample ObjectScript application into IRIS and can be installed with:
zpm: USER>install samples-objectscript
It is probably the simplest package ever and here is the module.xml which describes the package:
<?xml version="1.0" encoding="UTF-8"?>
<Export generator="Cache" version="25">
<Document name="samples-objectscript.ZPM">
<Module>
<Name>samples-objectscript</Name>
<Version>1.0.0</Version>
<Packaging>module</Packaging>
<SourcesRoot>src</SourcesRoot>
<Resource Name="ObjectScript.PKG"/>
</Module>
</Document>
</Export>
Let's go line-by-line through the document.
<Export generator="Cache" version="25">
module.xml belongs to the family of Caché/IRIS XML documents, so this line states that relation to let internal libraries recognize the document.
Next section is <Document>
<Document name="samples-objectscript.ZPM">
Your package should have a name. The name can contain lowercase letters and the "-" sign, e.g. samples-objectscript in this case. Please put the name of your package in the name attribute of the Document tag with the .ZPM extension.
Inner elements of the Document are:
<Name> - the name of your package. In this case:
<Name>samples-objectscript</Name>
<Version> - the version of the package. In this case:
<Version>1.0.0</Version>
<Packaging>module</Packaging> - the type of packaging. Put the module parameter here.
<Packaging>module</Packaging>
<SourcesRoot> - a folder, where zpm will look for ObjectScript to import.
In this case, we tell ZPM to look for ObjectScript in the /src folder:
<SourcesRoot>src</SourcesRoot>
<Resource Name> - elements of ObjectScript to import. This could be packages, classes, includes, globals, dfi, etc.
The structure under SourceRoot folder should be the following:
/cls - all the ObjectScript classes in Folder=Package, Class=file.cls form. Subpackages are subfolders
/inc - all the include files in file.inc form.
/mac - all the mac routines.
/int - all the "intermediate" routines (AKA "other" code, the result of a compilation of mac code, or ObjectScript without classes and macros).
/gbl - all the globals in xml form of export.
/dfi - all the DFI files in xml form of export. Each pivot comes in pivot.dfi file, each dashboard comes in dashboard.dfi file.
E.g. here we import the ObjectScript package. This tells ZPM to look in the /src/cls/ObjectScript folder and import all the classes from it:
<Resource Name="ObjectScript.PKG"/>
So! To prepare your solution for packaging put ObjectScript classes into some folder of your repository inside /cls folder and place all packages and classes in package=folder, class=file.cls form.
If you store classes in your repo differently and don't want a manual work to prepare the proper folder structure for ObjectScript there are plenty of tools which do the work: Atelier and VSCode ObjectScript export classes this way, also there is isc-dev utility which exports all the artifacts from namespace ready for packaging.
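For example, a repository prepared this way might be laid out roughly as follows (an illustrative layout only; the class and routine names are hypothetical):

```
/module.xml
/src
  /cls
    /ObjectScript
      PersistentClass.cls
      ClassExample.cls
  /mac
    Example.mac
  /inc
    App.inc
```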
Packaging mac routines
This is very similar to classes. Just put routines under /mac folder. Example.
<?xml version="1.0" encoding="UTF-8"?>
<Export generator="Cache" version="25">
<Document name="DeepSeeButtons.ZPM">
<Module>
<Name>DeepSeeButtons</Name>
<Version>0.1.7</Version>
<Packaging>module</Packaging>
<SourcesRoot>src</SourcesRoot>
<Resource Name="DeepSeeButtons.mac"/>
</Module>
</Document>
</Export>
Some other elements
There are also optional elements like:<Author>
Which could contain <Organization> and <CopyrightDate> elements.
Example:
<Author>
<Organization>InterSystems</Organization>
<CopyrightDate>2019</CopyrightDate>
</Author>
Packaging CSP/Web applications
ZPM can deploy web applications too.
To make it work, introduce a CSPApplication element with the attributes of the CSP application parameters.
For example, take a look at the CSPApplication tag of the DeepSeeWeb module.xml:
<CSPApplication
Url="/dsw"
DeployPath="/build"
SourcePath="${cspdir}/dsw"
ServeFiles="1"
Recurse="1"
CookiePath="/dsw"
/>
This setting will create a web application with the name /dsw and will copy all the files from the /build folder of the repository into the ${cspdir}/dsw folder, which is a folder under the IRIS CSP directory.
REST API application
If this is a REST API application, the CSPApplication element will contain a dispatch class and could look like the one in the MDX2JSON module.xml:
<CSPApplication
Path="/MDX2JSON"
Url="/MDX2JSON"
CookiePath="/MDX2JSON/"
PasswordAuthEnabled="1"
UnauthenticatedEnabled="1"
DispatchClass="MDX2JSON.REST"
/>
Dependencies
Your module could expect the presence of another module installed on the target system. This is described by a <Dependencies> element inside the <Document> element, which can contain several <ModuleReference> elements, each with a <Name> and a <Version>, stating which other modules of which versions should be installed before yours. This will cause ZPM to check whether those modules are installed and, if not, perform the installation.
Here is an example of dependency DSW module on MDX2JSON module:
<Dependencies>
<ModuleReference>
<Name>MDX2JSON</Name>
<Version>2.2.0</Version>
</ModuleReference>
</Dependencies>
Another example where ThirdPartyPortlets depends on Samples BI(holefoods):
<Dependencies>
<ModuleReference>
<Name>holefoods</Name>
<Version>0.1.0</Version>
</ModuleReference>
</Dependencies>
There are also options for running arbitrary code of yours to set up data and the environment; we will talk about them in the next articles.
How to build your own package
OK! Once you have a module.xml, you can try to build the package and check whether its structure is accurate.
You can test it via the zpm client. Install zpm on an IRIS system and load the package code with the load command:
zpm: NAMESPACE>load path-to-the-project
The path points to the folder that contains the resources for the package and has module.xml in its root.
E.g., you can test package building with this project. Check it out and build a container with docker-compose-zpm.yml.
Open terminal in SAMPLES namespace and call ZPM:
zpm: SAMPLES>
zpm: SAMPLES>load /iris/app
[samples-objectscript] Reload START
[samples-objectscript] Reload SUCCESS
[samples-objectscript] Module object refreshed.
[samples-objectscript] Validate START
[samples-objectscript] Validate SUCCESS
[samples-objectscript] Compile START
[samples-objectscript] Compile SUCCESS
[samples-objectscript] Activate START
[samples-objectscript] Configure START
[samples-objectscript] Configure SUCCESS
[samples-objectscript] Activate SUCCESS
The path is "/iris/app" because docker-compose-zpm.yml maps the root of the project to the /iris/app folder in the container, so we can use this path to tell zpm where to load the project from.
The load completed successfully, which means this module.xml can be used to submit a package to the developers' community repository.
Now you know how to make a proper module.xml for your application.
How to submit the application to InterSystems Community repository
As of today there are two requirements:
1. Your application should be listed on Open Exchange
2. Send me a direct message, or comment on this post, if you want your application to be submitted to the Community Package Manager repository.
And you should have a working module.xml!)
Updated module documents from .MODULE to .ZPM.
Hi @Evgeny.Shvarov! I'm creating a module.xml for iris-history-monitor, and during the process a question came up. When you run docker-compose up in my project, the Installer has an invoke tag to execute a class method.
But how can I make this work in ZPM?
Here is the objectscript package template, which has an example module.xml with almost everything that can appear in a package.
Take a look at the Invoke tag:
<Invokes>
  <Invoke Class="community.objectscript.PersistentClass" Method="CreateRecord"></Invoke>
  <Invoke Class="community.objectscript.ClassExample" Method="SetToTheGlobal">
    <Arg>42</Arg>
    <Arg>Text Data</Arg>
  </Invoke>
</Invokes>
Place the <Invoke> call elements inside the <Invokes> tag. You can pass parameters if you need to. This article describes all the details about <Invoke> elements.
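For reference, the methods named in those <Invoke> elements are plain class methods in the package being installed. A hypothetical sketch of the second one (the real body in the template may differ):
/// Sketch of a class whose method ZPM calls at install time via <Invoke>;
/// the two <Arg> values (42 and "Text Data") arrive as the method parameters.
Class community.objectscript.ClassExample
{

ClassMethod SetToTheGlobal(value As %Integer, text As %String) As %Status
{
    // store the arguments in a global as simple setup data
    set ^ClassExample("setup", value) = text
    quit $$$OK
}

}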
Perfect! Thanks, @Evgeny.Shvarov!
ZPM forces you to use categories in the folder structure... Perhaps, to make it easier, the VS Code ObjectScript extension should be configured with that option by default... just an idea.
Also, is there any place with full documentation about module.xml? The articles are full of really useful info, but having to navigate through all of them is a bit confusing.
Hi Salva!
Thanks, but not anymore. Check the article.
Sure. Here is the ZPM documentation.
Oh my... I didn't see the Wiki...
Thanks! Aggggghhh... OK... back to the previous structure. At least I can confirm that it was a good idea... but I was not the first one.
Yes. We first introduced this one because it is exactly how the Atelier API exports ObjectScript artifacts by default, but IMHO the simplified one is much better.
And that's why we have it in the ZPM basic template.
What is the rule for Version in Dependencies? Is it an exact or a minimum version?
E.g., would <Version>0.0.0</Version> mean any version?
<Dependencies>
<ModuleReference>
<Name>holefoods</Name>
<Version>0.1.0</Version>
</ModuleReference>
</Dependencies>
Thanks!
I used the ZPM "generate" function to create a module.xml for a Caché Server Pages application. Worked great! However, I cannot figure out how to get the classes and CSP files for this application physically out of IRIS and into the "src" folder with the module.xml, so I can install the application, via ZPM, on another server. Can anyone help me with this part?
Hi Jason!
If you are on VSCode, you can leverage the InterSystems plugin and export the classes.
As for CSP files: if your application works as a CSP web application, you don't have CSP files but rather CSP classes, and you can simply export them via VSCode as well.
Thanks, Evgeny! I don't use VS Code much, but I can get back to it and try. Are there any other options?
Hi Jason! I asked DC-AI and it provided an answer!
You can do this:
do $SYSTEM.OBJ.Export("*.cls", "/path/to/export/directory/", "ck")
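If the goal is one file per class under /src/cls (the layout described earlier in this article), a small helper along these lines should also work. Treat it as a sketch only: the My.Tools.Export class, the MyApp package name, and the target directory are placeholders, and it assumes $SYSTEM.OBJ.ExportUDL is available in your IRIS version:
/// Hypothetical helper class - not part of ZPM or IRIS
Class My.Tools.Export
{

/// Export every class of a package into Package/Subpackage/Class.cls files under dir
ClassMethod ExportPackage(pkg As %String = "MyApp", dir As %String = "/tmp/src/cls/") As %Status
{
    set stmt = ##class(%SQL.Statement).%New()
    set sc = stmt.%Prepare("SELECT Name FROM %Dictionary.ClassDefinition WHERE Name %STARTSWITH ?")
    quit:$$$ISERR(sc) sc
    set rs = stmt.%Execute(pkg _ ".")
    while rs.%Next() {
        set cls = rs.%Get("Name")
        // package levels become subfolders; the class itself becomes the .cls file
        set file = dir _ $translate(cls, ".", "/") _ ".cls"
        do ##class(%File).CreateDirectoryChain(##class(%File).GetDirectory(file))
        do $SYSTEM.OBJ.ExportUDL(cls _ ".cls", file)
    }
    quit $$$OK
}

}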
Article
Henry Pereira · Sep 16, 2019
In an ever-changing world, companies must innovate to stay competitive. This ensures that they'll make decisions with agility and safety, aiming for future results with greater accuracy.
Business Intelligence (BI) tools help companies make intelligent decisions instead of relying on trial and error. These intelligent decisions can make the difference between success and failure in the marketplace.
Microsoft Power BI is one of the industry's leading business intelligence tools. With just a few clicks, Power BI makes it easy for managers and analysts to explore a company's data. This is important because when data is easy to access and visualize, it's much more likely it'll be used to make business decisions.
Power BI includes a wide variety of graphs, charts, tables, and maps. As a result, you can always find visualizations that are a good fit for your data.
BI tools are only as useful as the data that backs them, however. Power BI supports many data sources, and InterSystems IRIS is a recent addition to those sources. Since Power BI provides an exciting new way to explore data stored in IRIS, we’ll be exploring how to use these two amazing tools together.
This article will explain how to use IRIS Tables and Power BI together on real data. In a follow-up article, we’ll walk through using Power BI with IRIS Cubes.
Project Prerequisites and Setup
You will need the following to get started:
InterSystems IRIS Data Platform
Microsoft Power BI Desktop (April 2019 release or more recent)
InterSystems Sample-BI data
We'll be using the InterSystems IRIS Data Platform, so you’ll need access to an IRIS install to proceed. You can download a trial version from the InterSystems website if necessary.
There are two ways to install Microsoft Power BI Desktop: you can download an installer, or install it through the Microsoft Store. Note that if you are running Power BI on a different machine than the one where you installed InterSystems IRIS, you will need to install the InterSystems IRIS ODBC drivers on that machine separately.
To create a dashboard on Power BI we'll need some data. We'll be using the HoleFoods dataset provided by InterSystems here on GitHub. To proceed, either clone or download the repository.
In IRIS, I've created a namespace called SamplesBI. This is not required, but if you want to create a new namespace, go to System Administration > Configuration > System Configuration > Namespaces in the IRIS Management Portal and click Create New Namespace. Enter a name, then create a database for it or use an existing one.
On the InterSystems IRIS Terminal, change to the namespace you want to import the data into (SamplesBI in this case). Then execute $System.OBJ.Load() with the full path of buildsample/Build.SampleBI.cls and the "ck" compile flags, and finally run the Build method of the Build.SampleBI class, passing the full path of the directory that contains the sample files:
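For example (a sketch: replace /path/to/samples-bi with the folder where you cloned the repository; the Build method may also prompt for the directory if you call it without an argument):
USER> zn "SamplesBI"

SAMPLESBI> do $System.OBJ.Load("/path/to/samples-bi/buildsample/Build.SampleBI.cls","ck")

SAMPLESBI> do ##class(Build.SampleBI).Build("/path/to/samples-bi/")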
Connecting Power BI with IRIS
Now it's time to connect Power BI with IRIS. Open Power BI and click on "Get Data". Choose "Database", and you will see the InterSystems IRIS connector:
Enter the host address. The host address is the IP address of the host for your InterSystems IRIS instance (localhost in my case), the Port is the instance's superserver port (for example, 51773 or 1972, depending on your IRIS version), and the Namespace is where your HoleFoods data is located.
Under Data Connectivity mode, choose "DirectQuery", which ensures you’re always viewing current data.
Next, enter the username and password to connect to IRIS. The defaults are "_SYSTEM" and "SYS".
You can import both tables and cubes that you've created in IRIS. Let's start by importing some tables.
Under Tables and HoleFoods, check:
Country
Outlet
Product
Region
SalesTransaction
We're almost there! To tell Power BI about the relationship between our tables, click on "Manage Relationships".
Then, click on New.
Let's make two relationships: "SalesTransaction" and "Product relationship".
On top, select the "SalesTransaction" table and click on the "Product" column. Next, select the "Product" table and click on the "ID" column. You'll see that the Cardinality changes automatically to "Many to One (*:1)".
Repeat this step for the following:
"SalesTransaction(Outlet)" with "Outlet(ID)"
"Outlet(Country)" with "Country(ID)"
"Country(Region)" with "Region(ID)":
Note that these relationships are imported automatically if they are expressed as Foreign Keys.
Power BI also has a Relationships schema viewer. If you click the button on the left side of the application, it will show our data model.
Creating a Dashboard
We now have everything we need to create a dashboard.
Start by clicking the button on the left to switch from schema view back to Report view. On the Home tab under the Insert Group, click the TextBox to add a Title.
The Insert Group includes static elements like Text, Shapes, and Images we can use to enhance our reports.
It's time to add our first visualization! In the Fields pane, check "Name" on "Product" and "UnitsSold" on "SalesTransaction".
Next, go to Style and select "Bold Header".
Now it's time to do some data transformation. Click on the ellipsis next to "SalesTransaction" in the Field pane.
Then, click on "Edit Query". It will open the "Power Query Editor".
Select the "DateOfSale" column and click on "Duplicate Column".
Rename this new column to "Year", and click on "Date" and select "Year".
Apply these changes. Next, select the new column and, on the "Modeling" tab, change "Default Summarization" to "Don't Summarize".
Add a "Line Chart" visualization, then drag Year to Axis, drag "Name" from "Region" to Legend, and drag "AmountOfSale" from "SalesTransaction" to Values.
Imagine that the HoleFoods sales team has a target of selling 2000 units. How can we tell if the team is meeting its goal?
To answer, let's add a visual for metrics and targets.
On "SalesTransaction" in the Field pane, check "UnitsSold", then click Gauge Chart. Under the Style properties, set Max to 3000 and Target to 2000.
KPIs (Key Performance Indicators) are helpful decision-making tools, and Power BI has a convenient KPI visual we can use.
To add it, under "SalesTransaction", check "AmountOfSale" and choose KPI under “Visualizations”. Then, drag "Year" to "Trend axis".
To align all charts and visuals, simply click and drag a visual, and when an edge or center is close to aligning with the edge or center of another visual or set of visuals, red dashed lines appear.
You also can go to the View tab and enable "Show GridLines" and "Snap Objects to Grid".
We’ll finish up by adding a map that shows HoleFoods global presence. Set Longitude and Latitude on "Outlet" to "Don't Summarize" on the Modeling tab.
You can find the map tool in the Visualizations pane. After adding it, drag the Latitude and Longitude fields from Outlet to respective properties on the map. Also from SalesTransaction, drag the AmountOfSale to Size property and UnitsSold to ToolTips.
And our dashboard is finally complete.
You can share your dashboard by publishing it to the Power BI Service. To do this, you’ll have to sign up for a Power BI account.
Conclusion
In just a few minutes, we were able to connect Power BI to InterSystems IRIS and then create amazing interactive visualizations.
As developers, this is great. Instead of spending hours or days developing dashboards for managers, we can get the job done in minutes. Even better, we can show managers how to quickly and easily create reports for themselves.
Although developing visualizations is often part of a developer’s job, our time is usually better spent developing mission-critical architecture and applications. Using IRIS and Power BI together ensures that developer time is used effectively and that managers are able to access and visualize data immediately — without waiting weeks for dashboards to be developed, tested, and deployed to production.
Perfect! Great! Thanks Henry.
A few queries:
1. Does Power BI offer an advantage over InterSystems' own analytics? If yes, what are those advantages? In general, I believe visualization is way better in Power BI, and data modelling would be much easier. In addition, Power BI lets its users leverage Microsoft's cognitive services. Did you notice any performance issues?
2. I believe the connector is free to use; can you confirm whether this is true?
Thanks,
SS.
3. Tagging @Carmen.Logue to provide more details.
That's right. There is no charge for the Power BI connector, but you do need licenses for Microsoft Power BI. The connector is available with InterSystems IRIS starting in version 2019.2. See this article in the product documentation.
💡 This article is considered an InterSystems Data Platform Best Practice.
Nice article! While testing, trying to load the tables (which I can select), I got the following errors:
I am using IRIS installed locally and Power BI Desktop.
Any suggestions?
Access Denied errors can occur for a variety of reasons. As a sanity check, running the Windows ODBC connection manager's test function never hurts to rule out connectivity issues. In any case, you can consult the WRC on support issues like this.
Announcement
Michelle Spisak · Oct 17, 2019
New from InterSystems Online Learning: two new exercises that help you get hands-on with InterSystems IRIS and see how easy it is to use it to solve your problems!
Using Multi-Model with Python and Node.js: This exercise takes you through the steps of using the InterSystems IRIS multi-model capability to create a Node.js application that sends JSON data straight to your database instance without any parsing or mapping.
Build a Smart Ticketing System: In this exercise, you will build on the Red Light Violation application used in the Interoperability QuickStart. Here, you will add another route to identify at-risk intersections based on data from the Chicago traffic system. This involves:
Building a simple interface to consume data from a file and store data to a file.
Adding in logic to only list intersections that have been deemed high risk based on the number of red light violations.
Adding in routing to consume additional data about populations using REST from public APIs.
Modifying the data to be in the right format for the downstream system.
Get started with InterSystems IRIS today!