Clear filter
Article
Mark Bolinsky · Jan 29, 2016
** Revised Feb-12, 2018
While this article is about InterSystems IRIS, it also applies to Caché, Ensemble, and HealthShare distributions.
Introduction
Memory is managed in pages. The default page size is 4KB on Linux systems. Red Hat Enterprise Linux 6, SUSE Linux Enterprise Server 11, and Oracle Linux 6 introduced a method of providing larger page sizes, 2MB or 1GB depending on system configuration, known as HugePages.
At first, HugePages had to be reserved at boot time, and if they were not managed or calculated appropriately they could result in wasted resources. As a result, various Linux distributions introduced Transparent HugePages, enabled by default, with the 2.6.38 kernel as a means of automating the creation, management, and use of HugePages. Prior kernel versions may have this feature as well, but it may not be set to [always] and may instead be set to [madvise].
Transparent HugePages (THP) is a Linux memory management system that reduces the overhead of Translation Lookaside Buffer (TLB) lookups on machines with large amounts of memory by using larger memory pages. However, in current Linux releases THP can only map individual process heap and stack space.
The Problem
The majority of memory allocated in any Caché system consists of the shared memory segments (the global and routine buffer pools), and THP does not handle these shared memory segments. As a result, THP is used only for each individual process, never for shared memory. This can be confirmed with a simple shell command.
The following example from a test system at InterSystems shows 2MB THP allocated to Caché processes:
# grep -e AnonHugePages /proc/*/smaps | awk '{ if($2>4) print $0} ' | awk -F "/" '{print $0; system("ps -fp " $3)} '
/proc/2945/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 2945 1 0 2015 ? 01:35:41 /usr/sbin/rsyslogd -n
/proc/70937/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70937 70897 0 Jan27 pts/0 00:01:58 /bench/EJR/ycsb161b641/bin/cache WD
/proc/70938/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70938 70897 0 Jan27 pts/0 00:00:00 /bench/EJR/ycsb161b641/bin/cache GC
/proc/70939/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70939 70897 0 Jan27 pts/0 00:00:39 /bench/EJR/ycsb161b641/bin/cache JD
/proc/70939/smaps:AnonHugePages: 4096 kB
UID PID PPID C STIME TTY TIME CMD
root 70939 70897 0 Jan27 pts/0 00:00:39 /bench/EJR/ycsb161b641/bin/cache JD
/proc/70940/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70940 70897 0 Jan27 pts/0 00:00:29 /bench/EJR/ycsb161b641/bin/cache SWD 1
/proc/70941/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70941 70897 0 Jan27 pts/0 00:00:29 /bench/EJR/ycsb161b641/bin/cache SWD 2
/proc/70942/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70942 70897 0 Jan27 pts/0 00:00:29 /bench/EJR/ycsb161b641/bin/cache SWD 3
/proc/70943/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70943 70897 0 Jan27 pts/0 00:00:33 /bench/EJR/ycsb161b641/bin/cache SWD 7
/proc/70944/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70944 70897 0 Jan27 pts/0 00:00:29 /bench/EJR/ycsb161b641/bin/cache SWD 4
/proc/70945/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70945 70897 0 Jan27 pts/0 00:00:30 /bench/EJR/ycsb161b641/bin/cache SWD 5
/proc/70946/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70946 70897 0 Jan27 pts/0 00:00:30 /bench/EJR/ycsb161b641/bin/cache SWD 6
/proc/70947/smaps:AnonHugePages: 4096 kB
In addition, there are potential performance penalties in the form of memory allocation delays at runtime, especially for applications with a high rate of job or process creation.
The Recommendation
InterSystems recommends, for the time being, disabling THP, because the intended performance gain does not apply to the IRIS shared memory segments and because of the potential for a negative performance impact in some applications.
Check whether your Linux system has Transparent HugePages enabled by running one of the following commands:
For Red Hat Enterprise Linux kernels:
# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
For other kernels:
# cat /sys/kernel/mm/transparent_hugepage/enabled
The above command displays which of the [always], [madvise], or [never] flags is enabled; the value shown in brackets is the current setting. If THP has been removed from the kernel, then neither the /sys/kernel/mm/redhat_transparent_hugepage nor the /sys/kernel/mm/transparent_hugepage file exists.
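On a system with THP active, the output looks like the following, for example:
# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never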
To disable Transparent HugePages during boot perform the following steps:
1. Add the following entry to the kernel boot line in the /etc/grub.conf file:
transparent_hugepage=never
2. Reboot the operating system
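On distributions that boot with GRUB2 (for example, RHEL 7 and later), the kernel boot line is assembled from /etc/default/grub instead. A sketch of both variants follows; the exact kernel line and file paths will differ on your system:
# Legacy GRUB: append the setting to the kernel line in /etc/grub.conf
kernel /vmlinuz-2.6.32-642.el6.x86_64 ro root=/dev/mapper/rootvg-root transparent_hugepage=never
# GRUB2: append transparent_hugepage=never to GRUB_CMDLINE_LINUX in
# /etc/default/grub, then regenerate the configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg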
There is also a method to disable THP on the fly; however, it may not provide the desired result, because it only stops the creation and use of THP for new processes. THP that have already been created are not disassembled into regular memory pages. To have THP fully disabled, reboot the system with THP disabled at boot time.
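For reference, the on-the-fly method is a single write to the same sysfs file (shown here for the non-Red Hat path); as noted above, it affects new processes only:
# echo never > /sys/kernel/mm/transparent_hugepage/enabled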
*Note: It is recommended to confirm with your respective Linux distributor the methods used for disabling THP. One clarifying comment I would like to add: the use of "traditional" HugePages, reserved at boot time, is still highly recommended for optimal performance. This process is detailed in the Caché Installation Guide:
http://docs.intersystems.com/cache20152/csp/docbook/DocBook.UI.Page.cls?KEY=GCI_unixparms#GCI_unixparms_huge_page
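As a brief illustration of that boot-time reservation (the page count here is hypothetical; size it to cover your configured shared memory, e.g. 2048 x 2MB pages for a segment of roughly 4GB, per the sizing guidance in the linked guide):
# echo "vm.nr_hugepages=2048" >> /etc/sysctl.conf
# sysctl -p
# grep -i hugepages /proc/meminfo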
I think further clarification is also needed. You mention that various Linux distributions introduced this with the 2.6.38 kernel, but it actually starts with the RHEL 6.0/CentOS 6.0 General Availability release: 6.8 is currently only kernel 2.6.32-642, and it has this feature available. Additional information about its availability in version 6.0 can be found on page 2 of the RHEL slideshow http://www.slideshare.net/raghusiddarth/transparent-hugepages-in-rhel-6 and on page 102 of the Red Hat 6.0 Technical Notes https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/pdf/6.0_Technical_Notes/Red_Hat_Enterprise_Linux-6-6.0_Technical_Notes-en-US.pdf. I have not researched when this was rolled into Fedora prior to 2.6.38, but as Fedora tends to be a precursor to RHEL, it might also have been before kernel 2.6.38. It might be better to suggest that people run the check to see whether it is enabled, and that they should not be surprised if a Linux with a kernel older than 2.6.38 does not support it.

Mark, may I ask you for some clarification? You wrote: "As a result THP are not used for shared memory, and are only used for each individual process." What's the problem here? Shared memory can use "normal" huge pages while individual processes use THP. The memory layout on our developers' server shows that it's possible:
# uname -a
Linux ubtst 4.4.0-59-generic #80-Ubuntu SMP Fri Jan 6 17:47:47 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
# tail -11 /proc/meminfo
AnonHugePages: 131072 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 1890
HugePages_Free: 1546
HugePages_Rsvd: 898
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 243808 kB
DirectMap2M: 19580928 kB
DirectMap1G: 49283072 kB
# ccontrol list
Configuration 'CACHE1'
directory: /opt/cache1
versionid: 2015.1.4.803.0.16768
...
# cat /opt/cache1/mgr/cconsole.log | grep Allocated
...
01/27/17-16:41:57:276 (1425) 0 Allocated 1242MB shared memory using Huge Pages: 1024MB global buffers, 64MB routine buffers
# grep -e AnonHugePages /proc/*/smaps | awk '{ if($2>4) print $0} ' | awk -F "/" '{print $0; system("ps -fp " $3)} '
...
/proc/165553/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
cacheusr 165553 1524 0 фев07 ? 00:00:00 cache -s/opt/cache1/mgr -cj -p18 SuperServer^%SYS.SERVER
...

Hi Alexey, thank you for your comment. Yes, both THP and traditional/reserved huge pages can be used at the same time; however, there is no benefit, and in fact systems with many (thousands of) Caché processes, especially with a lot of process creation, have shown a performance penalty in testing. The overhead of instantiating THP for those processes at a high rate can be noticeable. Your application may not exhibit this scenario and may be OK. The goal of this article is to provide guidance for those who may not know which option is best to choose, and to point out that this is a change in recent Linux distributions. You may find that THP usage is perfectly fine for your application. There is no replacement for actually testing and benchmarking your application. :) Kind regards, Mark B-

Of course there is no replacement for actual testing. What I am trying to say is that had I started reading the article straight through instead of skimming and jumping to how to check whether it was on, I probably would have read "various Linux distributions introduced Transparent HugePages with the 2.6.38 kernel" at the top and stopped, because my kernel is older than that.
I really think that the current wording will lead people who work at shops that are still rolling out new builds on RHEL or CentOS 6 not to use the ideal settings. Maybe a complete rearrangement of the first three paragraphs into two or three paragraphs mentioning RHEL 6 might make this clearer, with a sentence that reads something like: "This was first introduced in Red Hat Enterprise Linux 6, SUSE Linux Enterprise Server 11, and Oracle Linux 6, and then later introduced in many other Linux variants with the 2.6.38 kernel." Additionally, it might make things clearer to mention that the item in brackets is the current setting; on my Red Hat system the line reads [always] madvise never. It might also be useful to mention what to do in the case where Transparent HugePages is set to madvise.

Hi Alexander, thank you for your post. We are only relying on what the RH documentation states as to when THP was introduced to the mainstream kernel (2.6.38) and enabled by default, as noted in the RH post you referenced. The option may have existed in previous kernels (although I would not recommend trying it), but it may not have been enabled by default. All the documentation I can find on THP support in RH references the 2.6.38 kernel, where it was a merged feature. If you are finding it in previous kernels, confirm whether THP is enabled by default or not; that would be interesting to know. Unfortunately, there isn't much we can do other than the enablement checks mentioned in the post. As the ultimate confirmation, RH and the other Linux distributions would need to update their documentation to confirm when this behavior was enacted in the respective kernel versions. As I mentioned in other comments, the use of THP is not necessarily a bad thing and won't cause "harm" to a system, but there may be performance impacts for applications that do a large amount of process creation. Kind regards, Mark B- I will revise the post to be clearer that THP is enabled by default in the 2.6.38 kernel but may be available in prior kernels, and to reference your respective Linux distribution's documentation for confirming and changing the setting.

Thanks for your comments. I do not claim to be a huge pages expert, but I have been doing some more reading on Transparent HugePages and the madvise option. The following is untested and unverified. It seems that if you are running kernel 2.6.38 or newer, you may be able to use madvise instead of never for the Transparent HugePages setting. According to http://manpages.ubuntu.com/manpages/trusty/man2/madvise.2.html, the 2.6.38 kernel's madvise has a MADV_HUGEPAGE option that allows applications to enable Transparent HugePages; if no MADV_* flag is thrown, it defaults to MADV_NORMAL, i.e., no special treatment. I believe this means that Transparent HugePages should be off by default. If you are using RHEL 6, or probably most of its derivatives, even though they have a madvise setting for their Transparent HugePages, it appears RHEL did not backport the MADV_HUGEPAGE option to their madvise/kernel (at least 2.6.32-504.8.1 and lower), so you have to set the box's Transparent HugePages to never.
(Man page in RHEL 6 with kernel 2.6.32-504.8.1 lacking a MADV_HUGEPAGE, and https://groups.google.com/forum/#!topic/tokudb-dev/_1YNBMlHftU, Bradly Kuszmaul's 5/8/13 post.) RHEL 7 and its derivatives run the 3.x kernel, and those man pages show a MADV_HUGEPAGE option, so it looks like you can set the box to madvise and it will not use Transparent HugePages. Once again, I am not a Transparent HugePages expert and have not done any testing to verify the validity of this.

Please update this a bit, as RHEL 8.6 tracks whether Transparent HugePages is enabled in /sys/kernel/mm/transparent_hugepage/enabled, not the redhat_transparent_hugepage path that older versions used.
Article
Sergey Mikhailenko · Jan 23, 2018
This article was written as an attempt to share the experience of installing the InterSystems Caché DBMS for production environment.
We all know that the development configuration of a DBMS is very different from real-life conditions.
As a rule, development is carried out in “hothouse conditions” with a bare minimum of security measures, but when we publish our project online, we must ensure its reliable and uninterrupted operation in a very aggressive environment.
##The process of installing the InterSystems Caché DBMS with maximum security settings
**OS security settings**
**The first step is the operating system. You need to do the following:**
> * Minimize the rights of the technical account of the Caché DBMS
> * Rename the administrator account of the local computer.
> * Leave the necessary minimum of users in the OS.
> * Timely install security updates for the OS and used services.
> * Use and regularly update anti-virus software
> * Disable or delete unused services.
> * Limit access to database files
> * Limit the rights to Caché data files (leave owner and DB admin rights only)
__For UNIX/Linux systems, create the following group and user types prior to installation:__
> * Owner user ID
> * Management group ID
> * Internal Caché system group ID
> * Internal Caché user ID
__InterSystems Caché installation-time security settings__
__InterSystems, the DBMS developer, strongly recommends deploying applications on Caché 2015.2 and newer versions only.__
__During installation, you need to perform the following actions:__
Select the “Locked Down” installation mode
Select the “Custom Setup” option, then select only the bare minimum of components that are required for the work of the application
During installation, choose a SuperServer port different from the standard TCP port 1972
During installation, specify an internal web server port different from the standard TCP port 57772
Specify a Caché instance location path that is different from the standard one (the default for Windows systems is C:\InterSystems\Cache; for UNIX/Linux systems, /usr/Cachesys)
Post-installation Caché security settings
**The following actions need to be performed after installation (most of them are already performed in the “Locked down” installation mode):**
All services and resources that are not used by the application should be disabled.
For services using network access, IP addresses that can be used for remote interaction must be explicitly specified.
Unused CSP web applications must be disabled.
Access without authentication and authorization must be disabled.
Access to the CSP Gateway must be password-protected and restricted.
Audit must be enabled.
The Data encryption option must be enabled for the configuration file.
To ensure the security of system settings, Security Advisor must be launched from the management portal and its recommendations must be followed. __[Home] > [Security Management] > [Security Advisor]__
[For services (Services section):](http://docs.intersystems.com/ens20172/csp/docbook/DocBook.UI.Page.cls?KEY=GCAS_secmgmt_secadv_svcs)
>**Ability to set % globals should be turned off** — the possibility to modify % globals must be disabled, since such globals are often used for system code and modification of such variables can lead to unpredictable consequences.
>**Unauthenticated should be off** — unauthenticated access must be disabled. Unauthenticated access to the service makes it accessible to all users.
>**Service should be disabled unless required** — if a service is not used, it must be disabled. Access to any service that is not used by an application can provide an unjustifiably high level of access to the entire system.
>**Service should use Kerberos authentication** — access through any other authentication mechanism does not provide the maximum level of security.
>**Service should have client IP addresses assigned** — IP addresses allowed to connect to the services must be specified explicitly. Limiting the list of IP addresses that are allowed to connect gives you greater control over connections to Caché.
>**Service is Public** — public services allow all users, including the UnknownUser account that requires no authentication, to get unregulated access to Caché
[Applications (CSP, Privileged Routine, and Client Applications)](http://docs.intersystems.com/ens20172/csp/docbook/DocBook.UI.Page.cls?KEY=GCAS_secmgmt_secadv_apps)
>**Application is Public** — Public applications allow all users, including the UnknownUser account that requires no authentication, to get unregulated access to Caché
>**Application conditionally grants the %All role** — a system cannot be considered secure if an application can potentially delegate all privileges to its users. Applications should not delegate all privileges.
>**Application grants the %All role** — the application explicitly delegates all privileges to its users. Applications should not delegate all privileges.
#1. Managing users
##1.1 Managing system accounts
You need to make sure that unused system accounts are disabled or deleted, and that passwords are changed for used system accounts.
To identify such accounts, you need to use the Security Advisor component of the management portal. To do it, go to the management portal here: __[Home] > [Security Management] > [Security Advisor]__.
Change corresponding users’ passwords in all records in the Users section where Recommendations = “Password should be changed from default password”.

##1.2 Managing privileged accounts
If the DBMS has several administrators, a personal account should be created for each of them with just a minimum of privileges required for their job.
##1.3 Managing rights and privileges
When delegating access rights, you should follow the principle of least privilege: forbid everything, then grant the bare minimum of rights required for the particular role. When granting privileges, use a role-based approach – assign rights to a role, not to a user, and then assign the role to the necessary users.
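As a sketch of this role-based approach in ObjectScript (run in the %SYS namespace; the role, resource, and user names here are hypothetical, and the Security API signatures should be checked against the class reference for your version):
// Create a role that carries only the rights the job requires
Set sc=##class(Security.Roles).Create("AppOperator","Application operators","MyApp.Data:RW")
// Assign the role to the user rather than granting rights directly
Set props("Roles")="AppOperator"
Set sc=##class(Security.Users).Modify("jsmith",.props)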
##1.4 Delegation of access rights
In order to check security settings in terms of access rights delegation, launch Security Advisor. You need to perform the following actions depending on the recommendations provided by Security Advisor.
For roles:
This role holds privileges on the Auditing database — this role has privileges for accessing the auditing database. Read access makes it possible to use audit data in an inappropriate way; write access makes it possible to compromise the audit data.
This role holds the %Admin_Secure privilege — this role includes the %Admin_Secure resource, which allows the holder to change access privileges for any user.
This role holds WRITE privilege on the CACHESYS database — this role allows users to write to the CACHESYS system database, making it possible to change system code and Caché system settings.
Users:
At least 2 and at most 5 users should have the %All role — at least 2 and no more than 5 users should hold the %All role. Too few users with this role may result in problems with access during emergencies; too many may jeopardize the overall security of the system.
This user holds the %All role — this user has the %All role. You need to verify the necessity of assigning this role to the user.
UnknownUser account should not have the %All role — the system cannot be considered secure if UnknownUser has the %All role.
Account has never been used — this account has never been used. Unused accounts may be used for unauthorized access to the system.
Account appears dormant and should be disabled — the account is inactive and must be disabled. Inactive accounts (ones that haven’t been used for 30 days) may be used for unauthorized access.
Password should be changed from default password — the default password value must be changed.
After deleting a user, make sure that roles and privileges created by this user have been deleted, if they are no longer required.
##1.5 Configuring the password policy
Password case sensitivity is enabled in Caché by default.
The password policy is applied through the following section of the management portal:
__[Home]>[Security Management] > [System Security Settings] > [System-wide Security Parameters]__.
The required password complexity is configured by specifying a password template in the Password Pattern parameter. By default, maximum security uses Password Pattern = 8.32ANP, which means that passwords must be 8 to 32 characters long and contain letters, numbers, and punctuation marks. The “Password validation routine” parameter is used for invoking specific password-validation algorithms.
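The value uses Caché pattern-match syntax, so you can try candidate passwords against the pattern directly in a terminal with the ? operator (the passwords here are hypothetical):
USER>Write "Abcdef1!"?8.32ANP
1
USER>Write "short"?8.32ANP
0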
A detailed description is provided in [1], section “Password Strength and Password Policies”.
In addition to using internal mechanisms, authentication in Caché can be delegated to the operating system, Kerberos or LDAP servers.
Just recently, I had to check whether the Caché DBMS complied with the new edition of PCI DSS 3.2, the main security standard of the bank card industry adopted in April 2016.
**Compliance of Caché DBMS security settings with the requirements of the PCI DSS version 3.2 [5] standard**


##1.6 Configuring the termination of inactive database connections
Database disconnect settings for inactive user sessions depend on the type of connection to Caché.
For SQL and object access via TCP, the parameter is set in the **[Home] > [Configuration] > [SQL Settings] > [General SQL Settings]** section of the management portal. Look for a parameter called TCP Keep Alive interval (in seconds): set it to 900, which will correspond to 15 minutes.
For web access, this parameter is specified in the “No Activity Timeout” for **[Home] > [Configuration] > [CSP Gateway Management]**. Replace the default parameter with 900 seconds and enable the “Apply time-out to all connections” parameter
#2 Event logging
##2.1 General settings
To enable auditing, you need to enable this option for the entire Caché DBMS instance. To do it, open the system management portal, navigate to the auditing management page (**[Home] > [Security Management] > [Auditing]**), and make sure that the “Disable Auditing” option is available and “Enable Auditing” is unavailable. The opposite means that auditing is disabled.
If auditing is disabled, it should be enabled by selecting the “Enable Auditing” command from the menu. You can view the event log through the system management portal: **[Home] > [Security Management] > [Auditing] > [View Audit Database]**

There are also system classes (utilities) for viewing the event log; a query example follows the list below. The log contains the following records, among others:
-Date and time
-Event type
-Account name (user identification)
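For example, in the %SYS namespace the audit log is exposed through the %SYS.Audit class and can be queried with SQL. A quick sketch from a terminal (column names should be verified against your version's class reference):
%SYS>Set rs=##class(%SQL.Statement).%ExecDirect(,"SELECT TOP 10 UTCTimeStamp,Event,Username FROM %SYS.Audit ORDER BY UTCTimeStamp DESC")
%SYS>Do rs.%Display()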
Access to audit data is managed by the %DB_CACHEAUDIT resource. To disable public access to this resource, you need to make sure that both Read and Write operations are closed for Public access in its properties. Access to the list of resources is provided through the system management portal [Home] > [Security Management] > [Resources] > [Edit Resource]. Select the necessary resource, then click the Edit link.
By default, the %DB_CACHEAUDIT resource has the same-name role %DB_CACHEAUDIT. To limit access to logs, you need to define a list of users with this role, which can be done in the system management portal:
**[Home] > [Security Management] > [Roles] > [Edit Role]**, then use the Edit button in the %DB_CACHEAUDIT role
##2.2 List of logged event types
###2.2.1 Logging of access to tables containing bank card details (PCI DSS 10.2.1)
Logging of access to tables (datasets) containing bank card data is performed with the help of the following mechanisms:
>1. A system auditing mechanism that makes records of the “ResourceChange” type whenever access rights are changed for a resource responsible for storing bank card information (access to the audit log is provided from the system management portal: [Home] > [Security Management] > [Auditing] > [View Audit Database]);
>2. On the application level, it is possible to log access to a particular record by registering an application event in the system and calling it from your application when a corresponding event takes place.
**[System] > [Security Management] > [Configure User Audit Events] > [Edit Audit Event]**
###2.2.2 Logging attempts to use administrative privileges (PCI DSS 10.2.2)
The Caché DBMS logs the actions of all users; the logging is configured by specifying the events that should be logged: **[Home] > [Security Management] > [Auditing] > [Configure System Events]**
Logging of all system events needs to be enabled.
###2.2.3 Logging of event log changes (PCI DSS 10.2.3)
The Caché DBMS uses a single audit log that cannot be changed, except for the natural growth of its content and error entries, log purging, and changes to the set of audited events, which add corresponding AuditChange entries to the log.
The task of logging the AuditChange event is accomplished by enabling the auditing of all events (see 2.2.2).
###2.2.4 Logging of all unsuccessful attempts to obtain logical access (PCI-DSS 10.2.4)
The task of logging an unsuccessful attempt to obtain logical access is accomplished through enabling the auditing of all events (see 2.2.2).
When an attempt to obtain logical access is registered, a LoginFailure event is created in the audit log.
###2.2.5 Logging of attempts to obtain access to the system (PCI DSS 10.2.5)
The task of logging an attempt to access the system is accomplished by enabling the auditing of all events (see 2.2.2).
When an unsuccessful attempt to obtain access is registered, a “LoginFailure” event is created in the audit log. A successful log-in creates a “Login” event in the log.
###2.2.6 Logging of audit parameter changes (PCI DSS 10.2.6)
The task of logging changes in audit parameters is accomplished by enabling the auditing of all events (see 2.2.2).
When audit parameters are changed, an “AuditChange” event is created in the audit log.
###2.2.7 Logging of the creation and deletion of system objects (PCI DSS 10.2.7)
The Caché DBMS logs the creation, modification, and removal of the following system objects: roles, privileges, resources, users.
The task of logging the creation and deletion of system objects is accomplished by enabling the auditing of all events (see 2.2.2).
When a system object is created, changed or removed, the following events are added to the audit log: “ResourceChange”, “RoleChange”, “ServiceChange”, “UserChange”.
##2.3 Protection of event logs
You need to make sure that access to the %DB_CACHEAUDIT resource is restricted. That is, only the admin and those responsible for log monitoring have read and write rights to this resource.
Following the recommendations above, I have managed to install Caché in the maximum security mode. To demonstrate compliance with the requirements of PCI DSS section 8.2.5 “Forbid the use of old passwords”, I created a small application that will be launched by the system when the user attempts to change the password and will validate whether it has been used before.
To install this program, you need to import the source code using Caché Studio, Atelier, or the class import page in the management portal.
ROUTINE PASSWORD
PASSWORD ; password verification program
#include %occInclude
CHECK(Username,Password) PUBLIC {
if '$match(Password,"(?=.*[0-9])(?=.*[a-zA-Z]).{7,}") quit $$$ERROR($$$GeneralError,"Password does not match the standard PCI_DSS_v3.2")
set Remember=4 ; the number of most recent passwords that cannot be used according to PCI-DSS
set GlobRef="^PASSWORDLIST" ; The name of the global link
set PasswordHash=$System.Encryption.SHA1Hash(Password)
if $d(@GlobRef@(Username,"hash",PasswordHash)){
quit $$$ERROR($$$GeneralError,"This password has already been used ")
}
set hor=""
for i=1:1 {
; Traverse the nodes chronologically from new to old ones
set hor=$order(@GlobRef@(Username,"datetime",hor),-1)
quit:hor=""
; Delete the old one that’s over the limit
if i>(Remember-1) {
set hash=$g(@GlobRef@(Username,"datetime",hor))
kill @GlobRef@(Username,"datetime",hor)
kill:hash'="" @GlobRef@(Username,"hash",hash)
}
}
; Save the current one
set @GlobRef@(Username,"hash",PasswordHash)=$h
set @GlobRef@(Username,"datetime",$h)=PasswordHash
quit $$$OK
}
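A quick way to sanity-check the routine from a terminal before wiring it up (the user and passwords here are hypothetical; a $$$OK status displays as 1):
%SYS>Write $$CHECK^PASSWORD("jsmith","Abcdef12")
1
%SYS>Write $System.Status.GetErrorText($$CHECK^PASSWORD("jsmith","short"))
ERROR #5001: Password does not match the standard PCI_DSS_v3.2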
Let’s save the name of the program in the management portal.

It so happened that my production configuration was different from the test one not only in terms of security but also in terms of users. In my case, there were thousands of them, which made it impossible to create a new user by copying the settings of an existing one.

DBMS developers limited list output to 1000 elements. After talking to [the InterSystems WRC technical support service](https://login.intersystems.com/login/SSO.UI.Login.cls?referrer=https%253A//wrc.intersystems.com/wrc/login.csp), I learned that the problem could be solved by creating a special global node in the system area using the following command:
%SYS>set ^CacheTemp.MgtPortalSettings($Username,"MaxUsers")=5000
This is how you can increase the number of users shown in the dropdown list. I explored this global a bit and found a number of other useful settings for the current user. However, there is a certain inconvenience: this global is mapped to the temporary CacheTemp database and will be removed after the system restarts. The problem can be solved by saving the global before the system shuts down and restoring it after the system restarts.
To this end, I wrote [two programs](http://docs.intersystems.com/ens20172/csp/docbook/DocBook.UI.Page.cls?KEY=GSTU_customize#GSTU_customize_startstop), ^%ZSTART and ^%ZSTOP, with the required functionality.
The source code of the %ZSTOP program
%ZSTOP() {
Quit
}
/// save users’ preferences in a non-killable global
SYSTEM() Public {
merge ^tmpMgtPortalSettings=^CacheTemp.MgtPortalSettings
quit
}
The source code of the %ZSTART program
%ZSTART() {
Quit
}
/// restore users’ preferences from a non-killable global
SYSTEM() Public {
if $data(^tmpMgtPortalSettings) merge ^CacheTemp.MgtPortalSettings=^tmpMgtPortalSettings
quit
}
Going back to security and the requirements of the standard, we can’t ignore the backup procedure. The PCI DSS standard imposes certain requirements for backing up both data and event logs. In Caché, all logged events are saved to the CACHEAUDIT database that can be included in the list of backed up databases along with other ones.
The Caché DBMS comes with several pre-configured backup jobs, but they didn’t always work for me. Every time I needed something particular for a project, it wasn’t there in “out-of-the-box” jobs. In one project, I had to automate the control over the number of backup copies with an option of automatic purging of the oldest ones. In another project, I had to estimate the size of the future backup file. In the end, I had to write my own backup task.
CustomListBackup.cls
Include %occKeyword
/// Backup task class
Class App.Task.CustomListBackup Extends %SYS.Task.Definition [ LegacyInstanceContext ]
{
/// If ..AllDatabases=1, include all databases into the backup copy ..PrefixIncludeDB and ..IncludeDatabases are ignored
Property AllDatabases As %Integer [ InitialExpression = 0 ];
/// If ..AllDatabases=1, include all databases into the backup copy, excluding from ..IgnoreForAllDatabases (comma-delimited)
Property IgnoreForAllDatabases As %String(MAXLEN = 32000) [ InitialExpression = "Not applied if AllDatabases=0 " ];
/// If ..IgnoreTempDatabases=1, exclude temporary databases
Property IgnoreTempDatabases As %Integer [ InitialExpression = 1 ];
/// If ..IgnorePreparedDatabases=1, exclude pre-installed databases
Property IgnorePreparedDatabases As %Integer [ InitialExpression = 1 ];
/// If ..AllDatabases=0 and PrefixIncludeDB is not empty, we will be backing up all databases starting with ..PrefixIncludeDB
Property PrefixIncludeDB As %String [ SqlComputeCode = {S {*}=..ListNS()}, SqlComputed ];
/// If ..AllDatabases=0, back up all databases from ..IncludeDatabases (comma-delimited)
Property IncludeDatabases As %String(MAXLEN = 32000) [ InitialExpression = {"Not applied if AllDatabases=1"_..ListDB()} ];
/// Name of the task on the general list
Parameter TaskName = "CustomListBackup";
/// Path for the backup file
Property DirBackup As %String(MAXLEN = 1024) [ InitialExpression = {##class(%File).NormalizeDirectory("Backup")} ];
/// Path for the log
Property DirBackupLog As %String(MAXLEN = 1024) [ InitialExpression = {##class(%File).NormalizeDirectory("Backup")} ];
/// Backup type (Full, Incremental, Cumulative)
Property TypeBackup As %String(DISPLAYLIST = ",Full,Incremental,Cumulative", VALUELIST = ",Full,Inc,Cum") [ InitialExpression = "Full", SqlColumnNumber = 4 ];
/// Backup file name prefix
Property PrefixBackUpFile As %String [ InitialExpression = "back" ];
/// The maximum number of backup files, delete the oldest ones
Property MaxBackUpFiles As %Integer [ InitialExpression = 3 ];
ClassMethod DeviceIsValid(Directory As %String) As %Status
{
If '##class(%Library.File).DirectoryExists(Directory) quit $$$ERROR($$$GeneralError,"Directory does not exist")
quit $$$OK
}
ClassMethod CheckBackup(Device, MaxBackUpFiles, del = 0) As %Status
{
set path=##class(%File).NormalizeFilename(Device)
quit:'##class(%File).DirectoryExists(path) $$$ERROR($$$GeneralError,"Folder "_path_" does not exist")
set max=MaxBackUpFiles
set result=##class(%ResultSet).%New("%File:FileSet")
set st=result.Execute(path,"*.cbk",,1)
while result.Next()
{ If result.GetData(2)="F" {
continue:result.GetData(3)=0
set ts=$tr(result.GetData(4),"-: ")
set ts(ts)=$lb(result.GetData(1),result.GetData(3))
}
}
#; Let’s traverse all the files starting from the newest one
set i="" for count=1:1 { set i=$order(ts(i),-1) quit:i=""
#; Get the increase in bytes as a size difference with the previous backup
if $data(size),'$data(delta) set delta=size-$lg(ts(i),2)
#; Get the size of the most recent backup file in bytes
if '$data(size) set size=$lg(ts(i),2)
#; If the number of backup files is greater than or equal to the upper limit, delete the oldest ones along with their logs
if del,count'<max {
set file=$lg(ts(i),1)
do ##class(%File).Delete(file)
do ##class(%File).Delete($piece(file,".cbk",1)_".log")
}
}
#; Compare the estimated size of the new backup file (in bytes) with the available disk space
do ##class(%File).GetDirectorySpace(path,.free,.total,0)
quit:($g(size)+$g(delta))>$g(free) $$$ERROR($$$GeneralError,"Estimated size of the new backup file is larger than the available disk space:("_$g(size)_"+"_$g(delta)_")>"_$g(free))
quit $$$OK
}
Method OnTask() As %Status
{
do $zu(5,"%SYS")
set list=""
merge oldDBList=^SYS("BACKUPDB")
kill ^SYS("BACKUPDB")
#; Adding new properties for the backup task
set status=$$$OK
try {
##; Check the number of database copies, delete the oldest one, if necessary
##; Check the remaining disk space and estimate the size of the new file
set status=..CheckBackup(..DirBackup,..MaxBackUpFiles,1)
quit:$$$ISERR(status)
#; All databases
if ..AllDatabases {
set vals=""
set disp=""
set rss=##class(%ResultSet).%New("Config.Databases:List")
do rss.Execute()
while rss.Next(.sc) {
if ..IgnoreForAllDatabases'="",(","_..IgnoreForAllDatabases_",")[(","_$zconvert(rss.Data("Name"),"U")_",") continue
if ..IgnoreTempDatabases continue:..IsTempDB(rss.Data("Name"))
if ..IgnorePreparedDatabases continue:..IsPreparedDB(rss.Data("Name"))
set ^SYS("BACKUPDB",rss.Data("Name"))=""
}
}
else {
#; if the PrefixIncludeDB property is not empty, we’ll back up all DB’s with names starting from ..PrefixIncludeDB
if ..PrefixIncludeDB'="" {
set rss=##class(%ResultSet).%New("Config.Databases:List")
do rss.Execute(..PrefixIncludeDB_"*")
while rss.Next(.sc) {
if ..IgnoreTempDatabases continue:..IsTempDB(rss.Data("Name"))
set ^SYS("BACKUPDB",rss.Data("Name"))=""
}
}
#; Include particular databases into the list
if ..IncludeDatabases'="" {
set rss=##class(%ResultSet).%New("Config.Databases:List")
do rss.Execute("*")
while rss.Next(.sc) {
if ..IgnoreTempDatabases continue:..IsTempDB(rss.Data("Name"))
if (","_..IncludeDatabases_",")'[(","_$zconvert(rss.Data("Name"),"U")_",") continue
set ^SYS("BACKUPDB",rss.Data("Name"))=""
}
}
}
do ..GetFileName(.backFile,.logFile)
set typeB=$zconvert($e(..TypeBackup,1),"U")
set:"FIC"'[typeB typeB="F"
set res=$$BACKUP^DBACK("",typeB,"",backFile,"Y",logFile,"NOINPUT","Y","Y","","","")
if 'res set status=$$$ERROR($$$GeneralError,"Error: "_res)
} catch { set status=$$$ERROR($$$GeneralError,"Error: "_$ze)
set $ze=""
}
kill ^SYS("BACKUPDB")
merge ^SYS("BACKUPDB")=oldDBList
quit status
}
/// Get file names
Method GetFileName(ByRef aBackupFile, ByRef aLogFile) As %Status
{
set tmpName=..PrefixBackUpFile_"_"_..TypeBackup_"_"_$s(..AllDatabases:"All",1:"List")_"_"_$zd($h,8)_$tr($j($i(cnt),3)," ",0)
do {
s aBackupFile=##class(%File).NormalizeFilename(..DirBackup_"/"_tmpName_".cbk")
} while ##class(%File).Exists(aBackupFile)
set aLogFile=##class(%File).NormalizeFilename(..DirBackupLog_"/"_tmpName_".log")
quit 1
}
/// Check if the database is pre-installed
ClassMethod IsPreparedDB(name)
{
if (",ENSDEMO,ENSEMBLE,ENSEMBLEENSTEMP,ENSEMBLESECONDARY,ENSLIB,CACHESYS,CACHELIB,CACHETEMP,CACHE,CACHEAUDIT,DOCBOOK,USER,SAMPLES,")[(","_$zconvert(name,"U")_",") quit 1
quit 0
}
/// Check if the database is temporary
ClassMethod IsTempDB(name)
{
quit:$zconvert(name,"U")["TEMP" 1
quit:$zconvert(name,"U")["SECONDARY" 1
quit 0
}
/// Get a comma-delimited list of databases
ClassMethod ListDB()
{
set list=""
set rss=##class(%ResultSet).%New("Config.Databases:List")
do rss.Execute()
while rss.Next(.sc) {
set list=list_","_rss.Data("Name")
}
quit list
}
ClassMethod ListNS() [ Private ]
{
set disp=""
set tRS = ##class(%ResultSet).%New("Config.Namespaces:List")
set tSC = tRS.Execute()
While tRS.Next() {
set disp=disp_","_tRS.GetData(1)
}
set %class=..%ClassName(1)
$$$comSubMemberSet(%class,$$$cCLASSproperty,"PrefixIncludeDB",$$$cPROPparameter,"VALUELIST",disp)
quit ""
}
ClassMethod oncompile() [ CodeMode = generator ]
{
$$$defMemberKeySet(%class,$$$cCLASSproperty,"PrefixIncludeDB",$$$cPROPtype,"%String")
set updateClass=##class("%Dictionary.ClassDefinition").%OpenId(%class)
set updateClass.Modified=0
do updateClass.%Save()
do updateClass.%Close()
}
}
All our major concerns are addressed here:
limitation of the number of copies,
removal of old copies,
estimation of the size of the new file,
different methods of selecting or excluding databases from the list.
Let’s import it into the system and create a new task using the Task manager.
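If you prefer scripting to clicking through the portal, the task can also be created programmatically. A sketch (run in %SYS; verify the %SYS.Task property names and values against the class reference for your version):
Set task=##class(%SYS.Task).%New()
Set task.Name="CustomListBackup nightly"    // hypothetical task name
Set task.TaskClass="App.Task.CustomListBackup"
Set task.NameSpace="%SYS"
Set task.TimePeriod=0                       // 0 = daily
Set task.DailyFrequency=0                   // run once per day
Set sc=task.%Save()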
And include the database into the list of copied databases.
All of the examples above are provided for Caché 2016.1 and are intended for educational purposes only. They can only be used in a product system after serious testing. I will be happy if this code helps you do your job better or avoid making mistakes.
[Github repository](https://github.com/SergeyMi37/ExampleBackupTask)
>The following materials were used for writing this article:
>1. Caché Security Administration Guide (InterSystems)
>2. Caché Installation Guide. Preparing for Caché Security (InterSystems)
>3. Caché System Administration Guide (InterSystems)
>4. Introduction to Caché. Caché Security (InterSystems)
>5. PCI DSS.RU. Requirements and the security audit procedure. Version 3.2
Great article. I found this too: “The Caché DBMS comes with several pre-configured backup jobs, but they didn’t always work for me. Every time I needed something particular for a project, it wasn’t there in ‘out-of-the-box’ jobs. In one project, I had to automate the control over the number of backup copies with an option of automatic purging of the oldest ones.” I thought I’d share the class we use; it only deletes old backups:
Class App.PurgeOldBackupFiles Extends %SYS.Task.Definition
{
Property BackupsToKeep As %Integer(MAXVAL = 30, MINVAL = 1) [ InitialExpression = 30, Required ];
Property BackupFolder As %String [ Required ];
Property BackupFileType As %String [ Required ];
Method OnTask() As %Status
{
//s BackupsToKeep = 2
//s Folder = "c:\backupfolder"
//s BackupFileType = "FullAllDatabases" // or "FullDBList"
s SortOrder = "DateModified"
If ..BackupsToKeep<1 Quit $$$ERROR($$$GeneralError,"Invalid - Number of Backups to Keep must be greater than or equal to 1")
If ..BackupFolder="" Quit $$$ERROR($$$GeneralError,"Invalid - BackupFolder - not supplied")
if ..BackupFileType = "" Quit $$$ERROR($$$GeneralError,"Invalid - BackupFileType - not supplied")
if (..BackupFileType '= "FullAllDatabases")&&(..BackupFileType '= "FullDBList") Quit $$$ERROR($$$GeneralError,"Invalid - BackupFileType")
s BackupCount=0
k zPurgeOldBackupFiles(..BackupFileType)
Set rs=##class(%ResultSet).%New("%Library.File:FileSet")
w !,"backuplist",!
s BackupFileWildcard = ..BackupFileType _ "*.cbk"
set status=rs.Execute(..BackupFolder, BackupFileWildcard, SortOrder)
WHILE rs.Next() {
Set FullFileName=rs.Data("Name")
Set FName=##class(%File).GetFilename(FullFileName)
Set FDateTime=##class(%File).GetFileDateModified(FullFileName)
w "File "_FName_" "_FDateTime,!
Set FDate=$PIECE(FDateTime,",")
Set CDate=$PIECE($H,",")
s BackupCount=$I(BackupCount)
s zPurgeOldBackupFiles(..BackupFileType, BackupCount)=FullFileName
}
s zPurgeOldBackupFiles(..BackupFileType, "BackupCount")=BackupCount
do rs.Close()
if BackupCount > ..BackupsToKeep {
for i=1:1:BackupCount-..BackupsToKeep {
s FullFileName = zPurgeOldBackupFiles(..BackupFileType, i)
d ##class(%File).Delete(FullFileName)
w "File Purged "_FullFileName_" Purged",!
}
}
q status
}
}
Thanks!
Announcement
David Reche · Jan 22, 2018
Come to Barcelona and join us!!
Agenda
09:00 – 09:30
Registration
General/Management Sessions
09:30 – 09:45
Welcome (Jordi Calvera)
09:45 – 10:15
Your Success – Our Success (Jordi Calvera)
10:15 – 11:00
Choosing a DBMS to Build Something that Matters on the Third Platform (IDC, Philip Carnelley)
11:00 – 11:45
InterSystems IRIS Data Platform (Industries & in Action) (Joe Lichtenberg)
11:45 – 12:15
Coffee Break
12:15 – 13:00
InterSystems Guide to the Data Galaxy (Benjamin de Boe)
13:00 – 13:20
InterSystems Worldwide Support: A Competitive Advantage (Stefan Schulte Strathaus)
13:20 – 13:50
Developers Community Meet Up (Evgeny Shvarov & Francisco J. López)
13:50 – 14:00
Morning Sessions Closing (Jordi Calvera)
14:00 – 15:15
Lunch & Networking
Technical Sessions
15:15 – 16:00
Docker Containers (José Tomás Salvador)
16:00 – 16:30
ICM – InterSystems Cloud Manager* (Luca Ravazzolo)
16:30 – 17:00
Scalability & Sharding (Pierre Yves Duquesnoy)
17:00 – 17:15
Coffee Break
17:15 – 17:45
Interacting Faster and with More Technologies (David Reche)
17:45 – 18:15
Atelier: Fast and Intuitive Development Environment (Alberto Fuentes)
18:15
Q&A Panel (Jordi Calvera)
Closing & Build Walkout Video
David, I edited your post, just to give a bit more information, and to be more than just Twitter. Don't miss my session ;)
Thanks, Dmitry, it is a good idea.
Of course!! For sure the best one.
I'll be there, sure.
Announcement
James Schultz · Jun 14, 2018
Hi Community!
Come join us at DeveloperWeek in NYC on June 18-20!
InterSystems has signed on for a high-level sponsorship and exhibitor space at this year's DeveloperWeek, billed as "New York City's Largest Developer Conference & Expo". This is the first time we have participated in the event, which organizers expect will draw more than 3,000 developers from June 18th to 20th.
“The world is changing rapidly, and our target audience is far more diverse in terms of roles and interests than it used to be... DeveloperWeek NYC is a gathering of people who create applications for a living, and we want developers to see the power and capabilities of InterSystems. We need to know them, and they need to know us, as our software can be a foundation for their success,” says Marketing Vice President Jim Rose.
The main feature at InterSystems booth 812 is the new sandbox experience for the InterSystems IRIS Data Platform™. Meanwhile, Director of Data Platforms Product Management @Iran.Hutchinson is delivering two presentations on the conference agenda. One, "GraalVM: What it is. Why it's important. Experiments you should try today", will be on the Main Stage on June 19 between 11:00 a.m. and 11:20 a.m. GraalVM is an open-source set of projects driven by Oracle Labs, the Institute for Software at Kepler University Linz, Austria, and a community of contributors.
On the following day, Hutchinson will lead a follow-on presentation to Frictionless Systems founder Carter Jernigan's productivity-boosting "Power Up with Flow: Get “In the Zone” to Get Things Done", which runs from 11:00 a.m. to 11:45 a.m. on Workshop Stage 2. In "Show and Tell Your Tips and Techniques – and Win in Powers of 2!" he leads an open exchange of productivity ideas, tips, and innovations, culminating in prizes to the "Power of 2" for the best ideas. If you are attending, it takes place between 11:45 a.m. and 12:30 p.m., also on Workshop Stage 2.
Don't forget to check these useful links:
All details about DeveloperWeek NYC
The original agenda
Register now and see you in New York!
Very cool
Thanks! We're very excited to be participating!
Announcement
Michelle Spisak · Apr 30, 2018
When you hear people talk about moving their applications to the cloud, are you unsure of what exactly they mean? Do you want a solution for migrating your local, physical servers to a flexible, efficient cloud infrastructure? Join Luca Ravazzolo for Introducing InterSystems Cloud Manager (May 17th, 2:00 p.m. EDT). In this webinar, Luca, Product Manager for InterSystems Cloud Manager, will explain cloud technology and how you can move your InterSystems IRIS infrastructure to the cloud in an operationally agile fashion. He will also answer your questions about this great new product from InterSystems following the webinar!
Thanks Michelle! I'm happy to answer any questions anybody may have on the webinar, where I presented InterSystems Cloud Manager, and generally on the improvement an organization can achieve in its software factory with the newly available technologies from InterSystems.
This webinar recording has been posted: https://learning.intersystems.com/course/view.php?name=IntroICMwebinar
And now this webinar recording is available on the InterSystems Developers YouTube Channel. Please welcome!
Announcement
Jon Jensen · Jun 5, 2018
Hi Community!
Save the date for InterSystems Global Summit 2018! This year, Global Summit takes place Sept. 30 – Oct. 3, 2018 at the JW Marriott Hill Country Resort & Spa in San Antonio, TX.
Registration for Global Summit is now open!
Empowering What Matters
InterSystems Global Summit 2018 is all about empowering you, because you and the applications you build with our technology MATTER. It is an unparalleled opportunity to connect with your peers and with InterSystems’ executives and technical experts.
InterSystems Global Summit is an annual gathering for everyone in the InterSystems community. It is composed of three conferences:
Solution Developer Conference
Technology Leadership Conference
Healthcare Leadership Conference
The super early bird registration rate of $999 is available until August 10, 2018.
Register now. See you at InterSystems Global Summit 2018!
Announcement
Evgeny Shvarov · May 1, 2017
Have you ever thought about leveraging IIS (Internet Information Services for Windows) to improve performance and security for your Caché web applications? Are you worried about the complexity of properly setting up IIS?
See the webinar Configuring a Web Server presented by @Kyle.Baxter, InterSystems Senior Support Specialist. Learn how to install IIS, set it up to work with the CSP Gateway, and configure the CSP Gateway to talk to Caché.
If you have not subscribed to our Developer Community YouTube Channel yet, let's get started right now. Enjoy!
Announcement
Janine Perkins · May 5, 2017
Take this online course to learn the basics of SAML (Security Assertion Markup Language), the ways in which it can be used within Caché security features, and some use cases that can be applied to HealthShare productions.Audience: This course is for Caché and HealthShare developers who may want to use SAML as a part of their security strategy, want to learn SAML basics, and the capabilities of Caché and HealthShare for using SAML.Learn more.
Question
Kishan Ravindran · Jul 1, 2017
In my Caché Studio, I couldn't find a namespace for iKnow, so how can I check whether the version I am using is compatible? If I don't have one, can I create a new namespace in Studio? I checked how it can be done in the InterSystems documentation, but I think the license does not apply for my user. Is there any other way I can work on this? Can anyone help me?
Thank you, John Murray and Benjamin DeBoe, for your answers. Both your documents were helpful.
Here's a way of discovering if your license includes the iKnow feature:
USER>w $system.License.GetFeature(11)
1
USER>
A return value of 1 indicates that you are licensed for iKnow. If the result is 0, then your license does not include iKnow. See here for documentation about this method, which tells you that 11 is the feature number for iKnow. Regarding namespaces, these are created in the Portal, not in Studio. See this documentation.
Thanks John, indeed, you'd need a proper license in order to work with iKnow. If the method referred to above returns 0, please contact your sales representative to request a temporary trial license and appropriate assistance for implementing your use case. Also, iKnow doesn't come as a separate namespace. You can create (regular) namespaces as you prefer and use them to store iKnow domain data. You may need to enable your web application for iKnow, which is disabled by default for security reasons, in the same way DeepSee is. See this paragraph here for more details.
Announcement
Evgeny Shvarov · Jul 24, 2017
Hi, Community!
Recently we announced a Developer Community Meetup in Cambridge, MA. Let me describe what it is.
A meetup is a one-evening event where you can enjoy sessions by presenters (today InterSystems engineers; tomorrow you could be a presenter), chat with local developers and InterSystems developers, engineers, and product managers (depending on who is in town), and discuss your problems, experience, and success with InterSystems technologies.
InterSystems provides the place, technical sessions, and some food and beverages.
We think the Developer Community will benefit from it. We can make the event a regular thing and introduce it in different cities and countries. The success of the event depends on whether you come.
Do you want an InterSystems meetup?
See you on the 8th of August at the Cambridge Innovation Center!
Announcement
Evgeny Shvarov · Aug 7, 2018
Hi, Community!
Just want to let you know that InterSystems IRIS is available on Google Cloud Marketplace.
Start here to get your InterSystems IRIS VM on GCP.
You can request a license for InterSystems IRIS here.
Learn more in the official press release. Watch the “Create an IRIS instance in 5 minutes” video: https://youtu.be/f_uVe_Q5X-c
Article
Gevorg Arutiunian · Sep 4, 2018

I already talked about [GraphQL](https://community.intersystems.com/post/graphql-intersystems-data-platforms) and the ways of using it in this article. Now I am going to tell you about the tasks I was facing and the results that I managed to achieve in the process of implementing GraphQL for InterSystems platforms.
## What this article is about
- Generation of an [AST](https://en.wikipedia.org/wiki/Abstract_syntax_tree) for a GraphQL request and its validation
- Generation of documentation
- Generation of a response in the JSON format
Let’s take a look at the entire process from sending a request to receiving a response using this simple flow as an example:

The client can send requests of two types to the server:
- A schema request.
The server generates a schema and returns it to the client. We’ll cover this process later in this article.
- A request to fetch/modify a particular data set.
In this case, the server generates an AST, validates and returns a response.
## AST generation
The first task that we had to solve was to parse the received GraphQL request. Initially, I wanted to find an external library and use it to parse the request and generate an AST. However, I discarded the idea for a number of reasons: it is yet another black box, and you should also keep the issue of long callbacks in mind.
That's how I ended up deciding to write my own parser. But where do I get its description? Things got better when I realized that [GraphQL](http://facebook.github.io/graphql/October2016/) is an open-source project with a pretty good specification from Facebook. I also found multiple examples of parsers written in other languages.
You can find a description of an AST [here](http://facebook.github.io/graphql/October2016/#Document).
Let’s take a look at a sample request and tree:
```
{
Sample_Company(id: 15) {
Name
}
}
```
**AST**
```
{
"Kind": "Document",
"Location": {
"Start": 1,
"End": 45
},
"Definitions": [
{
"Kind": "OperationDefinition",
"Location": {
"Start": 1,
"End": 45
},
"Directives": [],
"VariableDefinitions": [],
"Name": null,
"Operation": "Query",
"SelectionSet": {
"Kind": "SelectionSet",
"Location": {
"Start": 1,
"End": 45
},
"Selections": [
{
"Kind": "FieldSelection",
"Location": {
"Start": 5,
"End": 44
},
"Name": {
"Kind": "Name",
"Location": {
"Start": 5,
"End": 20
},
"Value": "Sample_Company"
},
"Alias": null,
"Arguments": [
{
"Kind": "Argument",
"Location": {
"Start": 26,
"End": 27
},
"Name": {
"Kind": "Name",
"Location": {
"Start": 20,
"End": 23
},
"Value": "id"
},
"Value": {
"Kind": "ScalarValue",
"Location": {
"Start": 24,
"End": 27
},
"KindField": 11,
"Value": 15
}
}
],
"Directives": [],
"SelectionSet": {
"Kind": "SelectionSet",
"Location": {
"Start": 28,
"End": 44
},
"Selections": [
{
"Kind": "FieldSelection",
"Location": {
"Start": 34,
"End": 42
},
"Name": {
"Kind": "Name",
"Location": {
"Start": 34,
"End": 42
},
"Value": "Name"
},
"Alias": null,
"Arguments": [],
"Directives": [],
"SelectionSet": null
}
]
}
}
]
}
}
]
}
```
## Validation
Once we receive a tree, we need to check whether its classes, properties, arguments, and their types exist on the server; that is, we need to validate it. Let's traverse the tree recursively and check whether its elements match the ones on the server. [Here's](https://github.com/intersystems-ru/GraphQL/blob/master/cls/GraphQL/Query/Validation.cls) how such a class looks.
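A minimal sketch of one such check (simplified relative to the linked class; it only verifies that a requested field exists as a property of a class, using the %Dictionary API):
```
ClassMethod ValidateField(className As %String, propName As %String) As %Status
{
    // Compiled property IDs have the form "<class>||<property>"
    If ##class(%Dictionary.CompiledProperty).%ExistsId(className_"||"_propName) {
        Quit $$$OK
    }
    Quit $$$ERROR($$$GeneralError,"Property "_propName_" not found in class "_className)
}
```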
## Schema generation
A **schema** is a type of documentation for available classes and properties, as well as a description of property types in these classes.
GraphQL implementations in other languages or technologies use resolvers to generate schemas. A resolver is a description of the types of data available on the server.
**Examples of resolvers, requests, and responses**
```
type Query {
human(id: ID!): Human
}
type Human {
name: String
appearsIn: [Episode]
starships: [Starship]
}
enum Episode {
NEWHOPE
EMPIRE
JEDI
}
type Starship {
name: String
}
```
```
{
human(id: 1002) {
name
appearsIn
starships {
name
}
}
}
```
```json
{
"data": {
"human": {
"name": "Han Solo",
"appearsIn": [
"NEWHOPE",
"EMPIRE",
"JEDI"
],
"starships": [
{
"name": "Millenium Falcon"
},
{
"name": "Imperial shuttle"
}
]
}
}
}
```
However, before we generate a schema, we need to understand its structure and find a description or, even better, examples. The first thing I did was try to find an existing schema that would help me understand its structure. Since GitHub has its own [GraphQL API](https://developer.github.com/v4/explorer/), it was easy to get one from there. But the problem was that its server side was so huge that the schema itself occupied 64 thousand lines. I really hated the idea of delving into all that and started looking for other ways of obtaining a schema.
Since our platforms are based on a DBMS, my plan for the next step was to build and start GraphQL for PostgreSQL and SQLite. With PostgreSQL, I managed to fit the schema into just 22 thousand lines, and SQLite gave me an even better result with 18 thousand lines. It was better than the starting point, but still not enough, so I kept on looking.
I ended up choosing the [NodeJS](https://graphql.org/graphql-js/) implementation: I made a build, wrote a simple resolver, and got a schema of just 1,800 lines, which was way better!
Once I had wrapped my head around the schema, I decided to generate it automatically without creating resolvers on the server in advance, since getting meta information about classes and their relationships is really easy.
To generate your own schema, you need to understand a few simple things:
- You don’t need to generate it from scratch – take one from NodeJS, remove the unnecessary stuff and add the things that you do need.
- The schema root contains a queryType entry; you need to initialize its “name” field with some value. The other two root entries (mutationType and subscriptionType) don't interest us yet, since support for them is still being implemented.
- You need to add all the available classes and their properties to the **types** array.
```
{
  "data": {
    "__schema": {
      "queryType": {
        "name": "Query"
      },
      "mutationType": null,
      "subscriptionType": null,
      "types": [...],
      "directives": [...]
    }
  }
}
```
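A root like this is exactly what a client receives for the standard GraphQL introspection query, which looks like this:
```
{
  __schema {
    queryType { name }
    mutationType { name }
    subscriptionType { name }
    types { kind name }
  }
}
```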
- First of all, you need to describe the **Query** root element, adding all the classes, their arguments, and class types to its **fields** array. This way, they will be accessible from the root element.
**Let’s take a look at two sample classes, Example_City and Example_Country**
```
{
  "kind": "OBJECT",
  "name": "Query",
  "description": "The query root of InterSystems GraphQL interface.",
  "fields": [
    {
      "name": "Example_City",
      "description": null,
      "args": [
        {
          "name": "id",
          "description": "ID of the object",
          "type": {
            "kind": "SCALAR",
            "name": "ID",
            "ofType": null
          },
          "defaultValue": null
        },
        {
          "name": "Name",
          "description": "",
          "type": {
            "kind": "SCALAR",
            "name": "String",
            "ofType": null
          },
          "defaultValue": null
        }
      ],
      "type": {
        "kind": "LIST",
        "name": null,
        "ofType": {
          "kind": "OBJECT",
          "name": "Example_City",
          "ofType": null
        }
      },
      "isDeprecated": false,
      "deprecationReason": null
    },
    {
      "name": "Example_Country",
      "description": null,
      "args": [
        {
          "name": "id",
          "description": "ID of the object",
          "type": {
            "kind": "SCALAR",
            "name": "ID",
            "ofType": null
          },
          "defaultValue": null
        },
        {
          "name": "Name",
          "description": "",
          "type": {
            "kind": "SCALAR",
            "name": "String",
            "ofType": null
          },
          "defaultValue": null
        }
      ],
      "type": {
        "kind": "LIST",
        "name": null,
        "ofType": {
          "kind": "OBJECT",
          "name": "Example_Country",
          "ofType": null
        }
      },
      "isDeprecated": false,
      "deprecationReason": null
    }
  ],
  "inputFields": null,
  "interfaces": [],
  "enumValues": null,
  "possibleTypes": null
}
```
- The second step is to go one level higher and extend the **types** array with the classes already described in the **Query** object, together with all of their properties, types, and relationships to other classes.
**Descriptions of classes**
```
{
  "kind": "OBJECT",
  "name": "Example_City",
  "description": "",
  "fields": [
    {
      "name": "id",
      "description": "ID of the object",
      "args": [],
      "type": {
        "kind": "SCALAR",
        "name": "ID",
        "ofType": null
      },
      "isDeprecated": false,
      "deprecationReason": null
    },
    {
      "name": "Country",
      "description": "",
      "args": [],
      "type": {
        "kind": "OBJECT",
        "name": "Example_Country",
        "ofType": null
      },
      "isDeprecated": false,
      "deprecationReason": null
    },
    {
      "name": "Name",
      "description": "",
      "args": [],
      "type": {
        "kind": "SCALAR",
        "name": "String",
        "ofType": null
      },
      "isDeprecated": false,
      "deprecationReason": null
    }
  ],
  "inputFields": null,
  "interfaces": [],
  "enumValues": null,
  "possibleTypes": null
},
{
  "kind": "OBJECT",
  "name": "Example_Country",
  "description": "",
  "fields": [
    {
      "name": "id",
      "description": "ID of the object",
      "args": [],
      "type": {
        "kind": "SCALAR",
        "name": "ID",
        "ofType": null
      },
      "isDeprecated": false,
      "deprecationReason": null
    },
    {
      "name": "City",
      "description": "",
      "args": [],
      "type": {
        "kind": "LIST",
        "name": null,
        "ofType": {
          "kind": "OBJECT",
          "name": "Example_City",
          "ofType": null
        }
      },
      "isDeprecated": false,
      "deprecationReason": null
    },
    {
      "name": "Name",
      "description": "",
      "args": [],
      "type": {
        "kind": "SCALAR",
        "name": "String",
        "ofType": null
      },
      "isDeprecated": false,
      "deprecationReason": null
    }
  ],
  "inputFields": null,
  "interfaces": [],
  "enumValues": null,
  "possibleTypes": null
}
```
- Third, the **types** array also contains descriptions of all the standard scalar types, such as Int and String. We add our own scalar types there, too, as shown below.
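For example, in a typical introspection result the standard String scalar is described by an entry like this (the description text varies between implementations):
```
{
  "kind": "SCALAR",
  "name": "String",
  "description": "The String scalar type represents textual data.",
  "fields": null,
  "inputFields": null,
  "interfaces": null,
  "enumValues": null,
  "possibleTypes": null
}
```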
## Response generation
Here is the most complex and exciting part: a request should generate a response. The response must be in JSON format, and its structure must match that of the request.
For each new GraphQL request, the server has to generate a class containing the logic for obtaining the requested data. A request is not considered new if only the argument values have changed: if we get a particular dataset for Moscow and then the same set for London, no new class is generated; only the argument values differ. In the end, this class contains an SQL query, and the resulting dataset is saved in JSON format with a structure matching the GraphQL request.
**An example of a request and a generated class**
```
{
  Sample_Company(id: 15) {
    Name
  }
}
```
```
Class gqlcq.qsmytrXzYZmD4dvgwVIIA [ Not ProcedureBlock ]
{

/// Executes the embedded SQL query and packs the rows into a %DynamicObject
/// whose structure matches the GraphQL request
ClassMethod Execute(arg1) As %DynamicObject
{
    set result = {"data":{}}
    set query1 = []
    #SQLCOMPILE SELECT=ODBC
    &sql(DECLARE C1 CURSOR FOR
        SELECT Name
        INTO :f1
        FROM Sample.Company
        WHERE id = :arg1)
    &sql(OPEN C1)
    &sql(FETCH C1)
    // fetch row by row until the cursor is exhausted
    While (SQLCODE = 0) {
        do query1.%Push({"Name":(f1)})
        &sql(FETCH C1)
    }
    &sql(CLOSE C1)
    set result.data."Sample_Company" = query1
    quit result
}

/// Compares the stored hash of Sample.Company with the current one,
/// so the cached class is regenerated when the source class changes
ClassMethod IsUpToDate() As %Boolean
{
    quit:$$$comClassKeyGet("Sample.Company",$$$cCLASShash)'="3B5DBWmwgoE" $$$NO
    quit $$$YES
}

}
```
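For illustration, once such a class exists, producing the response boils down to a single call. A hypothetical invocation of the generated class from the example above:
```
set result = ##class(gqlcq.qsmytrXzYZmD4dvgwVIIA).Execute(15)
write result.%ToJSON()
// expected output shape, assuming a company with id 15 exists:
// {"data":{"Sample_Company":[{"Name":"..."}]}}
```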
Here is how this process looks as a diagram:

For now, the response is generated for the following types of requests (a sample request follows the list):
- Basic
- Embedded objects (only the many-to-one relationship)
- List of simple types
- List of objects
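For example, a request like this one (using the Example_City class from the schema section above) combines basic fields with an embedded many-to-one object:
```
{
  Example_City {
    Name
    Country {
      Name
    }
  }
}
```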
Below is a diagram of the relationship types that are yet to be implemented:

## Summary
- **Response**: at the moment, you can get a dataset for relatively simple requests.
- **Automatically generated schema**: the schema is generated for stored classes accessible to the client, rather than from pre-defined resolvers.
- **A fully functional parser**: the parser is fully implemented; you can get a tree for a request of any complexity.
→ [Link to the project repository](https://github.com/intersystems-community/GraphQL)
→ [Link to the demo server](http://37.139.6.217:57773/graphiql/index.html)
Demo server doesn't work.

Thank you, I fixed this issue!
Announcement
Evgeny Shvarov · Oct 1, 2018
Hi Community! Here is the link: https://portal.xyvid.com/group/GS18 It will start now! Today is the second InterSystems Global Summit Keynotes day! Don't miss breaking news!
Announcement
David Reche · Nov 13, 2018
Hi Everyone!
We are pleased to invite you to the InterSystems Iberia Summit 2018 on 27th of November in Madrid, Spain!
The New Challenges of Connected Health Matter
Date: November 27, 2018
Place: Hotel Meliá Serrano, Madrid
Please check the original agenda of the event.
REGISTER NOW – we hope to see you at the Iberia Healthcare Summit 2018 in Madrid!
Announcement
Anastasia Dyubaylo · Jul 3, 2023
Hi Community,
It's voting time! Cast your votes for the best applications in our InterSystems Grand Prix Contest 2023:
🔥 VOTE FOR THE BEST APPS 🔥
How to vote? Details below.
Experts nomination:
InterSystems experienced jury will choose the best apps to nominate the prizes in the Experts Nomination. Please welcome our experts:
⭐️ @Alexander.Koblov, Support Specialist
⭐️ @Guillaume.Rongier7183, Sales Engineer
⭐️ @Eduard.Lebedyuk, Senior Cloud Engineer
⭐️ @Steve.Pisani, Senior Solution Architect
⭐️ @Timothy.Leavitt, Development Manager
⭐️ @Evgeny.Shvarov, Developer Ecosystem Manager
⭐️ @Dean.Andrews2971, Head of Developer Relations
⭐️ @Alexander.Woodhead, Senior Systems Developer
⭐️ @Andreas.Dieckow, Principal Product Manager
⭐️ @Aya.Heshmat, Product Specialist
⭐️ @Benjamin.DeBoe, Product Manager
⭐️ @Robert.Kuszewski, Product Manager
⭐️ @Carmen.Logue, Product Manager
⭐️ @Jeffrey.Fried, Director of Product Management
⭐️ @Luca.Ravazzolo, Product Manager
⭐️ @Raj.Singh5479, Product Manager
⭐️ @Patrick.Jamieson3621, Product Manager
⭐️ @Stefan.Wittmann, Product Manager
⭐️ @Steven.LeBlanc, Product Specialist
⭐️ @Thomas.Dyar, Product Specialist
⭐️ @Daniel.Franco, Interoperability Product Management
Community nomination:
For each user, the higher score from the two categories below is selected:

| Conditions | 1st Place | 2nd Place | 3rd Place |
|---|---|---|---|
| If you have an article posted on DC and an app uploaded to Open Exchange (OEX) | 9 | 6 | 3 |
| If you have at least 1 article posted on DC or 1 app uploaded to OEX | 6 | 4 | 2 |
| If you make any valid contribution to DC (posted a comment/question, etc.) | 3 | 2 | 1 |
| Level | 1st Place | 2nd Place | 3rd Place |
|---|---|---|---|
| VIP Global Masters level or ISC Product Managers | 15 | 10 | 5 |
| Ambassador GM level | 12 | 8 | 4 |
| Expert GM level or DC Moderators | 9 | 6 | 3 |
| Specialist GM level | 6 | 4 | 2 |
| Advocate GM level or ISC Employees | 3 | 2 | 1 |
Blind vote!
The number of votes for each app will be hidden from everyone. Once a day we will publish the leaderboard in the comments to this post.
The order of projects on the contest page will be as follows: the earlier an application was submitted to the competition, the higher it will be on the list.
P.S. Don't forget to subscribe to this post (click on the bell icon) to be notified of new comments.
To take part in the voting, you need to:

1. Sign in to Open Exchange – DC credentials will work.
2. Make any valid contribution to the Developer Community – answer or ask questions, write an article, contribute applications on Open Exchange – and you'll be able to vote. Check this post on the options to make helpful contributions to the Developer Community.

If you change your mind, cancel your choice and give your vote to another application!
Support the application you like!
Note: contest participants are allowed to fix bugs and make improvements to their applications during the voting week, so don't miss out and subscribe to application releases!

So! After the first day of voting, we have:
Community Nomination, Top 5
oex-vscode-snippets-template by @John.Murray
irisChatGPT by @Muhammad.Waseem
oex-mapping by @Robert.Cemper1003
irisapitester by @Daniel.Aguilar
DevBox by @Sean.Connelly
➡️ Voting is here.
Experts, we are waiting for your votes! 🔥
Participants, improve & promote your solutions!

Devs! Here are the results after two days of voting:
Community Nomination, Top 5
oex-vscode-snippets-template by @John Murray
oex-mapping by @Robert Cemper
irisChatGPT by @Muhammad Waseem
irisapitester by @Daniel Aguilar
iris-user-manager by @Oliver.Wilms
➡️ Voting is here.
So, the voting continues.
Please support the application you like!

Hi Developers! Here are the results at the moment:
Community Nomination, Top 5
iris-fhir-generative-ai by @José.Pereira
irisChatGPT by @Muhammad Waseem
oex-mapping by @Robert Cemper
IntegratedMLandDashboardSample by @珊珊.喻
oex-vscode-snippets-template by @John Murray
➡️ Voting is here.
Expert Nomination, Top 5
irisChatGPT by @Muhammad Waseem
iris-fhir-generative-ai by @José.Pereira
oex-vscode-snippets-template by @John Murray
ZProfile by @Dmitry.Maslennikov
DevBox by @Sean Connelly
➡️ Voting is here.
Don't forget to vote for your favorite app!

Hi Devs! And here are the results at the moment:
Community Nomination, Top 5
iris-fhir-generative-ai by @José Roberto Pereira
IntegratedMLandDashboardSample by @Shanshan Yu
IntegratedML-IRIS-PlatformEntryPrediction by @Zhang.Fatong
irisChatGPT by @Muhammad Waseem
oex-mapping by @Robert.Cemper1003
➡️ Voting is here.
Expert Nomination, Top 5
iris-fhir-generative-ai by @José Roberto Pereira
irisChatGPT by @Muhammad Waseem
IRIS FHIR Transcribe Summarize Export by @Ikram.Shah3431
oex-mapping by @Robert.Cemper1003
FHIR - AI and OpenAPI Chain by @Ikram.Shah3431
➡️ Voting is here.

Was there a problem with the voting earlier this week? I know of two people who voted on Monday, but when they re-checked yesterday or today, they found that their votes were no longer registered, so they had to vote again.
I think something similar happened during a previous contest.

It happened to me too. My votes were deleted, I had to vote again, and I don't know who voted for me.

I have the same issue. I voted on the 3rd of July. I've read your comment and discovered that my votes have disappeared, so I voted again now.
Dear all,
We encountered an issue with the voting and are currently working to fix it. Please don't worry - all your votes will be restored and counted in the voting.
We hope for your understanding. Thank you!

Developers! Last call! Only one day left until the end of voting!

Cast your votes for the applications you like!

@Anastasia.Dyubaylo @Irina.Podmazko I'm unable to vote because of an issue with the Open Exchange login. I'm able to log in to the Community but not to Open Exchange. I think the issue is related to my email ID, which is too long. I see an error like the one below.
```
length longer than MAXLEN allowed of 50\r\n > ERROR #5802: Datatype validation failed on property 'MP.Data.User:Referrer', with value equal to
```
Do you know if there is a workaround? Is there any alternate way in which I can share my votes with you?

Hi! Could you please try again? I've fixed this issue.

Hi Semion, thanks for fixing! I'm able to log in now, but when I try to vote it says I should be an active contributor to be able to vote. I've posted a few articles and comments already - https://community.intersystems.com/user/sowmiya-nagarajan. Do you know if these issues are related? What should I do to fix this?

The activity status is updated every few days so that new users cannot cheat with votes during the contest. You are a participant, so I can fix it for you. Done. Could you please try to vote again?

Got it, thanks Semion! I'm able to vote now.