Announcement
Anastasia Dyubaylo · Apr 12, 2019

InterSystems at DockerCon 2019: "Containerized Databases for Enterprise Applications" Session

Hi Community! We're pleased to invite you to DockerCon 2019 – the #1 container industry conference for all things Kubernetes, microservices, and DevOps. The event will be held at the Moscone Center in San Francisco from April 29 to May 2. In addition, there will be a special session, "Containerized Databases for Enterprise Applications", presented by @Thomas.Carroll, Product Specialist at InterSystems. See the details below.

Containerized Databases for Enterprise Applications | Wednesday, May 1, 12:00 PM - 12:40 PM – Room 2001
Session Track: Using Docker for IT Infra & Ops

Containers are now being used in organizations of all sizes. From small startups to established enterprises, data persistence is necessary in many mission-critical applications. "Containers are not for database applications" is a misconception, and nothing could be further from the truth. This session aims to help practitioners navigate the minefield of database containerization and avoid some of the major pitfalls that can occur. Discussion includes traditional enterprise database concerns surrounding data resilience, high availability, and storage, and how they mesh with a containerized deployment.

Speaker Bio: Joe is a Product Specialist at InterSystems, a passionate problem solver, and a container evangelist. He started his career as a solution architect for enterprise database applications before transitioning to product management. Joe is in the trenches of InterSystems' transformation to a container-first, cloud-first product strategy. When he isn't at a Linux shell he enjoys long walks on the beach, piña coladas (hold the rain), woodworking, and BBQ.

Be the first to register now!

It's really great news. And it's so cool that InterSystems has started to participate more in developer conferences. I wish I could attend all of them :)
Question
Stephan Gertsobbe · Jul 13, 2019

Report Generator for InterSystems IRIS Data Platform - What are you using?

Hi all, we are wondering if anybody has a reporting tool that is capable of using IRIS objects. I know there are tools like Crystal Reports and others out there that can read the SQL data through ODBC, but we need the capability of using object methods while running the report. Until now we were using a Java-based report generator (ReportWeaver), but since the object binding for Java doesn't exist anymore in the IRIS data platform, do any of you have an alternative report generator? Looking forward to any answers. Cheers, Stephan

No, that's not really what I meant. My question was much more generic, about the data projection of object data like listOfObjects. When you look at the projected data of those "object" collections, they are projected as $LISTBUILD lists in SQL. So the question was: is there a reporting tool out there in use that can handle the object data from IRIS? For IRIS there is no object binding anymore like there was for Caché. For Java there is cachedb.jar, and that binding doesn't exist for IRIS.

"Using object methods while running the report" is a rather generic statement. If you are using class methods (as I'd assume), you can project each class method as a stored SQL procedure too. By this, you can make them available to be used over JDBC. That could be an eventual workaround.
Question
Tom Philippi · Apr 20, 2017

Enabling SSL / TLS on an InterSystems (soap) web service, part 2

We are in the process of enabling SSL on a SOAP web service exposed via InterSystems, but are running into trouble. We have installed our certificates on our web server (Apache 2.4) and enabled SSL over the default port 57772. However, we now get an error when sending a SOAP message to the web service (it used to work over HTTP). Specifically, the CSP gateway refuses to route the message to the SOAP web service:

<SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:s="http://www.w3.org/2001/XMLSchema">
  <SOAP-ENV:Body>
    <SOAP-ENV:Fault>
      <faultcode>SOAP-ENV:Server</faultcode>
      <faultstring>CSP Gateway Error (version:2016.1.2.209.0 build:1601.1554e)</faultstring>
      <detail>
        <error xmlns="http://tempuri.org">
          <special>Systems Management</special>
          <text>Invalid Request : Cannot identify application path</text>
        </error>
      </detail>
    </SOAP-ENV:Fault>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

Probably either the CSP gateway or the web server is misconfigured. Does anyone have an idea in which direction we might look? (BTW, accessing the management portal now returns the same error, as does using SSL port 443.)

PS: this issue was also submitted to the WRC.

Tom, I presume by now you've had this answered by the WRC, but the issue is most likely that the private Apache web server that ships with Caché/Ensemble does not currently support SSL. In order to configure SSL, you would need to configure a full Apache or IIS web server, which is typically recommended for any public-facing, production-level deployment anyway.
-Steve
Article
Mark Bolinsky · Jul 1, 2016

InterSystems Example Reference Architecture for Microsoft Azure Resource Manager (ARM)

++Update: August 2, 2018

This article provides a reference architecture as a sample for providing robust, high-performing, and highly available applications based on InterSystems technologies, applicable to Caché, Ensemble, HealthShare, TrakCare, and associated embedded technologies such as DeepSee, iKnow, Zen and Zen Mojo.

Azure has two different deployment models for creating and working with resources: Azure Classic and Azure Resource Manager. The information detailed in this article is based on the Azure Resource Manager (ARM) model.

Summary

The Microsoft Azure cloud platform provides a feature-rich Infrastructure-as-a-Service (IaaS) environment fully capable of supporting all InterSystems products. Care must be taken, as with any platform or deployment model, to ensure all aspects of an environment are considered, such as performance, availability, operations, and management procedures. Specifics of each of those areas are covered in this article.

Performance

Within Azure ARM there are several options available for compute virtual machines (VMs) and associated storage. The option most directly related to InterSystems products is network-attached IaaS disks stored as VHD files in Azure page blob storage. There are several other options such as Blob (block), File, and others; however, those are more specific to an individual application's requirements rather than supporting the operations of Caché. There are two types of storage where the disks are stored: Premium and Standard. Premium storage is better suited for production workloads that require guaranteed, predictable low-latency Input/Output Operations per Second (IOPS) and throughput. Standard storage is a more economical option for non-production or archive-type workloads. Care must be taken when selecting a particular VM type because not all VM types have access to premium storage.
Virtual IP Address and Automatic Failover

Most IaaS cloud providers lack the ability to provide a Virtual IP (VIP) address, which is typically used in database failover designs. To address this, several of the most commonly used connectivity methods, specifically ECP clients and CSP Gateways, have been enhanced within Caché to no longer rely on VIP capabilities, making them mirror-aware. Connectivity methods such as xDBC, direct TCP/IP sockets, or other direct-connect protocols still require the use of a VIP. To address those, InterSystems database mirroring technology makes it possible to provide automatic failover for those connectivity methods within Azure, using APIs to interact with the Azure Internal Load Balancer (ILB) to achieve VIP-like functionality, thus providing a complete and robust high availability design within Azure. Details of this can be found in the Community article Database Mirroring without a Virtual IP address.

Backup Operations

Performing a backup using either traditional file-level or snapshot-based backups can be a challenge in cloud deployments. This can now be achieved within the Azure ARM platform using Azure Backup and Azure Automation Run Books along with InterSystems External Freeze and Thaw API capabilities to allow for true 24x7 operational resiliency and assurance of clean, regular backups. Alternatively, many of the third-party backup tools available on the market can be used by deploying backup agents within the VM itself and leveraging file-level backups in conjunction with Logical Volume Manager (LVM) snapshots.

Example Architecture

As part of this document, a sample Azure architecture is provided as a starting point for your application-specific deployment, and it can be used as a guideline for numerous deployment possibilities.
This reference architecture demonstrates a highly robust Caché database deployment, including database mirror members for high availability, application servers using InterSystems Enterprise Cache Protocol (ECP), web servers with the InterSystems CSP Gateway, and both internal and external Azure load balancers.

Azure Architecture

Deploying any Caché-based application on Microsoft Azure requires some specific considerations in certain areas. This section discusses those areas in addition to any regular technical requirements you may have for your application. Two examples are provided in this document: one based on the InterSystems TrakCare unified healthcare information system, and another based on a complete InterSystems HealthShare health informatics platform deployment including Information Exchange, Patient Index, Health Insight, Personal Community, and Health Connect.

Virtual Machines

Azure virtual machines (VMs) are available in two tiers: basic and standard. Both types offer a choice of sizes. The basic tier does not provide some capabilities available in the standard tier, such as load balancing and auto-scaling. For this reason, the standard tier is used for TrakCare deployments. Standard-tier VMs come in various sizes grouped in different series, i.e. A, D, DS, F, FS, G, and GS. The DS, GS, and new FS sizes support the use of Azure Premium Storage. Production servers typically need to use Premium Storage for reliable, low-latency, high-performance operation. For this reason, the example TrakCare and HealthShare deployment architectures detailed in this document use either FS, DS, or GS series VMs. Note that not all virtual machine sizes are available in all regions. For more details on sizes for virtual machines see: Windows Virtual Machine Sizes, Linux Virtual Machine Sizes

Storage

Azure Premium Storage is required for TrakCare and HealthShare servers.
Premium Storage stores data on Solid State Drives (SSDs) and provides high throughput at low latencies, whereas Standard Storage stores data on Hard Disk Drives (HDDs), resulting in lower performance levels. Azure Storage is a redundant and highly available system; however, it is important to note that Availability Sets currently don't provide redundancy across storage fault domains, and in rare circumstances this can lead to issues. Microsoft has mitigation workarounds and is working on making this process widely available and easier for end customers. It is advisable to work directly with your local Microsoft team to determine if any mitigation is required.

When a disk is provisioned against a premium storage account, IOPS and throughput (bandwidth) depend on the size of the disk. Each premium storage disk type has specific limits for IOPS and throughput, as specified in the following table.

Premium Disk Type    P4      P6      P10      P15      P20      P30      P40      P50
Disk Size            32GB    64GB    128GB    256GB    512GB    1024GB   2048GB   4096GB
IOPS per disk        120     240     500      1100     2300     5000     7500     7500
Throughput per disk  25MB/s  50MB/s  100MB/s  125MB/s  150MB/s  200MB/s  250MB/s  250MB/s

Note: Ensure there is sufficient bandwidth available on a given VM to drive the disk traffic. For example, a STANDARD_DS13 VM has 256 MB per second dedicated bandwidth available for all premium storage disk traffic. That means four P30 premium storage disks attached to this VM have a throughput limit of 256 MB per second, and not the 800 MB per second that four P30 disks could theoretically provide. For more details and limits on premium storage disks, including provisioned capacity, performance, sizes, IO sizes, cache hits, throughput targets, and throttling see: Premium Storage

High Availability

InterSystems recommends having two or more virtual machines in a defined Availability Set.
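The VM-level bandwidth cap described in the note above can be illustrated with a quick calculation. This is a hypothetical sketch using the figures from the table and the STANDARD_DS13 example, not an Azure API:

```python
# Aggregate premium disk throughput is capped by the VM's dedicated
# premium-storage bandwidth, whichever is lower.

P30_THROUGHPUT_MBPS = 200      # per-disk limit from the premium disk table
DS13_VM_BANDWIDTH_MBPS = 256   # dedicated premium storage bandwidth for STANDARD_DS13

def effective_throughput(num_disks, per_disk_mbps, vm_cap_mbps):
    """Return the achievable aggregate throughput given the VM bandwidth cap."""
    theoretical = num_disks * per_disk_mbps
    return min(theoretical, vm_cap_mbps)

# Four P30 disks could theoretically deliver 800 MB/s, but the DS13 VM
# caps all premium storage disk traffic at 256 MB/s.
print(effective_throughput(4, P30_THROUGHPUT_MBPS, DS13_VM_BANDWIDTH_MBPS))  # → 256
```

The same check is worth running for any VM/disk combination you plan to deploy: size the VM for the aggregate disk throughput you need, not just the disk count.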
Placing two or more VMs in an Availability Set is required because, during either a planned or unplanned maintenance event, at least one virtual machine will remain available to meet the 99.95% Azure SLA. This is important because during data center updates, VMs are brought down in parallel, upgraded, and brought back online in no particular order, leaving the application unavailable during this maintenance window. Therefore, a highly available architecture requires two of every server, i.e. load-balanced web servers, database mirrors, multiple application servers, and so on. For more information on Azure high availability best practices see: Managing Availability

Web Server Load Balancing

External and internal load-balanced web servers may be required for your Caché-based application. External load balancers are used for access over the Internet or WAN (VPN or Express Route), and internal load balancers are potentially used for internal traffic. The Azure load balancer is a Layer 4 (TCP, UDP) load balancer that distributes incoming traffic among healthy service instances in cloud services or virtual machines defined in a load balancer set. The web server load balancers must be configured with client IP address session persistence (2-tuple) and the shortest probe timeout possible, which is currently 5 seconds. TrakCare requires session persistence for the period a user is logged in. The following diagram provided by Microsoft demonstrates a simple example of the Azure Load Balancer within an ARM deployment model. For more information on Azure load balancer features such as the distribution algorithm, port forwarding, service monitoring, Source NAT, and the different types of available load balancers see: Load Balancer Overview

In addition to the Azure external load balancer, Azure provides the Azure Application Gateway. The Application Gateway is an L7 load balancer (HTTP/HTTPS) with support for cookie-based session affinity and SSL termination (SSL offload).
SSL offloading removes the encryption/decryption overhead from the web servers, since the SSL connection is terminated at the load balancer. This approach simplifies management, as the SSL certificate is deployed and managed in the gateway instead of on all the nodes in the web farm. For more information, see: Application Gateway Overview; Configure an Application Gateway for SSL offload by using Azure Resource Manager

Database Mirroring

When deploying Caché-based applications on Azure, providing high availability for the Caché database server requires the use of synchronous database mirroring within a given primary Azure region, and potentially asynchronous database mirroring to replicate data to a hot standby in a secondary Azure region for disaster recovery, depending on your uptime service level agreement requirements.

A database mirror is a logical grouping of two database systems, known as failover members, which are physically independent systems connected only by a network. After arbitrating between the two systems, the mirror automatically designates one of them as the primary system; the other automatically becomes the backup system. External client workstations or other computers connect to the mirror through the mirror Virtual IP (VIP), which is specified during mirroring configuration. The mirror VIP is automatically bound to an interface on the primary system of the mirror.

Note: In Azure, it is not possible to configure the mirror VIP, so an alternative solution has been devised. The current recommendation for deploying a database mirror in Azure is to configure three VMs (primary, backup, arbiter) in the same Azure Availability Set. This ensures that at any given time, Azure will guarantee external connectivity with at least two of these VMs with a 99.95% SLA, and that each will be in different update and fault domains. This provides adequate isolation and redundancy for the database data itself.
Additional details can be found here: Azure Availability Sets; Azure Service Level Agreements (SLAs)

A challenge within any IaaS cloud provider, including Azure, is the handling of automatic failover of client connections to the application in the absence of Virtual IP capabilities. To retain automatic failover for client connections, a couple of approaches have been taken.

Firstly, InterSystems has enhanced the CSP Gateway to be mirror-aware, so connectivity from a web server with the CSP Gateway to a database server no longer requires a VIP. The CSP Gateway will auto-negotiate with both of the mirror members and redirect to whichever is the primary mirror member. This complements the already mirror-aware capabilities of ECP clients, if you are using them.

Secondly, connectivity outside of the CSP Gateways and ECP clients still requires a VIP-like capability. InterSystems recommends the use of the polling method with the mirror_status.cxw health check status page detailed in the community article Database Mirroring without a Virtual IP address. The Azure Internal Load Balancer (ILB) provides a single IP address as a VIP-like capability to direct all network traffic to the primary mirror member; the ILB will only distribute traffic to the primary mirror member. This allows for immediate redirection when any mirror member within a mirror configuration becomes the primary member. Polling may be used in conjunction with this method in some DR scenarios using Azure Traffic Manager.

Backup and Restore

There are multiple options available for backup operations. The following three options are viable for your Azure deployment with InterSystems products. The first two options incorporate a snapshot-type procedure, which involves suspending database writes to disk prior to creating the snapshot and then resuming updates once the snapshot is successful.
The following high-level steps are taken to create a clean backup using either of the snapshot methods:

1. Pause writes to the database via the database Freeze API call.
2. Create snapshots of the OS + data disks.
3. Resume Caché writes via the database Thaw API call.
4. The backup facility archives the snapshots to the backup location.

Additional steps such as integrity checks can be added on a periodic interval to ensure clean and consistent backups. The decision on which option to use depends on the operational requirements and policies of your organization. InterSystems is available to discuss the various options in more detail.

Azure Backup

Backup operations can now be achieved within the Azure ARM platform using Azure Backup and Azure Automation Runbooks along with InterSystems External Freeze and Thaw API capabilities to allow for true 24x7 operational resiliency and assurance of clean, regular backups. Details for managing and automating Azure Backups can be found here.

Logical Volume Manager Snapshots

Alternatively, many of the third-party backup tools available on the market can be used by deploying individual backup agents within the VM itself and leveraging file-level backups in conjunction with Logical Volume Manager (LVM) snapshots. One of the major benefits of this model is the ability to have file-level restores of either Windows- or Linux-based VMs. A couple of points to note with this solution: since Azure and most other IaaS cloud providers do not provide tape media, all backup repositories are disk-based for short-term archiving, with the ability to leverage blob or bucket-type low-cost storage for long-term retention (LTR). If using this method, it is highly recommended to use a backup product that supports de-duplication technologies to make the most efficient use of disk-based backup repositories. Some examples of these backup products with cloud support include, but are not limited to: Commvault, EMC Networker, HPE Data Protector, and Veritas NetBackup.
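The freeze/snapshot/thaw sequence listed earlier can be sketched as a small orchestration routine. This is an illustrative Python sketch only: the callables are hypothetical stand-ins for the InterSystems ExternalFreeze/ExternalThaw API calls and your snapshot/archive tooling, not actual SDK functions:

```python
# Hypothetical orchestration sketch for a clean snapshot backup.
# freeze_db / thaw_db stand in for the InterSystems Freeze/Thaw API calls;
# snapshot_disks stands in for Azure Backup or LVM snapshot tooling.

def run_snapshot_backup(freeze_db, snapshot_disks, thaw_db, archive):
    """Pause database writes, snapshot OS + data disks, resume writes, archive."""
    freeze_db()                   # 1. Pause writes via the Freeze API
    try:
        snap = snapshot_disks()   # 2. Snapshot the OS + data disks
    finally:
        thaw_db()                 # 3. Always resume writes, even if snapshot fails
    archive(snap)                 # 4. Ship the snapshot to the backup location
    return snap

# Example run with no-op stand-ins that just record the call order:
events = []
run_snapshot_backup(
    freeze_db=lambda: events.append("freeze"),
    snapshot_disks=lambda: events.append("snapshot") or "snap-001",
    thaw_db=lambda: events.append("thaw"),
    archive=lambda s: events.append(f"archive {s}"),
)
print(events)  # → ['freeze', 'snapshot', 'thaw', 'archive snap-001']
```

The try/finally is the important design point: writes must be resumed even if the snapshot step fails, otherwise the database stays frozen.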
InterSystems does not validate or endorse one backup product over another.

Caché Online Backup

For small deployments, the built-in Caché Online Backup facility is also a viable option. This InterSystems database online backup utility backs up data in database files by capturing all blocks in the databases, then writes the output to a sequential file. This proprietary backup mechanism is designed to cause no downtime for users of the production system. In Azure, after the online backup has finished, the backup output file and all other files in use by the system must be copied to an Azure File share. This process needs to be scripted and executed within the virtual machine. The Azure File shares should use an Azure RA-GRS storage account for maximum availability. Note that Azure File shares have a maximum share size of 5TB, a maximum file size of 1TB, and a maximum of 60 MB/s throughput per share (shared by all clients).

Online backup is the entry-level approach for smaller sites wishing to implement a low-cost backup solution. However, as databases increase in size, external backups with snapshot technology are recommended as a best practice, with advantages including the backup of external files, faster restore times, and an enterprise-wide view of data and management tools.

Disaster Recovery

When deploying a Caché-based application on Azure, Disaster Recovery (DR) resources, including network, servers, and storage, are recommended to be in a different Azure region. The amount of capacity required in the designated DR Azure region depends on your organizational needs. In most cases 100% of the production capacity is required when operating in DR mode; however, less capacity can be provisioned and then elastically scaled up as needed. Asynchronous database mirroring is used to continuously replicate to the DR Azure region's virtual machines.
Mirroring uses database transaction journals to replicate updates over a TCP/IP network in a way that has minimal performance impact on the primary system. Compression and encryption are highly recommended for these DR asynchronous mirror members.

All external clients on the general Internet who wish to access the application will be routed through Azure Traffic Manager as a DNS service. Microsoft Azure Traffic Manager (ATM) is used as a switch to direct traffic to the currently active data center. Azure Traffic Manager supports a number of algorithms to determine how end users are routed to the various service endpoints. Details of the various algorithms can be found here. For the purpose of this document, the 'priority' traffic-routing method will be used in conjunction with Traffic Manager endpoint monitoring and failover. Details of endpoint monitoring and failover can be found here.

Traffic Manager works by making regular requests to each endpoint and then verifying the response. If an endpoint fails to provide a valid response, Traffic Manager shows its status as Degraded. It is no longer included in DNS responses, which instead return an alternative, available endpoint. In this way, user traffic is directed away from failing endpoints and toward endpoints that are available.

Using the above methods, traffic is only ever allowed to the specific region and specific mirror member that is currently primary. This is controlled by the endpoint definition, which is a mirror_status page presented from the InterSystems CSP Gateway. Only the primary mirror member will ever report "success" as an HTTP 200 from the monitor probing. The following diagram provided by Microsoft demonstrates the priority traffic-routing algorithm at a high level.

The Azure Traffic Manager yields a single endpoint, such as "https://my-app.trafficmanager.net", that all clients can connect to.
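The 'priority' traffic-routing behaviour described above can be sketched as a small selection function: probe each endpoint in priority order and return the first one whose mirror_status probe reported HTTP 200 (which only the primary mirror member will do). This is an illustrative sketch of the routing logic, not the actual Traffic Manager implementation; the endpoint URLs are hypothetical:

```python
# Illustrative sketch of priority-based endpoint selection.
# Each endpoint is (priority, url, probe_status); only the endpoint backed
# by the primary mirror member returns HTTP 200 from its mirror_status page.

def select_endpoint(endpoints):
    """Return the highest-priority endpoint whose health probe returned 200."""
    for _, url, status in sorted(endpoints, key=lambda e: e[0]):
        if status == 200:
            return url
    return None  # no healthy endpoint: all regions degraded

regions = [
    (1, "https://primary-region.my-app.example", 503),  # primary region down
    (2, "https://dr-region.my-app.example", 200),       # DR async promoted to primary
]
print(select_endpoint(regions))  # → https://dr-region.my-app.example
```

In the healthy case the primary region's endpoint reports 200 and wins on priority; after a DR promotion, the primary probe degrades and DNS responses shift to the secondary region automatically.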
In addition, an A record could be configured to provide a vanity URL such as "https://www.my-app-domain.com". The Azure Traffic Manager is configured with one profile that contains the addresses of both regions' endpoints. At any given time, only one of the regions will report online, based on the endpoint monitoring. This ensures that traffic only flows to one region at a given time. There are no added steps needed for failover between the regions, since the endpoint monitoring will detect that the application in the primary Azure region is down and that the application is now live in the secondary Azure region. This is because the DR async mirror member is promoted to primary, which then allows the CSP Gateway to report HTTP 200 to the Traffic Manager endpoint monitoring.

There are many alternatives to the above-described solution, and it can be customized based on your organization's operational requirements and service level agreements.

Network Connectivity

Depending on your application's connectivity requirements, there are multiple connectivity models available, using either the Internet, an IPsec VPN, or a dedicated link via Azure Express Route. The method to choose will depend on the application and user needs. The bandwidth available with each of the three methods varies, and it is best to check with your Azure representative or the Azure Portal for confirmation of the available connectivity options for a given region. If you are using Express Route, there are several options, including multiple circuits and multi-region access, that can be enabled for disaster recovery scenarios. It is important to work with the Express Route provider to understand the high availability and disaster recovery scenarios they support.

Security

Care needs to be taken when deciding to deploy an application with a public cloud provider. Your organization's standard security policies, or new ones developed specifically for cloud, should be followed to maintain the security compliance of your organization.
Cloud deployments have the added risk of data now residing outside client data centers and physical security control. The use of InterSystems database and journal encryption for data at rest (databases and journals) and data in flight (network communications), with AES and SSL/TLS encryption respectively, is highly recommended. As with all encryption key management, proper procedures need to be documented and followed per your organization's policies to ensure data safety and prevent unwanted data access or a security breach. When access is allowed over the Internet, third-party firewall devices may be required for extra functionality such as intrusion detection, denial-of-service protection, etc.

Architecture Diagram Examples

The diagrams below illustrate a typical Caché installation providing high availability in the form of database mirroring (both synchronous failover and DR asynchronous), application servers using ECP, and multiple load-balanced web servers.

TrakCare Example

The following diagram illustrates a typical TrakCare deployment with multiple load-balanced web servers, two EPS print servers as ECP clients, and a database mirror configuration. The Virtual IP address is only used for connectivity not associated with ECP or the CSP Gateway. The ECP clients and CSP Gateway are mirror-aware and do not require a VIP. The sample reference architecture diagram below includes high availability in the active or primary region, and disaster recovery to another Azure region if the primary Azure region is unavailable. Also within this example, the database mirrors contain the TrakCare DB, TrakCare Analytics, and Integration namespaces all within that single mirror set.

TrakCare Azure Reference Architecture Diagram - PHYSICAL ARCHITECTURE

In addition, the following diagram shows a more logical view of the architecture with the associated high-level software products installed and their functional purpose.
TrakCare Azure Reference Architecture Diagram - LOGICAL ARCHITECTURE

HealthShare Example

The following diagram illustrates a typical HealthShare deployment with multiple load-balanced web servers and multiple HealthShare products, including Information Exchange, Patient Index, Personal Community, Health Insight, and Health Connect. Each of those products includes a database mirror pair for high availability within an Azure availability set. The Virtual IP address is only used for connectivity not associated with ECP or the CSP Gateway. The CSP Gateways used for web service communications between the HealthShare products are mirror-aware and do not require a VIP. The sample reference architecture diagram below includes high availability in the active or primary region, and disaster recovery to another Azure region if the primary Azure region is unavailable.

HealthShare Azure Reference Architecture Diagram – PHYSICAL ARCHITECTURE

In addition, the following diagram shows a more logical view of the architecture with the associated high-level software products installed, connectivity requirements and methods, and the respective functional purpose.

HealthShare Azure Reference Architecture Diagram – LOGICAL ARCHITECTURE

Given that the Azure pricing for storage contains a transaction element, is there any indication as to how many of these transactions will be consumed opening or saving an object, as well as by other common actions? Obviously a simple object will use much fewer than a complex one.

This is great, Mark, excellent write-up. I ran into a similar problem a couple of years ago on AWS with the mirror VIP; I had a less sophisticated solution with a custom business service on a target production/namespace listening for a keep-alive socket from the ELB to detect which mirror member was active. I re-used it for an auto-scaling group too, as an indicator of availability we could put logic behind.
Those links up there to the routines appear broken for me; I would love to take a look at that magic. What does Azure's VPN solution look like for site-to-site connections? The diagrams above maybe suggest this is possibly bolted on to on-prem, but I'm just curious if you had any comments on that with Azure. Did you provision a DNS zone on a legible domain for internal communications? I abused a couple of *.info domains for this purpose and found that the hostnames enumerated from Caché were from the instances and not very usable for interhost communication, and they broke things like Enterprise Manager, HS Endpoint Enumeration, etc. Does Azure have an Internet gateway or a NAT solution to provide communication outbound from a single address (or fault tolerance)? The diagram for Web Server Load Balancing looks like they work for both inbound and outbound; I just wondered if that was the case. Again, excellent resource, thanks for taking the time.

Hi Matthew, thank you for your question. Pricing is tricky and best discussed with your Microsoft representative. When looking at premium storage accounts, you only pay for the provisioned disk type, not for transactions; however, there are caveats. For example, if you need only 100GB of storage, you will be charged for a P10 disk @ 128GB. A good Microsoft article to help explain the details can be found here. Regards, Mark B

Hi Ron, there are many options available for many different deployment scenarios. Specifically for the multi-site VPN, you can use the Azure VPN Gateway. Here is a diagram provided by Microsoft's documentation showing it, and here is the link as well to the multi-site VPN details. As for Internet gateways, yes, they have that concept, and the load balancers can be internal or external. You control access with network security groups, and also by using Azure Traffic Manager and Azure DNS services. There are tons of options here, and it is really up to you what and how you want to control and manage the network.
Here is a link to Azure's documentation about how to make a load balancer Internet-facing. The link to the code for some reason wasn't marked as public in the GitHub repository; I'll take care of that now. Regards, Mark B
Article
Peter Steiwer · Jan 10, 2020

Understanding Missing Relationship Build Errors in InterSystems IRIS Business Intelligence

When using Related Cubes in InterSystems IRIS BI, cubes must be built in the proper order. The One side must be built before the Many side. This is because during build time for the Many side, it looks up the record on the One side and creates a link. If the referenced record is not found on the One side, a Missing Relationship build error is generated. The One side is the independent side of the relationship, i.e. the side of the relationship that is referenced by the Many side, or Dependent cube. For example: Patients contain a reference to their Doctor. The Doctor does not contain references to each of their Patients. Doctors is the One, or Independent, side; Patients is the Many, or Dependent, side. For more information about setting up Cube Relationships, please see the documentation.

WARNING: If you rebuild the One side without rebuilding the Many side, the Many side may point to the wrong record. It is not guaranteed that a record in your cube will always have the same ID, and the relationship link that is created is based on ID. YOU MUST REBUILD THE MANY SIDE AFTER BUILDING THE ONE SIDE.

To ensure your cubes are always built in the proper order, you can use the Cube Manager. When debugging Build Errors, please also debug them in the Build Order. Errors can cascade, and you don't want to spend time debugging an error just to find out it happened because a different error happened first.

Understanding the Missing Relationship Build Error Message

```
SAMPLES>do ##class(%DeepSee.Utils).%PrintBuildErrors("RELATEDCUBES/PATIENTS")
1 Source ID: 1 Time: 01/03/2020 15:30:42
ERROR #5001: Missing relationship reference in RelatedCubes/Patients: source ID 1 missing reference to RxPrimaryCarePhysician 1744
```

Here is an example of what the Missing Relationship build error looks like. We will extract some of these values from the message to understand what is happening.
```
Missing relationship reference in [Source Cube]: source ID [Source ID] missing reference to [Related Cube Reference] [Related Source ID]
```

In our error message, we have the following values: Source Cube = RelatedCubes/Patients, Source ID = 1, Related Cube Reference = RxPrimaryCarePhysician, Related Source ID = 1744. Most of these are pretty straightforward, except for the Related Cube Reference. Sometimes the name is obvious, other times it is not. Either way, we can do a little bit of work to find the cube this reference points to.

Step 1) Find the Fact Class for the Source Cube:

```
SAMPLES>w ##class(%DeepSee.Utils).%GetCubeFactClass("RelatedCubes/Patients")
BI.Model.RelCubes.RPatients.Fact
```

Step 2) Run an SQL query to get the Fact class the Related Cube Reference is pointing to:

```
SELECT Type FROM %Dictionary.PropertyDefinition WHERE ID='[Source Cube Fact Class]||[Related Cube Reference]'
```

For example:

```
SELECT Type FROM %Dictionary.PropertyDefinition WHERE ID='BI.Model.RelCubes.RPatients.Fact||RxPrimaryCarePhysician'
```

which returns a value of BI.Model.RelCubes.RDoctors.Fact.

Step 3) Now that we have the Related Cube Fact Class, we can run an SQL query to see whether this Related Source ID has an associated fact in our Related Cube Fact Table:

```
SELECT * FROM BI_Model_RelCubes_RDoctors.Fact WHERE %SourceId=1744
```

Please note that we had to use the SQL table name instead of the class name here. This can typically be derived by replacing all "." (excluding the final ".Fact") with "_". In this case, 0 rows were returned, which means it is still the case that the required related fact does not exist in the related cube. Sometimes, after spending the time to get to this point, a synchronize may have happened to pull this new data in. At that point, the Build Error may no longer be true, but it has not yet been cleared out of the Build Errors global: regular synchronization does not clean entries in this global that have since been fixed.
The only way to clean the Build Errors global is to run a Build against the cube, OR to run the following method:

```
Do ##class(%DeepSee.Utils).%FixBuildErrors("CUBE NAME WITH ERRORS")
```

If we now had data for the previous SQL query, the %FixBuildErrors method would fix the record and clear the error.

Step 4) Since we do not have this record in our Related Cube Fact Table, we should check the Related Cube Source Table to see if the record exists. First we have to find the Related Source Class by viewing the SOURCECLASS parameter of the Related Cube Fact Class:

```
SAMPLES>w ##class(BI.Model.RelCubes.RDoctors.Fact).#SOURCECLASS
BI.Study.Doctor
```

Step 5) Now that we have the Related Source Class, we can query the Related Source Table to see if the Related Source ID exists:

```
SELECT * FROM BI_Study.Doctor WHERE %ID=1744
```

If this query returns results, you should determine why this record does not exist in the Related Cube Fact Table. This could simply be because it has not yet been synchronized. The fact could also have hit an error while being built; if so, remember to diagnose all Build Errors in the proper Build Order, since many errors can cascade from one error. If this query does not return results, you should determine why this record is missing from the Related Source Table. Perhaps some records have been deleted on the One side, but records on the Many side have not yet been reassigned or deleted. Perhaps the Cube Relationship is configured incorrectly, the Related Source ID is not the correct value, and the Cube Relationship definition should be changed. This guide is a good place to start, but please feel free to contact the WRC, who can help debug and diagnose this with you.
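The class-name-to-table-name conversion used in Step 3 can be sketched as a small shell helper. This is not part of the article's tooling, just an illustration of the naming rule; the class names are the ones used above:

```shell
# Convert a persistent class name to its SQL table name: the last "."
# separates the SQL schema from the table name, and every dot before it
# (the package part) becomes an underscore.
to_table() {
  local pkg="${1%.*}" cls="${1##*.}"
  echo "${pkg//./_}.${cls}"
}

to_table "BI.Model.RelCubes.RDoctors.Fact"   # → BI_Model_RelCubes_RDoctors.Fact
to_table "BI.Study.Doctor"                   # → BI_Study.Doctor
```

Note that the same rule also yields the source-table name queried in Step 5 (BI.Study.Doctor becomes BI_Study.Doctor).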
Announcement
Anastasia Dyubaylo · Jan 24, 2020

[February 18-19, 2020] InterSystems Iberia Summit 2020 in Barcelona

Dear Community, We're pleased to invite you to the InterSystems Iberia Summit 2020, which will take place from February 18th to 19th in Barcelona, Spain! You're more than welcome to join us at the Melia Barcelona Sarriá Hotel! InterSystems Local Summits are held in different countries and are the premier event for the local InterSystems technology communities – a gathering of companies and developers at the forefront of their respective industries. These events attract a wide range of attendees, from C-level executives to experts, managers, directors, and developers. Attendees gather to network with peers, connect with InterSystems partners, learn best practices, and get a firsthand look at upcoming features and future innovations from InterSystems. Join InterSystems Iberia Summit 2020 and keep abreast of the latest developments! Don't miss out – register for the event by clicking right here. Can't wait to see you in Barcelona! 😉 So, remember: ⏱ Time: February 18-19, 2020 📍Venue: Melia Barcelona Sarriá Hotel, Barcelona, Spain ✅ Registration: SAVE YOUR SEAT TODAY

I'm going to Barcelona this year. Catch me there if you would like to see VSCode-ObjectScript in action, if you would like to discuss new features you want to see, or if you have faced any issues. See you there!

For football fans, we will have dinner at Camp Nou, Barcelona FC's stadium :-D May I say "Messi, could you please pass me the bread?"

Great, I have some questions about VSCode-ObjectScript. It is the best occasion to ask the creator of the application directly.

Don't miss a chance to hat-trick on vodka with Messi ⚽️⚽️⚽️

Published "Developer Resources" slides from InterSystems Iberia Summit in Barcelona.
Announcement
Olga Zavrazhnova · Jan 22, 2020

Review InterSystems IRIS or Caché - get TWO $25 Visa Cards!

Hey Developers, We invite you to review InterSystems Caché or IRIS on Gartner Peer Insights and get two $25 VISA cards – one from Gartner Peer Insights and one from InterSystems! We ran this promotion in 2018-2019, and now it's great to announce it for 2020 as well! See the rules below. ✅ #1: Follow this unique link and submit a review for IRIS or Caché. ✅ #2: Make a screenshot of the headline and text of your review. ✅ #3: Upload the screenshot to this challenge on Global Masters. After your review is published you will get two $25 VISA cards! Note: • Use the unique link mentioned above in order to qualify for the gift cards. The gift cards are granted after the review is published on Gartner Peer Insights. • Quantities are limited, and Gartner must approve the survey to qualify for the gift card. Gartner will not approve reviews from resellers, systems integrators, or MSP/ISVs of InterSystems. The survey must be completed by Dec. 31, 2020. • The survey takes about 10-15 minutes. Gartner will authenticate the identity of the reviewer, but the published reviews are anonymous. You can check the status of your review and gift card in your Gartner Peer Insights reviewer profile at any time. Done? Awesome! Your cards are on the way!

Hello Olga, Are InterSystems employees allowed to submit a review, or is it only for clients and partners using InterSystems technologies? Even if it is not for the reward, at least we can review the products we are working on, for our company's fame. Regards, Jacques

Hi Jacques! InterSystems employees are not allowed to review for obvious reasons ;) But for our company's fame and to help the Developer Community, you are always welcome to contribute articles and submit apps on Open Exchange )

Hello Evgeny, True ^^ It would have been nice to say nice things on Gartner. I'm following Open Exchange carefully, and hope to have a good idea to develop and to share on it.
Cheers, Jacques

Hi Community, quick update! Gartner launched this promotion again, and if you have not published a review yet, you can get a $25 VISA gift card from Gartner! Take a few minutes and write your review of an InterSystems product: IRIS, Caché, Ensemble, HealthShare, or TrakCare (don't forget to upload a screenshot of your review to Global Masters in this challenge to also get a $25 VISA card from InterSystems after your review is published). Gartner will ask you to sign in with LinkedIn or create a free Gartner account so they can verify your identity. Note! Quantities are limited, and Gartner must approve the survey to qualify for the gift card. Gartner will not approve reviews from resellers, systems integrators, or MSP/ISVs of InterSystems. The survey must be completed by Dec. 31, 2020.

Unfortunately Gartner won't accept your review if you work for a company that is linked to InterSystems in any way, such as a VAR.

Hi David, yeah, unfortunately. I added this info to the description to notify people.
Announcement
Evgeny Shvarov · Mar 4, 2020

The 1st Programming Contest: InterSystems IRIS, Docker and ObjectScript

Hi Developers! In March we are starting our first InterSystems IRIS Programming Contest! It's a competition in creating open-source solutions using the InterSystems IRIS Data Platform. The topic for the first contest is InterSystems IRIS, Docker, and ObjectScript! The contest will last three weeks: March 9-31, 2020. Prizes: There will be money prizes for the Experts Nomination – winners will be determined by a specially selected jury: 🥇 1st place - $2,000 🥈 2nd place - $1,000 🥉 3rd place - $500 Also, there will be a Community Nomination – for the application that receives the most votes in total: 🏆 1st place - $1,000 And we provide winners with high-level badges on Global Masters. If several participants score the same number of votes, they are all considered winners and the prize money is shared among them. General requirements: 1. The application should be posted as Open Source under a certain license (e.g. MIT License). 2. The application should be approved and published on Open Exchange. 3. The application should use InterSystems IRIS or InterSystems IRIS for Health. 4. Both existing and new applications can participate in the contest. 5. One contestant can upload an unlimited number of applications. Who can participate? Each member registered in the Developer Community from any country can participate in the contest, except for InterSystems employees. Judgment: Judges for the Experts Nomination are InterSystems Product Managers, Developer Community Moderators, and Global Masters advocates with VIP, Ambassador & Expert levels. One judge can vote for only one application. The power of the vote differs: PM vote - 3 points Moderator vote - 2 points VIP GM Advocate vote - 2 points Ambassador GM Advocate vote - 1 point Expert GM Advocate vote - 1 point Judges for the Community Nomination are any registered community members who have posted at least once; each has one vote point. Judges can participate in the contest, but cannot vote for their own applications.
Judgment criteria: In the Experts Nomination, we will choose the best application which: makes the world a better place or makes the life of a developer better; has the best functionality – how much the application/library does; has readable, quality ObjectScript code. Contest Period: March 9-22, 2020: Two weeks to upload your applications to Open Exchange (during this period, you can also edit your projects). March 23-29, 2020: One week to vote. All winners will be announced on March 30th, 2020. The Topic ➡️ InterSystems ObjectScript and InterSystems IRIS in a Docker Container ⬅️ We will choose the best application built using InterSystems ObjectScript which can be launched on either InterSystems IRIS Community Edition (IRIS CE) or InterSystems IRIS for Health Community Edition (IRIS CE4H). InterSystems IRIS CE Docker and ObjectScript requirement: If we clone or download the application, it should be runnable e.g. with:

```
$ docker-compose up -d
```

The application could be implemented as a CLI, with execution in the IRIS terminal, e.g.:

```
$ docker-compose exec iris iris session iris
Node: 981b8e5c8f7a, Instance: IRIS
USER>w ##class(Your.Application).Run()
```

Here is the sample application. For the given example the test will be:

```
$ docker-compose exec iris iris session iris
Node: 981b8e5c8f7a, Instance: IRIS
IRISAPP>w ##class(Contest.ObjectScript).TheQuestionOfTheUniverse()
The answer is 42
IRISAPP>
```

The README.md file should contain a section describing how the CLI functionality can be tested. We will accept for the contest GitHub repositories which are recognized by GitHub as ObjectScript, e.g. like here: To make GitHub show this indication, store your ObjectScript code in .cls files. Use the ObjectScript Contest Template – create your repository and substitute the files in the /src folder with your solution, or use it as a GitHub template for your new GitHub repository, or import the set of Docker-enabled files into your repository. Learn more.
Watch the related video on how to make a repository from a GitHub Template. Here are a few examples of applications that fit the topic of the contest and match the IRIS Community Edition running in Docker requirements: Python Gateway, Healthcare XML, Document Template, Game of Life, ForEach, ObjectScript Template. We are starting in March, please ask your questions! How to apply for the Contest: Log in to Open Exchange and open your applications section. Open the application which you want to apply for the contest and click Apply for Contest. Make sure the status is 'Published'. The application will go for review, and if it fits the topic of the contest it will be listed on the Contest Board. Stay tuned, the post will be updated.

These two applications also meet the requirements and topic: ObjectScript Math and JSONManyToMany by @Peter.Steiwer: there is an ObjectScript CLI, and you can launch it with

```
$ docker-compose up -d
```

and then run and test it with:

```
$ docker-compose exec iris iris session iris
Node: 981b8e5c8f7a, Instance: IRIS
USER>zn "IRISAPP"
IRISAPP>w ##class(Math.Math).LeastCommonMultiple(134,382)
```

Just to give you a few ideas about what you could submit for the contest, check out Rosetta Code – there are still a bunch of opportunities to implement popular algorithms which have implementations in different languages but not in ObjectScript.

We are starting tomorrow! How to apply for the Programming Contest: Log in to Open Exchange and open your applications section. Open the application which you want to apply for the contest and click Apply for Contest. Make sure the status is 'Published'. The application will go for review, and if it fits the topic of the contest it will be listed on the Contest Board.

Fixed the link of the contest.

For those who are starting with ObjectScript, here are some recommendations from the InterSystems Online Learning team: ObjectScript First Look — brief hands-on introduction to ObjectScript.
ObjectScript Tutorial — provides an interactive introduction to the ObjectScript language. Using ObjectScript — provides an overview of and details about the ObjectScript programming language. ObjectScript Reference — provides reference material for ObjectScript. Orientation Guide for Server-Side Programming — presents the essentials for programmers who write server-side code using InterSystems products. InterSystems ObjectScript Basics — an interactive course covering ObjectScript basics. Also tagging our Online Learning experts @Michelle.Spisak and @jennifer.ames here to share more details.

Hi Developers, Wonder if you can vote for the solution you like? You can – if you contribute to DC! Everyone contributing articles, questions, and replies is very welcome to vote in the Community Nomination. The application that receives the most votes in total will win in this nomination. Prizes are waiting for you, stay tuned! 🚀

Hey Developers, The first application is already on the Contest Board! Who's next? We prepared the template repository especially for the contest. The post has been updated too. We have two winning nominations: 🏆 Experts Nomination - the best application, selected by a special jury of InterSystems experts. 🏆 Community Nomination - the best application, chosen by a majority vote of all DC contributors. So ladies and gentlemen, upload your apps and vote! Want more? Enjoy watching the new video on the InterSystems Developers YouTube channel, specially recorded by @Evgeny.Shvarov for the IRIS contest: ⏯ How to Create and Submit an Application for InterSystems IRIS Online Programming Contest 2020 Stay tuned!

Mmhh. Perhaps, I'm thinking about ...

I want to describe the idea more. Rosetta Code is a site with implementations of the same algorithms in different languages. E.g. check Python – and see hundreds of algorithms implemented.
And check ObjectScript – not that many, but some algorithms have implementations in both Python and ObjectScript. E.g. here is the MD5 implementation in ObjectScript and in Python. What you could do is take an algorithm implemented in Python or other languages and introduce the implementation in ObjectScript. And you are very welcome to participate in the contest with such a project.

Hi Lorenzo, You're very welcome to join our contest! Don't hesitate 😉

Hey Developers, One more application is already in the game. Please check our Contest Board! Full speed ahead! 🔥

Hey Community, Our Contest Board has been updated again. Two more applications are already participating in the IRIS contest! Make your bets! 😉

Please welcome the new video specially recorded for the IRIS contest: ⏯ How to Enable Docker and VSCode to Your InterSystems IRIS Solution This video describes how you can add an InterSystems IRIS Docker and VSCode environment to your current repository with InterSystems ObjectScript using the iris-docker-dev-kit. Enjoy!

Developers! You have 6 days to submit your application for the InterSystems IRIS Online contest! Don't hesitate to submit even if you haven't finished it - you'll be able to fix bugs and make improvements during the voting week too!

Hey Developers! Only 5 days left to upload your apps to our IRIS Contest! Show your best ObjectScript skills on InterSystems IRIS and earn some $ and glory! 🔥

Just a few days left to send your app for the contest, I'm already in. ))

OK! For now, we have 8 applications in the contest! And 3 days left! Competition is growing! Looking forward to more solutions!

More and more nominees for winning! Already 8 applications on the Contest Board! Who's next? 🚀

Yeah, +1 with Dmitriy. I wish to read nice ObjectScript code!

Voting for the best application will begin very soon! Only 3 days left before the end of registration for the IRIS contest. Don't miss your chance to win! 🏆

Hey Community, Now we have 11 applications on our Contest Board!
And only 2 days left before the start of voting. Hurry up and upload your application!

Last call! Registration for the IRIS Contest ends today! Now we have 15 apps - make your bets! 😉

The voting has started! Choose the best InterSystems IRIS application!
Article
Evgeny Shvarov · Feb 21, 2020

InterSystems IRIS Docker Container Image With ObjectScript Package Manager

Hi Developers! Another way to start using the InterSystems ObjectScript Package Manager is to use prebuilt container images of InterSystems IRIS Community Edition and InterSystems IRIS for Health Community Edition. We deploy these IRIS images on DockerHub, and you can run one with the following command:

```
docker run --rm -p 52773:52773 --init --name my-iris -d intersystemsdc/iris-community:2019.4.0.383.0-zpm
```

Launch a terminal with:

```
docker exec -it my-iris iris session IRIS
```

And install a zpm module as:

```
USER>zpm
zpm: USER>install objectscript-math
[objectscript-math] Reload START
[objectscript-math] Reload SUCCESS
[objectscript-math] Module object refreshed.
[objectscript-math] Validate START
[objectscript-math] Validate SUCCESS
[objectscript-math] Compile START
[objectscript-math] Compile SUCCESS
[objectscript-math] Activate START
[objectscript-math] Configure START
[objectscript-math] Configure SUCCESS
[objectscript-math] Activate SUCCESS
zpm: USER>
```

Use the same commands for InterSystems IRIS for Health with the tag: intersystemsdc/irishealth-community:2019.4.0.383.0-zpm The images are published in the IRIS Community Edition and IRIS Community Edition for Health repositories on Docker Hub. We will update the tags with every new release of IRIS and ZPM. Happy coding!
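If you would rather bake a package into your own image than install it interactively, a hypothetical Dockerfile along these lines can extend the zpm-enabled image at build time. This is a sketch, not part of the article: the one-line `zpm "install …"` form is assumed to be supported by the ZPM version in the image, and objectscript-math is just the example package from above.

```
# Hypothetical sketch: extend the zpm-enabled community image and install
# a package during the image build (objectscript-math is the article's example).
FROM intersystemsdc/iris-community:2019.4.0.383.0-zpm

# Start IRIS, pipe the zpm install command into an IRIS session, then stop
# IRIS so the installed module is persisted into the image layer.
RUN iris start IRIS quietly \
 && echo 'zpm "install objectscript-math"' | iris session IRIS -U USER \
 && iris stop IRIS quietly
```

Building with `docker build -t my-iris-math .` then gives you an image with the package preinstalled.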
Announcement
Rubens Silva · Mar 16, 2020

IRIS-CI: A docker image for running InterSystems IRIS in CI environments

Hello all! As we ObjectScript developers have been experiencing, preparing an environment to run CI-related tasks can be quite the chore. This is why I have been thinking about how we could improve this workflow, and the result of that effort is [IRIS-CI](https://openexchange.intersystems.com/package/iris-ci). See how it works [here](https://imgur.com/N7uVDNK).

### Quickstart

1. Download the image from the Docker Hub registry:

```
docker pull rfns/iris-ci:0.5.3
```

2. Run the container (with the default settings):

```
docker run --rm --name ci -t -v /path/to/your/app:/opt/ci/app rfns/iris-ci:0.5.3
```

Notice the volume mounted at `/path/to/your/app`? This is where the app should be. And that's it: the only thing required to start running the test suites is the path of the application. Also, since this is supposed to be an ephemeral, run-once container, there's no need to keep it around after executing it; that's why there's the `--rm` flag.

### TL;DR

If you want an example of how to use it, check the usage in another project of mine, [dotenv](https://github.com/rfns/dotenv/blob/master/.github/workflows/ci.yml).

### Advanced setup

Some projects might need sophisticated setups in order to run the test suites. For such circumstances there are two customization levels:

1. Environment variables
2. Volume overwrite

### Environment variables

Environment variables are the simplest customization format and should suffice for most situations. There are two ways to provide an environment variable:

* `-e VAR_NAME="var value"` while using `docker run`.
* By providing a .env file, mounting an extra volume for `docker run` like this: `-v /my/app/.env:/opt/ci/.env`.

> NOTE: In case a variable is defined in both formats, the `-e` format takes precedence over the `.env` file.

### Types of environment variables

* Variables prefixed with `CI_{NAME}` are passed down as `name` to the installer manifest.
* Variables prefixed with `TESPARAM_{NAME}` are passed down as `NAME` to the unit test manager's UserFields property.
* `TEST_SUITE` and `TEST_CASE` control where to locate and which test case to target.

Every variable is available to read from the `configuration.Envs` list, which is [passed](https://github.com/rfns/iris-ci/blob/master/ci/Runner.cls#L6) [down](https://github.com/rfns/iris-ci/blob/master/ci/Runner.cls#L53) through the `Run` and `OnAfterRun` class methods. If `TEST_CASE` is not specified, then the `recursive` flag will be set. In a project with many classes, it might be worthwhile to at least define `TEST_SUITE` to reduce the search scope, due to performance concerns.

### Volume overwrite

This image ships with a default installer that's focused on running test suites. But it's possible to overwrite the following files in order to make it execute different tasks, like generating an XML file for old Caché versions:

* `/opt/ci/App/Installer.cls`
* `/opt/ci/Runner.cls`

For more details on how to implement them, please check the default implementations: [Installer.cls](https://github.com/rfns/iris-ci/blob/master/ci/App/Installer.cls) and [Runner.cls](https://github.com/rfns/iris-ci/blob/master/ci/Runner.cls).

> TIP: Before overwriting the default Installer.cls, check if you really need to, because the current implementation [also allows creating configured web applications.](https://github.com/rfns/iris-ci#using-the-default-installer-manifest-for-unit-tests)

EDIT: Link added.
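Tying the environment-variable options above together, a hypothetical `.env` file for this image might look like the following. The variable names follow the prefixes described in the post; the values themselves are invented for illustration:

```
# Passed to the installer manifest as "appname" (name and value are examples)
CI_APPNAME=myapp
# Surfaced through the unit test manager's UserFields property
TESPARAM_VERBOSE=1
# Narrow the search scope to one suite; omit TEST_CASE to run recursively
TEST_SUITE=UnitTests
```

Mount it into the container with `-v /my/app/.env:/opt/ci/.env` as described above.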
Announcement
Ksenia Samokhvalova · Mar 19, 2020

Share your thoughts about InterSystems Documentation by filling out a survey!

Hello Developer Community! We are looking to better understand how our users use the documentation. If you have a few minutes, please fill out this quick survey - https://www.surveymonkey.com/r/HK7F5P7! Feedback from real users like you is invaluable to us and helps us create a better product. Your feedback can go further than the survey - we would love to interview you about your experience; just indicate in the survey that you're open to talking to us! Thank you so much! If you have any questions, please contact me at Ksenia.samokhvalova@intersystems.com. I look forward to hearing from you! Ksenia Samokhvalova, UX Designer | InterSystems
Article
Mikhail Khomenko · Mar 12, 2020

Deploying an InterSystems IRIS Solution on EKS using GitHub Actions

Imagine you want to see what InterSystems can give you in terms of data analytics. You studied the theory and now you want some practice. Fortunately, InterSystems provides a project that contains some good examples: Samples BI. Start with the README file, skipping anything associated with Docker, and go straight to the step-by-step installation. Launch a virtual instance, install IRIS there, follow the instructions for installing Samples BI, and then impress the boss with beautiful charts and tables. So far so good. Inevitably, though, you’ll need to make changes. It turns out that keeping a virtual machine on your own has some drawbacks, and it’s better to keep it with a cloud provider. Amazon seems solid, and you create an AWS account (free to start), read that using the root user identity for everyday tasks is evil, and create a regular IAM user with admin permissions. Clicking a little, you create your own VPC network, subnets, and a virtual EC2 instance, and also add a security group to open the IRIS web port (52773) and ssh port (22) for yourself. Repeat the installation of IRIS and Samples BI. This time, use Bash scripting, or Python if you prefer. Again, impress the boss. But the ubiquitous DevOps movement leads you to start reading about Infrastructure as Code and you want to implement it. You choose Terraform, since it’s well-known to everyone and its approach is quite universal—suitable with minor adjustments for various cloud providers. You describe the infrastructure in HCL language, and translate the installation steps for IRIS and Samples BI to Ansible. Then you create one more IAM user to enable Terraform to work. Run it all. Get a bonus at work. Gradually you come to the conclusion that in our age of microservices it’s a shame not to use Docker, especially since InterSystems tells you how. 
You return to the Samples BI installation guide and read the lines about Docker, which don't seem to be complicated:

```
$ docker pull intersystemsdc/iris-community:2019.4.0.383.0-zpm
$ docker run --name irisce -d --publish 52773:52773 intersystemsdc/iris-community:2019.4.0.383.0-zpm
$ docker exec -it irisce iris session iris
USER>zpm
zpm: USER>install samples-bi
```

After directing your browser to http://localhost:52773/csp/user/_DeepSee.UserPortal.Home.zen?$NAMESPACE=USER, you again go to the boss and get a day off for a nice job. You then begin to understand that “docker run” is just the beginning, and you need to use at least docker-compose. Not a problem:

```
$ cat docker-compose.yml
version: "3.7"
services:
  irisce:
    container_name: irisce
    image: intersystemsdc/iris-community:2019.4.0.383.0-zpm
    ports:
      - 52773:52773
$ docker rm -f irisce # We don’t need the previous container
$ docker-compose up -d
```

So you install Docker and docker-compose with Ansible, and then just run the container, which will download an image if it's not already present on the machine. Then you install Samples BI. You certainly like Docker, because it's a cool and simple interface to various kernel stuff. You start using Docker elsewhere and often launch more than one container. And find that often containers must communicate with each other, which leads to reading about how to manage multiple containers. And you come to Kubernetes. One option to quickly switch from docker-compose to Kubernetes is to use kompose. Personally, I prefer to simply copy Kubernetes manifests from manuals and then edit them for myself, but kompose does a good job of completing its small task:

```
$ kompose convert -f docker-compose.yml
INFO Kubernetes file "irisce-service.yaml" created
INFO Kubernetes file "irisce-deployment.yaml" created
```

Now you have the deployment and service files that can be sent to some Kubernetes cluster.
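For orientation, the generated deployment manifest is roughly of the following shape. This is a sketch based on typical kompose output for a compose file like the one above, not the literal file; labels and annotations vary between kompose versions:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: irisce
  labels:
    io.kompose.service: irisce
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: irisce
  template:
    metadata:
      labels:
        io.kompose.service: irisce
    spec:
      containers:
        - name: irisce
          image: intersystemsdc/iris-community:2019.4.0.383.0-zpm
          ports:
            - containerPort: 52773
```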
You find out that you can install minikube, which lets you run a single-node Kubernetes cluster and is just what you need at this stage. After a day or two of playing with the minikube sandbox, you're ready to use a real live Kubernetes deployment somewhere in the AWS cloud.

Getting Set Up

So, let's do this together. At this point we'll make a couple of assumptions: First, we assume you have an AWS account, you know its ID, and you don't use root credentials. You create an IAM user (let's call it “my-user”) with administrator rights and programmatic access only, and store its credentials. You also create another IAM user, called “terraform,” with the same permissions: on its behalf, Terraform will go to your AWS account and create and delete the necessary resources. The extensive rights of both users are explained by the fact that this is a demo. You save the credentials locally for both IAM users:

```
$ cat ~/.aws/credentials
[terraform]
aws_access_key_id = ABCDEFGHIJKLMNOPQRST
aws_secret_access_key = ABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890123
[my-user]
aws_access_key_id = TSRQPONMLKJIHGFEDCBA
aws_secret_access_key = TSRQPONMLKJIHGFEDCBA01234567890123
```

Note: Don't copy and paste the credentials from above. They are provided here as an example and no longer exist. Edit the ~/.aws/credentials file and introduce your own records. Second, we'll use the dummy AWS account ID (01234567890) for the article, and the AWS region “eu-west-1.” Feel free to use another region. Third, we assume you're aware that AWS is not free and you'll have to pay for the resources used. Next, you've installed the AWS CLI utility for command-line communication with AWS. You can try to use aws2, but you'll need to specifically set aws2 usage in your kube config file, as described here. You've also installed the kubectl utility for command-line communication with AWS Kubernetes. And you've installed the kompose utility for converting docker-compose.yml files into Kubernetes manifests.
Finally, you’ve created an empty GitHub repository and cloned it to your host. We’ll refer to its root directory as <root_repo_dir>. In this repository, we’ll create and fill three directories: .github/workflows/, k8s/, and terraform/. Note that all the relevant code is duplicated in the github-eks-samples-bi repo to simplify copying and pasting.

Let’s continue.

AWS EKS Provisioning

We already met EKS in the article Deploying a Simple IRIS-Based Web Application Using Amazon EKS. At that time, we created a cluster semi-automatically. That is, we described the cluster in a file, and then manually launched the eksctl utility from a local machine, which created the cluster according to our description.

eksctl was developed for creating EKS clusters and it’s good for a proof-of-concept implementation, but for everyday usage it’s better to use something more universal, such as Terraform. A great resource, AWS EKS Introduction, explains the Terraform configuration needed to create an EKS cluster. An hour or two spent getting acquainted with it will not be a waste of time.

You can play with Terraform locally. To do so, you’ll need a binary (we’ll use the latest version for Linux at the time of writing of the article, 0.12.20) and the IAM user “terraform” with sufficient rights for Terraform to go to AWS. Create the directory <root_repo_dir>/terraform/ to store Terraform code:

```shell
$ mkdir <root_repo_dir>/terraform
$ cd <root_repo_dir>/terraform
```

You can create one or more .tf files (they are merged at startup). Just copy and paste the code examples from AWS EKS Introduction and then run something like:

```shell
$ export AWS_PROFILE=terraform
$ export AWS_REGION=eu-west-1
$ terraform init
$ terraform plan -out eks.plan
```

You may encounter some errors.
If so, play a little with debug mode, but remember to turn it off later:

```shell
$ export TF_LOG=debug
$ terraform plan -out eks.plan
<many-many lines here>
$ unset TF_LOG
```

This experience will be useful, and most likely you’ll get an EKS cluster launched (use “terraform apply” for that). Check it out in the AWS console:

Clean up when you get bored:

```shell
$ terraform destroy
```

Then go to the next level and start using the Terraform EKS module, especially since it’s based on the same EKS introduction. In the examples/ directory you’ll see how to use it. You’ll also find other examples there.

We simplified the examples somewhat. Here’s the main file in which the VPC creation and EKS creation modules are called:

```shell
$ cat <root_repo_dir>/terraform/main.tf
terraform {
  required_version = ">= 0.12.0"
  backend "s3" {
    bucket         = "eks-github-actions-terraform"
    key            = "terraform-dev.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "eks-github-actions-terraform-lock"
  }
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "1.10.0"
}

locals {
  vpc_name             = "dev-vpc"
  vpc_cidr             = "10.42.0.0/16"
  private_subnets      = ["10.42.1.0/24", "10.42.2.0/24"]
  public_subnets       = ["10.42.11.0/24", "10.42.12.0/24"]
  cluster_name         = "dev-cluster"
  cluster_version      = "1.14"
  worker_group_name    = "worker-group-1"
  instance_type        = "t2.medium"
  asg_desired_capacity = 1
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

data "aws_availability_zones" "available" {}

module "vpc" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-vpc?ref=master"

  name                 = local.vpc_name
  cidr                 = local.vpc_cidr
  azs                  = data.aws_availability_zones.available.names
  private_subnets      = local.private_subnets
  public_subnets       = local.public_subnets
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }
}

module "eks" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-eks?ref=master"

  cluster_name     = local.cluster_name
  cluster_version  = local.cluster_version
  vpc_id           = module.vpc.vpc_id
  subnets          = module.vpc.private_subnets
  write_kubeconfig = false

  worker_groups = [
    {
      name                 = local.worker_group_name
      instance_type        = local.instance_type
      asg_desired_capacity = local.asg_desired_capacity
    }
  ]

  map_accounts = var.map_accounts
  map_roles    = var.map_roles
  map_users    = var.map_users
}
```

Let’s look a little more closely at the “terraform” block in main.tf:

```
terraform {
  required_version = ">= 0.12.0"
  backend "s3" {
    bucket         = "eks-github-actions-terraform"
    key            = "terraform-dev.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "eks-github-actions-terraform-lock"
  }
}
```

Here we indicate that we’ll adhere to a syntax not lower than Terraform 0.12 (much has changed compared with earlier versions), and also that Terraform shouldn’t store its state locally, but rather remotely, in the S3 bucket. This is convenient if the Terraform code can be updated from different places by different people, which means we need to be able to lock a user’s state, so we added a lock using a DynamoDB table. Read more about locks on the State Locking page.

Since the name of the bucket should be unique throughout AWS, the name “eks-github-actions-terraform” won’t work for you.
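Because bucket names are global across all AWS accounts, one common trick for getting a unique name is to suffix it with your account ID. A minimal sketch (the `01234567890` here is the article’s dummy account ID — substitute your own):

```shell
# Derive a likely-unique S3 bucket name by appending the AWS account ID.
account_id="01234567890"   # dummy ID used throughout this article
bucket="eks-github-actions-terraform-${account_id}"
echo "${bucket}"
```

If you use a derived name like this, remember to put the same value into the `backend "s3"` block of main.tf.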
Please think up your own and make sure it’s not already taken (so you’re getting a NoSuchBucket error):

```shell
$ aws s3 ls s3://my-bucket
An error occurred (AllAccessDisabled) when calling the ListObjectsV2 operation: All access to this object has been disabled
$ aws s3 ls s3://my-bucket-with-name-that-impossible-to-remember
An error occurred (NoSuchBucket) when calling the ListObjectsV2 operation: The specified bucket does not exist
```

Having come up with a name, create the bucket (we use the IAM user “terraform” here; it has administrator rights, so it can create a bucket) and enable versioning for it (which will save your nerves in the event of a configuration error):

```shell
$ aws s3 mb s3://eks-github-actions-terraform --region eu-west-1
make_bucket: eks-github-actions-terraform
$ aws s3api put-bucket-versioning --bucket eks-github-actions-terraform --versioning-configuration Status=Enabled
$ aws s3api get-bucket-versioning --bucket eks-github-actions-terraform
{
  "Status": "Enabled"
}
```

With DynamoDB, uniqueness is not needed, but you do need to create a table first:

```shell
$ aws dynamodb create-table \
  --region eu-west-1 \
  --table-name eks-github-actions-terraform-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
```

Keep in mind that, in case of a Terraform failure, you may need to remove a lock manually from the AWS console. But be careful when doing so.

With regard to the eks/vpc module blocks in main.tf, the way to reference a module available on GitHub is simple:

```
git::https://github.com/terraform-aws-modules/terraform-aws-vpc?ref=master
```

Now let’s look at our other two Terraform files (variables.tf and outputs.tf). The first holds our Terraform variables:

```shell
$ cat <root_repo_dir>/terraform/variables.tf
variable "region" {
  default = "eu-west-1"
}

variable "map_accounts" {
  description = "Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format."
  type        = list(string)
  default     = []
}

variable "map_roles" {
  description = "Additional IAM roles to add to the aws-auth configmap."
  type = list(object({
    rolearn  = string
    username = string
    groups   = list(string)
  }))
  default = []
}

variable "map_users" {
  description = "Additional IAM users to add to the aws-auth configmap."
  type = list(object({
    userarn  = string
    username = string
    groups   = list(string)
  }))
  default = [
    {
      userarn  = "arn:aws:iam::01234567890:user/my-user"
      username = "my-user"
      groups   = ["system:masters"]
    }
  ]
}
```

The most important part here is adding the IAM user “my-user” to the map_users variable, but you should use your own account ID here in place of 01234567890.

What does this do? When you communicate with EKS through the local kubectl client, it sends requests to the Kubernetes API server, and each request goes through authentication and authorization processes so Kubernetes can understand who sent the request and what they can do. So the EKS version of Kubernetes asks AWS IAM for help with user authentication. If the user who sent the request is listed in AWS IAM (we pointed to their ARN here), the request goes to the authorization stage, which EKS processes itself, but according to our settings. Here, we indicated that the IAM user “my-user” is very cool (group “system:masters”).

Finally, the outputs.tf file describes what Terraform should print after it finishes a job:

```shell
$ cat <root_repo_dir>/terraform/outputs.tf
output "cluster_endpoint" {
  description = "Endpoint for EKS control plane."
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane."
  value       = module.eks.cluster_security_group_id
}

output "config_map_aws_auth" {
  description = "A kubernetes configuration to authenticate to this EKS cluster."
  value       = module.eks.config_map_aws_auth
}
```

This completes the description of the Terraform part.
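For reference, the map_users entry ends up in the aws-auth ConfigMap inside the cluster. The rendered result looks roughly like this (a sketch of the general shape, not the module’s exact output):

```yaml
# Approximate shape of the aws-auth ConfigMap produced from map_users.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::01234567890:user/my-user
      username: my-user
      groups:
        - system:masters
```

Once the cluster is up, you can inspect the real one with `kubectl -n kube-system get configmap aws-auth -o yaml`.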
We’ll return soon to see how we’re going to launch these files.

Kubernetes Manifests

So far, we’ve taken care of where to launch the application. Now let’s look at what to run.

Recall that we have docker-compose.yml (we renamed the service and added a couple of labels that kompose will use shortly) in the <root_repo_dir>/k8s/ directory:

```shell
$ cat <root_repo_dir>/k8s/docker-compose.yml
version: "3.7"
services:
  samples-bi:
    container_name: samples-bi
    image: intersystemsdc/iris-community:2019.4.0.383.0-zpm
    ports:
    - 52773:52773
    labels:
      kompose.service.type: loadbalancer
      kompose.image-pull-policy: IfNotPresent
```

Run kompose and then add what’s highlighted below. Delete the annotations (to make things more intelligible):

```shell
$ kompose convert -f docker-compose.yml --replicas=1
$ cat <root_repo_dir>/k8s/samples-bi-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: samples-bi
  name: samples-bi
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        io.kompose.service: samples-bi
    spec:
      containers:
      - image: intersystemsdc/iris-community:2019.4.0.383.0-zpm
        imagePullPolicy: IfNotPresent
        name: samples-bi
        ports:
        - containerPort: 52773
        resources: {}
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/bash
              - -c
              - |
                echo -e "write\nhalt" > test
                until iris session iris < test; do sleep 1; done
                echo -e "zpm\ninstall samples-bi\nquit\nhalt" > samples_bi_install
                iris session iris < samples_bi_install
                rm test samples_bi_install
      restartPolicy: Always
```

We use the Recreate update strategy, which means that the pod will be deleted first and then recreated. This is permissible for demo purposes and allows us to use fewer resources. We also added the postStart hook, which will trigger immediately after the pod starts.
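The heart of that postStart hook is a plain `until` retry loop. Here is the same pattern in isolation, with a hypothetical `check` function standing in for `iris session iris < test`:

```shell
# Retry-until-success loop, the same pattern as in the postStart hook.
n=0
check() {                  # stand-in for "iris session iris < test"
  n=$((n+1))
  [ "$n" -ge 3 ]           # pretend IRIS starts answering on the 3rd attempt
}
until check; do
  :                        # the real hook does "sleep 1" here
done
echo "ready after $n attempts"
```

Because `until` keeps retrying as long as the command fails, the hook tolerates however long IRIS takes to come up before running the zpm install.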
We wait until IRIS starts up and then install the samples-bi package from the default zpm repository.

Now we add the Kubernetes service (also without annotations):

```shell
$ cat <root_repo_dir>/k8s/samples-bi-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: samples-bi
  name: samples-bi
spec:
  ports:
  - name: "52773"
    port: 52773
    targetPort: 52773
  selector:
    io.kompose.service: samples-bi
  type: LoadBalancer
```

Yes, we’ll deploy in the “default” namespace, which will work for the demo.

Okay, now we know where and what we want to run. It remains to see how.

The GitHub Actions Workflow

Rather than doing everything from scratch, we’ll create a workflow similar to the one described in Deploying InterSystems IRIS solution on GKE Using GitHub Actions. This time we don’t have to worry about building a container. The GKE-specific parts are replaced by those specific to EKS. The bolded parts are related to receiving the commit message and using it in conditional steps:

```shell
$ cat <root_repo_dir>/.github/workflows/workflow.yaml
name: Provision EKS cluster and deploy Samples BI there
on:
  push:
    branches:
    - master

# Environment variables.
# ${{ secrets }} are taken from GitHub -> Settings -> Secrets
# ${{ github.sha }} is the commit hash
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_REGION: ${{ secrets.AWS_REGION }}
  CLUSTER_NAME: dev-cluster
  DEPLOYMENT_NAME: samples-bi

jobs:
  eks-provisioner:
    # Inspired by:
    ## https://www.terraform.io/docs/github-actions/getting-started.html
    ## https://github.com/hashicorp/terraform-github-actions
    name: Provision EKS cluster
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
      uses: actions/checkout@v2

    - name: Get commit message
      run: |
        echo ::set-env name=commit_msg::$(git log --format=%B -n 1 ${{ github.event.after }})

    - name: Show commit message
      run: echo $commit_msg

    - name: Terraform init
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.20
        tf_actions_subcommand: 'init'
        tf_actions_working_dir: 'terraform'

    - name: Terraform validate
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.20
        tf_actions_subcommand: 'validate'
        tf_actions_working_dir: 'terraform'

    - name: Terraform plan
      if: "!contains(env.commit_msg, '[destroy eks]')"
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.20
        tf_actions_subcommand: 'plan'
        tf_actions_working_dir: 'terraform'

    - name: Terraform plan for destroy
      if: "contains(env.commit_msg, '[destroy eks]')"
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.20
        tf_actions_subcommand: 'plan'
        args: '-destroy -out=./destroy-plan'
        tf_actions_working_dir: 'terraform'

    - name: Terraform apply
      if: "!contains(env.commit_msg, '[destroy eks]')"
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.20
        tf_actions_subcommand: 'apply'
        tf_actions_working_dir: 'terraform'

    - name: Terraform apply for destroy
      if: "contains(env.commit_msg, '[destroy eks]')"
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.20
        tf_actions_subcommand: 'apply'
        args: './destroy-plan'
        tf_actions_working_dir: 'terraform'

  kubernetes-deploy:
    name: Deploy Kubernetes manifests to EKS
    needs:
    - eks-provisioner
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
      uses: actions/checkout@v2

    - name: Get commit message
      run: |
        echo ::set-env name=commit_msg::$(git log --format=%B -n 1 ${{ github.event.after }})

    - name: Show commit message
      run: echo $commit_msg

    - name: Configure AWS Credentials
      if: "!contains(env.commit_msg, '[destroy eks]')"
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: ${{ secrets.AWS_REGION }}

    - name: Apply Kubernetes manifests
      if: "!contains(env.commit_msg, '[destroy eks]')"
      working-directory: ./k8s/
      run: |
        aws eks update-kubeconfig --name ${CLUSTER_NAME}
        kubectl apply -f samples-bi-service.yaml
        kubectl apply -f samples-bi-deployment.yaml
        kubectl rollout status deployment/${DEPLOYMENT_NAME}
```

Of course, we need to set the credentials of the “terraform” user (take them from the ~/.aws/credentials file), letting GitHub use its secrets:

Notice the highlighted parts of the workflow. They will enable us to destroy an EKS cluster by pushing a commit message that contains the phrase “[destroy eks]”. Note that we won’t run “kubectl apply” with such a commit message.

Run the pipeline, but first create a .gitignore file:

```shell
$ cat <root_repo_dir>/.gitignore
.DS_Store
terraform/.terraform/
terraform/*.plan
terraform/*.json
$ cd <root_repo_dir>
$ git add .github/ k8s/ terraform/ .gitignore
$ git commit -m "GitHub on EKS"
$ git push
```

Monitor the deployment process on the “Actions” tab of the GitHub repository page. Please wait for successful completion.

When you run the workflow for the very first time, it will take about 15 minutes on the “Terraform apply” step, approximately as long as it takes to create the cluster. At the next start (if you didn’t delete the cluster), the workflow will be much faster. You can check this out:

```shell
$ cd <root_repo_dir>
$ git commit -m "Trigger" --allow-empty
$ git push
```

Of course, it would be nice to check what we did.
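The conditional steps hinge on a plain substring check of the last commit message. The same logic can be reproduced locally in the shell (a sketch; `commit_msg` is hard-coded here instead of coming from `git log --format=%B -n 1 HEAD`):

```shell
# Reproduce the workflow's "[destroy eks]" switch locally.
commit_msg="Mr Proper [destroy eks]"   # normally taken from the last commit
if printf '%s' "$commit_msg" | grep -qF '[destroy eks]'; then
  action="destroy"                     # matches the "Terraform plan/apply for destroy" steps
else
  action="deploy"                      # matches the regular plan/apply and kubectl steps
fi
echo "$action"
```

Note the `-F` flag: without it, grep would treat `[destroy eks]` as a bracket expression rather than a literal string.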
This time you can use the credentials of the IAM user “my-user” on your laptop:

```shell
$ export AWS_PROFILE=my-user
$ export AWS_REGION=eu-west-1
$ aws sts get-caller-identity
$ aws eks update-kubeconfig --region=eu-west-1 --name=dev-cluster --alias=dev-cluster
$ kubectl config current-context
dev-cluster

$ kubectl get nodes
NAME                                        STATUS   ROLES    AGE     VERSION
ip-10-42-1-125.eu-west-1.compute.internal   Ready    <none>   6m20s   v1.14.8-eks-b8860f

$ kubectl get po
NAME                          READY   STATUS    RESTARTS   AGE
samples-bi-756dddffdb-zd9nw   1/1     Running   0          6m16s

$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP                                                              PORT(S)           AGE
kubernetes   ClusterIP      172.20.0.1      <none>                                                                   443/TCP           11m
samples-bi   LoadBalancer   172.20.33.235   a2c6f6733557511eab3c302618b2fae2-622862917.eu-west-1.elb.amazonaws.com   52773:31047/TCP   6m33s
```

Go to http://a2c6f6733557511eab3c302618b2fae2-622862917.eu-west-1.elb.amazonaws.com:52773/csp/user/_DeepSee.UserPortal.Home.zen?$NAMESPACE=USER (substitute your own External-IP in the link), then type “_system” and “SYS”, and change the default password. You should see a bunch of BI dashboards:

Click on each one’s arrow to deep dive:

Remember, if you restart the samples-bi pod, all your changes will be lost. This is intentional behavior, as this is a demo. If you need persistence, I've created an example in the github-gke-zpm-registry/k8s/statefulset.tpl repository.

When you’re finished, just remove everything you’ve created:

```shell
$ git commit -m "Mr Proper [destroy eks]" --allow-empty
$ git push
```

Conclusion

In this article, we replaced the eksctl utility with Terraform to create an EKS cluster. It’s a step forward in “codifying” all of your AWS infrastructure. We showed how you can easily deploy a demo application with git push using GitHub Actions and Terraform. We also added kompose and a pod’s postStart hooks to our toolbox. We didn’t show TLS enabling this time. That’s a task we’ll undertake in the near future.

💡 This article is considered an InterSystems Data Platform Best Practice.
Announcement
Jeff Fried · Mar 31, 2020

InterSystems IRIS and IRIS for Health 2020.1 are GA (Generally Available)

GA releases are now published for the 2020.1 version of InterSystems IRIS, IRIS for Health, and IRIS Studio! A full set of kits and containers for these products are available from the WRC Software Distribution site, including community editions of InterSystems IRIS and IRIS for Health. The build number for these releases is 2020.1.0.215.0.

InterSystems IRIS Data Platform 2020.1 makes it even easier to develop and deploy real-time, machine learning-enabled applications that bridge data and application silos. It has many new capabilities, including:

- Kernel performance enhancements, including reduced contention for blocks and cache lines
- Universal Query Cache - every query (including embedded & class ones) now gets saved as a cached query
- Universal Shard Queue Manager - for scale-out of query load in sharded configurations
- Selective Cube Build - to quickly incorporate new dimensions or measures
- Security improvements, including hashed password configuration
- Improved TSQL support, including JDBC support
- Dynamic Gateway performance enhancements
- Spark connector update
- MQTT support in ObjectScript

InterSystems IRIS for Health 2020.1 includes all of the enhancements of InterSystems IRIS. In addition, this release includes:

- In-place conversion to IRIS for Health
- HL7 Productivity Toolkit, including Migration Tooling and Cloverleaf conversion
- X12 enhancements
- FHIR R4 base standard support

As this is an EM (extended maintenance) release, many customers may want to know the differences between 2020.1 and 2019.1. These are listed in the release notes:

- InterSystems IRIS 2020.1 release notes
- IRIS for Health 2020.1 release notes

Documentation can be found here:

- InterSystems IRIS 2020.1 documentation
- IRIS for Health 2020.1 documentation

InterSystems IRIS Studio 2020.1 is a standalone development image supported on Microsoft Windows. It works with InterSystems IRIS and IRIS for Health version 2020.1 and below, as well as with Caché and Ensemble.
The platforms on which InterSystems IRIS and IRIS for Health 2020.1 are supported for production and development are detailed in the Supported Platforms document.

Will there be an update to Caché 2018 as well?

The Community Editions can also be found in the Docker Store:

```shell
docker pull store/intersystems/iris-community:2020.1.0.215.0
docker pull store/intersystems/irishealth-community:2020.1.0.215.0
```

Hi @Kurt.Hofman - yes, there is a 2018.1.4 in the works for Caché and Ensemble, though not for a couple of months. It is a maintenance release.

Thanks, Jeff! Are there related updates of Docker images?

Thank you, Steve!
Article
Evgeny Shvarov · Jan 12, 2020

InterSystems Open Exchange: How to Publish a New Release of Your Application?

Hi Developers! Suppose you published your application on Open Exchange with version 1.00. Then you added a new outstanding feature and want to make a new release. You can also make a new release of your application on Open Exchange.

Why make releases on Open Exchange? This is a way for you to highlight the new features of your application. When you publish a new release, the following happens:

- Release notes appear on the News page of Open Exchange
- The version of your app changes
- The Version History tab is updated
- All the followers of you, your application, or your company receive an email notification
- The weekly and monthly Open Exchange digests on OEX and the Developers Community will include a note on your release

How to make a new release

Open your published application page and click Settings -> Edit:

Make changes to the description or tags if the new release brings such changes, and click Save. Save it even if you don't have any changes to the app's properties. Then click 'Send for Approval' to update the version and submit release notes:

You will see a window with the version number and release notes. We automatically bump the minor version of the current version number, but it's up to you which version to release, or even whether to change the version number at all.

Release notes support Markdown, so prepare the markdown text in any editor that supports it (e.g. VS Code) and copy-and-paste it here. Then click the Send button:

The markdown I submitted here:

```markdown
## InterSystems IRIS docker image update

In this release I updated an [InterSystems Docker](https://hub.docker.com/publishers/intersystems) image with the new 2019.4 release
```

Once the version is approved, the release notes will be sent to all your subscribers and published on the News page:

Please submit your comments below if you have any questions, and also submit suggestions and bug reports here. Make new releases of your InterSystems applications on Open Exchange and stay tuned!
Announcement
Jamie Kantor · Jan 14, 2020

InterSystems IRIS Core Solutions Developer Specialist Exam and Caché Developers

Hi, there, everyone. We here in the certification team have been getting some questions about Caché developers taking the InterSystems IRIS Core Solutions Developer exam. I thought now would be a good time to clear up some doubts the community may have. Even if you haven’t been working yet in InterSystems IRIS, the exam may well suit you already if you currently have experience in Caché. By looking at the Exam Details, you’ll see that there is only one IRIS-specific topic. Our exam designers did that on purpose because we knew that many of our partners and customers were considering or in the process of moving to IRIS as we developed the exam. We knew that most of our developers wouldn’t benefit from an exam that focused solely on new IRIS features. So, almost all of the exam is based on functionality you may already be familiar with. However, this isn’t a Caché exam because the exam topics represent how a developer would use InterSystems IRIS today. Those of you who have been using InterSystems technology for a longer time will know that InterSystems has evolved a rich selection of features that developers can choose from. So, when we worked with our internal and community experts to design the exam, we made sure to select the IRIS areas and features that favor contemporary application development. That wasn’t an easy task because we know that many InterSystems developers have their favorite approaches to application development or their team may already have pre-established coding practices that cause them to use InterSystems technologies in a very specific way. In any case, we took these (and several other) ideas into consideration and think we got the balance right. So, if you are a Caché developer and you want to know if the InterSystems IRIS Core Solutions Developer Specialist exam is right for you, we invite you to read the Exam Content and Topics in the Exam Details. 
We also strongly encourage you to review the sample questions to see the style and approaches to the topics. You might want to look up some of the listed features in our documentation as well. Finally, our certification team is here to answer any questions you may have about this or any other exam at certification@intersystems.com Thanks and Best Regards, James Kantor - Certification Manager, InterSystems