Mark Bolinsky · Dec 5, 2016
Enterprises need to grow and manage their global computing infrastructures rapidly and efficiently while simultaneously optimizing and managing capital costs and expenses. Amazon Web Services (AWS) and Elastic Compute Cloud (EC2) computing and storage services meet the needs of the most demanding Caché-based applications by providing a highly robust global computing infrastructure. The Amazon EC2 infrastructure enables companies to rapidly provision compute capacity and/or quickly and flexibly extend their existing on-premises infrastructure into the cloud. AWS provides a rich set of services and robust, enterprise-grade mechanisms for security, networking, computation, and storage.
At the heart of AWS is Amazon EC2, a cloud computing infrastructure that supports a variety of operating systems and machine configurations (e.g., CPU, RAM, network). AWS provides preconfigured virtual machine (VM) images known as Amazon Machine Images, or AMIs, with guest operating systems including various Linux® and Windows distributions and versions. These images may include additional software and serve as the basis for virtualized instances running in AWS. You can use these AMIs as starting points to instantiate and install or configure additional software, data, and more to create application- or workload-specific AMIs.
Care must be taken, as with any platform or deployment model, to ensure all aspects of an application environment are considered, such as performance, availability, operations, and management procedures.
Specifics of each of the following areas will be covered in this document.
Network setup and configuration. This section covers the setup of the network for Caché-based applications within AWS, including subnets to support the logical server groups for different layers and roles within the reference architecture.
Server setup and configuration. This section covers the services and resources involved in the design of the various servers for each layer. It also includes the architecture for high availability across availability zones.
Security. This section discusses security mechanisms in AWS, including how to configure the instance and network security to enable authorized access to the overall solution as well as between layers and instances.
Deployment and management. This section provides details on packaging, deployment, monitoring, and management.
Architecture and Deployment Scenarios
This document provides several reference architectures within AWS as examples for providing robust, well-performing, and highly available applications based on InterSystems technologies including Caché, Ensemble, HealthShare, TrakCare, and associated embedded technologies such as DeepSee, iKnow, CSP, Zen and Zen Mojo.
To understand how Caché and associated components can be hosted on AWS, let’s first review the architecture and components of a typical Caché deployment and explore some common scenarios and topologies.
Caché Architecture Review
The InterSystems data platform continuously evolves to provide an advanced database management system and rapid application development environment for breakthroughs in processing and analyzing complex data models and developing Web and mobile applications.
This is a new generation of database technology that provides multiple modes of data access. Data is only described once in a single integrated data dictionary and is instantly available using object access, high-performance SQL, and powerful multidimensional access – all of which can simultaneously access the same data.
The available Caché high-level architecture component tiers and services are illustrated in Figure-1. These general tiers also apply to the InterSystems TrakCare and HealthShare products.
Figure-1: High-level component tiers
Common Deployment Scenarios
Numerous deployment combinations are possible; however, this document covers two scenarios: a hybrid model and a completely cloud-hosted model.
Hybrid Model
In this scenario, a company wants to leverage both on-premises enterprise resources and AWS EC2 resources for either disaster recovery, internal maintenance contingency, re-platforming initiatives, or short/long term capacity augmentation when needed. This model can offer a high level of availability for business continuity and disaster recovery for a failover mirror member set on-premises.
Connectivity for this scenario relies on a VPN tunnel between the on-premises deployment and the AWS availability zone(s) to present AWS resources as an extension to the enterprise's data center. Other connectivity methods exist, such as AWS Direct Connect; however, they are not covered in this document. Further details about AWS Direct Connect can be found here.
Details for setting up this example Amazon Virtual Private Cloud (VPC) to support disaster recovery of your on-premises data center can be found here.
Figure-2: Hybrid model using AWS VPC for disaster recovery of on-premises
The above example shows a failover mirror pair operating within your on-premises data center with a VPN connection to your AWS VPC. The VPC illustrated provides multiple subnets in dual availability zones in a given AWS region. There are two Disaster Recovery (DR) Async mirror members (one in each availability zone) to provide resiliency.
Cloud Hosted Model
In this scenario, your Caché-based application, including both the data and presentation layers, is kept completely in the AWS cloud using multiple availability zones within a single AWS region. The same VPN tunnel, AWS Direct Connect, and even pure Internet connectivity models are available.
Figure-3: Cloud hosted model supporting full production workload
The above example in Figure-3 illustrates a deployment model for supporting an entire production deployment of your application in your VPC. This model leverages dual availability zones with synchronous failover mirroring between the availability zones along with load balanced web servers and associated application servers as ECP clients. Each tier is isolated in a separate security group for network security controls. IP addresses and port ranges are only opened as required based on your application’s needs.
Storage and Compute Resources
Storage
There are multiple types of storage options available. For the purpose of this reference architecture Amazon Elastic Block Store (Amazon EBS) and Amazon EC2 Instance Store (also called ephemeral drives) volumes are discussed for several possible use cases. Additional details of the various storage options can be found here and here.
Elastic Block Store (EBS)
EBS provides durable block-level storage for use with Amazon EC2 instances (virtual machines). EBS volumes can be formatted and mounted as traditional file systems in either Linux or Windows. Most importantly, the volumes are off-instance storage that persists independently from the running life of a single Amazon EC2 instance, which is important for database systems.
In addition, Amazon EBS provides the ability to create point-in-time snapshots of volumes, which are persisted in Amazon S3. These snapshots can be used as the starting point for new Amazon EBS volumes, and to protect data for long-term durability. The same snapshot can be used to instantiate as many volumes as you want. These snapshots can be copied across AWS regions, making it easier to leverage multiple AWS regions for geographical expansion, data center migration, and disaster recovery. Sizes for Amazon EBS volumes range from 1 GB to 16 TB, and are allocated in 1 GB increments.
Within Amazon EBS there are three different volume types: Magnetic Volumes, General Purpose (SSD), and Provisioned IOPS (SSD). The following subsections provide a short description of each.
Magnetic Volumes
Magnetic volumes offer cost-effective storage for applications with moderate or bursting I/O requirements. Magnetic volumes are designed to deliver approximately 100 input/output operations per second (IOPS) on average, with a best effort ability to burst to hundreds of IOPS. Magnetic volumes are also well-suited for use as boot volumes, where the burst capability provides fast instance startup times.
General Purpose (SSD)
General Purpose (SSD) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies, the ability to burst to 3,000 IOPS for extended periods of time, and a baseline performance of 3 IOPS/GB up to a maximum of 10,000 IOPS (at 3,334 GB). General Purpose (SSD) volumes can range in size from 1 GB to 16 TB.
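As a quick check of the baseline formula above, the following illustrative sketch (not an AWS API, just the arithmetic) computes the General Purpose (SSD) baseline IOPS for a given volume size:

```python
def gp2_baseline_iops(size_gb):
    """Baseline IOPS for a General Purpose (SSD) volume:
    3 IOPS per GB, capped at 10,000 IOPS (reached at ~3,334 GB)."""
    return min(3 * size_gb, 10000)

print(gp2_baseline_iops(100))    # 300
print(gp2_baseline_iops(3334))   # 10000 (the cap)
```

Note how the 10,000 IOPS ceiling is reached at 3,334 GB, matching the figures quoted above.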
Provisioned IOPS (SSD)
Provisioned IOPS (SSD) volumes are designed to deliver predictable high performance for I/O-intensive workloads, such as database workloads that are sensitive to storage performance and consistency in random access I/O throughput. You specify an IOPS rate when creating a volume, and then Amazon EBS delivers within 10 percent of the provisioned IOPS performance 99.9 percent of the time over a given year. A Provisioned IOPS (SSD) volume can range in size from 4 GB to 16 TB, and you can provision up to 20,000 IOPS per volume. The ratio of IOPS provisioned to the volume size requested can be a maximum of 30; for example, a volume with 3,000 IOPS must be at least 100 GB in size. Provisioned IOPS (SSD) volumes have a throughput limit of 256 KB/s for each IOPS provisioned, up to a maximum of 320 MB/second (at 1,280 IOPS).
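The 30:1 ratio and the size and IOPS limits described above can be captured in a small validation helper. This is an illustrative sketch; the function name is an assumption, not an AWS API:

```python
def validate_piops_volume(size_gb, iops):
    """Check a Provisioned IOPS (SSD) request against the limits above:
    4 GB to 16 TB volume size, up to 20,000 IOPS per volume, and at
    most 30 IOPS per GB of requested size."""
    if not 4 <= size_gb <= 16 * 1024:
        return False          # outside the 4 GB - 16 TB range
    if iops > 20000:
        return False          # above the per-volume IOPS ceiling
    return iops <= 30 * size_gb  # enforce the 30:1 IOPS-to-GB ratio

print(validate_piops_volume(100, 3000))  # True: 3,000 IOPS needs >= 100 GB
print(validate_piops_volume(50, 3000))   # False: exceeds the 30:1 ratio
```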
The architectures discussed in this document use EBS volumes, as these are more suited for production workloads that require predictable low-latency IOPS and throughput. Care must be taken when selecting a particular VM type, because not all EC2 instance types can be launched as EBS-optimized instances.
Note: Because Amazon EBS volumes are network-attached devices, other network I/O performed by an Amazon EC2 instance, and also the total load on the shared network, can affect the performance of individual Amazon EBS volumes. To allow your Amazon EC2 instances to fully utilize the Provisioned IOPS on an Amazon EBS volume, you can launch selected Amazon EC2 instance types as Amazon EBS–optimized instances.
Details about EBS volumes can be found here.
EC2 Instance Storage (ephemeral drives)
EC2 instance storage consists of a preconfigured and pre-attached block of disk storage on the same physical server that hosts your operating Amazon EC2 instance. The amount of the disk storage provided varies by Amazon EC2 instance type. In the Amazon EC2 instance families that provide instance storage, larger instances tend to provide both more and larger instance store volumes.
There are storage-optimized (I2) and dense-storage (D2) Amazon EC2 instance families that provide special-purpose instance storage targeted at specific use cases. For example, I2 instances provide very fast SSD-backed instance storage capable of supporting over 365,000 random read IOPS and 315,000 write IOPS, and offer attractive pricing models.
Unlike EBS volumes, instance storage is not permanent: it can be used only during the instance's lifetime and cannot be detached or attached to another instance. Instance storage is meant for temporary storage of information that is continually changing. In the realm of InterSystems technologies and products, use cases such as Ensemble or Health Connect acting as an Enterprise Service Bus (ESB), application servers using Enterprise Cache Protocol (ECP), or web servers with the CSP Gateway are great fits for this type of storage and for storage-optimized instance types, along with provisioning and automation tools to streamline their effectiveness and support elasticity.
Details about Instance store volumes can be found here.
Compute
EC2 Instances
There are numerous instance types available that are optimized for various use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacities allowing for countless combinations to right-size the resource requirements for your application.
For the purpose of this document, General Purpose M4 Amazon EC2 instance types will be referenced as a means to right-size an environment; these instances provide EBS volume capabilities and optimizations. Alternatives are possible based on your application's capacity requirements and pricing models.
M4 instances are the latest generation of General Purpose instances. This family provides a balance of compute, memory, and network resources, and it is a good choice for many applications. Capacities range from 2 to 64 virtual CPUs and 8 to 256 GB of memory with corresponding dedicated EBS bandwidth.
In addition to the individual instance types, there are also tiered classifications such as Dedicated Hosts, Spot instances, Reserved instances, and Dedicated instances each with varying degrees of pricing, performance, and isolation.
Confirm availability and details of the currently available instances here.
Availability and Operations
Web/App Server Load Balancing
External and internal load balanced web servers may be required for your Caché-based application. External load balancers are used for access over the Internet or WAN (VPN or Direct Connect), and internal load balancers are potentially used for internal traffic. AWS Elastic Load Balancing provides two types of load balancers: the Application Load Balancer and the Classic Load Balancer.
Classic Load Balancer
The Classic Load Balancer routes traffic based on application or network level information and is ideal for simple load balancing of traffic across multiple EC2 instances where high availability, automatic scaling, and robust security are required. The specific details and features can be found here.
Application Load Balancer
An Application Load Balancer is a load balancing option for the Elastic Load Balancing service that operates at the application layer and allows you to define routing rules based on content across multiple services or containers running on one or more Amazon EC2 instances. Additionally, there is support for WebSockets and HTTP/2. The specific details and features can be found here.
Example
In the following example, a set of three web servers is defined, with each one in a separate availability zone to provide the highest levels of availability. The web server load balancers must be configured with Sticky Sessions to support the ability to pin user sessions to specific EC2 instances using cookies. Traffic will be routed to the same instances as the user continues to access your application.
The following diagram in Figure-4 illustrates a simple example of the Classic Load Balancer within AWS.
Figure-4: Example of a Classic Load Balancer
Database Mirroring
When deploying Caché-based applications on AWS, providing high availability for the Caché database server requires synchronous database mirroring within the primary AWS region and, depending on your uptime service-level agreement requirements, potentially asynchronous database mirroring to replicate data to a hot standby in a secondary AWS region for disaster recovery.
A database mirror is a logical grouping of two database systems, known as failover members, which are physically independent systems connected only by a network. After arbitrating between the two systems, the mirror automatically designates one of them as the primary system; the other member automatically becomes the backup system. External client workstations or other computers connect to the mirror through the mirror Virtual IP (VIP), which is specified during mirroring configuration. The mirror VIP is automatically bound to an interface on the primary system of the mirror.
Note: In AWS, it is not possible to configure the mirror VIP in the traditional manner, so an alternative solution has been devised. However, mirroring is supported across subnets.
The current recommendation for deploying a database mirror in AWS is to configure three instances (primary, backup, arbiter) in the same VPC across three different availability zones. This ensures that at any given time, AWS will guarantee external connectivity with at least two of these VMs with a 99.95% SLA. This provides adequate isolation and redundancy of the database data itself. Details on AWS EC2 service level agreements can be found here.
There is no hard upper limit on network latency between failover members. The impact of increasing latency differs by application. If the round trip time between the failover members is similar to the disk write service time, no impact is expected. Round trip time may be a concern, however, when the application must wait for data to become durable (sometimes referred to as a journal sync). Details of database mirroring and network latency can be found here.
Virtual IP Address and Automatic Failover
Most IaaS cloud providers lack the ability to provide a Virtual IP (VIP) address, which is typically used in database failover designs. To address this, several of the most commonly used connectivity methods, specifically ECP clients and CSP Gateways, have been enhanced within Caché, Ensemble, and HealthShare to no longer rely on VIP capabilities, making them mirror-aware.
Connectivity methods such as xDBC, direct TCP/IP sockets, or other direct connect protocols still require the use of a VIP. To address those, InterSystems database mirroring technology makes it possible to provide automatic failover for those connectivity methods within AWS, using APIs to interact with an AWS Elastic Load Balancer (ELB) to achieve VIP-like functionality, thus providing a complete and robust high availability design within AWS.
Additionally, AWS has recently introduced a new type of ELB called the Application Load Balancer. This type of load balancer runs at Layer 7, supports content-based routing, and supports applications running in containers. Content-based routing is especially useful for Big Data type projects using a partitioned data or data sharding deployment.
Just as with Virtual IP, this is an abrupt change in network configuration and does not involve any application logic to inform existing clients connected to the failed primary mirror member that a failover is happening. Depending on the nature of the failure, those connections can get terminated as a result of the failure itself, due to application timeout or error, as a result of the new primary forcing the old primary instance down, or due to expiration of the TCP keep-alive timer used by the client.
As a result, users may have to reconnect and log in; your application's design determines this behavior. Details about the various types of available ELB can be found here.
AWS EC2 Instance call-out to AWS Elastic Load Balancer Method
In this model, the ELB can have either a server pool defined with both failover mirror members and potentially DR asynchronous mirror member(s), with only the current primary mirror member active, or a server pool with a single entry for the active mirror member.
Figure-5: API Method to interact with Elastic Load Balancer (internal)
When a mirror member becomes the primary mirror member, an API call is issued from your EC2 instance to the AWS ELB to inform the ELB of the new primary mirror member.
Figure-6: Failover to Mirror Member B using API to Load Balancer
The same model applies for promoting a DR Asynchronous mirror member in the event that both the primary and backup mirror members become unavailable.
Figure-7: Promotion of DR Async mirror to primary using API to Load Balancer
As per standard recommended DR procedure, in Figure-7 above the promotion of the DR member involves a human decision due to the possibility of data loss from asynchronous replication. Once that action is taken, however, no administrative action is required on the ELB: it automatically routes traffic once the API is called during promotion.
API Details
This API to call-out to the AWS load balancer resource is defined in the ^ZMIRROR routine specifically as part of the procedure call:
$$CheckBecomePrimaryOK^ZMIRROR()
Within this procedure, insert whatever API logic or methods you choose to use from the AWS ELB REST API, command line interface, etc. An effective and secure way to interact with the ELB is to use AWS Identity and Access Management (IAM) roles so you don't have to distribute long-term credentials to an EC2 instance. The IAM role supplies temporary permissions that Caché can use to interact with the AWS ELB. Details for using IAM roles assigned to your EC2 instances can be found here.
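Since ^ZMIRROR itself is written in Caché ObjectScript, the sketch below uses Python with boto3-style Classic Load Balancer calls (`register_instances_with_load_balancer` / `deregister_instances_from_load_balancer`) purely to illustrate the call sequence such logic would perform. The stand-in client, function, and identifiers are hypothetical and exist only so the example runs without AWS credentials:

```python
class RecordingElbClient:
    """Stand-in for a boto3 'elb' client so the sketch runs offline;
    it records the calls rather than contacting AWS."""
    def __init__(self):
        self.calls = []

    def deregister_instances_from_load_balancer(self, **kwargs):
        self.calls.append(("deregister", kwargs))

    def register_instances_with_load_balancer(self, **kwargs):
        self.calls.append(("register", kwargs))


def promote_in_elb(elb_client, lb_name, new_primary_id, other_member_ids):
    """Point the load balancer at the new primary mirror member:
    remove the other mirror members, then register the new primary."""
    stale = [{"InstanceId": i} for i in other_member_ids]
    if stale:
        elb_client.deregister_instances_from_load_balancer(
            LoadBalancerName=lb_name, Instances=stale)
    elb_client.register_instances_with_load_balancer(
        LoadBalancerName=lb_name, Instances=[{"InstanceId": new_primary_id}])


elb = RecordingElbClient()
promote_in_elb(elb, "mirror-elb", "i-0primary", ["i-0backup"])
print([name for name, _ in elb.calls])  # ['deregister', 'register']
```

In a real deployment, a genuine boto3 `elb` client (authorized via the instance's IAM role) would replace the recorder.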
AWS Elastic Load Balancer Polling Method
A polling method using the CSP Gateway's mirror_status.cxw page, available in version 2017.1, can be used by the ELB health monitor against each mirror member added to the ELB server pool. Only the primary mirror member will respond 'SUCCESS', thus directing network traffic only to the active primary mirror member.
This method does not require any logic to be added to ^ZMIRROR. Please note that most load-balancing network appliances have a limit on the frequency of running the status check. Typically, the highest frequency is no less than 5 seconds, which is usually acceptable to support most uptime service level agreements.
An HTTP request for the following resource will test the mirror member status of the LOCAL Caché configuration.
/csp/bin/mirror_status.cxw
For all other cases, the path to these mirror status requests should resolve to the appropriate Caché server and namespace using the same hierarchical mechanism as that used for requesting real CSP pages.
Example: To test the Mirror Status of the configuration serving applications in the /csp/user/ path:
/csp/user/mirror_status.cxw
Note: A CSP license is not consumed by invoking a Mirror Status check.
Depending on whether or not the target instance is the active primary member, the Gateway will return one of the following CSP responses:
** Success (Is the Primary Member)
===============================
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 7
SUCCESS
** Failure (Is not the Primary Member)
===============================
HTTP/1.1 503 Service Unavailable
Content-Type: text/plain
Connection: close
Content-Length: 6
FAILED
** Failure (The Caché server does not support the mirror_status.cxw request)
===============================
HTTP/1.1 500 Internal Server Error
Content-Type: text/plain
Connection: close
Content-Length: 6
FAILED
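The ELB health monitor's decision reduces to a simple check on these responses. The sketch below is illustrative Python, not part of the CSP Gateway; it shows how only an HTTP 200 with a SUCCESS body marks the primary:

```python
def is_primary(status_code, body):
    """Interpret a mirror_status.cxw response the way the health monitor
    would: only HTTP 200 with body 'SUCCESS' marks the primary member."""
    return status_code == 200 and body.strip() == "SUCCESS"

# Route traffic only to the member whose health check passes
# (hypothetical member names and responses for illustration).
members = {"mirror-a": (200, "SUCCESS"), "mirror-b": (503, "FAILED")}
active = [name for name, (code, body) in members.items()
          if is_primary(code, body)]
print(active)  # ['mirror-a']
```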
The following figures illustrate the various scenarios of using the polling method.
Figure-8: Polling to all mirror members
As Figure-8 above shows, all mirror members are operational; only the primary mirror member returns "SUCCESS" to the load balancer, so network traffic is directed to that member alone.
Figure-9: Failover to Mirror Member B using polling
The following diagram demonstrates the promotion of DR asynchronous mirror member(s) into the load-balanced pool. This typically assumes the same load-balancing network appliance is servicing all mirror members (geographically split scenarios are covered later in this article).
As per standard recommended DR procedure, the promotion of the DR member involves a human decision due to the possibility of data loss from asynchronous replication. Once that action is taken, however, no administrative action is required on the ELB: it automatically discovers the new primary.
Figure-10: Failover and Promotion of DR Asynchronous Mirror Member using polling
Backup and Restore
There are multiple options available for backup operations. The following three options are viable for your AWS deployment with InterSystems products. The first two options incorporate a snapshot-type procedure, which involves suspending database writes to disk prior to creating the snapshot and then resuming updates once the snapshot is successful. The following high-level steps are taken to create a clean backup using either of the snapshot methods:
Pause writes to the database via database Freeze API call.
Create snapshots of the operating system + data disks.
Resume Caché writes via database Thaw API call.
The backup facility archives the snapshot to the backup location.
Additional steps such as integrity checks can be added on a periodic interval to ensure a clean and consistent backup.
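The sequence above must guarantee that the Thaw call runs even if the snapshot step fails; otherwise database writes remain suspended. A minimal Python sketch of that guarantee, with the InterSystems Freeze and Thaw API invocations stood in by callables (the function name is illustrative, not an InterSystems API):

```python
from contextlib import contextmanager

@contextmanager
def frozen_database(freeze, thaw):
    """Run the snapshot step between Freeze and Thaw, guaranteeing that
    Thaw is always called, even if the snapshot raises an exception."""
    freeze()
    try:
        yield
    finally:
        thaw()

# Demonstration with stand-in callables recording the order of events.
events = []
with frozen_database(lambda: events.append("freeze"),
                     lambda: events.append("thaw")):
    events.append("snapshot")
print(events)  # ['freeze', 'snapshot', 'thaw']
```

In practice the body of the `with` block would trigger the EBS or LVM snapshot, and the callables would invoke the External Freeze/Thaw API.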
The decision on which option to use depends on the operational requirements and policies of your organization. InterSystems is available to discuss the various options in more detail.
EBS Snapshots
EBS snapshots are a fast and efficient way to create a point-in-time snapshot onto highly available, lower-cost Amazon S3 storage. EBS snapshots, along with InterSystems External Freeze and Thaw API capabilities, allow for true 24x7 operational resiliency and assurance of clean regular backups. There are numerous options for automating the process using both AWS-provided services such as Amazon CloudWatch Events and third-party solutions available in the marketplace such as Cloud Ranger or N2W Software Cloud Protection Manager, to name a few.
Additionally, you can programmatically create your own customized backup solution via the use of AWS direct API calls. Details on how to leverage the APIs are available here and here.
Note: InterSystems does not endorse or explicitly validate any of these third party products. Testing and validation is up to the customer.
Logical Volume Manager Snapshots
Alternatively, many of the third-party backup tools available on the market can be used by deploying individual backup agents within the VM itself and leveraging file-level backups in conjunction with Linux Logical Volume Manager (LVM) snapshots or Windows Volume Shadow Copy Service (VSS).
One of the major benefits of this model is the ability to perform file-level restores of Linux- and Windows-based instances. A couple of points to note with this solution: since AWS and most other IaaS cloud providers do not provide tape media, all backup repositories are disk-based for short-term archiving, with the ability to leverage Amazon S3 low-cost storage and eventually Amazon Glacier for long-term retention (LTR). If using this method, it is highly recommended to use a backup product that supports de-duplication technologies to make the most efficient use of disk-based backup repositories.
Some examples of these backup products with cloud support include, but are not limited to: Commvault, EMC Networker, HPE Data Protector, and Veritas Netbackup.
Note: InterSystems does not endorse or explicitly validate any of these third party products. Testing and validation is up to the customer.
Caché Online Backup
For small deployments, the built-in Caché Online Backup facility is also a viable option. The InterSystems database online backup utility backs up data in database files by capturing all blocks in the databases and then writing the output to a sequential file. This proprietary backup mechanism is designed to cause no downtime to users of the production system.
In AWS, after the online backup has finished, the backup output file and all other files in use by the system must be copied to an EC2 instance acting as a file share (CIFS/NFS). This process needs to be scripted and executed within the virtual machine.
Online backup is the entry-level approach for smaller sites wishing to implement a low-cost backup solution. However, as databases increase in size, external backups with snapshot technology are recommended as a best practice, with advantages including backup of external files, faster restore times, and an enterprise-wide view of data and management tools.
Disaster Recovery
When deploying a Caché-based application on AWS, DR resources including network, servers, and storage are recommended to be in a different AWS region, or at a minimum in separate availability zones. The amount of capacity required in the designated DR AWS region depends on your organizational needs. In most cases, 100% of the production capacity is required when operating in DR mode; however, lesser capacity can be provisioned until more is needed, as an elastic model. Lesser capacity can take the form of fewer web and application servers, and potentially a smaller EC2 instance type for the database server; upon promotion, the EBS volumes can be attached to a larger EC2 instance type.
Asynchronous database mirroring is used to continuously replicate to the DR AWS region's EC2 instances. Mirroring uses database transaction journals to replicate updates over a TCP/IP network in a way that has minimal performance impact on the primary system. Configuring journal file compression and encryption is highly recommended for these DR Asynchronous mirror members.
All external clients on the public Internet who wish to access the application will be routed through Amazon Route53 as an added DNS service. Amazon Route53 is used as a switch to direct traffic to the current active data center. Amazon Route53 performs three main functions:
Domain registration – Amazon Route53 lets you register domain names such as example.com.
Domain Name System (DNS) service – Amazon Route53 translates friendly domain names like www.example.com into IP addresses like 192.0.2.1. Amazon Route53 responds to DNS queries using a global network of authoritative DNS servers, which reduces latency.
Health checking – Amazon Route53 sends automated requests over the Internet to your application to verify that it's reachable, available, and functional.
Details of these functions can be found here.
For the purpose of this document DNS Failover and Route53 Health Checking will be discussed. Details of Health Check monitoring and DNS failover can be found here and here.
Route53 works by making regular requests to each endpoint and then verifying the response. If an endpoint fails to provide a valid response, it is no longer included in DNS responses, which instead return an alternative, available endpoint. In this way, user traffic is directed away from failing endpoints and toward endpoints that are available.
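The failover routing decision Route53 applies can be sketched as follows; the function and IP addresses are illustrative placeholders, not part of the Route53 API:

```python
def resolve_failover(primary_healthy, secondary_healthy,
                     primary_ip, secondary_ip):
    """Failover routing: answer DNS queries with the primary record
    while its health check passes, otherwise fall back to the
    secondary record."""
    if primary_healthy:
        return primary_ip
    if secondary_healthy:
        return secondary_ip
    return None  # no healthy endpoint to answer with

print(resolve_failover(True, True, "192.0.2.1", "198.51.100.1"))   # primary
print(resolve_failover(False, True, "192.0.2.1", "198.51.100.1"))  # secondary
```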
Using the above methods, traffic is only allowed to a specific region and a specific mirror member. This is controlled by the endpoint definition, which is the mirror_status.cxw page discussed previously in this article, presented by the InterSystems CSP Gateway. Only the primary mirror member will ever report "SUCCESS" as an HTTP 200 from the health check.
The following diagram demonstrates at a high-level the Failover Routing Policy. Details of this method and others can be found here.
Figure-11: Amazon Route53 Failover Routing Policy
At any given time, only one of the regions will report online based on the endpoint monitoring. This ensures that traffic only flows to one region at a time. There are no added steps needed for failover between the regions, since the endpoint monitoring will detect that the application in the designated primary AWS region is down and that the application is now live in the secondary AWS region. This is because the DR Asynchronous mirror member has been manually promoted to primary, which then allows the CSP Gateway to report HTTP 200 to the health check monitoring.
There are many alternatives to the above described solution, which can be customized based on your organization's operational requirements and service level agreements.
Monitoring
Amazon CloudWatch is available to provide monitoring services for all your AWS cloud resources and your applications. Amazon CloudWatch can be used to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in AWS resources. It can monitor AWS resources such as Amazon EC2 instances, as well as custom metrics generated by your applications and services and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. Details can be found here.
Automated Provisioning
There are numerous tools available on the market today including Terraform, Cloud Forms, Open Stack, and Amazon’s own CloudFormation. Using these and coupling with other tools such as Chef, Puppet, Ansible, and others can provide for the complete Infrastructure-as-Code supporting DevOps or simply bootstrapping your application in a completely automated fashion. Details of Amazon CloudFormation can be found here.
Network Connectivity
Depending on your application’s connectivity requirements, there are multiple connectivity models available: Internet, VPN, or a dedicated link using Amazon Direct Connect. The method to choose depends on the application and user needs. The bandwidth available with each of the three methods varies; it is best to check with your AWS representative or the Amazon Management Console to confirm the connectivity options available in a given region.
Security
Care needs to be taken when deciding to deploy an application in any public IaaS cloud provider. Your organization’s standard security policies, or new ones developed specifically for cloud, should be followed to maintain security compliance of your organization. You will also have to understand data sovereignty, which is relevant when an organization’s data is stored outside of its country and is subject to the laws of the country in which the data resides. Cloud deployments carry the added risk of data now residing outside client data centers and physical security control. The use of InterSystems database and journal encryption for data at rest (databases and journals) and data in flight (network communications), with AES and SSL/TLS encryption respectively, is highly recommended.
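To illustrate the data-at-rest principle generically, the sketch below performs an AES-256 round trip with OpenSSL. This is a conceptual illustration only, not how InterSystems database/journal encryption works internally (the product manages keys itself); the file names and passphrase are placeholders:

```bash
# Conceptual AES-256 round trip: encrypt a file, decrypt it, and verify the contents.
printf 'sensitive record\n' > plain.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -in plain.txt -out enc.bin -pass pass:example-passphrase
openssl enc -aes-256-cbc -pbkdf2 -d -in enc.bin -out dec.txt -pass pass:example-passphrase
cmp plain.txt dec.txt && echo "round trip OK"
```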
As with all encryption key management, proper procedures need to be documented and followed per your organization’s policies to ensure data safety and prevent unwanted data access or a security breach.
Amazon provides extensive documentation and examples to provide for a highly secure operating environment for your Caché based applications. Be sure to review Identity and Access Management (IAM) for the various discussion topics found here.
Architecture Diagram Examples
The diagram below illustrates a typical Caché installation providing high availability in the form of database mirroring (both synchronous failover and DR Asynchronous), application servers using ECP, and multiple load balanced web servers.
TrakCare Example
The following diagram illustrates a typical TrakCare deployment with multiple load balanced webservers, two print servers as ECP clients, and database mirror configuration. The Virtual IP address is only used for connectivity not associated with ECP or the CSP Gateway. The ECP clients and CSP Gateway are mirror-aware and do not require a VIP.
If you are using Direct Connect, there are several options including multiple circuits and multi-region access that can be enabled for disaster recovery scenarios. It is important to work with the telecommunications provider(s) to understand the high availability and disaster recovery scenarios they support.
The sample reference architecture diagram below includes high availability in the active or primary region, and disaster recovery to another AWS region if the primary AWS region is unavailable. Also within this example, the database mirrors contain the TrakCare DB, TrakCare Analytics, and Integration namespace all within that single mirror set.
Figure-12: TrakCare AWS Reference Architecture Diagram – Physical Architecture
In addition, the following diagram shows a more logical view of the architecture, with the associated high-level software products installed and their functional purpose.
Figure-13: TrakCare AWS Reference Architecture Diagram – Logical Architecture
HealthShare Example
The following diagram illustrates a typical HealthShare deployment with multiple load balanced web servers and multiple HealthShare products, including Information Exchange, Patient Index, Personal Community, Health Insight, and Health Connect. Each of those respective products includes a database mirror pair for high availability across multiple availability zones. The Virtual IP address is only used for connectivity not associated with ECP or the CSP Gateway. The CSP Gateways used for web service communications between the HealthShare products are mirror-aware and do not require a VIP.
The sample reference architecture diagram below includes high availability in the active or primary region, and disaster recovery to another AWS region if the primary region is unavailable.
Figure-14: HealthShare AWS Reference Architecture Diagram – Physical Architecture
In addition, the following diagram shows a more logical view of the architecture, with the associated high-level software products installed, connectivity requirements and methods, and the respective functional purpose.
Figure-15: HealthShare AWS Reference Architecture Diagram – Logical Architecture
Announcement
Olga Zavrazhnova · Dec 24, 2019
Hi Community,
Great news for all Global Masters lovers!
Now you can redeem a Certification Voucher for 10,000 points!
Voucher gives you 1 exam attempt for any exam available at the InterSystems exam system. We have a limited edition of 10 vouchers, so don't hesitate to get yours!
Passing the exam allows you to claim the electronic badge that can be embedded in social media accounts to show the world that your InterSystems technology skills are first-rate.
➡️ Learn more about the InterSystems Certification Program here.
NOTE:
Reward is available for Global Masters members of Advocate level and above; InterSystems employees are not eligible to redeem this reward.
Vouchers are non-transferable.
Good for one attempt for any exam in InterSystems exam system.
Can be used for a future exam.
Valid for one year from the redemption date (the Award Date).
Redeem the prize and prove your mastery of our technology! 👍🏼 Already 9 available ;)

How to get this voucher?

@Kejia.Lin This appears to be a reward on Global Masters. You can also find a quick link at the light blue bar on the top of this page. Once you join Global Masters you can get points for interacting with the community and ultimately use these points to claim rewards, such as the one mentioned here.
Announcement
Anastasia Dyubaylo · Feb 22, 2020
Hi Developers,
New video, recorded by @Benjamin.DeBoe, is available on InterSystems Developers YouTube:
⏯ Code in Any Language with InterSystems IRIS
InterSystems Product Manager @Benjamin.DeBoe talks about combining your preferred tools and languages when building your application on InterSystems IRIS Data Platform.
Try InterSystems IRIS: https://www.intersystems.com/try
Enjoy watching the video! 👍🏼
Announcement
Derek Robinson · Feb 21, 2020
I wanted to share each of the first three episodes of our new Data Points podcast with the community here — we previously posted announcements for episodes on IntegratedML and Kubernetes — so here is our episode on InterSystems IRIS as a whole! It was great talking with @jennifer.ames about what sets IRIS apart, some of the best use cases she's seen in her years as a trainer in the field and then as an online content developer, and more. Check it out, and make sure to subscribe at the link above — Episode 4 will be released next week!
Article
Mark Bolinsky · Mar 3, 2020
InterSystems and Intel recently conducted a series of benchmarks combining InterSystems IRIS with 2nd Generation Intel® Xeon® Scalable Processors, also known as “Cascade Lake”, and Intel® Optane™ DC Persistent Memory (DCPMM). The goals of these benchmarks are to demonstrate the performance and scalability capabilities of InterSystems IRIS with Intel’s latest server technologies in various workload settings and server configurations. Along with various benchmark results, three different use-cases of Intel DCPMM with InterSystems IRIS are provided in this report.
Overview
Two separate types of workloads are used to demonstrate performance and scaling – a read-intensive workload and a write-intensive workload. The reason for demonstrating these separately is to show the impact of Intel DCPMM on different use cases specific to increasing database cache efficiency in a read-intensive workload, and increasing write throughput for transaction journals in a write-intensive workload. In both of these use-case scenarios, significant throughput, scalability and performance gains for InterSystems IRIS are achieved.
The read-intensive workload leveraged a 4-socket server and massive long-running analytical queries across a dataset of approximately 1.2TB of total data. With DCPMM in “Memory Mode”, benchmark comparisons yielded a significant reduction in elapsed runtime of approximately six times faster when compared to a previous generation Intel E7v4 series processor with less memory. When comparing like-for-like memory sizes between the E7v4 and the latest server with DCPMM, there was a 20% improvement. This was due to both the increased InterSystems IRIS database cache capability afforded by DCPMM and the latest Intel processor architecture.
The write-intensive workload leverages a 2-socket server and the InterSystems HL7 messaging benchmark, which consists of numerous inbound interfaces; each message undergoes several transformations, and four outbound messages are generated for each inbound message. One of the critical components in sustaining high throughput is the message durability guarantee of IRIS for Health, and transaction journal write performance is crucial in that operation. With DCPMM in “App Direct” mode presenting a DAX XFS file system for transaction journals, this benchmark demonstrated a 60% increase in message throughput.
To summarize the test results and configurations: DCPMM offers significant throughput gains when used in the proper InterSystems IRIS setting and workload. The high-level benefits are increased database cache efficiency and reduced disk IO block reads in read-intensive workloads, and increased write throughput for journals in write-intensive workloads.
In addition, Cascade Lake based servers with DCPMM provide an excellent update path for those looking into refreshing older hardware and improving performance and scaling. InterSystems technology architects are available to help with those discussions and provide advice on suggested configurations for your existing workloads.
READ-INTENSIVE WORKLOAD BENCHMARK
For the read-intensive workload, we used an analytical query benchmark comparing an E7v4 (Broadwell) with 512GiB and 2TiB database cache sizes, against the latest 2nd Generation Intel® Xeon® Scalable Processors (Cascade Lake) with 1TB and 2TB database cache sizes using Intel® Optane™ DC Persistent Memory (DCPMM).
We ran several workloads with varying global buffer sizes to show the impact and performance gain of larger caching. For each configuration iteration we ran a COLD and a WARM run. COLD is where the database cache was not pre-populated with any data. WARM is where the database cache has already been active and populated with data (or at least as much as possible) to reduce physical reads from disk.
Hardware Configuration
We compared an older 4-Socket E7v4 (aka Broadwell) host to a 4-socket Cascade Lake server with DCPMM. This comparison was chosen because it would demonstrate performance gains for existing customers looking for a hardware refresh along with using InterSystems IRIS. In all tests, the same version of InterSystems IRIS was used so that any software optimizations between versions were not a factor.
All servers have the same storage on the same storage array so that disk performance wasn’t a factor in the comparison. The working set is a 1.2TB database. The hardware configurations are shown in Figure-1 with the comparison between each of the 4-socket configurations:
Figure-1: Hardware configurations
| Server #1 Configuration | Server #2 Configuration |
| --- | --- |
| Processors: 4 x E7-8890 v4 @ 2.5GHz | Processors: 4 x Platinum 8280L @ 2.6GHz |
| Memory: 2TiB DRAM | Memory: 3TiB DCPMM + 768GiB DRAM |
| Storage: 16Gbps FC all-flash SAN @ 2TiB | Storage: 16Gbps FC all-flash SAN @ TiB |
| | DCPMM: Memory Mode only |
Benchmark Results and Conclusions
There is a significant reduction in elapsed runtime (approximately 6x) when comparing the 512GiB configuration to either the 1TiB or 2TiB DCPMM buffer pool sizes. In addition, comparing the 2TiB E7v4 DRAM and 2TiB Cascade Lake DCPMM configurations showed a ~20% improvement as well. This 20% gain is believed to be mostly attributable to the new processor architecture and additional processor cores, given that the buffer pool sizes are the same. However, this is still significant in that the 4-socket Cascade Lake server tested had only 24 x 128GiB DCPMM modules installed, and can scale to 12TiB of DCPMM, which is about 4x the memory that E7v4 can support in the same 4-socket server footprint.
The following graphs in figure-2 depict the comparison results. In both graphs, the y axis is elapsed time (lower number is better) comparing the results from the various configurations.
Figure-2: Elapsed time comparison of various configurations
WRITE-INTENSIVE WORKLOAD BENCHMARK
The workload in this benchmark was our HL7v2 messaging workload using all T4 type workloads.
The T4 Workload used a routing engine to route separately modified messages to each of four outbound interfaces. On average, four segments of the inbound message were modified in each transformation (1-to-4 with four transforms). For each inbound message four data transformations were executed, four messages were sent outbound, and five HL7 message objects were created in the database.
Each system is configured with 128 inbound Business Services and 4800 messages sent to each inbound interface for a total of 614,400 inbound messages and 2,457,600 outbound messages.
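The totals follow directly from the interface counts, as a quick sanity check shows:

```bash
# 128 inbound Business Services x 4800 messages each, with 4 outbound
# messages generated per inbound message (per the T4 workload above).
services=128
per_service=4800
inbound=$((services * per_service))
outbound=$((inbound * 4))
echo "inbound=$inbound outbound=$outbound"   # → inbound=614400 outbound=2457600
```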
The measurement of throughput in this benchmark workload is “messages per second”. We are also interested in (and recorded) the journal writes during the benchmark runs because transaction journal throughput and latency are critical components in sustaining high throughput. This directly influences the performance of message durability guarantees of IRIS for Health, and the transaction journal write performance is crucial in that operation. When journal throughput suffers, application processes will block on journal buffer availability.
Hardware Configuration
For the write-intensive workload, we decided to use a 2-socket server. This is a smaller configuration than our previous 4-socket configuration, with only 192GiB of DRAM and 1.5TiB of DCPMM. We compared the workload of Cascade Lake with DCPMM to that of the previous 1st Generation Intel® Xeon® Scalable Processors (Skylake) server. Both servers have locally attached 750GiB Intel® Optane™ SSD DC P4800X drives.
The hardware configurations are shown in Figure-3 with the comparison between each of the 2-socket configurations:
Figure-3: Write intensive workload hardware configurations
| Server #1 Configuration | Server #2 Configuration |
| --- | --- |
| Processors: 2 x Gold 6152 @ 2.1GHz | Processors: 2 x Gold 6252 @ 2.1GHz |
| Memory: 192GiB DRAM | Memory: 1.5TiB DCPMM + 192GiB DRAM |
| Storage: 2 x 750GiB P4800X Optane SSDs | Storage: 2 x 750GiB P4800X Optane SSDs |
| | DCPMM: Memory Mode & App Direct Modes |
Benchmark Results and Conclusions
Test-1: This test ran the T4 workload described above on the Skylake server detailed as Server #1 Configuration in Figure-3. The Skylake server provided a sustained throughput of ~3355 inbound messages per second with a journal file write rate of 2010 journal writes/second.
Test-2: This test ran the same workload on the Cascade Lake server detailed as Server #2 Configuration in Figure-3, and specifically with DCPMM in Memory Mode. This demonstrated a significant improvement of sustained throughput of ~4684 inbound messages per second with a journal file write rate of 2400 journal writes/second. This provided a 39% increase compared to Test-1.
Test-3: This test ran the same workload on the Cascade Lake server detailed as Server #2 Configuration in Figure-3, this time using DCPMM in App Direct Mode but not actually configuring DCPMM to do anything. The purpose was to gauge the performance and throughput of Cascade Lake with DRAM only against Cascade Lake with DCPMM + DRAM. The results were not surprising: there was a gain in throughput without DCPMM being used, albeit a relatively small one. This configuration demonstrated a sustained throughput of ~4845 inbound messages per second with a journal file write rate of 2540 journal writes/second. This is expected behavior because DCPMM has a higher latency compared to DRAM, and with the massive influx of updates there is a penalty to performance. Put another way, there is a <5% reduction in write ingestion workload when using DCPMM in Memory Mode on the same exact server. Additionally, comparing Skylake to Cascade Lake (DRAM only) yielded a 44% increase over the Skylake server in Test-1.
Test-4: This test ran the same workload on the Cascade Lake server detailed as Server #2 configuration in Figure-3, this time using DCPMM in App Direct Mode and using App Direct Mode as DAX XFS mounted for the journal file system. This yielded even more throughput of 5399 inbound messages per second with a journal file write rate of 2630/sec. This demonstrated that DCPMM in App Direct mode for this type of workload is the better use of DCPMM. Comparing these results to the initial Skylake configuration there was a 60% increase in throughput compared to the Skylake server in Test-1.
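The quoted gains can be reproduced from the raw message rates (3355, 4684, 4845, and 5399 messages/second from Tests 1-4):

```bash
# Throughput gain of each Cascade Lake test relative to the Skylake baseline.
base=3355
for rate in 4684 4845 5399; do
  awk -v b="$base" -v t="$rate" \
    'BEGIN { printf "%d msgs/s -> +%.1f%% vs baseline\n", t, (t - b) / b * 100 }'
done
# → 4684 msgs/s -> +39.6% vs baseline
# → 4845 msgs/s -> +44.4% vs baseline
# → 5399 msgs/s -> +60.9% vs baseline
```

(The article's 39%, 44%, and 60% figures round these down.)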
InterSystems IRIS Recommended Intel DCPMM Use Cases
There are several use cases and configurations for which InterSystems IRIS will benefit from using Intel® Optane™ DC Persistent Memory.
Memory Mode
This is ideal for massive database caches for either a single InterSystems IRIS deployment or a large InterSystems IRIS sharded cluster where you want to have much more (or all!) of your database cached into memory. You will want to adhere to a maximum of 8:1 ratio of DCPMM to DRAM as this is important for the “hot memory” to stay in DRAM acting as an L4 cache layer. This is especially important for some shared internal IRIS memory structures such as seize resources and other memory cache lines.
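As a quick sketch of the sizing guideline (capacities here are illustrative), a maximum 8:1 DCPMM:DRAM ratio implies a minimum DRAM size of one eighth of the planned DCPMM capacity:

```bash
# Minimum DRAM implied by the maximum 8:1 DCPMM:DRAM guideline.
dcpmm_gib=3072                    # e.g. 24 x 128GiB DCPMM modules
min_dram_gib=$((dcpmm_gib / 8))
echo "DCPMM=${dcpmm_gib}GiB requires at least ${min_dram_gib}GiB DRAM"   # → 384GiB
```

Note that the 4-socket configuration tested earlier (3TiB DCPMM + 768GiB DRAM) is a 4:1 ratio, comfortably within this guideline.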
App Direct Mode (DAX XFS) – Journal Disk Device
This is ideal for using DCPMM as a disk device for transaction journal files. DCPMM appears to the operating system as a mounted XFS file system to Linux. The benefit of using DAX XFS is this alleviates the PCIe bus overhead and direct memory access from the file system. As demonstrated in the HL7v2 benchmark results, the write latency benefits significantly increased the HL7 messaging throughput. Additionally, the storage is persistent and durable on reboots and power cycles just like a traditional disk device.
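A hedged sketch of how such a journal device might be provisioned on Linux is shown below. The device and mount-point names are assumptions, and the exact steps depend on your hardware and OS; consult Intel and operating system documentation before applying anything like this:

```bash
# Create an fsdax namespace on the App Direct region, which typically exposes
# a /dev/pmemN block device, then build XFS on it and mount with DAX enabled.
ndctl create-namespace --mode=fsdax    # e.g. exposes /dev/pmem0
mkfs.xfs /dev/pmem0
mkdir -p /iris/journals
mount -o dax /dev/pmem0 /iris/journals # journal writes now bypass the page cache
```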
App Direct Mode (DAX XFS) – Journal + Write Image Journal (WIJ) Disk Device
In this use case, this extends the use of App Direct mode to both the transaction journals and the write image journal (WIJ). Both of these files are write-intensive and will certainly benefit from ultra-low latency and persistence.
Dual Mode: Memory + App Direct Modes
When using DCPMM in dual mode, the benefits of DCPMM are extended to allow for both massive database caches and ultra-low latency for the transaction journal and/or write image journal devices. In this use case, DCPMM appears to the operating system both as a mounted XFS file system and as RAM. This is achieved by allocating a percentage of DCPMM as DAX XFS, with the remainder allocated in Memory Mode. As mentioned previously, the DRAM installed will operate as an L4-like cache to the processors.
“Quasi” Dual Mode
To extend the use case models a bit further, there is a “quasi” dual mode for concurrent transactional and analytic workloads (also known as HTAP workloads), where there is a high rate of inbound transactions/updates on the OLTP side alongside an analytical or massive querying need. Here, each InterSystems IRIS node type within an InterSystems IRIS sharded cluster operates with a different DCPMM mode.
In this example, InterSystems IRIS compute nodes handle the massive querying/analytics workload running with DCPMM in Memory Mode, so that they benefit from a massive database cache in the global buffers, while the data nodes run either in dual mode or in App Direct mode with DAX XFS for the transactional workloads.
Conclusion
There are numerous options available for InterSystems IRIS when it comes to infrastructure choices. The application, workload profile, and the business needs drive the infrastructure requirements, and those technology and infrastructure choices influence the success, adoption, and importance of your applications to your business. InterSystems IRIS with 2nd Generation Intel® Xeon® Scalable Processors and Intel® Optane™ DC Persistent Memory provides for groundbreaking levels of scaling and throughput capabilities for your InterSystems IRIS based applications that matter to your business.
Benefits of InterSystems IRIS and Intel DCPMM capable servers include:
Increases memory capacity so that multi-terabyte databases can completely reside in InterSystems IRIS or InterSystems IRIS for Health database cache with DCPMM in Memory Mode. In comparison to reading from storage (disks), this can increase query response performance by up to six times with no code changes, due to InterSystems IRIS's proven memory caching capabilities that take advantage of system memory as it increases in size.
Improves the performance of high-rate data interoperability throughput applications based on InterSystems IRIS and InterSystems IRIS for Health, such as HL7 transformations, by as much as 60% in increased throughput using the same processors and only changing the transaction journal disk from the fastest available NVMe drives to leveraging DCPMM in App Direct mode as a DAX XFS file system. Exploiting both the memory speed data transfers and data persistence is a significant benefit to InterSystems IRIS and InterSystems IRIS for Health.
Augments compute resources where needed for a given workload, whether read-intensive, write-intensive, or both, without over-allocating entire servers just for the sake of one resource component, with DCPMM in dual mode.
InterSystems Technology Architects are available to discuss hardware architectures ideal for your InterSystems IRIS based application.

Great article, Mark!
I have a few notes and questions:
1. Here's a brief comparison of different storage categories:
Intel® Optane™ DC Persistent Memory has read throughput of 6.8 GB/s and write throughput 1.85 GB/s (source).
Intel® Optane™ SSD has read throughput of 2.5 GB/s and write throughput of 2.2 GB/s at (source).
Modern DDR4 RAM has read throughput of ~25 GB/s.
While I certainly see the appeal of DC Persistent Memory if we need more memory than RAM can provide, is it useful on a smaller scale? Say I have a few hundred gigabytes of indices I need to keep in the global buffer and be able to read-access fast. Would plain DDR4 RAM be better? Costs seem comparable, and read throughput of 25 GB/s seems considerably better.
2. What RAM was used in a Server #1 configuration?
3. Why are there different CPUs between servers?
4. Workload link does not work.

6252 supports DCPM, while 6152 does not.

6252 can be used for both DCPM and DRAM configuration.

Hi Eduard,
Thanks for your questions.
1- On small scale I would stay with traditional DRAM. DCPMM becomes beneficial when >1TB of capacity.
2- That was DDR4 DRAM memory in both read-intensive and write-intensive Server #1 configurations. In the read-intensive server configuration it was specifically DDR-2400, and in the write-intensive server configuration it was DDR-2600.
3- There are different CPUs in configuration in the read-intensive workload because this testing is meant to demonstrate upgrade paths from older servers to new technologies and the scalability increases offered in that scenario. The write-intensive workload only used a different server in the first test to compare previous generation to the current generation with DCPMM. Then the three following results demonstrated the differences in performance within the same server - just different DCPMM configurations.
4- Thanks. I will see what happened to the link and correct it.

Correct. Gold 6252 series (aka "Cascade Lake") supports both DCPMM and DRAM. However, keep in mind that when using DCPMM you need to have DRAM, and should adhere to a maximum of an 8:1 ratio of DCPMM:DRAM.
Article
Renan Lourenco · Mar 9, 2020
# InterSystems IRIS for Health ENSDEMO
Yet another basic setup of ENSDEMO content into InterSystems IRIS for Health.
**Make sure you have Docker up and running before starting.**
## Setup
Clone the repository to your desired directory
```bash
git clone https://github.com/OneLastTry/irishealth-ensdemo.git
```
Once the repository is cloned, execute:
**Always make sure you are inside the main directory to execute docker-compose commands.**
```bash
docker-compose build
```
## Run your Container
After building the image you can simply execute the command below and you will be up and running 🚀:
*-d will run the container detached of your command line session*
```bash
docker-compose up -d
```
You can now access the manager portal through http://localhost:9092/csp/sys/%25CSP.Portal.Home.zen
- **Username:** SuperUser
- **Password:** SYS
- **SuperServer port:** 9091
- **Web port:** 9092
- **Namespace:** ENSDEMO

To start a terminal session execute:
```bash
docker exec -it ensdemo iris session iris
```
To start a bash session execute:
```bash
docker exec -it ensdemo /bin/bash
```
Using [InterSystems ObjectScript](https://marketplace.visualstudio.com/items?itemName=daimor.vscode-objectscript) Visual Studio Code extension, you can access the code straight from _vscode_

## Stop your Container
```bash
docker-compose stop
```
## Support to ZPM
```bash
zpm "install irishealth-ensdemo"
```
Nice, Rhenan! And ZPM it, please, too!

Interesting. Is it available for InterSystems IRIS?

Will do soon!

Haven't tested but I would guess yes, I will run some tests changing the version in Dockerfile and post the outcome here.

Hi, here is a similar article about ensdemo for IRIS and IRIS for health.
https://community.intersystems.com/post/install-ensdemo-iris

Works for IRIS4Health

Also available as ZPM module now:

USER>zpm "install irishealth-ensdemo"
[irishealth-ensdemo] Reload START (/usr/irissys/mgr/.modules/USER/irishealth-ensdemo/1.0.0/)
[irishealth-ensdemo] Reload SUCCESS
[irishealth-ensdemo] Module object refreshed.
[irishealth-ensdemo] Validate START
[irishealth-ensdemo] Validate SUCCESS
[irishealth-ensdemo] Compile START
[irishealth-ensdemo] Compile SUCCESS
[irishealth-ensdemo] Activate START
[irishealth-ensdemo] Configure START
[irishealth-ensdemo] Configure SUCCESS
[irishealth-ensdemo] MakeDeployed START
[irishealth-ensdemo] MakeDeployed SUCCESS
[irishealth-ensdemo] Activate SUCCESS

USER>
Here is the set of productions available:
Is there any documentation on what the ens-demo module can do? Unfortunately not as I'd like it to have. Even when ENSDEMO was part of Ensemble information was a bit scattered all over.
If you access the Ensemble documentation and search for "Demo." you can see some of the references I mentioned. (since IRIS does not have ENSDEMO by default, documentation has also been removed) Thanks, @Renan.Lourenco !
Perhaps, we could wrap this part of the documentation as a module too. Could be a nice extension to the app. I like your idea @Evgeny.Shvarov !! How do you envision that, a simple index with easy access like:
DICOM:
Link1
Link2
HL7
Link1
Link2
Or something more elaborated?
Also would that be a separate module altogether or part of the existing? I see that the documentation pages are IRIS CSP classes. So I guess it could work if installed in IRIS. I guess also there is a set of static files (FILECOPY could help).
IMHO, the reasonable approach is to have a separate repo, ensdemo-doc, and a separate module, which will be a dependent module of irishealth-ensdemo
So people could contribute to documentation independently and update it independently too.
I had my bit of fun with documentation before, they are not as straightforward as they appear to be. That's why I thought of having a separate index. I guess you know more about it.
I’d also ping @Dmitry.Maslennikov as he tried to make a ZPM package for the whole documentation.
Article
Peter Steiwer · Mar 6, 2020
InterSystems IRIS Business Intelligence allows you to keep your cubes up to date in multiple ways. This article will cover building vs synchronizing. There are also ways to manually keep cubes up to date, but these are very special cases and almost always cubes are kept current by building or synchronizing.
What is Building?
The build starts by removing all data in the cube. This ensures that the build is starting in a clean state. The build then goes through all records specified by the source class. This may take all records from the source class or it may take a restricted set of records from the source class. As the build goes through the specific records, the data required by the cube is inserted into the cube. Finally, once all of the data has been inserted into the cube, the indices are built. During this process, the cube is not available to be queried. The build can be executed single-threaded or multi-threaded. It can be initiated by both the UI or Terminal. The UI will be multi-threaded by default. Running a build from terminal will default to multi-threaded unless a parameter is passed in. In most cases multi-threaded builds are possible. There are specific cases where it is not possible to perform a multi-threaded build and it must be done single-threaded.
What is Synchronizing?
If a cube's source class is DSTIME Enabled (see documentation), it is able to be synchronized. DSTime allows modifications to the source class to be tracked. When synchronization is called, only the records that have been modified will be inserted, updated, or deleted as needed within the cube. While a synchronize is running, the cube is available to be queried. A Synchronize can only be initiated from Terminal. It can be scheduled in the Cube Manager through the UI, but it can't be directly executed from the UI. By default, synchronize is executed single-threaded, but there is a parameter to initiate the synchronize multi-threaded.
It is always a good idea to build your cube initially; it can then be kept up to date with synchronize if desired.
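As a hedged sketch of what this looks like from a Terminal session (the instance name "iris" and cube name "Patients" are placeholders; %DeepSee.Utils provides the build and synchronize entry points):

```bash
# Full build of the cube (multi-threaded by default):
echo 'Do ##class(%DeepSee.Utils).%BuildCube("Patients") Halt' | iris session iris

# Incremental synchronize (requires DSTIME to be enabled on the source class):
echo 'Do ##class(%DeepSee.Utils).%SynchronizeCube("Patients") Halt' | iris session iris
```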
Recap of differences
| | Build | Synchronize |
| --- | --- | --- |
| Which records are modified? | All | Only records that have changed |
| Available in UI? | Yes | No |
| Multi-threaded? | Yes, by default | Yes, not the default |
| Cube available for query? | No (*1) | Yes |
| Requires source class modification? | No | Yes, DSTIME must be enabled |
Build Updates
(*1) Starting with InterSystems IRIS 2020.1, Selective Build is now an available option while building your cube. This allows the cube to be available for querying while being built selectively. For additional information see Getting Started with Selective Build
Synchronize Updates
Starting with InterSystems IRIS 2021.2, DSTIME has a new "CONDITIONAL" option. This allows implementations to conditionally enable DSTIME for specific sites/installations.

💡 This article is considered an InterSystems Data Platform Best Practice.
Announcement
Evgeny Shvarov · Mar 23, 2020
Hi, participants of the InterSystems IRIS Online Programming Contest!
This is an announcement for the current and all the future participants of online contests.
To win the contest you need to gather the maximum votes of InterSystems Developer Community members.
Below are the few ideas of how to achieve that.
Winner Criteria
First of all, you need to build and submit an application that matches the terms and the winner criteria:
Idea and value - the app makes the world a better place or makes the life of a developer better at least;
Functionality and usability - how much the application/library does, and how well it does it;
The beauty of code - has a readable and qualitative ObjectScript code.
But even if you know your application is great, you need other developers to be sure of it too.
Here are a few ways you can make that happen:
0. Bugs and Documentation
Use the voting week to clean up the code, fix bugs, and write accurate documentation.
1. Article on DC
Write an article on Developers Community that describes how your app works and why this is the best application in the contest.
It works even better if you link the article to the application and vice versa: for example, an article that describes the app, while the app links back to the article on DC (via the Discuss button).
2. Video on YouTube
Record the screencast where you show and pitch how your application works and solves problems.
E.g. you can record the video with QuickTime or another screen-recording app and send it to @Anastasia.Dyubaylo - we'll then publish it on the InterSystems Developers YouTube channel.
3. Social Media
We'll publish announcements of your video and article(s) on DC social media channels: Twitter, Facebook, and LinkedIn.
And we encourage you to advertise your OEX application, article, and video on your own social networks too.
These three recipes will help make your application more visible and thus increase your chances to win!
Good luck and happy coding! Also, we will make posts about your applications in the DC social media channels: Twitter, Facebook, DC Telegram, and LinkedIn. We will do it in the order you submitted the apps: the earlier submitted, the earlier posted. And we will spread the posts across the 5 working days of the week.
Another thing you may want to add to your OEX and GitHub README.md is the Online Contest GitHub shield!
Here is how it looks:
Here is the code you can add to your GitHub README.md:
[](https://openexchange.intersystems.com/contest/current)
Learn more about Github Shields

Hey Developers!
Our contestant @Maks.Atygaev recorded a promo video specially for the IRIS Programming Contest! Please welcome:
⏯ Declarative ObjectScript Promo
Big applause! Great video content! 👏🏼
P.S. This is a prime example of how you can increase your chances of winning a contest.
Let the world know about your cool apps. Don't slow down, and good luck!

Another way to win is to have clear instructions. Often fantastic applications with bad instructions can lose to poor applications with perfect instructions.
Please make sure that the instructions you have in your README.md really work.
It is always helpful to go through your instruction steps yourself before releasing the application, or to ask a colleague to do it.
Good luck!
Announcement
Anastasia Dyubaylo · Mar 24, 2020
Hi Community!
Enjoy watching the new video on InterSystems Developers YouTube:
⏯ InterSystems IRIS and Node.js Overview
InterSystems IRIS™ supports a native API for Node.js that provides direct access to InterSystems IRIS data structures from your Node.js application.
Visit the Node.js QuickStart on the InterSystems learning site for more.
Stay tuned with InterSystems Developers! 👍🏼

These APIs appear to be synchronous, and therefore will not be usable in a standard production Node.js environment where all concurrent users coexist in the same physical process.
This is precisely the reason why QEWD was created - i.e. to allow the safe use of synchronous APIs. But then again, if you use QEWD, you won't need or use the APIs described here.
Announcement
Anastasia Dyubaylo · Jan 20, 2020
Hi Developers,
2019 was a really great year with almost 100 applications uploaded to the InterSystems Open Exchange!
To thank our Best Contributors we have special annual achievement badges in Global Masters Advocacy Hub. This year we introduced 2 new badges for contribution to the InterSystems Open Exchange:
✅ InterSystems Application of the Year 2019
✅ InterSystems Developer of the Year 2019
We're glad to present the most downloaded applications on InterSystems Data Platforms!
Nomination: InterSystems Application of the Year

Given to developers whose applications gathered the maximum number of downloads on InterSystems Open Exchange in the year 2019.

- Gold InterSystems Application of the Year 2019 (1st place): VSCode-ObjectScript by @Maslennikov.Dmitry
- Silver InterSystems Application of the Year 2019 (2nd place): PythonGateway by @Eduard.Lebedyuk
- Bronze InterSystems Application of the Year 2019 (3rd place): iris-history-monitor by @Henrique
- InterSystems Application of the Year 2019 (4th-10th places):
  - WebTerminal by @Nikita.Savchenko7047
  - Design Pattern in Caché Object Script by @Tiago.Ribeiro
  - Caché Monitor by @Andreas.Schneider
  - AnalyzeThis by @Peter.Steiwer
  - A more useFull Object Dump by @Robert.Cemper1003
  - Light weight EXCEL download by @Robert.Cemper1003
  - ObjectScript Class Explorer by @Nikita.Savchenko7047

Nomination: InterSystems Developer of the Year

Given to developers who uploaded the largest number of applications to InterSystems Open Exchange in the year 2019.

- Gold InterSystems Developer of the Year 2019 (1st place): @Robert.Cemper1003
- Silver InterSystems Developer of the Year 2019 (2nd place): @Evgeny.Shvarov, @Eduard.Lebedyuk
- Bronze InterSystems Developer of the Year 2019 (3rd place): @Maslennikov.Dmitry, @David.Crawford, @Otto.Karlinger
- InterSystems Developer of the Year 2019 (4th-10th places): @Peter.Steiwer, @Amir.Samary, @Guillaume.Rongier7183, @Rubens.Silva9155
Congratulations! You are doing such valuable and important work for the whole community!
Thank you all for being part of the InterSystems Community. Share your experience, ask, learn and develop, and be successful with InterSystems!
➡️ See also the Best Articles and the Best Questions on InterSystems Data Platform and the Best Contributors in 2019.
Announcement
Anastasia Dyubaylo · Dec 18, 2019
Hi Community,
The new video from Global Summit 2019 is already on InterSystems Developers YouTube:
⏯ InterSystems IRIS Roadmap: Analytics and AI
This video outlines what's new and what's next for Business Intelligence (BI), Artificial Intelligence (AI), and analytics within InterSystems IRIS. We will present the use cases that we are working to solve, what has been delivered to address those use cases, as well as what we are working on next.
Takeaway: You will gain knowledge of current and future business intelligence and analytics capabilities within InterSystems IRIS.
Presenters:
🗣 @Benjamin.DeBoe, Product Manager, InterSystems
🗣 @tomd, Product Specialist - Machine Learning, InterSystems
🗣 @Carmen.Logue, Product Manager - Analytics and AI, InterSystems
You can find additional materials for this video in this InterSystems Online Learning Course.
Enjoy watching this video! 👍🏼
Article
Timothy Leavitt · Mar 24, 2020
This article will describe processes for running unit tests via the InterSystems Package Manager (aka IPM - see https://openexchange.intersystems.com/package/InterSystems-Package-Manager-1), including test coverage measurement (via https://openexchange.intersystems.com/package/Test-Coverage-Tool).
Unit testing in ObjectScript
There's already great documentation about writing unit tests in ObjectScript, so I won't repeat any of that. You can find the Unit Test tutorial here: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=TUNT_preface
It's best practice to include your unit tests somewhere separate in your source tree, whether it's just "/tests" or something fancier. Within InterSystems, we end up using /internal/testing/unit_tests/ as our de facto standard, which makes sense because tests are internal/non-distributed and there are types of tests other than unit tests, but this might be a bit complex for simple open source projects. You may see this structure in some of our GitHub repos.
From a workflow perspective, this is super easy in VSCode - you just create the directory and put the classes there. With older server-centric approaches to source control (those used in Studio) you'll need to map this package appropriately, and the approach for that varies by source control extension.
From a unit test class naming perspective, my personal preference (and the best practice for my group) is:
UnitTest.<package/class being tested>[.<method/feature being tested>]
For example, if the unit tests cover method Foo in class MyApplication.SomeClass, the unit test class would be named UnitTest.MyApplication.SomeClass.Foo; if the tests cover the class as a whole, it'd just be UnitTest.MyApplication.SomeClass.
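As an illustrative sketch of this naming convention (the class under test, the method, and the expected value are all hypothetical):

```objectscript
/// Hypothetical unit tests for MyApplication.SomeClass.Foo,
/// named per the UnitTest.<class>.<method> convention above.
Class UnitTest.MyApplication.SomeClass.Foo Extends %UnitTest.TestCase
{

Method TestFooDoublesItsInput()
{
    // Assumes MyApplication.SomeClass exposes a classmethod Foo()
    Do $$$AssertEquals(##class(MyApplication.SomeClass).Foo(2), 4, "Foo(2) = 4")
}

}
```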
Unit tests in IPM
Making the InterSystems Package Manager aware of your unit tests is easy! Just add a line to module.xml like the following (taken from https://github.com/timleavitt/ObjectScript-Math/blob/master/module.xml - a fork of @Peter.Steiwer 's excellent math package from the Open Exchange, which I'm using as a simple motivating example):
<Module>
  ...
  <UnitTest Name="tests" Package="UnitTest.Math" Phase="test"/>
</Module>
What this all means:
The unit tests are in the "tests" directory underneath the module's root.
The unit tests are in the "UnitTest.Math" package. This makes sense, because the classes being tested are in the "Math" package.
The unit tests run in the "test" phase in the package lifecycle. (There's also a "verify" phase in which they could run, but that's a story for another day.)
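Putting those pieces together, a minimal module.xml along these lines might look as follows. This is a sketch loosely modeled on the objectscript-math example; the module name, version, and resource names are illustrative, so check your own module's manifest for the exact values:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Export generator="Cache" version="25">
  <Document name="objectscript-math.ZPM">
    <Module>
      <Name>objectscript-math</Name>
      <Version>0.0.4</Version>
      <Packaging>module</Packaging>
      <SourcesRoot>src</SourcesRoot>
      <Resource Name="Math.PKG"/>
      <!-- Unit tests live in ./tests, in package UnitTest.Math,
           and run during the "test" lifecycle phase -->
      <UnitTest Name="tests" Package="UnitTest.Math" Phase="test"/>
    </Module>
  </Document>
</Export>
```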
Running Unit Tests
With unit tests defined as explained above, the package manager provides some really helpful tools for running them. You can still set ^UnitTestRoot, etc. as you usually would with %UnitTest.Manager, but you'll probably find the following options much easier - especially if you're working on several projects in the same environment.
You can try out all of these by cloning the objectscript-math repo listed above and then loading it with zpm "load /path/to/cloned/repo/", or on your own package by replacing "objectscript-math" with your package names (and test names).
To reload the module and then run all the unit tests:
zpm "objectscript-math test"
To just run the unit tests (without reloading):
zpm "objectscript-math test -only"
To just run the unit tests (without reloading) and provide verbose output:
zpm "objectscript-math test -only -verbose"
To just run a particular test suite (meaning a directory of tests - in this case, all the tests in UnitTest/Math/Utils) without reloading, and provide verbose output:
zpm "objectscript-math test -only -verbose -DUnitTest.Suite=UnitTest.Math.Utils"
To just run a particular test case (in this case, UnitTest.Math.Utils.TestValidateRange) without reloading, and provide verbose output:
zpm "objectscript-math test -only -verbose -DUnitTest.Case=UnitTest.Math.Utils.TestValidateRange"
Or, if you're just working out the kinks in a single test method:
zpm "objectscript-math test -only -verbose -DUnitTest.Case=UnitTest.Math.Utils.TestValidateRange -DUnitTest.Method=TestpValueNull"
Test coverage measurement via IPM
So you have some unit tests - but are they any good? Measuring test coverage won't fully answer that question, but it at least helps. I presented on this at Global Summit back in 2018 - see https://youtu.be/nUSeGHwN5pc .
The first thing you'll need to do is install the test coverage package:
zpm "install testcoverage"
Note that this doesn't require IPM to install/run; you can find more information on the Open Exchange: https://openexchange.intersystems.com/package/Test-Coverage-Tool
That said, you can get the most out of the test coverage tool if you're also using IPM.
Before running tests, you need to specify which classes/routines you expect your tests to cover. This is important because, in very large codebases (for example, HealthShare), measuring and collecting test coverage for all of the files in the project may require more memory than your system has. (Specifically, gmheap for the line-by-line monitor, if you're curious.)
The list of files goes in a file named coverage.list within your unit test root; different subdirectories (suites) of unit tests can have their own copy of this to override which classes/routines will be tracked while the test suite is running.
For a simple example with objectscript-math, see: https://github.com/timleavitt/ObjectScript-Math/blob/master/tests/UnitTest/coverage.list ; the user guide for the test coverage tool goes into further details.
To run the unit tests with test coverage measurement enabled, there's just one more argument to add to the command, specifying that TestCoverage.Manager should be used instead of %UnitTest.Manager to run the tests:
zpm "objectscript-math test -only -DUnitTest.ManagerClass=TestCoverage.Manager"

The output (even in non-verbose mode) will include a URL where you can view which lines of your classes/routines were covered by unit tests, as well as some aggregate statistics.
Next Steps
What about automating all of this in CI? What about reporting unit test results and coverage scores/diffs? You can do that too! For a simple example using Docker, Travis CI and codecov.io, see https://github.com/timleavitt/ObjectScript-Math ; I'm planning to write this up in a future article that looks at a few different approaches.

Excellent article Tim! Great description of how people can move the ball forward with the maturity of their development processes :)

Hello @Timothy.Leavitt Thank you for this great article!
I tried to add the "UnitTest" tag to my module.xml, but something went wrong during the publish process:

<UnitTest Name="tests" Package="UnitTest.Isc.JSONFiltering.Services" Phase="test"/>
The tests directory contains a directory tree UnitTest/Isc/JSONFiltering/Services/ with a %UnitTest.TestCase subclass.
Exported 'tests' to /tmp/dirLNgC2s/json-filter-1.2.0/tests/
ERROR #5018: Routine 'tests' does not exist
[json-filter] Package FAILURE - ERROR #5018: Routine 'tests' does not exist
ERROR #5018: Routine 'tests' does not exist
I also tried with the objectscript-math project. This is the output of objectscript-math publish -v:

Exported 'src/cls/UnitTests' to /tmp/dir7J1Fhz/objectscript-math-0.0.4/src/cls/unittests/
ERROR #5018: Routine 'src/cls/UnitTests' does not exist
[objectscript-math] Package FAILURE - ERROR #5018: Routine 'src/cls/UnitTests' does not exist
ERROR #5018: Routine 'src/cls/UnitTests' does not exist
Did I miss something, or is it a package manager issue? Thank you.

Perhaps try Name="/tests" with a leading slash?

Yes, that's it! We can see a dot.
It works fine. Thank you for your help.

@Timothy.Leavitt Do you all still use your Test Coverage Tool at InterSystems? I haven't seen any recent updates to it on the repo, so I'm wondering if you consider it still useful and it's just in a steady, stable state, or whether there are different tactics for test coverage metrics since you published?

@Michael.Davidovich yes we do! It's useful and just in a steady state (although I have a PR in process around some of the recent confusing behavior that's been reported in the community).

Thanks, @Timothy.Leavitt!
For others working through this too, I wanted to sum some points up that I discussed with Tim over PM.
- Tim reiterated the usefulness of the Test Coverage tool and the Cobertura output for finding starting places based on complexity and what are the right blocks to test.
- When it comes to testing persistent data classes, it is indeed tricky but valuable (e.g. data validation steps). Using transactions (TSTART and TROLLBACK) is a good approach for this.
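The transaction approach mentioned above can be sketched roughly as follows. This is an illustrative example, not a prescribed pattern: the persistent class and assertions are hypothetical, and the OnBeforeOneTest/OnAfterOneTest callbacks are assumed to follow the %UnitTest.TestCase signatures.

```objectscript
/// Illustrative test case that wraps each test method in a transaction
/// so any changes to persistent data are rolled back afterwards.
Class UnitTest.MyApp.PersistenceTests Extends %UnitTest.TestCase
{

Method OnBeforeOneTest(testname As %String) As %Status
{
    TSTART   // begin a transaction before each test method
    Quit $$$OK
}

Method OnAfterOneTest(testname As %String) As %Status
{
    TROLLBACK   // discard any data the test created
    Quit $$$OK
}

Method TestSaveValidatesName()
{
    // MyApp.Person is a hypothetical persistent class with a required Name
    Set obj = ##class(MyApp.Person).%New()
    Set obj.Name = ""
    Do $$$AssertStatusNotOK(obj.%Save(), "Empty name is rejected")
}

}
```

The rollback in OnAfterOneTest runs whether the test passed or failed, which keeps test data from leaking between test methods or into the rest of the namespace.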
I also discussed the video from some years ago on the mocking framework. It's an awesome approach, but for me, it depends on retooling classes to fit the framework. I'm not in a place where I want to or can rewrite classes for the sake of testing, however this might be a good approach for others. There may be other open source frameworks for mocking available later.
Hope this helps and encourages more conversation! In a perfect world we'd start with our tests and code from there, but well, the world isn't perfect!

great summary ... thank you!

@Timothy.Leavitt and others: I know this isn't Jenkins support, but I seem to be having trouble allowing the account running Jenkins to get into IRIS. I'm just trying to get this to work locally at the moment. I'm running on Windows through an organizational account, so I created a new local account on the computer, jenkinsUser, which I understand is the 'user' that logs in and runs everything on Jenkins. When I launch IRIS in the build script using . . .
C:\MyPath\bin\irisdb -s C:\MyPath\mgr -U MYNAMESPACE 0<inFile
. . . I can see in the console that it's trying to log in. I turned on O/S authentication for the system and allowed the %System.Login function to use Kerberos. I can launch Terminal from my tray and I'm logged in without a user/password prompt.
I am guessing that IRIS doesn't know about my jenkinsUser local account, so it won't allow that user to use O/S authentication? I'm trying to piece this together in my head. How can I allow this computer user running Jenkins to access IRIS without authentication?
Hope this helps others who are trying to set this up. Not sure if this is right, but I created a new IRIS user and then created delegated access to %Service_Console and included this in my ZAUTHENTICATE routine. Seems to have worked.
Now . . . on to the next problem:
DO ##CLASS(UnitTest.Manager).OutputResultsXml("junit.xml")
^
<CLASS DOES NOT EXIST> *UnitTest.Manager

Please try %UnitTest.Manager

I had to go back . . . that was a custom class and method that was written for the Widgets Direct demo app. Trial and error, folks!
@Timothy.Leavitt your presentation mentioned a custom version of the Cobertura plugin for the scatter plot . . . is that still necessary, or does the current version support it? I'm not sure I see any mention of the custom plugin on the GitHub page.
Otherwise, I seem to be missing something key: I don't have build logic in my script. I suppose I just thought that step was for automation purposes so that the latest code would be compiled on whatever server. I don't have anything like that yet and thought I could just run the test coverage utility, but it's coming up with nothing. I'll keep playing tomorrow, but I'd appreciate anyone's thoughts on this, especially if you've set it up before!
For those following along, I got this to work finally by creating the "coverage.list" file in the unit test root. I tried setting the parameter node "CoverageClasses" but that didn't work (maybe I used $LB wrong).
Still not sure how to get the scatter plot for complexity; as @Timothy.Leavitt mentioned in the presentation, the Cobertura plugin was customized. Any thoughts on that are appreciated!

I think this is it: GitHub - timleavitt/covcomplplot-plugin: Jenkins covcomplplot plugin. It's written by Tim, it's in the plugin library, and it looks like what was in the presentation; however, I have some more digging to do come Monday.
@Michael.Davidovich I was out Friday, so still catching up on all this - glad you were able to figure out coverage.list. That's generally a better way to go for automation than setting a list of classes.
re: the plugin, yes, that's it! There's a GitHub issue that's probably the same here: https://github.com/timleavitt/covcomplplot-plugin/issues/1 - it's back on my radar given what you're seeing.

So I originally installed the scatter plot plugin from the library, not the one from your repo. I uninstalled that and I'm trying to install the one you modified. I'm having a little trouble because it seems I have to download your source, make sure I have a JDK and Maven installed, and package the code into a .hpi file? Does this sound right? I'm getting some issues with the POM file while running 'mvn package'. Is it possible to provide the packaged file for those of us who are not Java-savvy?

For other n00bs like me . . . in GitHub you click the Releases link on the code page and you can find the packaged code.

Edit: I created a separate thread about this so it gets more visibility. The thread can be found here: https://community.intersystems.com/post/test-coverage-coverage-report-not-generating-when-running-unit-tests-zpm
...
Hello,
@Timothy.Leavitt, thanks for the great article! I am facing a slight problem and was wondering if you, or someone else, might have some insight into the matter.
I am running my unit tests in the following way with ZPM, as instructed. They work well and test reports are generated correctly. Test coverage is also measured correctly according to the logs. However, even though I instructed ZPM to generate Cobertura-style coverage reports, it is not generating one. When I run the GenerateReport() method manually, the report is generated correctly.
I am wondering what I am doing wrong. I used the test flags from the ObjectScript-Math repository, but they seem not to work.
Here is the ZPM command I use to run the unit tests:
zpm "common-unit-tests test -only -verbose
-DUnitTest.ManagerClass=TestCoverage.Manager
-DUnitTest.UserParam.CoverageReportClass=TestCoverage.Report.Cobertura.ReportGenerator
-DUnitTest.UserParam.CoverageReportFile=/opt/iris/test/CoverageReports/coverage.xml
-DUnitTest.Suite=Test.UnitTests.Fw
-DUnitTest.JUnitOutput=/opt/iris/test/TestReports/junit.xml
-DUnitTest.FailuresAreFatal=1":1
The test suite runs okay, but coverage reports do not generate. However, when I run these commands stated in the TestCoverage documentation, the reports are generated.
Set reportFile = "/opt/iris/test/CoverageReports/coverage.xml"
Do ##class(TestCoverage.Report.Cobertura.ReportGenerator).GenerateReport(<index>, reportFile)
Here is a short snippet from the logs where you can see that test coverage analysis is run:
Collecting coverage data for Test: .036437 seconds
Test passed
Mapping to class/routine coverage: .041223 seconds
Aggregating coverage data: .019707 seconds
Code coverage: 41.92%
Use the following URL to view the result:
http://192.168.208.2:52773/csp/sys/%25UnitTest.Portal.Indices.cls?Index=19&$NAMESPACE=COMMON
Use the following URL to view test coverage data:
http://IRIS-LOCALDEV:52773/csp/common/TestCoverage.UI.AggregateResultViewer.cls?Index=17
All PASSED
[COMMON|common-unit-tests] Test SUCCESS
What am I doing wrong?
Thank you, and have a good day!

Kari Vatjus-Anttila

%UnitTest mavens may be interested in this announcement:
https://community.intersystems.com/post/intersystems-testing-manager-new-vs-code-extension-unittest-framework

Hello @Timothy.Leavitt
Is there a way to ensure that code sending messages through a BusinessService or BusinessProcess can be fully tracked? The current issue is that when methods contain "SendRequestSync" or "SendRequestAsync", the code at the receiving end cannot be tracked and included in the test coverage report.
Thank you. Here we are using the mocking framework that we developed (GitHub - GendronAC/InterSystems-UnitTest-Mocking: a mocking framework for use with InterSystems products, written in ObjectScript).

Have a look at the https://github.com/GendronAC/InterSystems-UnitTest-Mocking/blob/master/Src/MockDemo/CCustomPassthroughOperation.cls class. Instead of calling ..SendRequestAsync(...), we call ..ensService.SendRequestAsync(...). Doing so enables us to create expectations, e.g. ..Expect(..ensService.SendRequestAsync(...)).
Here a code sample :
Class Sample.Src.CExampleService Extends Ens.BusinessService
{
/// The type of adapter used to communicate with external systems
Parameter ADAPTER = "Ens.InboundAdapter";
Property TargetConfigName As %String(MAXLEN = 1000);
Parameter SETTINGS = "TargetConfigName:Basic:selector?multiSelect=0&context={Ens.ContextSearch/ProductionItems?targets=1&productionName=@productionId}";
// -- Injected dependencies for unit tests
Property ensService As Ens.BusinessService [ Private ];
/// initialize Business Host object
Method %OnNew(
pConfigName As %String,
ensService As Ens.BusinessService = {$This}) As %Status
{
set ..ensService = ensService
return ##super(pConfigName)
}
/// Override this method to process incoming data. Do not call SendRequestSync/Async() from outside this method (e.g. in a SOAP Service or a CSP page).
Method OnProcessInput(
pInput As %RegisteredObject,
Output pOutput As %RegisteredObject,
ByRef pHint As %String) As %Status
{
set output = ##class(Ens.StringContainer).%New("Blabla")
return ..ensService.SendRequestAsync(..TargetConfigName, output)
}
}
Import Sample.Src
Class Sample.Test.CTestExampleService Extends Tests.Fw.CUnitTestBase
{
Property exampleService As CExampleService [ Private ];
Property ensService As Ens.BusinessService [ Private ];
ClassMethod RunTests()
{
do ##super()
}
Method OnBeforeOneTest(testName As %String) As %Status
{
set ..ensService = ..CreateMock()
set ..exampleService = ##class(CExampleService).%New("Unit test", ..ensService)
set ..exampleService.TargetConfigName = "Some test target"
return ##super(testName)
}
// -- OnProcessInput tests --
Method TestOnProcessInput()
{
do ..Expect(..ensService.SendRequestAsync("Some test target",
..NotNullObject(##class(Ens.StringContainer).%ClassName(1)))
).AndReturn($$$OK)
do ..ReplayAllMocks()
do $$$AssertStatusOK(..exampleService.OnProcessInput())
do ..VerifyAllMocks()
}
Method TestOnProcessInputFailure()
{
do ..Expect(..ensService.SendRequestAsync("Some test target",
..NotNullObject(##class(Ens.StringContainer).%ClassName(1)))
).AndReturn($$$ERROR($$$GeneralError, "Some error"))
do ..ReplayAllMocks()
do $$$AssertStatusNotOK(..exampleService.OnProcessInput())
do ..VerifyAllMocks()
}
}
The answer about mocking is great.
At the TestCoverage level, by default the tool tracks coverage for the current process only. This prevents noise / pollution of stats from other concurrent use of the system. You can override this (see readme at https://github.com/intersystems/TestCoverage - set tPidList to an empty string), but there are sometimes issues with the line-by-line monitor if you do; #14 has a bit more info on this.
Note - question also posted/answered at https://github.com/intersystems/TestCoverage/issues/33
Announcement
Anastasia Dyubaylo · Apr 6, 2020
The latest InterSystems IRIS release (v2020.1) makes it even easier for you to build high-performance, machine learning-enabled applications to streamline your digital transformation initiatives.
Join this webinar to learn about what's new in InterSystems IRIS 2020.1, including:
Machine learning and analytics
Integration and healthcare interoperability enhancements
Ease of use for developers
Even higher performance
And more...
Speakers:
🗣 @Jeffrey.Fried, Director, Product Management - Data Platforms, InterSystems
🗣 @Joseph.Lichtenberg, Director, Product Marketing, InterSystems IRIS
Date: Tuesday, April 7, 2020
Time: 10:00 a.m. - 11:00 a.m. EDT
JOIN THE WEBINAR!

Is a recording of this going to be available?

Yes, it is.

I missed it and entered via registration. JOIN THE WEBINAR!

Hi Developers!
➡️ Please find the webinar recording here.
Enjoy!
Announcement
Anastasia Dyubaylo · Apr 10, 2020
Hi Community!
Enjoy watching the new video on InterSystems Developers YouTube and learn about IntegratedML feature:
⏯ What is IntegratedML in InterSystems IRIS?
This video provides an overview of IntegratedML - the feature of InterSystems IRIS Data Platform that allows developers to implement machine learning directly from the existing SQL environment.
Ready to try InterSystems IRIS? Take our data platform for a spin with the IDE trial experience: Start Coding for Free.
Stay tuned! 👍🏼 If you would like to explore a wider range of topics related to this video including videos and infographics, please check out the IntegratedML Resource Guide.
Enjoy!
Question
Mohamed Hassan Anver · Apr 8, 2020
Hi There,
I have Microsoft Visual Studio Community 2019 installed and tried to set up Entity Framework as per the Using Entity Framework with InterSystems IRIS Data Platform (https://learning.intersystems.com/course/view.php?id=1046) tutorial, but I can't see the ISC data source in MS Visual Studio's Data Source section. Does this mean that MS VS Community 2019 is not supported with Entity Framework?
Hassan Hello @MohamedHassan.Anver,
I think that the tutorial is for EF 6, which is designed for the .NET Framework. MS is no longer promoting EF 6; right now MS's goal is EF Core (check this: https://docs.microsoft.com/es-es/ef/efcore-and-ef6/ ), and it is the right EF to go with, in my opinion.
However, IRIS does not support EF Core: https://community.intersystems.com/post/how-can-i-use-iris-net-core-entity-framework :-(
Any thoughts, @Robert.Kuszewski?

Thank you @David.Reche for the reply. I wish IRIS would release support for EF Core in the near future. For now we will develop our app based on IRIS and EF.