Question
Elize VdRiet · Jul 27, 2020
My VS Code plugin "InterSystems ObjectScript" upgraded to version 0.8.7, but the server settings are no longer visible in the extension settings and I can no longer connect to our InterSystems server.
What has changed, and how do we set the server connection up? I checked the JSON file and it still has the settings as before, but it is not connecting.
Screenshot of the settings JSON; on the left, a timer icon on the InterSystems icon and a blue line running from left to right at the top as it tries to connect to the server. I downgraded to version 0.8.6, set up the settings again as before, and it connects; there is nothing wrong with my connection to the server, it is the extension.
I don't see a value for the port; could you set it as well and check again?
For next time, the best place to post about issues you face is the issues page right in the repository.
I had that happen with mine, where the port AND namespace completely disappeared from the connection objects. I had to add both back in at the lowest level, then close VS Code and re-open it for the changes to take effect. (Closing the workspace probably would also work.)
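For reference, the older-style connection object in settings.json looks roughly like this (the host, port, namespace and credentials are placeholders, and the exact property names may vary between extension versions):
"objectscript.conn": {
    "active": true,
    "host": "localhost",
    "port": 52773,
    "ns": "USER",
    "username": "_SYSTEM"
}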
(I also installed the IS Server manager extension before checking the ObjectScript tab. I don't think that changed anything, but mentioning it just in case.)
I wonder why we can't edit the connection settings directly from the extension settings themselves anymore, but I guess that's a question for the repo.
Announcement
Steven LeBlanc · May 13, 2020
AWS has officially released their second-generation Arm-based Graviton2 processors and associated Amazon EC2 M6g instance type, which boasts up to 40% better price performance over current generation Intel Xeon based M5 instances.
A few months ago, InterSystems participated in the M6g preview program, and we ran a few benchmarks with InterSystems IRIS that showed compelling results. This led us to support ARM64 architectures for the first time.
Now you can try InterSystems IRIS and InterSystems IRIS for Health on Graviton2-based Amazon EC2 M6g instances for yourselves through the AWS Marketplace!
If you’re unfamiliar with launching an instance through the AWS Marketplace, let’s walk through setting it up. First make sure that you’re logged into your AWS account, then navigate to the AWS Marketplace listing for InterSystems IRIS and click ‘Continue to Subscribe’.
Click ‘Accept Terms’ to subscribe, wait a minute, and then click ‘Continue to Configuration’.
You can accept the defaults and click ‘Continue to Launch’.
On the Launch page, make sure to select an m6g instance type, such as m6g.large.
Scroll down and make sure to select a valid key pair, or follow the link to create a new one if you don’t already have one. Then select Launch.
Now you can navigate to the EC2 console to access your new instance.
You can name it and find its public IP address below:
Then SSH into the instance using your private key and the host name (ubuntu@<Public-IP-Address>). Here I’m using PuTTY; make sure you point to your private key file (.ppk for PuTTY, under SSH > Auth).
For other SSH clients and additional information, please refer to the Usage Instructions copied here:
Usage Instructions for IRIS 2020.1(Arm64) Community Edition
Getting Started:
- SSH into the Ubuntu EC2 instance following the instructions here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html
- The default user is "ubuntu"
- Make note of the connection information provided in the connection message.
- Start by resetting the IRIS password: "$iris password"
- Connect your IDE - see: https://www.intersystems.com/developers/
- Learn more with a Quickstart - https://learning.intersystems.com/course/view.php?id=1055&ssoPass=1
- Enter the IRIS terminal directly with $docker exec -it iris iris session iris
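Putting those steps together, a minimal session looks roughly like this (the key file name and public IP address are placeholders for your own values):
ssh -i my-key.pem ubuntu@<Public-IP-Address>        # connect as the default "ubuntu" user
docker exec -it iris iris session iris              # open an IRIS terminal inside the running container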
Additional Resources:
Getting Started with InterSystems IRIS - https://gettingstarted.intersystems.com/
InterSystems strives to provide customers with platform freedom of choice. We at InterSystems are very excited to see the performance gains and cost savings that AWS Graviton2 processors can provide to InterSystems IRIS customers. We anticipate that these combined benefits will drive significant adoption of Arm-based platforms among IRIS customers, and we’re thrilled to support InterSystems IRIS running on AWS Graviton2-based M6g EC2 instances! The Community Edition images for IRIS and IRIS for Health for ARM64 are now available in the Docker Store. Try them out!
InterSystems IRIS:
docker pull store/intersystems/iris-community-arm64:2020.2.0.211.0
InterSystems IRIS for Health:
docker pull store/intersystems/irishealth-community-arm64:2020.2.0.211.0
Excellent news! Thanks @Steven.LeBlanc and congratulations!
Announcement
Evgeny Shvarov · Dec 16, 2018
Hi Community!
We are trying a new approach to InterSystems Developers YouTube videos called "Coding Talks"!
A Coding Talk is a short video in which a developer demonstrates a particular feature or functionality of InterSystems Data Platforms that he or she uses in coding. The typical format: the presenter's face on one side and an editor with ObjectScript.
Check out this video I made while participating in Advent of Code 2018 and coding with InterSystems ObjectScript in VS Code.
Coding Advent of Code 2018 Using InterSystems ObjectScript
You're very welcome to share your ideas in the comments of this post!
Question
Rajat Sharma · Nov 9, 2020
I have created $zf commands to encrypt/decrypt a file using GnuPG. They work fine in the Terminal but do not populate the output file when I call them from a BPL process. Any help would be appreciated!!
The Terminal works under your OS user, while a BPL process works under the InterSystems user (typically irisusr).
This type of error is often caused by:
IRIS OS user does not have permissions to access the input and/or output file
IRIS OS user does not have required executables in PATH
IRIS OS user does not have required shared libraries in LD_LIBRARY_PATH
To solve issue (1), create the input and output files in the IRIS temp directory, the path returned by:
write ##class(%File).NormalizeDirectory(##class(%SYS.System).TempDirectory())
To test for issues (2) and (3), add a $zf call that invokes your dependencies and returns a version string or some other trivial output.
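For example, a rough sketch using $ZF(-100) (assuming gpg is the executable your $zf code calls) that runs "gpg --version" as the IRIS OS user and captures the output for inspection:
set dir = ##class(%File).NormalizeDirectory(##class(%SYS.System).TempDirectory())
set out = dir_"gpgversion.txt"
// /SHELL runs the command through the OS shell, /STDOUT redirects its output to a file
// (quote the file path in the keywords string if it contains spaces)
set rc = $zf(-100, "/SHELL /STDOUT="_out, "gpg", "--version")
write "exit code: ", rc, !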
To adjust PATH and LD_LIBRARY_PATH, use the System Management Portal. Here's some info.
I recommend checking environment variables from inside your process with:
write $SYSTEM.Util.GetEnviron(Name)
For quick testing you can adjust environment variables for the current process with this community utility.
Announcement
Shane Nowack · May 23, 2022
Hello IRIS Community,
InterSystems Certification is developing a certification exam for IRIS system administrators and, if you match the exam candidate description given below, we would like you to beta test the exam. The exam will be available for beta testing on June 20-23, 2022 at InterSystems Global Summit 2022, but only for Summit registrants (visit this page to learn more about Certification at GS22) . Beta testing will open for all other interested beta testers on July 18, 2022. However, interested beta testers should sign up now by emailing certification@intersystems.com (see below for more details). The beta testing must be completed by September 30, 2022.
What are my responsibilities as a beta tester?
You will be assigned the exam and will need to take it within one month of the beta release. The exam will be administered in an online proctored environment (live-proctored during Summit), free of charge (the standard fee of $150 per exam is waived for all beta testers), and then the InterSystems Certification Team will perform a careful statistical analysis of all beta test data to set a passing score for the exam. The analysis of the beta test results will take 6-8 weeks, and after the passing score is established, you will receive an email notification from InterSystems Certification informing you of the results. If your score on the exam is at or above the passing score, you will have earned the certification!
Note: Beta test scores are completely confidential.
Exam Details
Exam title: InterSystems IRIS System Administration Specialist
Candidate description: An IT professional who:
installs, manages, and monitors InterSystems IRIS environments,
and ensures data security, integrity, and high availability.
Number of questions: 72
Time allotted to take exam: 2 hours
Recommended preparation:
Classroom course Managing InterSystems Servers or equivalent experience. The online course InterSystems IRIS Management Basics is also recommended, as well as experience searching the Platform Management documentation.
Recommended practical experience:
1-2 years performing system administration, management, and security tasks using InterSystems IRIS data platform version 2019.1 or higher.
Review the set of practice questions found in the PDF file at the bottom of this page
Exam practice questions
The practice questions found in the PDF file at the bottom of this page are provided to familiarize candidates with question formats and approaches.
Exam format
The questions are presented in two formats: multiple choice and multiple response. Several documentation books from the Platform Management section of InterSystems IRIS Documentation will be available during the exam. Please visit the practice question document at the bottom of this page to see the documentation books that will be available during the exam.
DISCLAIMER: Please note this exam has a 2-hour time limit. While several documentation books will be available during the exam, candidates will NOT have time to search the documentation for every question. Thus, completing the recommended preparation before taking the exam, and searching the documentation only when absolutely necessary during the exam, are both strongly encouraged!
System requirements for beta testing
Version 6.1.27.1 or earlier of Questionmark Secure
Adobe Acrobat set as the default PDF viewer on your system
Windows 10 or 11 STRONGLY RECOMMENDED!
MacOS will work, but the ability to search the included documentation is extremely limited on MacOS.
NOTE: We delayed the release of this beta test in an attempt to get the search functionality across all operating systems to behave similarly. Unfortunately, Questionmark, our exam delivery service, has been unable to resolve the search functionality issues. Thus, regardless of your system’s OS, please prepare for your exam attempt by launching this short sample “exam experience” in Questionmark Secure. Completing the sample assessment will take ~10 minutes and will familiarize you with taking an exam in our locked-down browser (Questionmark Secure), the format of how the documentation books will be presented on the exam, and how to troubleshoot any search-related issues you may encounter. PLEASE HAVE A LOOK AT THIS SAMPLE ASSESSMENT BEFORE YOUR EXAM (IT WILL GREATLY REDUCE YOUR EXAM STRESS)!
Exam topics and content
The exam contains question items that cover the areas for the stated role as shown in the KSA (Knowledge, Skills, Abilities) chart immediately below.
Each KSA group (with its description) is listed below, followed by its KSA descriptions and the corresponding target items.
T41: Installs and configures InterSystems IRIS
- Installs and upgrades instances: Installs development, client, server, and custom instances; Performs different security installs of InterSystems IRIS; Identifies structure and contents of folders in the Installation directory; Starts, stops, and lists instances from the command line; Upgrades existing instances
- Configures namespaces, databases, memory, and other system parameters: Creates, views, and deletes namespaces and databases; Identifies contents and characteristics of default databases; Maps and manages globals, routines, and packages; Determines when it is necessary to directly edit the iris.cpf and when it is inadvisable; Determines appropriate size of global buffers and routine buffers, and tunes them as needed; Determines appropriate journal states, database size, and maximum size for database; Determines disk space requirements for installation and operation; Determines memory requirements for installation and operation; Determines appropriate values for generic memory heap settings; Increases lock-table sizes
- Manages licenses: Activates and reviews licenses; Configures license servers
T42: Manages and monitors InterSystems IRIS
- Manages databases: Mounts, dismounts, and expands databases; Compacts, truncates, and defrags databases; Checks database integrity; Manages data; Manages routines
- Manages user and system processes: Performs process operations: suspend, resume, terminate; Inspects processes; Manages process locks; Uses Task Manager to view, schedule, execute, manage, and automate tasks
- Manages journaling: Configures journal settings and locations; Uses journaling; Identifies the importance of enabling the "Freeze on error" option for data integrity; Differentiates between WIJ and journal functions; Uses Journal Profile utility; Restores journals; Purges journal files
- Diagnoses and troubleshoots problems: Accesses and configures system logs; Identifies and interprets errors stored in system logs; Runs IRISHung/Diagnostic Report; Identifies and terminates stuck/looping processes; Inhibits access; Gains emergency access to configuration
- Monitors system: Uses ^PERFMON to monitor system; Determines which monitoring tools are appropriate for different performance issues such as journal profiles, ^BLKCOL, and ^SystemPerformance; Runs the ^SystemPerformance utility; Determines global sizes
T43: Implements system continuity
- Implements mirroring: Identifies requirements to set-up mirroring; Identifies mirror members and describes mirror communication; Configures mirroring; Determines failover possibilities; Adds databases to mirrors
- Implements ECP: Uses ECP; Configures ECP data and application servers; Monitors and controls ECP connections
- Manages backups and recoveries: Plans backup strategies including frequency required, journal file retention, and considers OS level vs InterSystems online backups; Identifies contents included in backups; Uses the FREEZE and THAW APIs for snapshot backups; Verifies backups; Restores system
T44: Implements system security
- Uses audit log to track user and system events: Enables and disables audit events; Views audit entries and audit event properties; Identifies the root cause of common audit events; Manages audit database
- Manages security: Creates users and roles; Assigns roles and privileges; Grants permissions to resources; Enables and disables services; Protects applications and resources within applications; Implements database and data element encryption; Encrypts journals; Manages system-wide security settings; Imports/Exports security settings; Reduces attack surface; Implements two-factor authentication; Identifies the multiple layers involved in configuring and troubleshooting the Web Gateway (including connecting an external web server to InterSystems IRIS)
Interested in participating? Email certification@intersystems.com now!
Announcement
Nermin Kibrislioglu Uysal · Mar 15, 2022
Hello Everyone,
The Certification Team of InterSystems Learning Services has updated exam objectives for our HL7 Interface Specialist certification exam and we need input from our implementation community.
How do I provide my input? We will present you with a list of job tasks, and you will rate them on their importance and other factors.
How much effort is involved? It takes about 30-45 minutes to fill out the survey. You can be anonymous or identify yourself and ask us to get back to you.
How can you access the survey? You can access the survey here
Note: Your answers cannot be saved. Thus, once you start the survey, please do not close your browser until you are finished.
Here's the exam title and the definition:
HL7 Interface Specialist
An IT professional who:
designs, builds and performs basic troubleshooting of HL7 interfaces with InterSystems products
Thank you
Nermin Kibrislioglu Uysal, Certification Exam Developer, InterSystems
Announcement
Anastasia Dyubaylo · Jun 6, 2022
Hey Developers,
A new video is already available on the InterSystems Developers YouTube channel:
⏯ InterSystems HealthShare Analytics Solution: Create & Deliver Real-Time Insight at Scale
Many data-driven organizations need to build high-quality, clean, validated sources of truth. Learn how the InterSystems HealthShare Analytics Solution enables your organization to consolidate and harmonize data at scale from disparate sources in real-time, allowing you to collaborate across all functions and with outside entities while maintaining control of your data.
🗣 Presenter: Fred Azar, Healthcare Analytics Executive, InterSystems
Like and share!
Article
Alberto Fuentes · Apr 5, 2022
You have read about **OAuth2** / **OpenID Connect** but you don't know how to use it? Have you ever needed to implement Single Sign-On (SSO) or secure web services based on tokens? Did you have to add authentication / authorization to your web applications or services and you didn't know how to start?
What about a step by step example where you can set up an authorization server, a client and a resource server? [Here](https://openexchange.intersystems.com/package/workshop-iris-oauth2) you can find an example where you will configure InterSystems IRIS instances to act as each one of these OAuth2 roles.
## A brief introduction
**Authentication** is the process of verifying that users are who they say they are.
**Authorization** is the process of giving those users permission to access resources.
OAuth is an authorization framework. OpenID Connect (OIDC) is an extension to OAuth 2.0 that handles authentication.
In OAuth2 there are different roles:
* Resource owner — Usually a user.
* Resource server — A server that hosts protected data and/or services.
* Client — An application that requests limited access to a resource server (e.g. a web application).
* Authorization server — A server that is responsible for issuing access tokens, with which the client can access the resource server.
OAuth2 uses scopes as a mechanism to limit access. A client can request one or more scopes.
Finally, OAuth2 supports different grant types. Each grant type behaves differently and may be better suited to certain scenarios.
## What can you test in the example?
You can test two different scenarios. One using `Authorization Code` grant type and the other using `Client Credentials`.
You will have 3 InterSystems IRIS instances that you will configure to act as each of the different OAuth2 roles.
### Authorization Code
`Authorization Code` is a grant type suited for web / mobile application scenario.

In the example, you will set up a web application as the client who accesses the protected resources using an access token.

### Client Credentials
`Client Credentials` is a different grant type, which is typically used when a client accesses the resources directly in its own name (and not on behalf of a user).

In the example, you will access the protected resources using Postman as the client.
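If you later want to drive the Client Credentials flow from ObjectScript instead of Postman, a rough sketch could look like the one below. The client name (`demo-client`), scope, SSL configuration and endpoint are placeholders, and the `%SYS.OAuth2` calls are shown as typically used in similar examples rather than as the definitive API:

```
ClassMethod TestClientCredentials()
{
    // request an access token using the Client Credentials grant
    set sc = ##class(%SYS.OAuth2.Authorization).GetAccessTokenClient("demo-client", "scope1", , .error)
    if $$$ISERR(sc) { do $system.Status.DisplayError(sc) quit }

    // attach the token to an HTTP request against the resource server
    set req = ##class(%Net.HttpRequest).%New()
    set req.Server = "resource-server", req.Https = 1, req.SSLConfiguration = "ssl"
    set sc = ##class(%SYS.OAuth2.AccessToken).AddAccessToken(req, "header", "ssl", "demo-client")
    if $$$ISOK(sc) { do req.Get("/api/protected") write req.HttpResponse.Data.Read(), ! }
}
```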

excellent summary / training resource - thank you for taking the time to put it together!
Article
Evgeniy Potapov · Aug 4, 2022
AtScale pulls data from the IRIS database.
The AtScale product forms a virtual OLAP cube on the intermediate layer, which external applications can access using standard SQL and MDX (Multidimensional Expressions) languages. The solution includes three main components.
Design Center is used for designing OLAP cubes, forming links between metadata and dimensions of a virtual cube. Along with the task of designing a data schema, the issues of access policy to certain data and security are also solved here. Since Virtual Cube does not physically store Big Data, ensuring acceptable performance is a serious problem.
Adaptive Cache makes it possible not only to physically cache recent or frequently used data but also to predict what data will be needed soon so that it can be prefetched into the cache.
To connect Logi to AtScale, use the Hive2 JDBC connection type with the org.apache.hive.jdbc.HiveDriver driver.
The advantage of this type of connection is that an unlimited number of cubes can be added to one Catalog. This makes it possible to create complex reports with a large dataset. It is also possible to create a diagram inside Logi based on cubes and to make links between cubes from AtScale, which expands the possibilities of data generation. It is also possible to manually construct SQL queries for cubes. This feature is not used very often, but it can be employed to manually establish relationships or design SQL formulas.
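To illustrate, a Hive2 JDBC connection definition would look roughly like this (the host, port, and catalog name are placeholders for your AtScale environment):
Driver class: org.apache.hive.jdbc.HiveDriver
JDBC URL: jdbc:hive2://<atscale-host>:<port>/<catalog-name>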
Most of the logic for generating a data set is performed on the AtScale side. Logi (InterSystems Report Designer) accepts the generated data sets and visualizes this data.
Sales Insights Application test dataset (AtScale versions >= 2019.2.x)
“Gain insight into sales trends, product performance, and customer details using this sample data based on the AdventureWorks data set.”
It can be downloaded from the link https://downloads.atscale.com/
This is what this data set looks like in InterSystems Reports Designer
The main advantages that we have received from using this bundle are:
Possibility of automatic generation of reports according to the schedule
An option of receiving PDF reports by email
An opportunity to use parameters automatically calculated for the report (dates)
Acceleration of report generation (UDAF, aggregates and caching)
Ability to clone, backup, and quickly propagate AtScale cube changes
Logi functionality extension: some of the comparative parameters were not previously calculated in Logi and DeepSee
Substitution of Logi functionality: calculation of comparative indicators depending on the parameter settings on the AtScale side
Why UDAF is needed for Adaptive Analytics and how to configure it
InterSystems Adaptive Analytics (powered by AtScale) is a powerful addition to IRIS BI. It allows users to get their Dashboards built faster. Also, Developers and Maintainers can quickly switch between data sources. And last but not least are snapshots of Data Cubes versions. They give you the ability to backup data structure logic and rollback if needed.
Some of the functions are ready to use out-of-the-box. However, one important option needs to be configured. I mean an option that gives a boost to Dashboards and Reports speed.
It is called UDAF. If it is not configured, you will not see any optimisation or improvement from the abstraction layer mechanism.
You can get the UDAF distribution from the same place you get the Adaptive Analytics one.
UDAF gives Adaptive Analytics 2 main advantages:
- the ability to store query execution results (they call it Aggregate Tables), so that the next query, using aggregation on data, could take already pre-calculated results from the database.
- the ability to use additional functions (a.k.a User-Defined Aggregate Functions) and data processing algorithms that Adaptive Analytics is forced to store in the data source.
They are stored in the database in separate tables, and Adaptive Analytics can call them by name in auto-generated queries. When Adaptive Analytics uses these functions, the speed of queries increases.
The place where all these tables will be stored is specified when creating a connection to IRIS in the AGGREGATE SCHEMA item. If no such schema exists in the database, it will be created. It is a good idea to store that schema in another database; later I will explain why.
Now let's imagine that IRIS, to which Adaptive Analytics is connected, is intended only for analytics and contains only a copy of data from a running database (so that requests from analytical systems do not load the resources of the main database). Once in a certain time, the data from the main system is copied to our IRIS, overwriting the old data.
In such a situation, at the time of rewriting, all the data recorded by Adaptive Analytics disappears, and we get the following result:
- queries that are trying to access aggregate tables do not find them and fail with an error.
- requests that use the functions stored in the database cannot access them and fail with an error.
- when deleting old paths to aggregate tables, Adaptive Analytics quietly creates new ones, and later the situation repeats itself.
The above described is only one of the possible cases of overwriting data in IRIS but by no means the most common one.
The main problem is that Adaptive Analytics has no fallback mechanism to bypass the UDAF options and issue direct queries when such errors occur; report building and dashboard updates simply fail.
We can try to load the table with functions into the database along with the update manually. However, as mentioned above, the UDAF is not only the data processing functions but also the aggregate tables, and resaving them is also quite problematic.
The solution to the problems described above is the creation of a separate database where Adaptive Analytics will write its service tables. Such a solution is described in the documentation for connecting Adaptive Analytics to IRIS:
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=AADAN#AADAN_config
It's worth mentioning that the schema name you specify after "/<instancePath>/mgr/" must match the schema name you specify as AGGREGATE SCHEMA in Adaptive Analytics. Otherwise, a separate database will be created, but Adaptive Analytics will write data to the schema it created inside your database, ignoring the newly made separate database. The UDAF developers recommend using "AtScale" as the name of such a schema. It is also the name given as an example in the documentation.
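To illustrate, the two settings should line up like this (following the documentation's example, which uses "AtScale" as the schema name):
Database directory: /<instancePath>/mgr/AtScale/
Adaptive Analytics connection: AGGREGATE SCHEMA = AtScale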
In the event that this database becomes corrupted and the data is impossible to recover, you can disable the aggregations in Adaptive Analytics so that they are automatically recreated on the next request.
To do this, in the Aggregates/instances tab, you must deactivate each aggregate. Aggregates are displayed 20 per page, so be sure to visit each page to verify they are all deactivated.
So if you are going to use Adaptive Analytics, please pay attention to that small point. At the start of my work with Adaptive Analytics, this option was set up incorrectly, so I didn't get all the benefits it could give. Also, remember to check your database name: it is case-sensitive both in the settings and in the UDAF installation script.
I think there might be some options which I don't know about at the moment, as well as their influence on the performance of my analytics systems. If you find any, please write about them in the comments.
Thank you for the detailed article @Evgeniy.Potapov
Announcement
Andreas Schneider · Feb 17, 2016
Caché Monitor is a database/SQL tool primarily for InterSystems Caché, but it can also connect to MS SQL Server, MS Access and other databases. Within Caché Monitor's Server Navigator you see all available namespaces on your Caché servers. No need to know the name of the namespace, no need to configure many, many JDBC connections by hand. Just click on the namespace and see all objects like tables, views, classes and more...
There is a beta build available with some new features. A main new feature in this build is called Query Cloud. With this feature you can write SQL statements across multiple Caché servers and namespaces and combine (SQL JOIN!) this data with other data sources like SQL Server, MS Access or simple CSV files. All this is done with zero installation on the server side. More details: http://www.cachemonitor.de/cache-monitor-beta-releases/
Please watch this video to see how it works. The video demonstrates how you can work with CSV files within the Query Cloud and query the data like database tables.
Keep in mind please: all this is done locally and is maybe not the right way to work with very large tables with millions of rows. But it may be the right thing for ad-hoc queries and analysing the data before you go on and export/import your data to combine it in one namespace for analysis purposes.
An evaluation license is attached to this post. It would be great if you make some tests within your environment and play with this feature. I'm very interested in getting feedback (email preferred). Thanks for your time!
Thank you Andreas! I used Caché Monitor many years ago, it is great! The Server Navigator link doesn't work, would you please fix it?
Evgeny, thanks for the kind words and the hint about the broken link!
Andreas
Question
Tom Longmoore · Jun 22, 2016
My manager wants to send a couple of people to one of InterSystems's courses about developing Ensemble productions. I work in a healthcare setting, but my group does not do much work with HL7 interfaces. We mainly use Ensemble to implement custom (non-HL7) interfaces and web services/clients.
With this in mind, which of the two available courses would make the most sense for us - Building Healthcare Productions or Building Business Productions? Has anyone taken one or both and, if so, which would you recommend?
Tom,
It's been a while since I took it but if I recall correctly you would want "Building Business Productions", which is basically the "Building Healthcare Productions" course with a day of HL7 content removed.
HTH,
Ben
Article
Murray Oldfield · Nov 25, 2016
Hyper-Converged Infrastructure (HCI) solutions have been gaining traction for the last few years, with the number of deployments now increasing rapidly. IT decision makers are considering HCI when scoping new deployments or hardware refreshes, especially for applications already virtualised on VMware. Reasons for choosing HCI include: dealing with a single vendor, validated interoperability between all hardware and software components, high performance (especially IO), simple scalability by adding hosts, simplified deployment and simplified management.
This post starts with an introduction for readers who are new to HCI, looking at common features of HCI solutions. I then review configuration choices and recommendations for capacity planning and performance when deploying applications built on InterSystems data platforms, with specific examples for database applications. HCI solutions rely on flash storage for performance, so I also include a section on the characteristics and use cases of selected flash storage options.
Capacity planning and performance recommendations in this post are specific to _VMware vSAN_. However, vSAN is not alone in the growing HCI market; there are other HCI vendors, notably _Nutanix_, which also has an increasing number of deployments. There is a lot of commonality between features no matter which HCI vendor you choose, so I expect the recommendations in this post are broadly relevant. But the best advice in all cases is to discuss the recommendations from this post with HCI vendors, taking into account your application-specific requirements.
[A list of other posts in the InterSystems Data Platforms and performance series is here.](https://community.intersystems.com/post/capacity-planning-and-performance-series-index)
# What is HCI?
Strictly speaking converged solutions have been around for a long time, however in this post I am talking about current HCI solutions for example from [Wikipedia:](https://en.wikipedia.org/wiki/Hyper-converged_infrastructure) "Hyperconvergence moves away from multiple discrete systems that are packaged together and evolve into __software-defined__ intelligent environments that all run in __commodity, off-the-shelf x86 rack servers__...."
## So is HCI a single thing?
No. When talking to vendors you must remember HCI has many permutations; converged and hyper-converged describe a type of architecture, not a specific blueprint or standard. Due to the commodity nature of HCI hardware, the market has multiple vendors differentiating themselves at the software layer and/or with other innovative ways of combining compute, network, storage and management.
Without going down too much of a rat hole here: as an example, solutions labeled HCI can have storage inside the servers in a cluster, or a more traditional configuration with a cluster of servers and separate SAN storage -- possibly from different vendors -- that has also been tested and validated for interoperability and is managed from a single control plane. For capacity and performance planning you must consider that solutions where storage is in an array connected over a SAN fabric (e.g. Fibre Channel or Ethernet) have a different performance profile and requirements than the case where the storage pool is software defined and located inside each of a cluster of server nodes with storage processing on the servers.
## So what is HCI again?
For this post I am focusing on HCI and specifically _VMware vSAN_, where _storage is physically inside the host servers_. In these solutions the HCI software layer enables the internal storage of multiple nodes in a cluster, which also perform the processing, to act like one shared storage system. So another driver of HCI is cost: even though there is a cost for the HCI software, there can also be significant savings compared to solutions using enterprise storage arrays.
>For this post I am talking about solutions where HCI combines compute, memory, storage, network and management software into a cluster of virtualised x86 servers.
## Common HCI characteristics
As mentioned above _VMWare vSAN_ and _Nutanix_ are examples of HCI solutions. Both have similar high level approaches to HCI and are good examples of the format:
- _VMware vSAN_ requires VMware vSphere and is available on multiple vendors' hardware. There are many hardware choices available, but these are strictly dependent on VMware's vSAN Hardware Compatibility List (HCL). Solutions can be purchased prepackaged and preconfigured, for example EMC VxRail, or you can purchase components on the HCL and build your own.
- _Nutanix_ can also be purchased and deployed as an all-in-one solution including hardware in preconfigured blocks with up to four nodes in a 2U appliance. The Nutanix solution is also available as a build-your-own software solution validated on other vendors' hardware.
There are some variations in implementation, but generally speaking HCI have common features that will inform your planning for performance and capacity:
- Virtual Machines (VMs) run on hypervisors such as VMware ESXi but also others including Hyper-V or Nutanix Acropolis Hypervisor (AHV). Nutanix can also be deployed using ESXi.
- Host servers are often combined into blocks of compute, storage and network. For example a 2U Appliance with four nodes.
- Multiple host servers are combined into a cluster for management and availability.
- Storage is tiered, either all-flash or a hybrid with a flash cache tier plus spinning disks as a capacity tier.
- Storage is presented as a pool which is software defined including data placement and policies for capacity, performance and availability.
- Capacity and IO performance are scaled by adding hosts to the cluster.
- Data is written to storage on multiple cluster nodes synchronously so the cluster can tolerate host or component failures without data loss.
- VM availability and load balancing is provided by the hypervisor for example vMotion, VMware HA, and DRS.
As I noted above there are also other HCI solutions with twists on this list, such as support for external storage arrays, storage-only nodes... the list is as long as the list of vendors.
HCI adoption is gathering pace and competition between the vendors is driving innovation and performance improvements. It is also worth noting that HCI is a basic building block for cloud deployment.
# Are InterSystems' products supported on HCI?
It is InterSystems policy and procedure to verify and release InterSystems’ products against processor types and operating systems including when operating systems are virtualised. Please note [InterSystems Advisory: Software Defined Data Centers (SDDC) and Hyper-Converged Infrastructure (HCI)](https://www.intersystems.com/product-alerts-advisories/advisory-software-defined-data-centers-sddc-and-hyper-converged-infrastructure-hci).
For example: Caché 2016.1 running on Red Hat 7.2 operating system on vSAN on x86 hosts is supported.
Note: If you do not write your own applications you must also check your application vendors support policy.
# vSAN Capacity Planning
This section highlights considerations and recommendations for deployment of _VMware vSAN_ for database applications on InterSystems data platforms -- Caché, Ensemble and HealthShare. However you can also use these recommendations as a general list of configuration questions for reviewing with any HCI vendor.
## VM vCPU and memory
As a starting point use the same capacity planning rules for your database VMs' vCPU and memory as you already use for deploying your applications on VMware ESXi with the same processors.
As a refresher for general CPU and memory sizing for Caché a list of other posts in this series is here: [Capacity planning and performance series index.](https://community.intersystems.com/post/capacity-planning-and-performance-series-index)
One of the features of HCI systems is very low storage IO latency and high IOPS capability. You may remember from the 2nd post in this series the [hardware food groups graphic](https://dl.dropboxusercontent.com/u/25822386/InterSystems/performance2/foodGroups.png) showing CPU, memory, storage and network. I pointed out that these components are all related to each other and changes to one component can affect another, sometimes with unexpected consequences. For example I have seen a case of fixing a particularly bad IO bottleneck in a storage array caused CPU usage to jump to 100% resulting in even worse user experience as the system was suddenly free to do more work but did not have the CPU resources to service increased user activity and throughput. This effect is something to bear in mind when you are planning your new systems if your sizing model is based on performance metrics from less performant hardware. Even though you will be upgrading to newer servers with newer processors your database VM activity must be monitored closely in case you need to right-size due to lower latency IO on the new platform.
Also note, as detailed later you will also have to account for software defined storage IO processing when sizing _physical host_ CPU and memory resources.
## Storage capacity planning
To understand storage capacity planning and put database recommendations in context you must first understand some basic differences between vSAN and traditional ESXi storage. I will cover these first then break down all the best practice recommendations for Caché databases.
### vSAN storage model
At the heart of vSAN and HCI in general is software defined storage (SDS). The way data is stored and managed is very different to using a cluster of ESXi servers and a shared storage array. One of the advantages of HCI is that there are no LUNs; instead there are pools of storage that are allocated to VMs as needed, with policies describing capabilities for availability, capacity, and performance per VMDK.
For example; imagine a traditional storage array consisting of shelves of physical disks configured together as various sized disk groups or disk pools with different numbers and/or types of disk depending on performance and availability requirements. Disk groups are then presented as a number of logical disks (storage array volumes or LUNs) which are in turn presented to ESXi hosts as datastores and are formatted as VMFS volumes. VMs are represented as files in the datastores. Database best practice for availability and performance recommends at minimum separate disk groups and LUNs for database (random access), journals (sequential), and any others (such as backups or non-production systems, etc).
vSAN is different; storage from the vSAN is allocated using storage policy-based management (SPBM). Policies can be created using combinations of capabilities, including the following (but there are more):
- Failures To Tolerate (FTT) which dictates the number of redundant copies of data.
- Erasure coding (RAID-5 or RAID-6) for space savings.
- Disk stripes for performance.
- Thick or thin disk provisioning (thin by default on vSAN).
- Others...
VMDKs (individual VM disks) are created from the vSAN storage pool by selecting appropriate policies. So instead of creating disk groups and LUNs on the array with a set of attributes, you define the capabilities of storage as policies in vSAN using SPBM; for example "Database" would be different to "Journal", or whatever others you need. You set the capacity and select the appropriate policy when you create disks for your VM.
Another key concept is that a VM is no longer a set of files in a datastore but is stored as a set of _storage objects_. For example your database VM will be made up of multiple objects and components including the VMDKs, swap, snapshots, etc. vSAN SDS manages all the mechanics of object placement to meet the requirements of the policies you selected.
### Storage tiers and IO performance planning
To ensure high performance there are two tiers of storage:
- Cache tier - Must be high endurance flash.
- Capacity tier - Flash, or spinning disks in hybrid configurations.
As shown in the graphic below storage is divided into tiers and disk groups. In vSAN 6.5 each disk group includes a single cache device and up to seven spinning disks or flash devices. There can be up to five disk groups so possibly up to 35 devices per host. The figure below shows an all-flash vSAN cluster with four hosts, each host has two disk groups each with one NVMe cache disk and three SATA capacity disks.
_Figure 1. vSAN all-flash storage showing tiers and disk groups_
When considering how to populate tiers and the _type_ of flash for cache and capacity tiers you must consider the IO path; for the lowest latency and maximum performance, writes go to the cache tier and then software coalesces and de-stages the writes to the capacity tier. Cache use depends on the deployment model: for example, in vSAN hybrid configurations 30% of the cache tier is write cache, while in all-flash configurations 100% of the cache tier is write cache -- reads come from the low-latency flash capacity tier.
There will be a performance boost using all-flash. With larger capacity and durable flash drives available today the time has come where you should be considering whether you need spinning disks. The business case for flash over spinning disk has been made over recent years and includes much lower cost/IOPS, performance (lower latency), higher reliability (no moving parts to fail, less disks to fail for required IOPS), lower power and heat profile, smaller footprint, and so on. You will also benefit from additional HCI features, for example vSAN will only allow deduplication and compression on all-flash configurations.
- **_Recommendation:_** For best performance and lower TCO consider all-flash.
For best performance the cache tier should have the lowest latency, especially for vSAN as there is only a single cache device per disk group.
- **_Recommendation:_** If possible choose NVMe SSDs for the cache tier although SAS is still OK.
- **_Recommendation:_** Choose high endurance flash devices in the cache tier to handle high I/O.
For SSDs at the capacity tier there is negligible performance difference between SAS and SATA SSDs. You do not need to incur the cost of NVMe SSD at the capacity tier for database applications. However in all cases ensure you are using enterprise class SATA SSDs with features such as power failure protection.
- **_Recommendation:_** Choose high capacity SATA SSDs for capacity tier.
- **_Recommendation:_** Choose enterprise SSDs with power failure protection.
Depending on your timetable, new technologies such as 3D Xpoint with higher IOPS, lower latency, higher capacity and higher durability may be available. There is a breakdown of flash storage at the end of this post.
- **_Recommendation:_** Watch for new technologies to include such as 3D Xpoint for cache AND capacity tier.
As I mentioned above you can have up to five disk groups per host and a disk group is made up of one flash device and up to seven devices at the capacity tier. You could have a single disk group with one flash device and as much capacity as you need, or multiple disk groups per host. There are advantages to having multiple disk groups per host:
- Performance: Having multiple flash devices at the tiers will increase the IOPS available per host.
- Failure domain: Failure of a cache disk impacts the entire disk group, although availability is maintained as vSAN rebuilds automatically.
You will have to balance availability, performance and capacity, but in general having multiple disk groups per host is a good balance.
- **_Recommendation:_** Review storage requirements, consider multiple disk groups per host.
#### What performance should I expect?
A key requirement for good application user experience is low storage latency; the usual recommendation is that database read IO latency should be below 10ms. [Refer to the table from Part 6 of this series here for details.](https://community.intersystems.com/post/data-platforms-and-performance-part-6-cach%C3%A9-storage-io-profile)
For Caché database workloads tested using the default vSAN storage policy and Caché [RANREAD utility](https://community.intersystems.com/post/random-read-io-storage-performance-tool) I have observed sustained 100% random read IO over 30K IOPS with less than 1ms latency for all-flash vSAN using Intel S3610 SATA SSDs at the capacity tier. Considering that a basic rule of thumb for Caché databases is to size instances to [use memory for as much database IO as possible](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-part-4-looking-memory) all-flash latency and IOPS capability should provide ample headroom for most applications. Remember memory access times are still orders of magnitude lower than even NVMe flash storage.
As always remember your mileage will vary; storage policies, number of disk groups and number and type of disks etc will influence performance so you must validate on your own systems!
## Capacity and performance planning
You can calculate the raw TB capacity of a vSAN storage pool roughly as the total size of disks in the capacity tier. In our example configuration in _figure 1_ there are a total of 24 x INTEL S3610 1.6TB SSDs:
>Raw capacity of cluster: 24 x 1.6TB = 38.4 TB
However _available_ capacity is quite different, and this is where calculations get messy: it depends on configuration choices, such as which policies are used (for example FTT, which dictates how many copies of data are kept) and whether deduplication and compression have been enabled. For example, with the default FTT=1 (RAID-1) policy every object is mirrored, so the 38.4 TB raw pool yields at most roughly half of that as usable capacity, before allowing for slack space and other overheads.
I will step through selected policies and discuss their implications for capacity and performance and recommendations for a _database workload_.
All ESXi deployments I see are made up of multiple VMs; for example, at the heart of TrakCare, a unified healthcare information system built on InterSystems’ health informatics platform HealthShare, is at least one large (monster) database server VM which absolutely fits the description "tier-1 business critical application". However, a deployment also includes combinations of other single-purpose VMs such as production web servers, print servers, etc., as well as test, training and other non-production VMs, usually all deployed in a single ESXi cluster. While I focus on database VM requirements, remember that SPBM can be tailored per VMDK for all your VMs.
### Deduplication and Compression
For vSAN deduplication and compression is a cluster-wide on/off setting. Deduplication and compression can only be enabled when you are using an all-flash configuration. Both features are enabled together.
At first glance deduplication and compression seems to be a good idea - you want to save space, especially if you are using (more expensive) flash devices at the capacity tier. While there are space savings with deduplication and compression my recommendation is that you do not enable this feature for clusters with large production databases or where data is constantly being overwritten.
Deduplication and compression do add some processing overhead on the host, maybe in the range of single-digit %CPU utilization, but this is not the primary reason for not recommending the feature for databases.
In summary, vSAN attempts to deduplicate data as it is written to the capacity tier within the scope of a single disk group using 4K blocks. So in our example at _figure 1_ data objects to be deduplicated would have to exist in the capacity tier of the same disk group. I am not convinced we will see much savings on Caché database files, which are basically very large files filled with 8K database blocks with unique pointers, contents, etc. Secondly, vSAN will only attempt to compress deduplicated blocks, and will only consider blocks compressed if compression reaches 50% or more. If the deduplicated block does not compress to 2K it is written uncompressed. While there may be some duplication of operating system or other files, _the real benefit of deduplication and compression would be for clusters deployed for VDI_.
Another caveat is the impact of an (albeit rare) failure of one device in a disk group on the whole group when deduplication and compression are on. The whole disk group is marked "unhealthy", which has a cluster-wide impact: because the group is marked unhealthy, all the data on the disk group will be evacuated to other places, then the device must be replaced and vSAN will resynchronise the objects to rebalance.
- **_Recommendation:_** For database deployments do not enable compression and deduplication.
>_**Sidebar: InterSystems database mirroring.**_
> For mission critical tier-1 Caché database application instances requiring the highest availability I recommend [InterSystems synchronous database mirroring, even when virtualised.](http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_bp_vm) Virtualised solutions have HA built in; for example VMWare HA, however additional advantages of also using mirroring include:
>- Separate copies of up-to-date data.
- Failover in seconds (faster than restarting a VM, then the operating system, then recovering Caché).
- Failover in case of application/Caché failure (not detected by VMware).
>I am guessing you have spotted the flaw in enabling deduplication when you have mirrored databases on the same cluster? You will be attempting to deduplicate your mirror data. Generally not sensible and also a processing overhead.
>Another consideration when deciding whether to mirror databases on HCI is the total storage capacity required. vSAN will be making multiple copies of data for availability, this data storage will be doubled again by mirroring. You will need to weigh the small incremental increase in uptime over what VMware HA provides against the additional cost of storage.
>For maximum uptime you can create two clusters so that each node of the database mirror is in a completely independent failure domain. However take note of the total servers and storage capacity to provide this level of uptime.
## Encryption
Another consideration is where you choose to encrypt data at rest. You have several choices in the IO stack including;
- Using Caché database encryption (encrypts database only).
- At Storage (e.g. hardware disk encryption at SSD).
Encryption will have a very small impact on performance, but it can have a big impact on capacity if you choose to enable deduplication or compression in HCI. If you do choose deduplication and/or compression you would not want to be using Caché database encryption, because it would negate any gains: encrypted data is random by design and does not compress well. Consider the protection point or the risk you are trying to protect against, for example theft of a file vs. theft of a device.
- **_Recommendation:_** Encrypt at the lowest layer possible in the IO stack for a minimal level of encryption. The more risk you want to protect against, the higher up the stack you should encrypt.
### Failures To Tolerate (FTT)
FTT sets a requirement on the storage object to tolerate at least _n_ number of concurrent host, network, or disk failures in the cluster and still ensure the availability of the object. The default is _1_ (RAID-1); the VM’s storage objects (e.g. VMDK) are mirrored across ESXi hosts.
>So the vSAN configuration must contain at least n + 1 replicas (copies of the data), which also means there must be 2n + 1 hosts in the cluster.
For example to comply with a number of failures to tolerate = 1 policy, you need three hosts at a minimum at all times -- even if one host fails. So to account for maintenance or other times when a host is taken off-line you need four hosts.
- **_Recommendation:_** A vSAN cluster must have a minimum four hosts for availability.
Note there are also exceptions; a Remote Office Branch Office (ROBO) configuration, for instance, is designed for two hosts and a remote witness VM.
### Erasure Coding
The default storage method on vSAN is RAID-1 -- data replication or mirroring. Erasure coding is RAID-5 or RAID-6 with storage objects/components distributed across storage nodes in the cluster. The main benefit of erasure coding is better space efficiency for the same level of data protection.
Using the calculation for FTT in the previous section as an example: for a VM to tolerate _two_ failures using RAID-1 there must be three copies of the storage objects, meaning a VMDK will consume 300% of the base VMDK size. RAID-6 also allows a VM to tolerate two failures but only consumes 150% of the size of the VMDK.
The choice here is between performance and capacity. While the space saving is welcome you should consider your database IO patterns before enabling erasure coding. Space efficiency benefits come at the price of the amplification of I/O operations which is higher again during times of component failure so for best database performance use RAID-1.
- **_Recommendation:_** For production databases do not enable erasure coding. Enable for non-production.
Erasure coding also impacts the number of hosts required in your cluster. For example, for RAID-5 you need a minimum of four nodes in the cluster; for RAID-6, you need a minimum of six nodes.
- **_Recommendation:_** Consider the cost of additional hosts before planning to configure erasure coding.
### Striping
Striping offers opportunity for performance improvements but will likely only help with hybrid configurations.
- **_Recommendation:_** For production databases do not enable striping.
### Object Space Reservation (thin or thick provisioning)
The name for this setting comes from vSAN using objects to store components of your VMs (VMDKs etc). By default all VMs provisioned to a vSAN datastore have an object space reservation of 0% (thin provisioned), which leads to space savings and also gives vSAN more freedom for placement of data. However, for your production databases best practice is to use 100% reservation (thick provisioned), where space is allocated at creation. For vSAN this will be lazy zeroed, where zeros are written as each block is first written to. There are a few reasons for choosing 100% reservation for production databases: there will be less delay when database expansions occur, and you are guaranteeing that storage will be available when you need it.
- **_Recommendation:_** For production database disks use 100% reservation.
- **_Recommendation:_** For non-production instances leave storage thin provisioned.
### When should I turn on features?
You can generally enable availability and space-saving features after using the systems for some time, that is, when there are active VMs and users on the system. However, there will be a performance and capacity impact: additional replicas of data are needed in addition to the original, so extra space is required while data is synchronised. My experience is that enabling these types of features on clusters with large databases can take a very long time and expose the possibility of reduced availability.
- **_Recommendation:_** Spend time up front to understand and configure storage features and functionality such as deduplication and compression before go-live and definitely before large databases are loaded.
There are other considerations such as leaving free space for disk balancing, failure etc. The point is you will have to take into account the recommendations in this post with vendor specific choices to understand your raw disk requirements.
- **_Recommendation:_** There are many features and permutations. Work out your total GB capacity requirements as a starting point, review recommendations in this post [and with your application vendor] then talk to your HCI vendor.
## Storage processing overhead
You must consider the overhead of storage processing on the hosts. Storage processing that would otherwise be handled by the processors on an enterprise storage array is now performed on each host in the cluster.
The amount of overhead _per host_ will depend on the workload and which storage features are enabled. My observations from basic testing with Caché on vSAN show that the processing requirements are not excessive, especially when you consider the number of cores available on current servers. VMware recommends planning for 5-10% host CPU usage.
The above can be a starting point for sizing but _remember your mileage will vary_ and you will need to confirm.
- **_Recommendation:_** Plan for worst case of 10% CPU utilisation and then monitor your real workload.
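As a rough illustration of how that planning figure feeds into host sizing, the sketch below (Python; all workload numbers are hypothetical examples, not measurements) adds the vSAN overhead on top of the CPU budget available for VMs on a host.

```python
# Hypothetical host CPU sizing that reserves headroom for vSAN storage processing.
# All inputs are illustrative; use your own measured workload figures.

host_physical_cores = 24          # cores per host (example)
vsan_overhead_fraction = 0.10     # plan for the worst case of 10% (guidance: 5-10%)
target_peak_utilisation = 0.80    # leave headroom for peaks and HA failover

cores_for_vsan = host_physical_cores * vsan_overhead_fraction
cores_for_vms = host_physical_cores * target_peak_utilisation - cores_for_vsan

print(f"Cores reserved for vSAN processing: {cores_for_vsan:.1f}")
print(f"Cores available for VM workload:    {cores_for_vms:.1f}")
```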
## Network
Review vendor requirements -- assume a minimum of 10GbE NICs, with multiple NICs for storage traffic, management (e.g. vMotion), etc. I can tell you from painful experience that an enterprise-class network switch is required for optimal operation of the cluster -- after all, all writes are sent synchronously over the network for availability.
- **_Recommendation:_** Minimum 10GbE switched network bandwidth for storage traffic. Multiple NICs per host as per best practice.
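Because every write must reach the hosts holding the replicas, a quick back-of-the-envelope estimate of write replication traffic helps sanity-check the 10GbE minimum. The sketch below (Python) uses hypothetical workload numbers and the simplifying worst-case assumption that every replica copy of every write crosses the network.

```python
# Simplified estimate of vSAN write replication traffic on the storage network.
# Assumes every replica copy of each write traverses the network (worst case)
# and ignores metadata, resync, and read traffic. Numbers are illustrative.

write_iops = 5000          # sustained database write IOPS (hypothetical)
io_size_kb = 32            # average write size in KB (hypothetical)
replicas = 2               # RAID-1, FTT=1 -> two copies of every write

bytes_per_sec = write_iops * io_size_kb * 1024 * replicas
gbit_per_sec = bytes_per_sec * 8 / 1e9
print(f"Estimated replication traffic: {gbit_per_sec:.2f} Gbit/s")  # ~2.6 Gbit/s here
```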
# Flash Storage Overview
Flash storage is a requirement of HCI, so it is worth reviewing where flash storage is today and where it is going in the near future.
_The short story is that whether you use HCI or not, if you are not deploying your applications on flash storage today, it is likely that your next storage purchase will include flash._
## Storage today and tomorrow
Let us review the capabilities of commonly deployed storage solutions and be sure we are clear with the terminology.
**Spinning disk**
- Old faithful. 7.2K, 10K or 15K RPM spinning disks with a SAS or SATA interface. Low IOPS per disk. Can be high capacity, but that means the IOPS per GB are decreasing. For performance, data is typically striped across multiple disks to achieve 'just enough' IOPS at high capacity.
**SSD disk - SATA and SAS**
- Today flash is usually deployed as SAS or SATA interface SSDs using NAND flash. There is also some DRAM in the SSD acting as a write buffer. Enterprise SSDs include power-loss protection: in the event of a power failure the contents of the DRAM are flushed to NAND.
**SSD disk - NVMe**
- Similar to an SSD but uses the NVMe protocol (not SAS or SATA) with NAND flash. NVMe media attach via the PCI Express (PCIe) bus, allowing the system to talk to the device directly without the overhead of host bus adapters and storage fabrics, resulting in much lower latency.
**Storage Array**
- Enterprise arrays provide protection and the ability to scale. It is now more common for storage to be either a hybrid array or all-flash. Hybrid arrays have a cache tier of NAND flash plus one or more capacity tiers using 7.2K, 10K or 15K RPM spinning disks. NVMe arrays are also becoming available.
**Block-Mode NVDIMM**
- These devices are shipping today and are used when extremely low latencies are required. NVDIMMs sit in a DDR memory socket and provide latencies of around 30ns. Today they ship as 8GB modules, so they are not likely to be used for legacy database applications, but new scale-out applications may take advantage of this performance.
**3D XPoint**
_This is a future technology - not available in November 2016._
- Developed by Micron and Intel. Also known as **Optane** (Intel) and **QuantX** (Micron).
- Will not be available until at least 2017 but, compared to NAND, promises higher capacity, >10x more IOPS and >10x lower latency, with extremely high endurance and consistent performance.
- First availability will use NVMe protocol.
## SSD device Endurance
SSD device _endurance_ is an important consideration when choosing drives for the cache and capacity tiers. The short story is that flash storage has a finite life. Flash cells in an SSD can only be erased and rewritten a certain number of times (no restrictions apply to reads). Firmware in the device spreads writes around the drive to maximise the life of the SSD. Enterprise SSDs also typically have more real flash capacity than is visible (over-provisioning) to achieve a longer life; for example, an 800GB drive may have more than 1TB of flash.
The metric to look for and discuss with your storage vendor is full Drive Writes Per Day (DWPD) guaranteed for a certain number of years. For example, an 800GB SSD rated at 1 DWPD for 5 years can have 800GB per day written to it for 5 years. The higher the DWPD (and the more years), the higher the endurance. Another metric simply restates the calculation: SSD devices can also be specified in Terabytes Written (TBW); the same example has a TBW of 1,460 TB (800GB * 365 days * 5 years). Either way you get an idea of the life of the SSD based on your expected IO.
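The DWPD and TBW arithmetic is easy to script when you are comparing drives. The sketch below (Python) reproduces the 800GB example above and then estimates how long a drive would last at an assumed daily write rate; the drive spec and the 200GB/day figure are purely illustrative.

```python
# SSD endurance arithmetic: DWPD -> TBW, and estimated life at a given write rate.
# Drive specs and workload figures below are illustrative examples.

def tbw(capacity_gb: float, dwpd: float, warranty_years: float) -> float:
    """Total Terabytes Written over the warranty period."""
    return capacity_gb * dwpd * 365 * warranty_years / 1000  # GB -> TB

def estimated_life_years(tbw_tb: float, writes_gb_per_day: float) -> float:
    """Years until the TBW rating is reached at a constant daily write rate."""
    return tbw_tb * 1000 / (writes_gb_per_day * 365)

drive_tbw = tbw(capacity_gb=800, dwpd=1, warranty_years=5)
print(f"Rated endurance: {drive_tbw:,.0f} TBW")              # 1,460 TBW, as in the text

print(f"Estimated life at 200 GB/day: "
      f"{estimated_life_years(drive_tbw, 200):.1f} years")   # ~20 years
```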
# Summary
This post covers the most important features to consider when deploying HCI, and specifically VMware vSAN version 6.5. There are vSAN features I have not covered; if I have not mentioned a feature, assume you should use the defaults. If you have any questions or observations I am happy to discuss them in the comments section.
I expect to return to HCI in future posts; this is certainly an architecture that is on the upswing, so I expect to see more InterSystems customers deploying on HCI.
Very useful article, thanks, but at the moment I'd be very cautious recommending Intel/Micron 3D XPoint memory technology. It looks like the real numbers are very far from the original claims (especially the endurance improvements) - http://semiaccurate.com/2016/09/12/intels-xpoint-pretty-much-broken/ OTOH their performance numbers are very impressive, even today (especially for the Micron part) - http://www.tomshardware.com/reviews/3d-xpoint-guide,4747-6.html

Hi Timur, thanks for the comments and links. I agree, 3D XPoint is a case of waiting to see real performance when it's released. Even 10x lower latency is still a big jump - the figures in the post are what is publicly talked about by Micron now. My aim is to give people a heads-up on what's coming and to look out for it (although vendors will be shouting it from the rooftops :) Hopefully we will have some real data and pricing soon.

Thanks Murray for these 'new technology catch-up' articles, especially this part, 9 and 10. Bob alerted me to these. I do have an HCI deployment on the horizon based on an ESXi and EMC ScaleIO all-flash (both cache and capacity tiers) architecture. I will keep this in mind when we finally meet the vendors of the HCI kit. In the article you mentioned "you define the capabilities of storage as policies in vSAN using SPBM; for example "Database" would be different to "Journal"". I was hoping to see specific policies for these further down the article?? (Well, if you consider I'm from traditional arrays where we normally pay attention to these.) Regards, Anzelem.

Hi, it would have been better to say "Database" _could_ be different to "Journal". SPBM can be different for all VMDKs (disks), but that doesn't mean it should be. As an example, on a four-node all-flash cluster I am using just two storage policies for production VM disks:
- Policy: vSAN default: for OS and backup/scratch. Failures to tolerate (FTT)=1. Disk stripes=1. Object space reservation=0.
- Policy: Production: for database and journals. FTT=1. Disk stripes=1. Object space reservation=100.

For performance, use separate PVSCSI adapters on each production VM for OS, journal, database, and backup/scratch. For non-production VMs I have a policy that makes better use of available capacity; there is still HA in the storage:
- Policy: Non-Production: for all data disks. FTT method=RAID 5/6 (Erasure coding). FTT=1. Object space reservation=0.

Note: these values are not written in stone and will depend on your requirements. While you need to think about performance, it should be great out of the box. What you must also consider is availability and capacity.

Hi Murray, have you tested/worked on HCI with VMware and ScaleIO? Regards, Anzelem.

Hi Anzelem, no. But I am very interested to hear the experiences of community readers with any HCI solution, either through the comments or directly.
Announcement
Janine Perkins · Nov 28, 2016
Learn how to work with DICOM Modality Worklists in a HealthShare Health Connect production. DICOM is a global information technology standard for handling medical images. We will cover the parts of a HealthShare Health Connect production that are at work when handling DICOM requests, and we will use the built-in demo production to simulate communication with an external modality. Learn More.

I'm looking for any online videos you have demonstrating Health Insight, not specifically DICOM Modality Worklists. Thank you. Glenn Mamary gmamary@j2interactive.com

Glenn, we are working on some more detailed courses on Health Insight. Right now we have a Health Insight Resource Guide that can start you off with an overview and provides access to the documentation as long as you are a customer/partner.

Hi, the link seems to be broken. Any chance to get this article back up? Best regards, Sebastian

Digging into the InterSystems learning facility, the course can also be found through the learning catalogue on learning.intersystems.com. Best regards, Sebastian
Announcement
Anastasia Dyubaylo · May 7, 2021
Hey developers,
The registration period for the FHIR Accelerator programming contest is in full swing! We invite all FHIR developers to build new or test existing applications using the InterSystems IRIS FHIR Accelerator Service (FHIRaaS) on AWS.
And now you have a great opportunity to get FREE access to the FHIRaaS on AWS! So, to take the first step in mastering the FHIRaaS, you need to register on our FHIR Portal using this link:
👉🏼 https://portal.trial.isccloud.io/account/signup
Just follow the link above and become a master of the FHIRaaS with InterSystems! ✌🏼
Feel free to ask any questions regarding the competition here or in the discord-contests channel.
Happy coding!

We will be ready to start sharing the access codes for the FHIRaaS portal starting from Thursday, 14th of May. Please contact @Irina.Podmazko via Direct Message, reply to this post, or request one in Discord!

Some updates:
Now you can easily get your FREE access to InterSystems IRIS FHIR Accelerator Service (FHIRaaS) on AWS! Just follow this link and register on the ISC Dev FHIR Portal:
👉🏼 https://portal.trial.isccloud.io/account/signup
An easy start to join our competition! Don't miss it!
Question
MohanaPriya Vijayan · May 28, 2021
Does InterSystems IRIS support Visual Studio 6.0 Enterprise Edition (Visual Basic)?
We are in the process of transitioning from InterSystems Caché 2017 to InterSystems IRIS 2020.
For terminal-based applications we are able to use the same DAT file from Caché with minor changes.
For web-based applications we are using Visual Studio 6.0 (Visual Basic). Will IRIS support Visual Studio 6?
While I am configuring from that GUI application, I am getting an error like "Access Denied" or "No connection could be established". The same configuration works in Caché.

What component did you use for the connection? There are a few technologies that were deprecated in IRIS or moved under an additional license option.

We are just using the namespace and its Base_TCP_Port (superserver port) for the connection.