Announcement
Anastasia Dyubaylo · Jan 20, 2020
Hi Developers,
2019 was a really great year with almost 100 applications uploaded to the InterSystems Open Exchange!
To thank our best contributors, we have special annual achievement badges in the Global Masters Advocacy Hub. This year we introduced two new badges for contributions to the InterSystems Open Exchange:
✅ InterSystems Application of the Year 2019
✅ InterSystems Developer of the Year 2019
We're glad to present the most downloaded applications on InterSystems Data Platforms!
Nomination: InterSystems Application of the Year 2019
Given to developers whose applications gathered the most downloads on InterSystems Open Exchange in 2019 (1st, 2nd, 3rd, and 4th-10th places).
Gold (1st place): VSCode-ObjectScript by @Maslennikov.Dmitry
Silver (2nd place): PythonGateway by @Eduard.Lebedyuk
Bronze (3rd place): iris-history-monitor by @Henrique.GonçalvesDias
4th-10th places: WebTerminal by @Nikita.Savchenko7047, Design Pattern in Caché Object Script by @Tiago.Ribeiro, Caché Monitor by @Andreas.Schneider, AnalyzeThis by @Peter.Steiwer, A more useFull Object Dump by @Robert.Cemper1003, Light weight EXCEL download by @Robert.Cemper1003, ObjectScript Class Explorer by @Nikita.Savchenko7047
Nomination: InterSystems Developer of the Year 2019
Given to developers who uploaded the largest number of applications to InterSystems Open Exchange in 2019 (1st, 2nd, 3rd, and 4th-10th places).
Gold (1st place): @Robert.Cemper1003
Silver (2nd place): @Evgeny.Shvarov, @Eduard.Lebedyuk
Bronze (3rd place): @Maslennikov.Dmitry, @David.Crawford, @Otto.Karlinger
4th-10th places: @Peter.Steiwer, @Amir.Samary, @Guillaume.Rongier7183, @Rubens.Silva9155
Congratulations! You are doing such valuable and important work for the whole community!
Thank you all for being part of the InterSystems Community. Share your experience, ask, learn and develop, and be successful with InterSystems!
➡️ See also the Best Articles and the Best Questions on InterSystems Data Platform and the Best Contributors in 2019.
Announcement
Derek Robinson · Feb 21, 2020
I wanted to share each of the first three episodes of our new Data Points podcast with the community here — we previously posted announcements for episodes on IntegratedML and Kubernetes — so here is our episode on InterSystems IRIS as a whole! It was great talking with @jennifer.ames about what sets IRIS apart, some of the best use cases she's seen in her years as a trainer in the field and then as an online content developer, and more. Check it out, and make sure to subscribe at the link above — Episode 4 will be released next week!
Announcement
Anastasia Dyubaylo · Feb 22, 2020
Hi Developers,
A new video, recorded by @Benjamin.DeBoe, is available on the InterSystems Developers YouTube channel:
⏯ Code in Any Language with InterSystems IRIS
InterSystems Product Manager @Benjamin.DeBoe talks about combining your preferred tools and languages when building your application on InterSystems IRIS Data Platform.
Try InterSystems IRIS: https://www.intersystems.com/try
Enjoy watching the video! 👍🏼
Announcement
Michelle Spisak · Oct 17, 2019
New from InterSystems Online Learning: two new exercises that help you get hands-on with InterSystems IRIS and see how easy it is to solve your problems with it!
Using Multi-Model with Python and Node.js: This exercise takes you through the steps of using the InterSystems IRIS multi-model capability to create a Node.js application that sends JSON data straight to your database instance without any parsing or mapping.
Build a Smart Ticketing System: In this exercise, you will build on the Red Light Violation application used in the Interoperability QuickStart. Here, you will add another route to identify at-risk intersections based on data from the Chicago traffic system. This involves:
Building a simple interface to consume data from a file and store data to a file.
Adding in logic to only list intersections that have been deemed high risk based on the number of red light violations.
Adding in routing to consume additional data about populations using REST from public APIs.
Modifying the data to be in the right format for the downstream system.
Get started with InterSystems IRIS today!
Article
Eduard Lebedyuk · Oct 21, 2019
InterSystems API Management (IAM), a new feature of the InterSystems IRIS Data Platform, enables you to monitor, control, and govern traffic to and from web-based APIs within your IT infrastructure. In case you missed it, here is the link to the announcement. And here's an article explaining how to start working with IAM.
In this article, we will use InterSystems API Management to load balance an API.
In our case, we have two InterSystems IRIS instances with the /api/atelier REST API that we want to publish for our clients.
There are many different reasons why we might want to do that, such as:
Load balancing to spread the workload across servers
Blue-green deployment: we have two servers, one "prod" and the other "dev", and we might want to switch between them
Canary deployment: we might publish the new version only on one server and move 1% of clients there
High availability configuration
etc.
Still, the steps we need to take are quite similar.
Prerequisites
2 InterSystems IRIS instances
InterSystems API Management instance
Let's go
Here's what we need to do:
1. Create an upstream.
Upstream represents a virtual hostname and can be used to load balance incoming requests over multiple services (targets). For example, an upstream named service.v1.xyz would receive requests for a Service whose host is service.v1.xyz. Requests for this Service would be proxied to the targets defined within the upstream.
An upstream also includes a health checker, which can enable and disable targets based on their ability or inability to serve requests.
To start:
Open IAM Administration Portal
Go to Workspaces
Choose your workspace
Open Upstreams
Click on "New Upstream" button
After clicking the "New Upstream" button, you will see a form where you can enter some basic information about the upstream (there are many more properties):
Enter a name - this is the virtual hostname our services will use. It's unrelated to DNS records, and I recommend setting it to a non-existent value to avoid confusion. If you want to read about the rest of the properties, check the documentation. In the screenshot, you can see how I imaginatively named the new upstream myupstream.
2. Create targets.
Targets are the backend servers that will execute the requests and send results back to the client. Go to Upstreams and click on the name of the upstream you just created (and NOT on the Update button):
You will see all the existing targets (none so far) and the "New Target" button. Press it:
In the new form, define a target. Only two parameters are available:
target - host and port of the backend server
weight - relative priority given to this server (the more weight, the more requests are sent to this target)
I have added two targets:
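If you prefer scripting the configuration over clicking through the portal, the same upstream and targets can also be created through the IAM Admin API (port 8001 by default). Here is a rough ObjectScript sketch; the host name iam-host and the backend addresses irishost1:52773 and irishost2:52773 are placeholders rather than values from this walkthrough:
ClassMethod ConfigureUpstream() As %Status
{
    // 1. create the upstream (same name as in the portal walkthrough)
    Set req = ##class(%Net.HttpRequest).%New()
    Set req.Server = "iam-host", req.Port = 8001, req.ContentType = "application/json"
    Do req.EntityBody.Write({"name":"myupstream"}.%ToJSON())
    Set sc = req.Post("/upstreams/")
    Quit:$$$ISERR(sc) sc
    // 2. register both backends as targets with equal weight
    For target="irishost1:52773","irishost2:52773" {
        Set req = ##class(%Net.HttpRequest).%New()
        Set req.Server = "iam-host", req.Port = 8001, req.ContentType = "application/json"
        Do req.EntityBody.Write({"target":(target),"weight":100}.%ToJSON())
        Set sc = req.Post("/upstreams/myupstream/targets")
        Quit:$$$ISERR(sc)
    }
    Quit sc
}
The endpoint paths shown here follow the standard Admin API conventions; check the IAM documentation for the authoritative reference and the full set of upstream and target properties.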
3. Create a service
Now that we have our upstream, we need to send requests to it. We use a Service for that. Service entities, as the name implies, are abstractions of each of your upstream services. Examples of Services would be a data transformation microservice, a billing API, etc.
Let's create a service targeting our upstream: go to Services and press the "New Service" button:
Set the following values:
name: myservice (the logical name of this service)
host: myupstream (the upstream name)
path: /api/atelier (the root path we want to serve)
protocol: http (the protocols we want to support)
Keep the default values for everything else (including port: 80).
After creating the service, you'll see it in the list of services. Copy the service ID somewhere; we're going to need it later.
4. Create a route
Routes define rules to match client requests. Each Route is associated with a Service, and a Service may have multiple Routes associated with it. Every request matching a given Route will be proxied to its associated Service.
The combination of Routes and Services (and the separation of concerns between them) offers a powerful routing mechanism with which it is possible to define fine-grained entry-points in IAM leading to different upstream services of your infrastructure.
Now let's create a route. Go to Routes and press the "New Route" button.
Set the values in the Route creation form:
path: /api/atelier (the root path we want to serve)
protocol: http (the protocols we want to support)
service.id: the service ID value (the GUID you copied in the previous step)
And we're done!
Send a request to http://localhost:8000/api/atelier/ (note the slash at the end) and it will be served by one of our two backends.
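To see the load balancing in action, you can fire a handful of requests at the new route and check the response for each one; the /api/atelier/ endpoint requires authentication, so the sketch below uses placeholder credentials. A minimal ObjectScript example:
ClassMethod TestBalancing()
{
    // send ten requests through IAM and print the HTTP status of each response
    For i=1:1:10 {
        Set req = ##class(%Net.HttpRequest).%New()
        Set req.Server = "localhost", req.Port = 8000
        Set req.Username = "_SYSTEM", req.Password = "SYS"  // placeholder credentials
        Do req.Get("/api/atelier/")
        Write "Request ", i, ": HTTP ", req.HttpResponse.StatusCode, !
    }
}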
Conclusion
IAM offers a highly customizable API Management infrastructure, allowing developers and administrators to take control of their APIs.
Links
Documentation
IAM Announcement
Working with IAM article
Question
What functionality do you want to see configured with IAM? I have a question regarding productionized deployments. Can the internal IRIS web server be used, i.e. port 52773? Or should there still be a web gateway between IAM and the IRIS instance?
Regarding Kubernetes: I would think that IAM should be the ingress, is that correct?
Hi Stefan,
The short answer is that you still need a web gateway between IAM and IRIS.
The private web server (port 52773) is a minimal build of the Apache web server supplied for the purpose of running the Management Portal, not for production-level traffic.
I would think that IAM should be the ingress, is that correct?
Agreed. Calling @Luca.Ravazzolo.
Article
Evgeny Shvarov · Nov 19, 2019
Hi developers!
I just want to share some knowledge, a.k.a. experience, which could save you a few hours someday.
If you are building a REST API with IRIS that contains more than one level of "/", e.g. '/patients/all', don't forget to add the parameter Recurse="1" to your deployment script in %Installer; otherwise, all second- and higher-level entries won't work, while level-1 entries will. For example:
/patients
- will work, but
/patients/all
- won't.
Here is an example of the CSPApplication section that fixes the issue, which you may want to use in your %Installer class:
<CSPApplication Url="${CSPAPP}"
Recurse="1"
Directory="${CSPAPPDIR}"
Grant="${RESOURCE},%SQL"
AuthenticationMethods="96"
/>
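For context, here is a minimal sketch of a complete %Installer class that such a CSPApplication element could live in. The class name and the Namespace wrapper are illustrative rather than taken from a specific project:
Class App.Installer
{

XData setup [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <Namespace Name="${NAMESPACE}" Create="no" Code="${NAMESPACE}" Data="${NAMESPACE}">
    <CSPApplication Url="${CSPAPP}"
                    Recurse="1"
                    Directory="${CSPAPPDIR}"
                    Grant="${RESOURCE},%SQL"
                    AuthenticationMethods="96"
    />
  </Namespace>
</Manifest>
}

ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
  // generates the actual installation code from the XData manifest above
  Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "setup")
}

}
When the manifest runs, the ${NAMESPACE}, ${CSPAPP}, ${CSPAPPDIR}, and ${RESOURCE} variables are resolved from the pVars array passed to the setup() method.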
Article
Eduard Lebedyuk · Nov 22, 2019
This quick guide shows how to serve HTTPS requests with InterSystems API Management. The advantage here is that you keep your certificates on one separate server and don't need to configure each backend web server individually.
Here's how:
1. Buy a domain name.
2. Point the DNS records for your domain to the IAM IP address.
3. Generate an HTTPS certificate and private key. I use Let's Encrypt - it's free.
4. Start IAM if you haven't already.
5. Send this request to IAM:
POST http://host:8001/certificates/
{
"cert": "-----BEGIN CERTIFICATE-----...",
"key": "-----BEGIN PRIVATE KEY-----...",
"snis": [
"host"
]
}
Note: replace newlines in cert and key with \n.
You'll get a response; save the id value from it.
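If you'd rather script this step than paste the PEM contents by hand, a dynamic object takes care of escaping the newlines as \n for you. A minimal ObjectScript sketch, with placeholder file paths and host name:
// read the PEM files (paths are placeholders)
Set cert = ##class(%Stream.FileCharacter).%New()
Do cert.LinkToFile("/etc/letsencrypt/live/host/fullchain.pem")
Set key = ##class(%Stream.FileCharacter).%New()
Do key.LinkToFile("/etc/letsencrypt/live/host/privkey.pem")
// build the request body; %ToJSON() escapes the newlines automatically
Set body = {"cert": (cert.Read(cert.Size)), "key": (key.Read(key.Size)), "snis": ["host"]}
// send it to the IAM Admin API
Set req = ##class(%Net.HttpRequest).%New()
Set req.Server = "host", req.Port = 8001, req.ContentType = "application/json"
Do req.EntityBody.Write(body.%ToJSON())
Set sc = req.Post("/certificates/")
Write req.HttpResponse.Data.Read()  // the response contains the id needed for the SNI in step 6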
6. Go to your IAM workspace, open SNIs, and create a new SNI with name set to your host and ssl_certificate_id set to the id from the previous step.
7. Update your routes to use the https protocol (leave only https to force secure connections, or specify http, https to allow both protocols).
8. Test HTTPS requests by sending them to https://host:8443/<your route> - that's where IAM listens for HTTPS connections by default.
Eduard, thank you for a very good webinar.
You mentioned that IAM can be helpful even if there is a "service mix": some services are IRIS-based, others are not. How can IAM help with non-IRIS services? Can any Target Object be non-IRIS-based?
Can any Target Object be non-IRIS-based?
Absolutely. The services you offer via IAM can be sourced anywhere. Both from InterSystems IRIS and not.
How can IAM help with non-IRIS services?
All the benefits you get from using IAM (ease of administration, control, analytics) are available for both InterSystems IRIS-based and non-InterSystems IRIS-based services.
Discussion
Nikita Savchenko · Dec 12, 2019
Hello, InterSystems community!
Lately, you have probably heard of the new InterSystems Package Manager - ZPM. If you're familiar with it or with such package managers as NPM, Dep, pip/PyPI, etc., or just know what it's all about -- this question is for you! The question I want to raise is actually a system design question, or, in other words, "how should ZPM implement it".
In short, ZPM (the new package manager) allows you to install packages/software into your InterSystems product in a very convenient, manageable way. Just open up the terminal, run the ZPM routine, and type install samples-objectscript: you will have the new package/software installed and ready to use! In the same way, you can easily delete and update packages.
From the developer's point of view, much the same as in other package managers, ZPM requires the package/software to have a package description, represented as a module.xml file. Here's an example of it. This file describes what to install, which CSP applications to create, which routines to run once installed, and so on.
Now, straight to the point. You've also probably heard of InterSystems WebTerminal - one of my projects which is quite widely used (over 500 installs over the last couple of months). We try to bring WebTerminal to ZPM.
So far, anyone could install WebTerminal just by importing an XML file with its code - no other actions were needed. During class compilation, WebTerminal runs its projection and performs all the required setup on its own (web application, globals, etc. - see here). In addition, WebTerminal has its own self-update mechanism, which allows it to update itself when a new version comes out, implemented exactly with the use of projections. Apart from that, I have two more projects (ClassExplorer, Visual Editor) that use the same convenient import-and-install mechanism.
But it was decided that ZPM won't accept projections as a paradigm, and everything should be described in the module.xml file. Hence, to publish WebTerminal for ZPM, the team tried to remove the Installer.cls class (the WebTerminal class that did all the install-and-update magic with the use of projections) and manually replaced it with module.xml metadata. That turned out to be enough for WebTerminal to work, but it potentially leads to unexpected incompatibilities (see below), so source code changes are needed to be 100% compatible with ZPM.
So the question is: should ZPM really reject all projection-enabled classes in its packages? The decision to avoid projections might be changed via the open discussion here. It's not a question of why I can't rewrite WebTerminal's code, but rather why not just accept the original software code even if it uses projections?
I am strongly against avoiding projection-enabled classes in ZPM modules, for multiple reasons. First of all, because projections are part of how the programming language works, and I see no constructive reasoning against using them for whatever the software/package is designed for. Avoiding them and cutting the Installer.cls class from the release is essentially the same as patching a working module. I agree that packages which ship specifically for ZPM should try to use all the installation features module.xml provides; however, WebTerminal is also shipped outside of ZPM, and maintaining two versions of WebTerminal (at least because of the self-update feature) makes me think that something is wrong here.
I see the following pros of keeping projection-enabled classes in ZPM:
The package/software will still be compatible with both ZPM and a regular installation done for years (via XML/classes import)
No original package/software source code changes needed to bring it to ZPM
All designed functions work as expected and don't cause problems (for instance, WebTerminal self-updates - upon the update, it loads the XML file with the new version and imports it, including projection-enabled Installer.cls file anyway)
Cons of keeping all projection-enabled classes in ZPM:
Side effects performed during installation/uninstallation by projection-enabled classes won't be statically described in the module.xml file, hence they are "less auditable". There is an opinion that any side effect must be described in the module.xml file.
Please indicate any other pros/cons if this isn't the full list. What do you think?
Thank you! Exactly not for installing purposes, you're right, I agree. But what do you think about the WebTerminal case in particular?
1. It's already developed and bound to projections: installation, its own update mechanism, etc.
2. It's also shipped outside of ZPM.
3. It would work as usual if only ZPM supported projections.
I see you're pointing to "It might need to support Projections eventually because, as you said, it's a part of the language" - that's mostly what my point is about. Why not just allow them? Thanks! Exactly, I completely agree about simplicity, transparency, and an installation standard. But see my reply to Sergey's answer - what to do with WebTerminal in particular?
1. Why would I need to rewrite the update mechanism I developed years ago (for example)?
2. Why would I need to maintain two code bases for ZPM and regular installations (or automate it in quite a crazy way, or just drop the self-update feature when ZPM is detected)?
3. Why are all these changes to the source code needed, after all, if it "just works" normally without ZPM complications (which is how ObjectScript works)?
I think this leads to either a "make the package ZPM-compatible" or a "make ZPM ObjectScript-compatible" discussion, doesn't it? The answer to all this could be "to make the world a better place".
Because if you do all 3 you get:
the same wonderful WebTerminal, but with a simple, transparent, and standard installation mechanism, and yet another channel for distribution, because ZPM seems to be a very handy and popular way to install and try the stuff.
Maybe yet another channel of clear and handy app distribution is a good enough argument to change something in the application too?
True points. For sure, developers can customize it. I can do another version of WebTerminal specifically for ZPM, but it will involve additional coding and support:
1. A need to change how the self-update mechanism works, or shut it down completely. Right now, the user gets a message in the UI suggesting they update WebTerminal to the latest version. There's quite a lot happening under the hood.
2. Thus, a need to create an additional pipeline (or split the codebase) for two WebTerminal versions: the ZPM one and the regular one, with all the tests and so on.
I am wondering whether it is worth doing so from WebTerminal's perspective, or whether it is better to make WebTerminal a kind of exception for ZPM. Because, still, inserting a couple of if (isZPMInstalled) { ... } else { ... } conditions into WebTerminal (even on the front-end side) looks like an anti-pattern to me. Thanks! Considering the points others mention, I agree that projections should not be the way to install things but rather an acceptable exception, as for WebTerminal and other complex packages. Another option, rather than having two versions of the whole codebase, could be a wrapper module around webterminal (i.e., another module that depends on webterminal) with hooks in webterminal that allow the wrapper to turn off projection-based, installation-related features. I completely agree, and to get to
standard installing mechanism
for USERS, we need to zpm-enable as many existing projects as possible. To enable these projects, we need to simplify zpm-enabling, leveraging existing code if possible (or at least not preventing developers from leveraging existing code). I think allowing developers to use already existing installers (whatever form they may take) would help with this goal. This is very wise, thanks Ed!
For zpm-enabling, we plan to add a boilerplate module.xml generator for the repo. Stay tuned!
Hi Nikita,
> A need to change how the self-update mechanism works or shut it down completely.
If a package is distributed via a package manager, its self-update should be completely removed. It should be the responsibility of the package manager to alert the user that a new version of the package is available and to install it.
> Thus, create an additional pipeline (or split the codebase) for 2 WebTerminal versions: ZPM's one and a regular one with all the tests and so on.
Some package managers allow applying patches to software before packaging it, but I don't think that's the case for ZPM at the moment. I believe you will need to do a separate build for the ZPM and non-ZPM versions of your software. You can either apply some patches during the build, or refactor the software so that it can run without the auto-updater if it's not installed.
Hi Nikita!
Do you want the ZPM exception for WebTerminal only or for all your InterSystems solutions? :)
The whole purpose of a package manager is to get rid of individual installer/updater scripts written by individual developers and replace them with a package management utility, so that you have a standard way of installing, removing, and updating your packages. So I don't quite understand why this question is raised in this context -- of course the package manager shouldn't support custom installers and updaters. It might need to support Projections eventually because, as you said, they're a part of the language, but definitely not for installing purposes.
I completely support the inclusion of projections.
ObjectScript Language allows execution of arbitrary code at compile time through three different mechanisms:
Projections
Code generators
Macros
All these instruments are entirely unlimited in their scope, so I don't see why we need to prohibit one way of executing code at compilation.
Furthermore, ZPM itself uses Projections to install itself, so closing this avenue to other projects seems strange.
Hi Nikita!
Thanks for the good question!
The answer to why module.xml is preferable to an Installer.cls with projections is quite obvious IMHO.
Compare a module.xml and an Installer.cls that do the same thing.
Examining module.xml you can clearly say what the installation does and easily maintain/support it.
In this case, the package installs:
1. classes from WebTerminal package:
<Resource Name="WebTerminal.PKG" />
2. creates one REST Web app:
<CSPApplication
Url="/terminal"
Path="/build/client"
Directory="{$cspdir}/terminal"
DispatchClass="WebTerminal.Router"
ServeFiles="1"
Recurse="1"
PasswordAuthEnabled="1"
UnauthenticatedEnabled="0"
CookiePath="/"
/>
3. creates another REST Web app:
<CSPApplication
Url="/terminalsocket"
Path="/terminal"
Directory="{$cspdir}/terminalsocket"
ServeFiles="0"
UnauthenticatedEnabled="1"
MatchRoles=":%DB_CACHESYS:%DB_IRISSYS:{$dbrole}"
Recurse="1"
CookiePath="/"
/>
I cannot say the same for Installer.cls on projections - what does it do to my system?
Simplicity, transparency, and an installation standard with the ZPM module.xml approach vs what?
From the pros/cons, it seems the objectives are:
Maintain compatibility with normal installation (without ZPM)
Make side effects from installation/uninstallation auditable by putting them in module.xml
I'd suggest as one approach to accomplish both objectives:
Suppress the projection side effects when running in a package manager installation/uninstallation context (either by checking $STACK or using some trickier under-the-hood things with singletons from the package manager - regardless, be sure to unit test this behavior!).
Add "Resource Processor" classes (specified in module.xml with Preload="true" and not included in normal WebTerminal XML exports used for non-ZPM installation) - that is, classes extending %ZPM.PackageManager.Developer.Processor.Abstract and overriding the appropriate methods - to handle your custom installation things. You can then use these in your module manifest, provided that such inversion of control still works without bootstrapping issues following changes made in https://github.com/intersystems-community/zpm.
Generally-useful things like creating a %All namespace should probably be pushed back to zpm itself.
Announcement
Anastasia Dyubaylo · Dec 18, 2019
Hi Community,
The new video from Global Summit 2019 is already on InterSystems Developers YouTube:
⏯ InterSystems IRIS Roadmap: Analytics and AI
This video outlines what's new and what's next for Business Intelligence (BI), Artificial Intelligence (AI), and analytics within InterSystems IRIS. We will present the use cases that we are working to solve, what has been delivered to address those use cases, as well as what we are working on next.
Takeaway: You will gain knowledge of current and future business intelligence and analytics capabilities within InterSystems IRIS.
Presenters: 🗣 @Benjamin.DeBoe, Product Manager, InterSystems 🗣 @Thomas.Dyar, Product Specialist - Machine Learning, InterSystems 🗣 @Carmen.Logue, Product Manager - Analytics and AI, InterSystems
You can find additional materials for this video in this InterSystems Online Learning Course.
Enjoy watching this video! 👍🏼
Announcement
Olga Zavrazhnova · Dec 24, 2019
Hi Community,
Great news for all Global Masters lovers!
Now you can redeem a Certification Voucher for 10,000 points!
The voucher gives you one exam attempt for any exam available in the InterSystems exam system. We have a limited supply of 10 vouchers, so don't hesitate to get yours!
Passing the exam allows you to claim the electronic badge that can be embedded in social media accounts to show the world that your InterSystems technology skills are first-rate.
➡️ Learn more about the InterSystems Certification Program here.
NOTE:
Reward is available for Global Masters members of Advocate level and above; InterSystems employees are not eligible to redeem this reward.
Vouchers are non-transferable.
Good for one attempt for any exam in InterSystems exam system.
Can be used for a future exam.
Valid for one year from the redemption date (the Award Date).
Redeem the prize and prove your mastery of our technology! 👍🏼
Already 9 available ;)
How to get this voucher?
@Kejia.Lin This appears to be a reward on Global Masters. You can also find a quick link in the light blue bar at the top of this page. Once you join Global Masters, you can get points for interacting with the community and ultimately use these points to claim rewards, such as the one mentioned here.
Article
Peter Steiwer · Mar 6, 2020
InterSystems IRIS Business Intelligence allows you to keep your cubes up to date in multiple ways. This article will cover building vs synchronizing. There are also ways to manually keep cubes up to date, but these are very special cases and almost always cubes are kept current by building or synchronizing.
What is Building?
The build starts by removing all data in the cube. This ensures that the build is starting in a clean state. The build then goes through all records specified by the source class. This may take all records from the source class or it may take a restricted set of records from the source class. As the build goes through the specific records, the data required by the cube is inserted into the cube. Finally, once all of the data has been inserted into the cube, the indices are built. During this process, the cube is not available to be queried. The build can be executed single-threaded or multi-threaded. It can be initiated by both the UI or Terminal. The UI will be multi-threaded by default. Running a build from terminal will default to multi-threaded unless a parameter is passed in. In most cases multi-threaded builds are possible. There are specific cases where it is not possible to perform a multi-threaded build and it must be done single-threaded.
What is Synchronizing?
If a cube's source class is DSTIME Enabled (see documentation), it is able to be synchronized. DSTime allows modifications to the source class to be tracked. When synchronization is called, only the records that have been modified will be inserted, updated, or deleted as needed within the cube. While a synchronize is running, the cube is available to be queried. A Synchronize can only be initiated from Terminal. It can be scheduled in the Cube Manager through the UI, but it can't be directly executed from the UI. By default, synchronize is executed single-threaded, but there is a parameter to initiate the synchronize multi-threaded.
It is always a good idea to initially build your cube and then it can be kept up to date with synchronize if desired.
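In code, both operations are typically run from a terminal session or a scheduled task using %DeepSee.Utils. A minimal sketch, using HoleFoods as an example cube name (not one from this article):
// full build: clears and repopulates the cube; runs multi-threaded by default
Set sc = ##class(%DeepSee.Utils).%BuildCube("HoleFoods")
// incremental update: applies only the changes tracked via DSTIME; the cube stays queryable
Set sc = ##class(%DeepSee.Utils).%SynchronizeCube("HoleFoods")
Synchronize only works if the cube's source class tracks changes, which is enabled by adding Parameter DSTIME = "AUTO"; to that class, as described in the DSTIME documentation referenced above.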
Recap of differences
Which records are modified? Build: all. Synchronize: only records that have changed.
Available in the UI? Build: yes. Synchronize: no.
Multi-threaded? Build: yes, by default. Synchronize: yes, but not by default.
Cube available for query? Build: no (*1). Synchronize: yes.
Requires source class modification? Build: no. Synchronize: yes, DSTIME must be enabled.
Build Updates
(*1) Starting with InterSystems IRIS 2020.1, Selective Build is now an available option while building your cube. This allows the cube to be available for querying while being built selectively. For additional information see Getting Started with Selective Build
Synchronize Updates
Starting with InterSystems IRIS 2021.2, DSTIME has a new "CONDITIONAL" option. This allows implementations to conditionally enable DSTIME for specific sites/installations.
💡 This article is considered an InterSystems Data Platform Best Practice.
Article
Mark Bolinsky · Mar 3, 2020
InterSystems and Intel recently conducted a series of benchmarks combining InterSystems IRIS with 2nd Generation Intel® Xeon® Scalable Processors, also known as “Cascade Lake”, and Intel® Optane™ DC Persistent Memory (DCPMM). The goals of these benchmarks are to demonstrate the performance and scalability capabilities of InterSystems IRIS with Intel’s latest server technologies in various workload settings and server configurations. Along with various benchmark results, three different use-cases of Intel DCPMM with InterSystems IRIS are provided in this report.
Overview
Two separate types of workloads are used to demonstrate performance and scaling – a read-intensive workload and a write-intensive workload. The reason for demonstrating these separately is to show the impact of Intel DCPMM on different use cases specific to increasing database cache efficiency in a read-intensive workload, and increasing write throughput for transaction journals in a write-intensive workload. In both of these use-case scenarios, significant throughput, scalability and performance gains for InterSystems IRIS are achieved.
The read-intensive workload leveraged a 4-socket server and massive long-running analytical queries across a dataset of approximately 1.2TB of total data. With DCPMM in "Memory Mode", benchmark comparisons yielded a significant reduction in elapsed runtime: approximately six times faster than a previous generation Intel E7v4 series processor with less memory. When comparing like-for-like memory sizes between the E7v4 and the latest server with DCPMM, there was a 20% improvement. This was due to both the increased InterSystems IRIS database cache capacity afforded by DCPMM and the latest Intel processor architecture.
The write-intensive workload leverages a 2-socket server and an InterSystems HL7 messaging benchmark consisting of numerous inbound interfaces; each message undergoes several transformations, and four outbound messages are produced for each inbound one. One of the critical components in sustaining high throughput is the message durability guarantee of IRIS for Health, and transaction journal write performance is crucial in that operation. With DCPMM in "App Direct" mode presented as a DAX XFS file system for transaction journals, this benchmark demonstrated a 60% increase in message throughput.
To summarize the test results and configurations: DCPMM offers significant throughput gains when used in the proper InterSystems IRIS setting and workload. The high-level benefits are increased database cache efficiency and reduced disk IO block reads in read-intensive workloads, and increased write throughput for journals in write-intensive workloads.
In addition, Cascade Lake based servers with DCPMM provide an excellent update path for those looking into refreshing older hardware and improving performance and scaling. InterSystems technology architects are available to help with those discussions and provide advice on suggested configurations for your existing workloads.
READ-INTENSIVE WORKLOAD BENCHMARK
For the read-intensive workload, we used an analytical query benchmark comparing an E7v4 (Broadwell) with 512GiB and 2TiB database cache sizes, against the latest 2nd Generation Intel® Xeon® Scalable Processors (Cascade Lake) with 1TB and 2TB database cache sizes using Intel® Optane™ DC Persistent Memory (DCPMM).
We ran several workloads with varying global buffer sizes to show the impact and performance gain of larger caching. For each configuration iteration, we ran a COLD run and a WARM run. COLD is where the database cache was not pre-populated with any data. WARM is where the database cache had already been active and populated with data (or at least as much as it could be) to reduce physical reads from disk.
Hardware Configuration
We compared an older 4-Socket E7v4 (aka Broadwell) host to a 4-socket Cascade Lake server with DCPMM. This comparison was chosen because it would demonstrate performance gains for existing customers looking for a hardware refresh along with using InterSystems IRIS. In all tests, the same version of InterSystems IRIS was used so that any software optimizations between versions were not a factor.
All servers have the same storage on the same storage array so that disk performance wasn’t a factor in the comparison. The working set is a 1.2TB database. The hardware configurations are shown in Figure-1 with the comparison between each of the 4-socket configurations:
Figure-1: Hardware configurations
Server #1 Configuration: 4 x E7-8890 v4 processors @ 2.5GHz, 2TiB DRAM, 16Gbps FC all-flash SAN @ 2TiB.
Server #2 Configuration: 4 x Platinum 8280L processors @ 2.6GHz, 3TiB DCPMM + 768GiB DRAM, 16Gbps FC all-flash SAN @ TiB, DCPMM in Memory Mode only.
Benchmark Results and Conclusions
There is a significant reduction in elapsed runtime (approximately 6x) when comparing 512GiB to either the 1TiB or the 2TiB DCPMM buffer pool sizes. In addition, comparing the 2TiB E7v4 DRAM and 2TiB Cascade Lake DCPMM configurations showed a ~20% improvement as well. This 20% gain is believed to be mostly attributable to the new processor architecture and additional processor cores, given that the buffer pool sizes are the same. However, this is still significant in that the 4-socket Cascade Lake server tested had only 24 x 128GiB DCPMM modules installed, and it can scale to 12TiB of DCPMM, which is about 4x the memory the E7v4 can support in the same 4-socket server footprint.
The following graphs in figure-2 depict the comparison results. In both graphs, the y axis is elapsed time (lower number is better) comparing the results from the various configurations.
Figure-2: Elapsed time comparison of various configurations
WRITE-INTENSIVE WORKLOAD BENCHMARK
The workload in this benchmark was our HL7v2 messaging workload using all T4 type workloads.
The T4 Workload used a routing engine to route separately modified messages to each of four outbound interfaces. On average, four segments of the inbound message were modified in each transformation (1-to-4 with four transforms). For each inbound message four data transformations were executed, four messages were sent outbound, and five HL7 message objects were created in the database.
Each system is configured with 128 inbound Business Services and 4800 messages sent to each inbound interface for a total of 614,400 inbound messages and 2,457,600 outbound messages.
The measurement of throughput in this benchmark workload is “messages per second”. We are also interested in (and recorded) the journal writes during the benchmark runs because transaction journal throughput and latency are critical components in sustaining high throughput. This directly influences the performance of message durability guarantees of IRIS for Health, and the transaction journal write performance is crucial in that operation. When journal throughput suffers, application processes will block on journal buffer availability.
Hardware Configuration
For the write-intensive workload, we decided to use a 2-socket server. This is a smaller configuration than our previous 4-socket configuration in that it only had 192GB of DRAM and 1.5TiB of DCPMM. We compared the workload of Cascade Lake with DCPMM to that of the previous 1st Generation Intel® Xeon® Scalable Processors (Skylake) server. Both servers have locally attached 750GiB Intel® Optane™ SSD DC P4800X drives.
The hardware configurations are shown in Figure-3 with the comparison between each of the 2-socket configurations:
Figure-3: Write intensive workload hardware configurations
Server #1 Configuration: 2 x Gold 6152 processors @ 2.1GHz, 192GiB DRAM, 2 x 750GiB P4800X Optane SSDs.
Server #2 Configuration: 2 x Gold 6252 processors @ 2.1GHz, 1.5TiB DCPMM + 192GiB DRAM, 2 x 750GiB P4800X Optane SSDs, DCPMM in Memory Mode and App Direct Mode.
Benchmark Results and Conclusions
Test-1: This test ran the T4 workload described above on the Skylake server detailed as Server #1 Configuration in Figure-3. The Skylake server provided a sustained throughput of ~3355 inbound messages a second with a journal file write rate of 2010 journal writes/second.
Test-2: This test ran the same workload on the Cascade Lake server detailed as Server #2 Configuration in Figure-3, and specifically with DCPMM in Memory Mode. This demonstrated a significant improvement of sustained throughput of ~4684 inbound messages per second with a journal file write rate of 2400 journal writes/second. This provided a 39% increase compared to Test-1.
Test-3: This test ran the same workload on the Cascade Lake server detailed as Server #2 Configuration in Figure-3, this time using DCPMM in App Direct Mode but not actually configuring DCPMM to do anything. The purpose of this was to gauge what the performance and throughput would be when comparing Cascade Lake with DRAM only to Cascade Lake with DCPMM + DRAM. The results were not surprising in that there was a gain in throughput without DCPMM being used, albeit a relatively small one. This demonstrated an improvement of sustained throughput of ~4845 inbound messages per second with a journal file write rate of 2540 journal writes/second. This is expected behavior because DCPMM has a higher latency compared to DRAM, and with the massive influx of updates there is a penalty to performance. Put another way, there is a <5% reduction in write ingestion workload when using DCPMM in Memory Mode on the same exact server. Additionally, comparing Skylake to Cascade Lake (DRAM only), this provided a 44% increase over the Skylake server in Test-1.
Test-4: This test ran the same workload on the Cascade Lake server detailed as Server #2 configuration in Figure-3, this time using DCPMM in App Direct Mode and using App Direct Mode as DAX XFS mounted for the journal file system. This yielded even more throughput of 5399 inbound messages per second with a journal file write rate of 2630/sec. This demonstrated that DCPMM in App Direct mode for this type of workload is the better use of DCPMM. Comparing these results to the initial Skylake configuration there was a 60% increase in throughput compared to the Skylake server in Test-1.
InterSystems IRIS Recommended Intel DCPMM Use Cases
There are several use cases and configurations for which InterSystems IRIS will benefit from using Intel® Optane™ DC Persistent Memory.
Memory Mode
This is ideal for massive database caches for either a single InterSystems IRIS deployment or a large InterSystems IRIS sharded cluster where you want to have much more (or all!) of your database cached into memory. You will want to adhere to a maximum of 8:1 ratio of DCPMM to DRAM as this is important for the “hot memory” to stay in DRAM acting as an L4 cache layer. This is especially important for some shared internal IRIS memory structures such as seize resources and other memory cache lines.
App Direct Mode (DAX XFS) – Journal Disk Device
This is ideal for using DCPMM as a disk device for transaction journal files. DCPMM appears to Linux as a mounted XFS file system. The benefit of using DAX XFS is that it alleviates the PCIe bus overhead and allows direct memory access from the file system. As demonstrated in the HL7v2 benchmark results, the write latency benefits significantly increased the HL7 messaging throughput. Additionally, the storage is persistent and durable across reboots and power cycles, just like a traditional disk device.
App Direct Mode (DAX XFS) – Journal + Write Image Journal (WIJ) Disk Device
In this use case, this extends the use of App Direct mode to both the transaction journals and the write image journal (WIJ). Both of these files are write-intensive and will certainly benefit from ultra-low latency and persistence.
Dual Mode: Memory + App Direct Modes
When using DCPMM in dual mode, the benefits of DCPMM are extended to allow for both massive database caches and ultra-low latency for the transaction journal and/or write image journal devices. In this use case, DCPMM appears to the operating system both as a mounted XFS file system and as RAM. This is achieved by allocating a percentage of DCPMM as DAX XFS and the remainder in Memory Mode. As mentioned previously, the installed DRAM will operate as an L4-like cache to the processors.
“Quasi” Dual Mode
To extend the use case models a bit further, there is a "quasi" dual mode for concurrent transactional and analytic workloads (also known as HTAP workloads), where there is a high rate of inbound transactions/updates for the OLTP side along with an analytical or massive querying need. In that scenario, each InterSystems IRIS node type within an InterSystems IRIS sharded cluster operates with a different DCPMM mode.
In this example, InterSystems IRIS compute nodes handle the massive querying/analytics workload and run with DCPMM in Memory Mode so that they benefit from a massive database cache in the global buffers, while the data nodes run either in Dual Mode or in App Direct Mode with DAX XFS for the transactional workloads.
Conclusion
There are numerous options available for InterSystems IRIS when it comes to infrastructure choices. The application, workload profile, and the business needs drive the infrastructure requirements, and those technology and infrastructure choices influence the success, adoption, and importance of your applications to your business. InterSystems IRIS with 2nd Generation Intel® Xeon® Scalable Processors and Intel® Optane™ DC Persistent Memory provides for groundbreaking levels of scaling and throughput capabilities for your InterSystems IRIS based applications that matter to your business.
Benefits of InterSystems IRIS and Intel DCPMM capable servers include:
Increases memory capacity so that multi-terabyte databases can completely reside in the InterSystems IRIS or InterSystems IRIS for Health database cache with DCPMM in Memory Mode. In comparison to reading from storage (disks), this can increase query response performance by up to six times with no code changes, thanks to InterSystems IRIS's proven memory caching capabilities that take advantage of system memory as it increases in size.
Improves the performance of high-rate data interoperability throughput applications based on InterSystems IRIS and InterSystems IRIS for Health, such as HL7 transformations, by as much as 60% in increased throughput using the same processors and only changing the transaction journal disk from the fastest available NVMe drives to leveraging DCPMM in App Direct mode as a DAX XFS file system. Exploiting both the memory speed data transfers and data persistence is a significant benefit to InterSystems IRIS and InterSystems IRIS for Health.
Augment the compute resources where needed for a given workload whether read or write-intensive, or both, without over-allocating entire servers just for the sake of one resource component with DCPMM in Mixed Mode.
InterSystems Technology Architects are available to discuss hardware architectures ideal for your InterSystems IRIS based application.
Great article, Mark!
I have a few notes and questions:
1. Here's a brief comparison of different storage categories:
Intel® Optane™ DC Persistent Memory has read throughput of 6.8 GB/s and write throughput 1.85 GB/s (source).
Intel® Optane™ SSD has read throughput of 2.5 GB/s and write throughput of 2.2 GB/s (source).
Modern DDR4 RAM has read throughput of ~25 GB/s.
While I certainly see the appeal of DC Persistent Memory if we need more memory than RAM can provide, is it useful on a smaller scale? Say I have a few hundred gigabytes of indices I need to keep in the global buffer and be able to read-access fast. Would plain DDR4 RAM be better? Costs seem comparable, and a read throughput of 25 GB/s seems considerably better.
2. What RAM was used in a Server #1 configuration?
3. Why are there different CPUs between servers?
4. The workload link does not work.
6252 supports DCPM, while 6152 does not. 6252 can be used for both DCPM and DRAM configurations.
Hi Eduard,
Thanks for your questions.
1- On small scale I would stay with traditional DRAM. DCPMM becomes beneficial when >1TB of capacity.
2- That was DDR4 DRAM memory in both read-intensive and write-intensive Server #1 configurations. In the read-intensive server configuration it was specifically DDR-2400, and in the write-intensive server configuration it was DDR-2600.
3- There are different CPUs in configuration in the read-intensive workload because this testing is meant to demonstrate upgrade paths from older servers to new technologies and the scalability increases offered in that scenario. The write-intensive workload only used a different server in the first test to compare previous generation to the current generation with DCPMM. Then the three following results demonstrated the differences in performance within the same server - just different DCPMM configurations.
4- Thanks. I will see what happened to the link and correct it.
Correct. The Gold 6252 series (aka "Cascade Lake") supports both DCPMM and DRAM. However, keep in mind that when using DCPMM you also need DRAM and should adhere to at least an 8:1 ratio of DCPMM:DRAM.
Announcement
Anastasia Dyubaylo · Sep 3, 2019
Hi Community!
We are super excited to announce the Boston FHIR @ InterSystems Meetup on the 10th of September at the InterSystems meeting space!
There will be two talks with Q&A and networking.
Doors open at 5:30pm, we should start the first talk around 6pm. We will have a short break between talks for announcements, including job opportunities.
Please check the details below.
#1 We are in the middle of changes in healthcare technology that affect the strategies of companies and organizations across the globe, including many startups right here in Massachusetts. Micky Tripathi from the Massachusetts eHealth Collaborative is going to talk to us about the opportunities and consequences of API-based healthcare.
By Micky Tripathi - MAeHC
#2 FHIR Analytics
The establishment of FHIR as a new healthcare data format creates new opportunities and challenges. Health professionals would like to acquire patient data from Electronic Health Records (EHR) with FHIR and use it for population health management and research. FHIR provides resources and foundations based on XML and JSON data structures. However, traditional analytic tools are difficult to use with these structures. We created a prototype application to ingest FHIR bundles and save the Patient and Observation resources as objects/tables in InterSystems IRIS for Health. Developers can then easily create derived "fact tables" that de-normalize these tables for exploration and analytics. We will demo this application and our analytics tools using the InterSystems IRIS for Health platform.
By Patrick Jamieson, M.D., Product Manager for InterSystems IRIS for Health, and Carmen Logue, Product Manager - Analytics and AI
So, remember!
Date and time: Tuesday, 10 September 2019 5:30 pm to 7:30 pm
Venue: 1 Memorial Dr, Cambridge, MA 02142, USA
Event webpage: Boston FHIR @ InterSystems Meetup
Article
Evgeny Shvarov · Sep 6, 2019
Hi Developers!
InterSystems Package Manager (ZPM) is a great thing, but it is even better if you don't need to install it and can use it immediately.
There are several ways to do this, and here is one approach: building an IRIS container with ZPM using a dockerfile.
I've prepared a repository whose dockerfile has a few lines that download and install the latest version of ZPM.
Add these lines to your standard dockerfile for IRIS community edition and you will have ZPM installed and ready to use.
To download the latest ZPM client:
RUN mkdir -p /tmp/deps \
&& cd /tmp/deps \
&& wget -q https://pm.community.intersystems.com/packages/zpm/latest/installer -O zpm.xml
To install ZPM into IRIS (this line is part of a larger RUN command in the dockerfile):
" Do \$system.OBJ.Load(\"/tmp/deps/zpm.xml\", \"ck\")" \
Great!
To try ZPM with this repository do the following:
$ git clone https://github.com/intersystems-community/objectscript-zpm-template.git
Build and run the repo:
$ docker-compose up -d
Open IRIS terminal:
$ docker-compose exec iris iris session iris
USER>
Call ZPM:
USER>zpm
zpm: USER>
Install webterminal
zpm: USER>install webterminal
[webterminal] Reload START
[webterminal] Reload SUCCESS
[webterminal] Module object refreshed.
[webterminal] Validate START
[webterminal] Validate SUCCESS
[webterminal] Compile START
[webterminal] Compile SUCCESS
[webterminal] Activate START
[webterminal] Configure START
[webterminal] Configure SUCCESS
[webterminal] Activate SUCCESS
zpm: USER>
Use it!
And take a look at the whole process in this gif:
It turned out that we don't need a special repository to add ZPM to your docker container. You just need another dockerfile - like this one. And here is the related docker-compose file for a handy start. See how it works:
Article
Henry Pereira · Sep 16, 2019
In an ever-changing world, companies must innovate to stay competitive. This ensures that they'll make decisions with agility and safety, aiming for future results with greater accuracy. Business Intelligence (BI) tools help companies make intelligent decisions instead of relying on trial and error. These intelligent decisions can make the difference between success and failure in the marketplace. Microsoft Power BI is one of the industry's leading business intelligence tools. With just a few clicks, Power BI makes it easy for managers and analysts to explore a company's data. This is important because when data is easy to access and visualize, it's much more likely to be used to make business decisions.
Power BI includes a wide variety of graphs, charts, tables, and maps. As a result, you can always find visualizations that are a good fit for your data.
BI tools are only as useful as the data that backs them, however. Power BI supports many data sources, and InterSystems IRIS is a recent addition to those sources. Since Power BI provides an exciting new way to explore data stored in IRIS, we’ll be exploring how to use these two amazing tools together.
This article will explain how to use IRIS Tables and Power BI together on real data. In a follow-up article, we’ll walk through using Power BI with IRIS Cubes.
Project Prerequisites and Setup
You will need the following to get started:
InterSystems IRIS Data Platform
Microsoft Power BI Desktop (April 2019 release or more recent)
InterSystems Sample-BI data
We'll be using the InterSystems IRIS Data Platform, so you’ll need access to an IRIS install to proceed. You can download a trial version from the InterSystems website if necessary.
There are two ways to install the Microsoft Power BI Desktop: you can download an installer, or install it through the Microsoft Store. Note that if you are running Power BI on a different machine than the one where you installed InterSystems IRIS, you will need to install the InterSystems IRIS ODBC drivers on that machine separately.
To create a dashboard on Power BI we'll need some data. We'll be using the HoleFoods dataset provided by InterSystems here on GitHub. To proceed, either clone or download the repository.
In IRIS, I've created a namespace called SamplesBI. This is not required, but if you want to create a new namespace, in the IRIS Management Portal, go to System Administration > Configuration > System Configuration > Namespace and click on New Namespace. Enter a name, then create a data file or use an existing one.
On InterSystems IRIS Terminal, enter the namespace that you want to import the data into. In this case, SamplesBI:
Execute $System.OBJ.Load() with the full path of buildsample/Build.SampleBI.cls and the "ck" compile flags:
Execute the Build method of Build.SampleBI class, and full path directory of the sample files:
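Concretely, those two terminal calls look roughly like this (SAMPLESBI is the namespace created above, and /path/to/Samples-BI is a placeholder for wherever you cloned the repository):
SAMPLESBI>Do $System.OBJ.Load("/path/to/Samples-BI/buildsample/Build.SampleBI.cls","ck")
SAMPLESBI>Do ##class(Build.SampleBI).Build("/path/to/Samples-BI/")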
Connecting Power BI with IRIS
Now it's time to connect Power BI with IRIS. Open Power BI and click on "Get Data". Choose "Database", and you will see the InterSystems IRIS connector:
Enter the host address. The host address is the IP address of the host for your InterSystems IRIS instance (localhost in my case), the Port is the instance’s superserver port (IRIS default is 57773), and the Namespace is where your HoleFoods data is located.
Under Data Connectivity mode, choose "DirectQuery", which ensures you’re always viewing current data.
Next, enter the username and password to connect to IRIS. The defaults are "_SYSTEM" and "SYS".
You can import both tables and cubes that you've created in IRIS. Let's start by importing some tables.
Under Tables and HoleFoods, check:
Country
Outlet
Product
Region
SalesTransaction
We're almost there! To tell Power BI about the relationship between our tables, click on "Manage Relationships".
Then, click on New.
Let's make two relationships: "SalesTransaction" and "Product relationship".
On top, select the "SalesTransaction" table and click on the "Product" column. Next, select the "Product" table and click on the "ID" column. You'll see that the Cardinality changes automatically to "Many to One (*:1)".
Repeat this step for the following:
"SalesTransaction(Outlet)" with "Outlet(ID)"
"Outlet(Country)" with "Country(ID)"
"Country(Region)" with "Region(ID)":
Note that these relationships are imported automatically if they are expressed as Foreign Keys.
Power BI also has a Relationships schema viewer. If you click the button on the left side of the application, it will show our data model.
Creating a Dashboard
We now have everything we need to create a dashboard.
Start by clicking the button on the left to switch from schema view back to Report view. On the Home tab under the Insert Group, click the TextBox to add a Title.
The Insert Group includes static elements like Text, Shapes, and Images we can use to enhance our reports.
It's time to add our first visualization! In the Fields pane, check "Name" on "Product" and "UnitsSold" on "SalesTransaction".
Next, go to Style and select "Bold Header".
Now it's time to do some data transformation. Click on the ellipsis next to "SalesTransaction" in the Field pane.
Then, click on "Edit Query". It will open the "Power Query Editor".
Select the "DateOfSale" column and click on "Duplicate Column".
Rename this new column to "Year", and click on "Date" and select "Year".
Apply these changes. Next, select the new column and, on the "Modeling" tab, change "Default Summarization" to "Don't Summarize".
Add a "Line Chart" visualization, then drag Year to Axis, drag "Name" from "Region" to Legend, and drag "AmountOfSale" from "SalesTransaction" to Values.
Imagine that the HoleFoods sales team has a target of selling 2000 units. How can we tell if the team is meeting its goal?
To answer, let's add a visual for metrics and targets.
On "SalesTransaction" in the Field pane, check "UnitsSold", then click Gauge Chart. Under the Style properties, set Max to 3000 and Target to 2000.
KPIs (Key Performance Indicators) are helpful decision-making tools, and Power BI has a convenient KPI visual we can use.
To add it, under "SalesTransaction", check "AmountOfSale" and choose KPI under “Visualizations”. Then, drag "Year" to "Trend axis".
To align all charts and visuals, simply click and drag a visual, and when an edge or center is close to aligning with the edge or center of another visual or set of visuals, red dashed lines appear.
You also can go to the View tab and enable "Show GridLines" and "Snap Objects to Grid".
We’ll finish up by adding a map that shows HoleFoods global presence. Set Longitude and Latitude on "Outlet" to "Don't Summarize" on the Modeling tab.
You can find the map tool in the Visualizations pane. After adding it, drag the Latitude and Longitude fields from Outlet to respective properties on the map. Also from SalesTransaction, drag the AmountOfSale to Size property and UnitsSold to ToolTips.
And our dashboard is finally complete.
You can share your dashboard by publishing it to the Power BI Service. To do this, you’ll have to sign up for a Power BI account.
Conclusion
In just a few minutes, we were able to connect Power BI to InterSystems IRIS and then create amazing interactive visualizations.
As developers, this is great. Instead of spending hours or days developing dashboards for managers, we can get the job done in minutes. Even better, we can show managers how to quickly and easily create reports for themselves.
Although developing visualizations is often part of a developer’s job, our time is usually better spent developing mission-critical architecture and applications. Using IRIS and Power BI together ensures that developer time is used effectively and that managers are able to access and visualize data immediately — without waiting weeks for dashboards to be developed, tested, and deployed to production.
Perfect! Great! Thanks Henry.
Few queries -
1. Does Power BI offer an advantage over InterSystems' own analytics? If yes, what are those advantages? In general, I believe visualization is way better in Power BI, and data modelling would be much easier. In addition, Power BI lets its users leverage Microsoft's cognitive services. Did you notice any performance issues?
2. I believe the connector is free to use; can you confirm if this is true?
Thanks,
SS.
3. Tagging @Carmen.Logue to provide more details.
That's right. There is no charge for the Power BI connector, but you do need licenses for Microsoft Power BI. The connector is available starting in version 2019.2. See this article in the product documentation.
💡 This article is considered an InterSystems Data Platform Best Practice.
Nice article! While testing, trying to load the tables (which I can select), I got the following errors:
I am using Iris installed locally and PowerBI Desktop.
Any suggestions? Access Denied errors can stem from a variety of causes. As a sanity check, running the Windows ODBC connection manager's test function never hurts to rule out connectivity issues. In any case, you can consult with the WRC on support issues like this.