Announcement
Jeff Fried · Nov 4, 2019

InterSystems IRIS and IRIS for Health 2019.3 released

The 2019.3 versions of InterSystems IRIS, InterSystems IRIS for Health, and InterSystems IRIS Studio are now Generally Available! These releases are available from the WRC Software Distribution site, with build number 2019.3.0.311.0.

InterSystems IRIS Data Platform 2019.3 has many new capabilities, including:

- Support for InterSystems API Manager (IAM)
- Polyglot Extension (PeX) available for Java
- Java and .NET Gateway Reentrancy
- Node-level Architecture for Sharding and SQL Support
- SQL and Performance Enhancements
- Infrastructure and Cloud Deployment Improvements
- Port Authority for Monitoring Port Usage in Interoperability Productions
- X12 Element Validation in Interoperability Productions

These are detailed in the documentation: InterSystems IRIS 2019.3 documentation and release notes.

InterSystems IRIS for Health 2019.3 includes all of the enhancements of InterSystems IRIS. In addition, this release includes FHIR searching with chained parameters (including reverse chaining) and minor updates to FHIR and other health care protocols:

- FHIR STU3 PATCH Support
- New IHE Profiles XCA-I and IUA

These are detailed in the documentation: InterSystems IRIS for Health 2019.3 documentation and release notes.

InterSystems IRIS Studio 2019.3 is a standalone development image supported on Microsoft Windows. It works with InterSystems IRIS and InterSystems IRIS for Health version 2019.3 and below, as well as with Caché and Ensemble. See the InterSystems IRIS Studio documentation for details.

2019.3 is a CD release, so InterSystems IRIS and InterSystems IRIS for Health 2019.3 are only available in OCI (Open Container Initiative), a.k.a. Docker, container format. The platforms on which this is supported for production and development are detailed in the Supported Platforms document.

Having gone through the pain of installing Docker for Windows and then installing the InterSystems IRIS for Health 2019.3 image, and having got hold of a copy of the 2019.3 Studio, I was pleased when I saw this announcement and excitedly went looking for my 2019.3......exe, only to find out there is none, and a small note at the end of the announcement saying that 2019.3 InterSystems IRIS and InterSystems IRIS for Health will only be released in CD form. Yours, Nigel Salm

Nigel, just want to be sure that you read CD as Containers Deployment - so it will be available on every delivery site (WRC, download, AWS, GCP, Azure, Dockerhub), but in container form. InterSystems Docker Images: https://wrc.intersystems.com/wrc/coDistContainers.csp
Article
Timothy Leavitt · Mar 24, 2020

Unit Tests and Test Coverage in the InterSystems Package Manager

This article describes how to run unit tests via the InterSystems Package Manager (aka IPM - see https://openexchange.intersystems.com/package/InterSystems-Package-Manager-1), including test coverage measurement (via https://openexchange.intersystems.com/package/Test-Coverage-Tool).

Unit testing in ObjectScript

There's already great documentation about writing unit tests in ObjectScript, so I won't repeat any of that. You can find the Unit Test tutorial here: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=TUNT_preface

It's best practice to include your unit tests somewhere separate in your source tree, whether it's just "/tests" or something fancier. Within InterSystems, we end up using /internal/testing/unit_tests/ as our de facto standard, which makes sense because tests are internal/non-distributed and there are types of tests other than unit tests, but this might be a bit complex for simple open source projects. You may see this structure in some of our GitHub repos.

From a workflow perspective, this is super easy in VSCode - you just create the directory and put the classes there. With older server-centric approaches to source control (those used in Studio) you'll need to map this package appropriately, and the approach for that varies by source control extension.

From a unit test class naming perspective, my personal preference (and the best practice for my group) is:

UnitTest.<package/class being tested>[.<method/feature being tested>]

For example, if the unit tests were for method Foo in class MyApplication.SomeClass, the unit test class would be named UnitTest.MyApplication.SomeClass.Foo; if the tests were for the class as a whole, it'd just be UnitTest.MyApplication.SomeClass.

Unit tests in IPM

Making the InterSystems Package Manager aware of your unit tests is easy! Just add a line to module.xml like the following (taken from https://github.com/timleavitt/ObjectScript-Math/blob/master/module.xml - a fork of @Peter.Steiwer 's excellent math package from the Open Exchange, which I'm using as a simple motivating example):

```
<Module>
  ...
  <UnitTest Name="tests" Package="UnitTest.Math" Phase="test"/>
</Module>
```

What this all means:

- The unit tests are in the "tests" directory underneath the module's root.
- The unit tests are in the "UnitTest.Math" package. This makes sense, because the classes being tested are in the "Math" package.
- The unit tests run in the "test" phase in the package lifecycle. (There's also a "verify" phase in which they could run, but that's a story for another day.)

Running Unit Tests

With unit tests defined as explained above, the package manager provides some really helpful tools for running them. You can still set ^UnitTestRoot, etc. as you usually would with %UnitTest.Manager, but you'll probably find the following options much easier - especially if you're working on several projects in the same environment.

You can try out all of these by cloning the objectscript-math repo listed above and then loading it with `zpm "load /path/to/cloned/repo/"`, or on your own package by replacing "objectscript-math" with your package name (and test names).
- To reload the module and then run all the unit tests: `zpm "objectscript-math test"`
- To just run the unit tests (without reloading): `zpm "objectscript-math test -only"`
- To just run the unit tests (without reloading) and provide verbose output: `zpm "objectscript-math test -only -verbose"`
- To just run a particular test suite (meaning a directory of tests - in this case, all the tests in UnitTest/Math/Utils) without reloading, and provide verbose output: `zpm "objectscript-math test -only -verbose -DUnitTest.Suite=UnitTest.Math.Utils"`
- To just run a particular test case (in this case, UnitTest.Math.Utils.TestValidateRange) without reloading, and provide verbose output: `zpm "objectscript-math test -only -verbose -DUnitTest.Case=UnitTest.Math.Utils.TestValidateRange"`
- Or, if you're just working out the kinks in a single test method: `zpm "objectscript-math test -only -verbose -DUnitTest.Case=UnitTest.Math.Utils.TestValidateRange -DUnitTest.Method=TestpValueNull"`

Test coverage measurement via IPM

So you have some unit tests - but are they any good? Measuring test coverage won't fully answer that question, but it at least helps. I presented on this at Global Summit back in 2018 - see https://youtu.be/nUSeGHwN5pc .

The first thing you'll need to do is install the test coverage package: `zpm "install testcoverage"`

Note that this doesn't require IPM to install/run; you can find more information on the Open Exchange: https://openexchange.intersystems.com/package/Test-Coverage-Tool

That said, you can get the most out of the test coverage tool if you're also using IPM.

Before running tests, you need to specify which classes/routines you expect your tests to cover. This is important because, in very large codebases (for example, HealthShare), measuring and collecting test coverage for all of the files in the project may require more memory than your system has. (Specifically, gmheap for the line-by-line monitor, if you're curious.)

The list of files goes in a file named coverage.list within your unit test root; different subdirectories (suites) of unit tests can have their own copy of this to override which classes/routines will be tracked while the test suite is running. For a simple example with objectscript-math, see: https://github.com/timleavitt/ObjectScript-Math/blob/master/tests/UnitTest/coverage.list ; the user guide for the test coverage tool goes into further details.

To run the unit tests with test coverage measurement enabled, there's just one more argument to add to the command, specifying that TestCoverage.Manager should be used instead of %UnitTest.Manager to run the tests: `zpm "objectscript-math test -only -DUnitTest.ManagerClass=TestCoverage.Manager"`

The output (even in non-verbose mode) will include a URL where you can view which lines of your classes/routines were covered by unit tests, as well as some aggregate statistics.

Next Steps

What about automating all of this in CI? What about reporting unit test results and coverage scores/diffs? You can do that too! For a simple example using Docker, Travis CI and codecov.io, see https://github.com/timleavitt/ObjectScript-Math ; I'm planning to write this up in a future article that looks at a few different approaches.
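As a companion to the naming convention above: a unit test class is just a subclass of %UnitTest.TestCase whose Test* methods make assertions. A minimal sketch (the class and method under test are hypothetical, not part of objectscript-math):

```objectscript
/// Tests for the hypothetical class MyApplication.SomeClass
Class UnitTest.MyApplication.SomeClass Extends %UnitTest.TestCase
{

/// Any method whose name starts with "Test" is run as a test
Method TestAdd()
{
    // $$$AssertEquals compares two values; the message appears in the test report
    Do $$$AssertEquals(##class(MyApplication.SomeClass).Add(2, 3), 5, "Add(2,3) = 5")
    // $$$AssertStatusOK checks that a returned %Status is not an error
    Do $$$AssertStatusOK(##class(MyApplication.SomeClass).Validate(10), "Validate(10) succeeds")
}

}
```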
Excellent article Tim! Great description of how people can move the ball forward with the maturity of their development processes :)

Hello @Timothy.Leavitt, thank you for this great article!

I tried to add the "UnitTest" tag to my module.xml, but something went wrong during the publish process.

<UnitTest Name="tests" Package="UnitTest.Isc.JSONFiltering.Services" Phase="test"/>

The tests directory contains a directory tree UnitTest/Isc/JSONFiltering/Services/ with a %UnitTest.TestCase subclass.

```
Exported 'tests' to /tmp/dirLNgC2s/json-filter-1.2.0/tests/.tests
ERROR #5018: Routine 'tests' does not exist
[json-filter] Package FAILURE - ERROR #5018: Routine 'tests' does not exist
ERROR #5018: Routine 'tests' does not exist
```

I also tried with the objectscript-math project. This is the output of objectscript-math publish -v:

```
Exported 'src/cls/UnitTests' to /tmp/dir7J1Fhz/objectscript-math-0.0.4/src/cls/unittests/.src/cls/unittests
ERROR #5018: Routine 'src/cls/UnitTests' does not exist
[objectscript-math] Package FAILURE - ERROR #5018: Routine 'src/cls/UnitTests' does not exist
ERROR #5018: Routine 'src/cls/UnitTests' does not exist
```

Did I miss something, or is it a package manager issue? Thank you.

Perhaps try Name="/tests" with a leading slash?

Yes, that's it! We can see a dot. It works fine. Thank you for your help.

@Timothy.Leavitt Do you all still use your Test Coverage Tool at InterSystems? I haven't seen any recent updates to it on the repo, so I'm wondering if you consider it still useful and it's just in a steady-state, stable place, or are there different tactics for test coverage metrics since you published?

@Michael.Davidovich yes we do! It's useful and just in a steady state (although I have a PR in process around some of the recent confusing behavior that's been reported in the community).

Thanks, @Timothy.Leavitt! For others working through this too, I wanted to sum up some points that I discussed with Tim over PM.

- Tim reiterated the usefulness of the Test Coverage tool and the Cobertura output for finding starting places based on complexity and what are the right blocks to test.
- When it comes to testing persistent data classes, it is indeed tricky but valuable (e.g. data validation steps). Using transactions (TSTART and TROLLBACK) is a good approach for this.

I also discussed the video from some years ago on the mocking framework. It's an awesome approach, but for me, it depends on retooling classes to fit the framework. I'm not in a place where I want to or can rewrite classes for the sake of testing; however, this might be a good approach for others. There may be other open source frameworks for mocking available later.

Hope this helps and encourages more conversation! In a perfect world we'd start with our tests and code from there, but well, the world isn't perfect!

great summary ... thank you!

@Timothy.Leavitt and others: I know this isn't Jenkins support, but I seem to be having trouble allowing the account running Jenkins to get into IRIS. Just trying to get this to work locally at the moment. I'm running on Windows through an organizational account, so I created a new local account on the computer, jenkinsUser, which I'm to understand is the 'user' that logs in and runs everything on Jenkins. When I launch IRIS in the build script using . . .

C:\MyPath\bin\irisdb -s C:\MyPath\mgr -U MYNAMESPACE 0<inFile

. . . I can see in the console it's trying to log in. I turned on O/S authentication for the system and allowed the %System.Login function to use Kerberos. I can launch Terminal from my tray and I'm logged in without a user/password prompt. I am guessing that IRIS doesn't know about my jenkinsUser local account, so it won't allow that user to use O/S authentication?
I'm trying to piece this together in my head. How can I allow this computer user trying to run Jenkins access to IRIS without authentication?

Hope this helps others who are trying to set this up. Not sure if this is right, but I created a new IRIS user and then created delegated access to %Service_Console and included this in my ZAUTHENTICATE routine. Seems to have worked.

Now . . . on to the next problem:

```
DO ##CLASS(UnitTest.Manager).OutputResultsXml("junit.xml")
^
<CLASS DOES NOT EXIST> *UnitTest.Manager
```

Please try %UnitTest.Manager

I had to go back . . . that was a custom class and method that was written for the Widgets Direct demo app.

Trial and error, folks: @Timothy.Leavitt your presentation mentioned a custom version of the Cobertura plugin for the scatter plot . . . is that still necessary, or does the current version support that? Not sure if I see any mention of the custom plugin on the GitHub page.

Otherwise, I seem to be missing something key: I don't have build logic in my script. I suppose I just thought that step was for automation purposes, so that the latest code would be compiled on whatever server. I don't have anything like that yet and thought I could just run the test coverage utility, but it's coming up with nothing. I'll keep playing tomorrow, but I appreciate anyone's thoughts on this, especially if you've set it up before!

For those following along, I got this to work finally by creating the "coverage.list" file in the unit test root. I tried setting the parameter node "CoverageClasses" but that didn't work (maybe I used $LB wrong).

Still not sure how to get the scatter plot for complexity, as @Timothy.Leavitt mentioned in the presentation that the Cobertura plugin was customized. Any thoughts on that are appreciated!

I think this is it: GitHub - timleavitt/covcomplplot-plugin: Jenkins covcomplplot plugin. It's written by Tim, it's in the plugin library, and it looks like what was in the presentation; however, I have some more digging to do come Monday.

@Michael.Davidovich I was out Friday, so still catching up on all this - glad you were able to figure out coverage.list. That's generally a better way to go for automation than setting a list of classes.

re: the plugin, yes, that's it! There's a GitHub issue that's probably the same here: https://github.com/timleavitt/covcomplplot-plugin/issues/1 - it's back on my radar given what you're seeing.

So I originally installed the scatter plot plugin from the library, not the one from your repo. I uninstalled that and I'm trying to install the one you modified. I'm having a little trouble because it seems I have to download your source, make sure I have a JDK installed and Maven, and package the code into a .hpi file? Does this sound right? I'm getting some issues with the POM file while running 'mvn package'. Is it possible to provide the packaged file for those of us not Java-savvy?

For other n00bs like me . . . in GitHub you click the Releases link on the code page and you can find the packaged code.

Edit: I created a separate thread about this so it gets more visibility. The thread can be found here: https://community.intersystems.com/post/test-coverage-coverage-report-not-generating-when-running-unit-tests-zpm ...

Hello, @Timothy.Leavitt, thanks for the great article! I am facing a slight problem and was wondering if you, or someone else, might have some insight into the matter. I am running my unit tests in the following way with ZPM, as instructed. They work well and test reports are generated correctly.
Test coverage is also measured correctly, according to the logs. However, even though I instructed ZPM to generate Cobertura-style coverage reports, it is not generating one. When I run the GenerateReport() method manually, the report is generated correctly. I am wondering what I am doing wrong. I used the test flags from the ObjectScript-Math repository, but they seem not to work. Here is the ZPM command I use to run the unit tests:

```
zpm "common-unit-tests test -only -verbose -DUnitTest.ManagerClass=TestCoverage.Manager -DUnitTest.UserParam.CoverageReportClass=TestCoverage.Report.Cobertura.ReportGenerator -DUnitTest.UserParam.CoverageReportFile=/opt/iris/test/CoverageReports/coverage.xml -DUnitTest.Suite=Test.UnitTests.Fw -DUnitTest.JUnitOutput=/opt/iris/test/TestReports/junit.xml -DUnitTest.FailuresAreFatal=1":1
```

The test suite runs okay, but coverage reports do not generate. However, when I run these commands stated in the TestCoverage documentation, the reports are generated:

```
Set reportFile = "/opt/iris/test/CoverageReports/coverage.xml"
Do ##class(TestCoverage.Report.Cobertura.ReportGenerator).GenerateReport(<index>, reportFile)
```

Here is a short snippet from the logs where you can see that test coverage analysis is run:

```
Collecting coverage data for Test: .036437 seconds
  Test passed

Mapping to class/routine coverage: .041223 seconds
Aggregating coverage data: .019707 seconds
Code coverage: 41.92%

Use the following URL to view the result:
http://192.168.208.2:52773/csp/sys/%25UnitTest.Portal.Indices.cls?Index=19&$NAMESPACE=COMMON
Use the following URL to view test coverage data:
http://IRIS-LOCALDEV:52773/csp/common/TestCoverage.UI.AggregateResultViewer.cls?Index=17
All PASSED

[COMMON|common-unit-tests] Test SUCCESS
```

What am I doing wrong? Thank you, and have a good day!
Kari Vatjus-Anttila

%UnitTest mavens may be interested in this announcement: https://community.intersystems.com/post/intersystems-testing-manager-new-vs-code-extension-unittest-framework
Question
Evgeny Shvarov · Jul 23, 2019

Cannot Find Ensemble.INC Analog in InterSystems IRIS

Hi guys!

What is the IRIS analog for Ensemble.INC? Tried to compile a class in IRIS - it says:

```
Error compiling routine: Util.LogQueueCounts. Errors: Util.LogQueueCounts.cls : Util.LogQueueCounts.1(7) : MPP5635 : No include file 'Ensemble'
```

You just have to enable Ensemble in the installer:

```
<Namespace Name="${NAMESPACE}" Code="${DBNAME}-CODE" Data="${DBNAME}-DATA" Create="yes" Ensemble="1">
```

That helped! Thank you!

What do you mean? There is still Ensemble.inc
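For reference, here is roughly where that attribute lives in a complete %Installer class - a minimal, illustrative sketch (the ${...} expressions are installer variables you define, apart from MGRDIR, which %Installer predefines):

```objectscript
Include %occInclude

Class App.Installer
{

XData Manifest [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <!-- Ensemble="1" creates an interoperability-enabled namespace, so Ensemble.inc resolves -->
  <Namespace Name="${NAMESPACE}" Code="${DBNAME}-CODE" Data="${DBNAME}-DATA" Create="yes" Ensemble="1">
    <Configuration>
      <Database Name="${DBNAME}-CODE" Dir="${MGRDIR}/${DBNAME}-CODE" Create="yes"/>
      <Database Name="${DBNAME}-DATA" Dir="${MGRDIR}/${DBNAME}-DATA" Create="yes"/>
    </Configuration>
  </Namespace>
</Manifest>
}

/// Standard %Installer entry point: generates the installation code from the Manifest XData
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
    Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "Manifest")
}

}
```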
Announcement
Sourabh Sethi · Jul 29, 2019

Online Learning - A Solid Design in InterSystems ObjectScript

A SOLID Design in Caché Object

In this session, we will discuss the SOLID principles of programming and implement them in an example. I have used the Caché Object programming language for the examples. We will go step by step to understand the requirement, then the common mistakes we tend to make while designing, then each principle, and finally the complete design and its implementation via Caché Objects.

If you have any questions or suggestions, please write to me - sethisourabh.hit@gmail.com

CodeSet - https://github.com/sethisourabh/SolidPrinciplesTraining

Thanks for sharing this knowledge on the ObjectScript language. I hadn't heard of the SOLID principles before; I'll apply them in my next code. BTW: can you share your slides for an easier walkthrough?

Thank you for your response. I don't see any way to attach documents here. You can send your email id and I will send them over there. My email ID - sethisourabh.hit@gmail.com

Regards,
Sourabh

You could use https://www.slideshare.net/ or add the document to the GitHub repo. There is a way to post documents on InterSystems Community under Edit Post -> Change Additional Settings, which I documented here, but it's not user-friendly and I didn't automatically see links to attached documents within the post, so I had to manually add the links. Community feedback suggests they may turn this feature off at some point, so I'd recommend any of the above options instead.

Thanks, @Stephen.Wilson! Yes, we plan to turn off the attachments feature. As you mention, there are a lot of better ways to expose presentations and code. And as you see, @Sourabh.Sethi6829 posted the recent package for his recent video on Open Exchange.

Do I need the code set for this session in Open Exchange? Would be great - it adds even more presence, and developers can collaborate.

DONE
Article
Murray Oldfield · Nov 14, 2019

Monitoring InterSystems IRIS Using Built-in REST API

Released with no formal announcement in [IRIS preview release 2019.4](https://community.intersystems.com/post/intersystems-iris-and-iris-health-20194-preview-published "IRIS preview release 2019.4") is the /api/monitor service exposing IRIS metrics in **_Prometheus_** format. This is big news for anyone wanting to use IRIS metrics as part of their monitoring and alerting solution. The API is a component of the new IRIS _System Alerting and Monitoring (SAM)_ solution that will be released in an upcoming version of IRIS.

>However, you do not have to wait for SAM to start planning and trialling this API to monitor your IRIS instances.

In future posts, I will dig deeper into the metrics available and what they mean, and provide example interactive dashboards. But first, let me start with some background and a few questions and answers.

IRIS (and Caché) is always collecting dozens of metrics about itself and the platform it is running on. There have always been [multiple ways to collect these metrics to monitor Caché and IRIS](https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GCM "multiple ways to collect these metrics to monitor Caché and IRIS"). I have found that few installations use the IRIS and Caché built-in solutions. For example, History Monitor has been available for a long time as a historical database of performance and system usage metrics. However, there was no obvious way to surface these metrics and instrument systems in real time.

IRIS platform solutions (along with the rest of the world) are moving from single monolithic applications running on a few on-premises instances to distributed solutions deployed 'anywhere'. For many use cases, existing IRIS monitoring options do not fit these new paradigms. Rather than completely reinvent the wheel, InterSystems looked to popular and proven current open source solutions for monitoring and alerting.

## Prometheus?

Prometheus is a well-known and widely deployed open source monitoring system based on proven technology. It has a wide variety of plugins. It is designed to work well within the cloud environment, but is also just as useful for on-premises deployments. Plugins include operating systems, web servers such as Apache, and many other applications. Prometheus is often used with a front-end client, for example, _Grafana_, which provides a great UI/UX experience that is extremely customisable.

## Grafana?

Grafana is also open source. As this series of posts progresses, I will provide sample templates of monitoring dashboards for common scenarios. You can use the samples as a base to design dashboards for what you care about. The real power comes when you combine IRIS metrics in context with metrics from your whole solution stack: from the platform components, the operating system, IRIS, and especially when you add instrumentation from your applications.

## Haven't I seen this before?

Monitoring IRIS and Caché with Prometheus and Grafana is not new. I have been using these applications for several years to monitor my development and test systems. If you search the Developer Community for "Prometheus", you will find other posts ([for example, some excellent posts by Mikhail Khomenko](https://community.intersystems.com/post/making-prometheus-monitoring-intersystems-cach%C3%A9 "for example, some excellent posts by Mikhail Khomenko")) that show how to expose Caché metrics for use by Prometheus.

>The difference now is that the /api/monitor API is included and enabled by default. There is no requirement to code your own classes to expose metrics.
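To see what the endpoint returns, you can hit it with any HTTP client. A minimal ObjectScript sketch (host and port are illustrative; adjust for your instance and its authentication settings):

```objectscript
Set req = ##class(%Net.HttpRequest).%New()
Set req.Server = "localhost"  // illustrative host and port - adjust for your instance
Set req.Port = 52773
Set sc = req.Get("/api/monitor/metrics")
// On success, print the first chunk of the plain-text Prometheus output
Write:sc req.HttpResponse.Data.Read()
```

From an OS shell, `curl http://localhost:52773/api/monitor/metrics` gives the same output.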
# Prometheus Primer

Here is a quick orientation to Prometheus and some terminology. I want you to see the high level, to lay some groundwork, and to open the door to how you think of visualising or consuming the metrics provided by IRIS or other sources.

Prometheus works by _scraping_ or pulling time series data exposed from applications as HTTP endpoints (APIs such as IRIS /api/monitor). _Exporters_ and client libraries exist for many languages, frameworks, and open-source applications — for example, for web servers like Apache, operating systems, docker, Kubernetes, databases, and now IRIS.

Exporters are used to instrument applications and services and to expose relevant metrics on an endpoint for scraping. Standard components such as web servers, databases, and the like are supported by core exporters. Many other exporters are available open-source from the Prometheus community.

## Prometheus Terminology

A few key terms are useful to know:

- **Targets** are where the services are that you care about, like a host or application or services like Apache or IRIS or your own application.
- Prometheus **scrapes** targets over HTTP, collecting metrics as time-series data.
- **Time-series data** is exposed by applications, for example, IRIS, or via exporters.
- **Exporters** are available for things you don't control, like Linux kernel metrics.
- The resulting time-series data is stored locally on the Prometheus server in a database \*\*.
- The time-series database can be queried using an optimised **query language (PromQL)** — for example, to create alerts, or by client applications such as Grafana, to display the metrics in a dashboard.

>\*\* **Spoiler Alert;** For security, scaling, high availability and some other operational efficiency reasons, for the new SAM solution the database used for Prometheus time-series data is IRIS! However, access to the Prometheus database -- on IRIS -- is transparent, and applications such as Grafana do not know or care.

### Prometheus Data Model

Metrics returned by the API are in Prometheus format. Prometheus uses a simple text-based metrics format with one metric per line:

```
<metric name>{<label name>=<label value>, ...} <metric value>
```

Each resulting time series is a stream of timestamped values: [ (time 1, value 1), (time 2, value 2), ... ].

Metrics can have labels as (key, value) pairs. Labels are a powerful way to filter metrics as dimensions. As an example, examine a single metric returned for IRIS /api/monitor. In this case, journal free space:

```
iris_jrn_free_space{id="WIJ",dir="/fast/wij/"} 401562.83
```

The identifier tells you what the metric is and where it came from:

`iris_jrn_free_space`

Multiple labels can be used to decorate the metrics, and then used to filter and query. In this example, you can see the WIJ and the directory where the WIJ is stored:

`id="WIJ",dir="/fast/wij/"`

And a value: `401562.83` (MB).

## What IRIS metrics are available?

The [preview documentation](https://irisdocs.intersystems.com/iris20194/csp/docbook/Doc.View.cls?KEY=GCM_rest "Will be subject to changes") has a list of metrics. However, be aware there may be changes. You can also simply query the `/api/monitor/metrics` endpoint and see the list. I use [Postman](https://www.getpostman.com "Postman"), which I will demonstrate in the next community post.

# What should I monitor?

Keep these points in mind as you think about how you will monitor your systems and applications.

- When you can, instrument key metrics that affect users.
  - Users don't care that one of your machines is short of CPU.
  - Users care if the service is slow or having errors.
  - For your primary dashboards, focus on high-level metrics that directly impact users.
- For your dashboards, avoid a wall of graphs.
  - Humans can't deal with too much data at once.
  - For example, have a dashboard per service.
- Think about services, not machines.
  - Once you have isolated a problem to one service, then you can drill down and see if one machine is the problem.

# References

**Documentation and downloads** for: [Prometheus](https://prometheus.io "Prometheus") and [Grafana](https://grafana.com "Grafana")

I presented a pre-release overview of SAM (including Prometheus and Grafana) at **InterSystems Global Summit 2019**; you can find [the link at InterSystems learning services](https://learning.intersystems.com/mod/page/view.php?id=5599 "Learning Services"). If the direct link does not work, go to the [InterSystems learning services web site](https://learning.intersystems.com "Learning Services") and search for: "System Alerting and Monitoring Made Easy".

Search here on the community for "Prometheus" and "Grafana".

Please include node_exporter setup. What gets put into isc_prometheus.yml? This is what the doc says to do in isc_prometheus.yml:

```yaml
- job_name: NODE
  metrics_path: /metrics
  scheme: http
  static_configs:
    - labels:
        cluster: "2"
        group: node
      targets:
        - csc2cxn00020924.cloud.kp.org:9100
        - csc2cxn00021271.cloud.kp.org:9100
```

It does not work. The node_exporter is installed and running.

From what I can see, the values returned are updated very quickly - maybe every second? I'm unclear as to how to contextualize the metrics for a periodic collection. Specifically, if I call the API every minute, I may get a value for global references that is very low or very high - but it may not be indicative of the value over time. Is there any information on how the metrics are calculated internally that might help? Single points in time may be very deceptive.
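A general note on that last question: Prometheus metrics are typically either gauges (instantaneous values) or counters (cumulative totals), and the usual approach is to let Prometheus scrape frequently and derive per-interval behaviour at query time rather than at collection time. For a counter, PromQL's rate() function gives the per-second average over a window (the metric name below is illustrative, not a confirmed IRIS metric):

```
rate(iris_some_counter_total[5m])
```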
Announcement
Anastasia Dyubaylo · Jan 20, 2020

The Best InterSystems Open Exchange Developers and Applications in 2019

Hi Developers,

2019 was a really great year, with almost 100 applications uploaded to the InterSystems Open Exchange! To thank our best contributors, we have special annual achievement badges in the Global Masters Advocacy Hub. This year we introduced 2 new badges for contribution to the InterSystems Open Exchange:

✅ InterSystems Application of the Year 2019
✅ InterSystems Developer of the Year 2019

We're glad to present the most downloaded applications on InterSystems Data Platforms!

Nomination: InterSystems Application of the Year 2019 - given to developers whose applications gathered the most downloads on InterSystems Open Exchange in 2019:

- Gold (1st place): VSCode-ObjectScript by @Maslennikov.Dmitry
- Silver (2nd place): PythonGateway by @Eduard.Lebedyuk
- Bronze (3rd place): iris-history-monitor by @Henrique.GonçalvesDias
- 4th-10th places: WebTerminal by @Nikita.Savchenko7047, Design Pattern in Caché Object Script by @Tiago.Ribeiro, Caché Monitor by @Andreas.Schneider, AnalyzeThis by @Peter.Steiwer, A more useFull Object Dump by @Robert.Cemper1003, Light weight EXCEL download by @Robert.Cemper1003, ObjectScript Class Explorer by @Nikita.Savchenko7047

Nomination: InterSystems Developer of the Year 2019 - given to developers who uploaded the largest number of applications to InterSystems Open Exchange in 2019:

- Gold (1st place): @Robert.Cemper1003
- Silver (2nd place): @Evgeny.Shvarov, @Eduard.Lebedyuk
- Bronze (3rd place): @Maslennikov.Dmitry, @David.Crawford, @Otto.Karlinger
- 4th-10th places: @Peter.Steiwer, @Amir.Samary, @Guillaume.Rongier7183, @Rubens.Silva9155

Congratulations! You are doing such a valuable and important job for the whole community!

Thank you all for being part of the InterSystems Community. Share your experience, ask, learn and develop, and be successful with InterSystems!

➡️ See also the Best Articles and the Best Questions on InterSystems Data Platform and the Best Contributors in 2019.
Announcement
Derek Robinson · Feb 21, 2020

ICYMI: A Discussion with Jenny Ames about InterSystems IRIS

I wanted to share each of the first three episodes of our new Data Points podcast with the community here — we previously posted announcements for episodes on IntegratedML and Kubernetes — so here is our episode on InterSystems IRIS as a whole! It was great talking with @jennifer.ames about what sets IRIS apart, some of the best use cases she's seen in her years as a trainer in the field and then as an online content developer, and more. Check it out, and make sure to subscribe at the link above — Episode 4 will be released next week!
Announcement
Anastasia Dyubaylo · Feb 22, 2020

New Video: Code in Any Language with InterSystems IRIS

Hi Developers, New video, recorded by @Benjamin.DeBoe, is available on InterSystems Developers YouTube: ⏯ Code in Any Language with InterSystems IRIS InterSystems Product Manager @Benjamin.DeBoe talks about combining your preferred tools and languages when building your application on InterSystems IRIS Data Platform. Try InterSystems IRIS: https://www.intersystems.com/try Enjoy watching the video! 👍🏼
Announcement
Michelle Spisak · Oct 17, 2019

Get Hands-On with InterSystems IRIS™ Multi-Model, Interoperability

New from InterSystems Online Learning: two new exercises that give you hands-on experience with InterSystems IRIS and show how easily it can solve your problems!

Using Multi-Model with Python and Node.js: This exercise takes you through the steps of using the InterSystems IRIS multi-model capability to create a Node.js application that sends JSON data straight to your database instance without any parsing or mapping.

Build a Smart Ticketing System: In this exercise, you will build on the Red Light Violation application used in the Interoperability QuickStart. Here, you will add another route to identify at-risk intersections based on data from the Chicago traffic system. This involves:

- Building a simple interface to consume data from a file and store data to a file.
- Adding logic to list only intersections that have been deemed high risk based on the number of red light violations.
- Adding routing to consume additional data about populations using REST from public APIs.
- Modifying the data to be in the right format for the downstream system.

Get started with InterSystems IRIS today!
Article
Eduard Lebedyuk · Oct 21, 2019

Using InterSystems API Management to Load Balance an API

InterSystems API Management (IAM) - a new feature of the InterSystems IRIS Data Platform - enables you to monitor, control, and govern traffic to and from web-based APIs within your IT infrastructure. In case you missed it, here is the link to the announcement. And here's an article explaining how to start working with IAM.

In this article, we will use InterSystems API Management to load balance an API.

In our case, we have 2 InterSystems IRIS instances with the /api/atelier REST API that we want to publish for our clients. There are many different reasons why we might want to do that, such as:

- Load balancing to spread the workload across servers
- Blue-green deployment: we have two servers, one "prod", the other "dev", and we might want to switch between them
- Canary deployment: we might publish the new version only on one server and move 1% of clients there
- High availability configuration
- etc.

Still, the steps we need to take are quite similar.

Prerequisites

- 2 InterSystems IRIS instances
- InterSystems API Management instance

Let's go

Here's what we need to do:

1. Create an upstream.

An upstream represents a virtual hostname and can be used to load balance incoming requests over multiple services (targets). For example, an upstream named service.v1.xyz would receive requests for a Service whose host is service.v1.xyz. Requests for this Service would be proxied to the targets defined within the upstream. An upstream also includes a health checker, which can enable and disable targets based on their ability or inability to serve requests.

To start:

- Open the IAM Administration Portal
- Go to Workspaces
- Choose your workspace
- Open Upstreams
- Click on the "New Upstream" button

After clicking the "New Upstream" button, you will see a form where you can enter some basic information about the upstream (there are a lot more properties):

Enter the name - it's the virtual hostname our services will use. It's unrelated to DNS records. I recommend setting it to a non-existent value to avoid confusion. If you want to read about the rest of the properties, check the documentation. On the screenshot, you can see how I imaginatively named the new upstream myupstream.

2. Create targets.

Targets are backend servers that execute the requests and send results back to the client.

Go to Upstreams and click on the upstream name you just created (and NOT on the update button). You will see all the existing targets (none so far) and the "New Target" button. Press it, and in the new form define a target. Only two parameters are available:

- target - host and port of the backend server
- weight - relative priority given to this server (more weight means more requests are sent to this target)

I have added two targets.

3. Create a service.

Now that we have our upstream, we need to send requests to it. We use a Service for that. Service entities, as the name implies, are abstractions of each of your upstream services. Examples of Services would be a data transformation microservice, a billing API, etc.

Let's create a service targeting our IRIS instance. Go to Services and press the "New Service" button. Set the following values:

| field | value | description |
| --- | --- | --- |
| name | myservice | the logical name of this service |
| host | myupstream | upstream name |
| path | /api/atelier | root path we want to serve |
| protocol | http | the protocols we want to support |

Keep the default values for everything else (including port: 80). After creating the service, you'll see it in the list of services. Copy the service ID somewhere - we're going to need it later.
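As an aside: everything done through the portal above can also be scripted against the IAM admin API on port 8001 (the same API used for certificate upload in a later article here). A hypothetical sketch mirroring steps 1-3, in the same shorthand request notation (host names and ports are illustrative):

```
POST http://iam-host:8001/upstreams
{ "name": "myupstream" }

POST http://iam-host:8001/upstreams/myupstream/targets
{ "target": "iris-a:52773", "weight": 100 }

POST http://iam-host:8001/upstreams/myupstream/targets
{ "target": "iris-b:52773", "weight": 100 }

POST http://iam-host:8001/services
{ "name": "myservice", "host": "myupstream", "path": "/api/atelier", "protocol": "http" }
```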
4. Create a route.

Routes define rules to match client requests. Each Route is associated with a Service, and a Service may have multiple Routes associated with it. Every request matching a given Route will be proxied to its associated Service.

The combination of Routes and Services (and the separation of concerns between them) offers a powerful routing mechanism with which it is possible to define fine-grained entry points in IAM leading to different upstream services of your infrastructure.

Now let's create a route. Go to Routes and press the "New Route" button. Set the values in the Route creation form:

| field | value | description |
| --- | --- | --- |
| path | /api/atelier | root path we want to serve |
| protocol | http | the protocols we want to support |
| service.id | guid from step 3 | service id value (guid from the previous step) |

And we're done! Send a request to http://localhost:8000/api/atelier/ (note the slash at the end) and it will be served by one of our two backends.

Conclusion

IAM offers a highly customizable API management infrastructure, allowing developers and administrators to take control of their APIs.

Links

- Documentation
- IAM Announcement
- Working with IAM article

Question

What functionality do you want to see configured with IAM?

I have a question regarding productionized deployments. Can the internal IRIS web server be used, i.e. port 52773? Or should there still be a web gateway between IAM and the IRIS instance? Regarding Kubernetes: I would think that IAM should be the ingress, is that correct?

Hi Stefan, the short answer is you still need a web gateway between IAM and IRIS. The private web server (port 52773) minimal build of the Apache web server is supplied for the purpose of running the Management Portal, not production-level traffic.

> I would think that IAM should be the ingress, is that correct?

Agreed. Calling @Luca.Ravazzolo.
Article
Evgeny Shvarov · Nov 19, 2019

Recurse Parameter for REST API Applications in InterSystems IRIS

Hi developers!

I just want to share with you some knowledge, a.k.a. experience, which could save you a few hours someday.

If you are building a REST API with IRIS which contains more than one level of "/", e.g. '/patients/all', don't forget to add the parameter Recurse="1" to your deployment script in %Installer; otherwise all the second- and higher-level entries won't work, while all the first-level entries will: /patients will work, but /patients/all won't.

Here is an example of the CSPApplication section which fixes the issue and which you may want to use in your %Installer class:

```
<CSPApplication Url="${CSPAPP}"
                Recurse="1"
                Directory="${CSPAPPDIR}"
                Grant="${RESOURCE},%SQL"
                AuthenticationMethods="96"
/>
```
Article
Eduard Lebedyuk · Nov 22, 2019

Serving HTTPS requests from InterSystems API Management

This quick guide shows how to serve HTTPS requests with InterSystems API Management. The advantage here is that you keep your certificates on one separate server, and you don't need to configure each backend web server individually.

Here's how:

1. Buy the domain name.
2. Adjust DNS records from your domain to the IAM IP address.
3. Generate an HTTPS certificate and private key. I use Let's Encrypt - it's free.
4. Start IAM if you haven't already.
5. Send this request to IAM:

```
POST http://host:8001/certificates/

{
    "cert": "-----BEGIN CERTIFICATE-----...",
    "key": "-----BEGIN PRIVATE KEY-----...",
    "snis": [ "host" ]
}
```

Note: replace newlines in cert and key with \n. You'll get a response; save the id value from it.

6. Go to your IAM workspace, open SNIs, and create a new SNI with the name set to your host and ssl_certificate_id set to the id from the previous step.
7. Update your routes to use the https protocol (leave only https to force secure connections, or specify http, https to allow both protocols).
8. Test HTTPS requests by sending them to https://host:8443/<your route> - that's where IAM listens for HTTPS connections by default.

Eduard, thank you for a very good webinar. You mentioned that IAM can be helpful even if there is a "service mix": some services are IRIS-based, others are not. How can IAM help with non-IRIS services? Can any Target Object be non-IRIS based?

> Can any Target Object be non-IRIS based?

Absolutely. The services you offer via IAM can be sourced anywhere, both from InterSystems IRIS and not.

> How can IAM help with non-IRIS services?

All the benefits you get from using IAM (ease of administration, control, analytics) are available for both InterSystems IRIS-based and non-InterSystems IRIS-based services.
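Returning to step 5: if you'd rather script the upload than hand-edit the PEM into JSON, a minimal ObjectScript sketch along these lines would work (host and file paths are illustrative; a JSON library handles the newline escaping for you):

```objectscript
Class Demo.IAMCert
{

/// Upload a certificate and key to the IAM admin API
ClassMethod UploadCert() As %Status
{
    // Illustrative host and paths - adjust for your environment
    Set req = ##class(%Net.HttpRequest).%New()
    Set req.Server = "host"
    Set req.Port = 8001
    Set req.ContentType = "application/json"

    Set body = {}
    Set body.cert = ..ReadFile("/etc/letsencrypt/live/host/fullchain.pem")
    Set body.key = ..ReadFile("/etc/letsencrypt/live/host/privkey.pem")
    Set body.snis = ["host"]

    // %ToJSON() escapes the PEM newlines as \n, which is what the API expects
    Do req.EntityBody.Write(body.%ToJSON())
    Quit req.Post("/certificates/")
}

/// Read a whole (small) text file into a string
ClassMethod ReadFile(pPath As %String) As %String
{
    Set stream = ##class(%Stream.FileCharacter).%New()
    Do stream.LinkToFile(pPath)
    // Assumes the file fits in a single Read(); loop for larger files
    Quit stream.Read(3641144)
}

}
```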
Discussion
Nikita Savchenko · Dec 12, 2019

Package Manager for InterSystems: the Design Question to All Engineers

Hello, InterSystems community!

Lately, you have probably heard of the new InterSystems Package Manager, ZPM. If you're familiar with it or with package managers such as NPM, Dep, pip/PyPI, etc., or just know what this is all about - this question is for you! The question I want to raise is actually a system design question or, in other words, "how should ZPM implement it".

In short, ZPM (the new package manager) allows you to install packages/software to your InterSystems product in a very convenient, manageable way. Just open up the terminal, run the ZPM routine, and type install samples-objectscript: you will have a new package/software installed and ready to use! In the same way, you can easily delete and update packages.

From the developer's point of view, much as in other package managers, ZPM requires the package/software to have a package description, represented as a module.xml file. Here's an example of it. This file has a description of what to install, which CSP applications to create, which routines to run once installed, and so on.

Now, straight to the point. You've also probably heard of InterSystems WebTerminal - one of my projects, which is quite widely used (over 500 installs over the last couple of months). We are trying to bring WebTerminal to ZPM.

So far, anyone could install WebTerminal just by importing an XML file with its code - no other actions were needed. During the class compilation, WebTerminal runs the projection and does all the required setup on its own (web application, globals, etc. - see here). In addition to this, WebTerminal has its own self-updating mechanism, which allows it to self-update when a new version comes out, made exactly with the use of projections. Apart from that, I have 2 more projects (ClassExplorer, Visual Editor) that use the same convenient import-and-install mechanism.

But it was decided that ZPM won't accept projections as a paradigm, and everything should be described in the module.xml file. Hence, to publish WebTerminal for ZPM, the team tried to remove the Installer.cls class (one of WebTerminal's classes, which did all the install-update magic with the use of projections) and manually replaced it with some module.xml metadata. It turned out to be almost enough for WebTerminal to work, but to be 100% compatible with ZPM it potentially leads to unexpected incompatibilities (see below); thus, source code changes are needed.

So the question is, should ZPM really avoid all projection-enabled classes for its packages? The decision to avoid projections might be changed via the open discussion here. It's not a question of "why can't I rewrite WebTerminal's code", but rather "why not just accept original software code even if it uses projections?"

My opinion was quite strongly against avoiding projection-enabled classes in ZPM modules, for multiple reasons. But first of all, because projections are part of how the programming language works, and I see no constructive reasoning against using them for whatever the software/package is designed for. Avoiding them and cutting the Installer.cls class from the release is essentially the same as patching a working module. I agree that packages which ship specifically for ZPM should try to use all the installation features which module.xml provides; however, WebTerminal is also shipped outside of ZPM, and maintaining 2 versions of WebTerminal (at least, because of the self-update feature) makes me think that something is wrong here.
I see the following pros of keeping all projection-enabled classes in ZPM:

- The package/software will still be compatible both with ZPM and with the regular installation used for years (via XML/class import)
- No changes to the original package/software source code are needed to bring it to ZPM
- All designed functions work as expected and don't cause problems (for instance, WebTerminal self-updates: upon the update, it loads the XML file with the new version and imports it, including the projection-enabled Installer.cls file anyway)

Cons of keeping all projection-enabled classes in ZPM:

- Side effects made during the installation/uninstallation by projection-enabled classes won't be statically described in the module.xml file, hence they are "less auditable". There is an opinion that any side effect must be described in the module.xml file.

Please indicate any other pros/cons if this isn't the full list. What do you think? Thank you!

Exactly, not for installing purposes, you're right, I agree. But what do you think about the WebTerminal case in particular?

1. It's already developed and bound to projections: installation, its own update mechanism, etc.
2. It's also shipped outside of ZPM
3. It would work as usual if only ZPM supported projections

I see you're pointing out that "It might need to support Projections eventually because, as you said, it's a part of the language" - that's mostly what my point is about. Why not just allow them? Thanks!

Exactly, I completely agree about simplicity, transparency, and an installation standard. But see my answer to Sergey's comment - what to do with WebTerminal in particular?

1. Why would I need to rewrite the update mechanism I developed years ago (for example)?
2. Why would I need to maintain 2 code bases for ZPM & regular installations (or automate it in a quite crazy way, or just drop the self-update feature when ZPM is detected)?
3. Why are all these changes to the source code needed, after all, if it "just works" normally without ZPM complications (which is how ObjectScript works)?

I think this leads to either a "make a package ZPM-compatible" or a "make ZPM ObjectScript-compatible" discussion, doesn't it?

The answer to all this could be "To make the world a better place" :). Because if you do all 3, you get the same wonderful WebTerminal, but with a simple, transparent, and standard installation mechanism, and yet another channel for distribution, since ZPM seems to be a very handy and popular way to install/try the stuff. Maybe yet another channel of clear and handy app distribution is enough of an argument to change something in the application too?

True points. For sure, developers can customize it. I can do another version of WebTerminal specifically for ZPM, but it will involve additional coding and support:

1. A need to change how the self-update mechanism works or shut it down completely. Right now, the user gets a message on the UI suggesting to update WebTerminal to the latest version. There's quite a lot happening under the hood.
2. Thus, create an additional pipeline (or split the codebase) for 2 WebTerminal versions: ZPM's one and the regular one, with all the tests and so on.

I am wondering whether it is worth doing from WebTerminal's perspective, or whether it's better to make WebTerminal a kind of exception for ZPM. Because, still, inserting a couple of if (isZPMInstalled) { ... } else { ... } conditions into WebTerminal (even on the front-end side) looks like an anti-pattern to me. Thanks!
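For readers who haven't written one, this is roughly the shape of the projection-based installer being discussed - an illustrative sketch, not WebTerminal's actual code:

```objectscript
/// Compile-time hook: CreateProjection runs when a class declaring this
/// projection is compiled; RemoveProjection runs when it is removed.
Class App.InstallerProjection Extends %Projection.AbstractProjection
{

ClassMethod CreateProjection(cls As %String, ByRef params) As %Status
{
    // Side effects happen here: create a web application, set globals, etc.
    Quit $$$OK
}

ClassMethod RemoveProjection(cls As %String, ByRef params, recompile As %Boolean) As %Status
{
    // Undo those side effects on uninstall
    Quit $$$OK
}

}
```

A class then opts in with a declaration such as `Projection Install As App.InstallerProjection;`, and the hooks fire automatically at (un)compilation - which is exactly the auditability concern raised above: none of those side effects appear in module.xml.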
Considering the points others mention, I agree that projections should not be the way to install things, but rather the acceptable exception, as for WebTerminal and other complex packages.

Another option, rather than having two versions of the whole codebase, could be having a wrapper module around webterminal (i.e., another module that depends on webterminal) with hooks in webterminal to allow that wrapper to turn off projection-based installation-related features.

I completely agree, and to get to a standard installation mechanism for USERS, we need to zpm-enable as many existing projects as possible. To enable these projects, we need to simplify the zpm-enabling, leveraging existing code if possible (or not preventing developers from leveraging the existing code). I think allowing developers to use already existing installers (whatever form they may take) would help with this goal.

This is very wise, thanks Ed! For zpm-enabling, we plan to add some boilerplate module.xml generator for the repo, stay tuned.

Hi Nikita,

> A need to change how the self-update mechanism works or shut it down completely.

If a package is distributed via a package manager, its self-update should be completely removed. It should be the responsibility of the package manager to alert the user that a new version of the package is available and to install it.

> Thus, create an additional pipeline (or split the codebase) for 2 WebTerminal versions: ZPM's one and a regular one with all the tests and so on.

Some package managers allow applying patches to software before packaging it, but I don't think that's the case for ZPM at the moment. I believe you will need to do a separate build for the ZPM/non-ZPM versions of your software. You can either apply some patches during the build, or refactor the software so that it can run without the auto-updater if it's not installed.

Hi Nikita! Do you want the ZPM exception for the WebTerminal only or for all your InterSystems solutions? )

The whole purpose of a package manager is to get rid of individual installer/updater scripts written by individual developers and replace them with a package management utility, so that you have a standard way of installing, removing and updating your packages. So I don't quite understand why this question is raised in this context - of course a package manager shouldn't support custom installers and updaters. It might need to support projections eventually because, as you said, they're a part of the language, but definitely not for installing purposes.

I completely support the inclusion of projections. The ObjectScript language allows execution of arbitrary code at compile time through three different mechanisms:

- Projections
- Code generators
- Macros

All these instruments are entirely unlimited in their scope, so I don't see why we need to prohibit one way of executing code at compilation. Furthermore, ZPM itself uses projections to install itself, so closing this avenue to other projects seems strange.

Hi Nikita! Thanks for the good question! The answer to "why module.xml vs Installer.cls on projections" is quite obvious, IMHO. Compare a module.xml and an Installer.cls which do the same thing. Examining module.xml, you can clearly say what the installation does and easily maintain/support it. In this case, the package installs:
1. Classes from the WebTerminal package:

```
<Resource Name="WebTerminal.PKG" />
```

2. One REST web app:

```
<CSPApplication Url="/terminal" Path="/build/client" Directory="{$cspdir}/terminal"
                DispatchClass="WebTerminal.Router" ServeFiles="1" Recurse="1"
                PasswordAuthEnabled="1" UnauthenticatedEnabled="0" CookiePath="/" />
```

3. Another REST web app:

```
<CSPApplication Url="/terminalsocket" Path="/terminal" Directory="{$cspdir}/terminalsocket"
                ServeFiles="0" UnauthenticatedEnabled="1"
                MatchRoles=":%DB_CACHESYS:%DB_IRISSYS:{$dbrole}" Recurse="1" CookiePath="/" />
```

I cannot say the same for Installer.cls on projections - what does it do to my system? Simplicity, transparency, and an installation standard with the ZPM module.xml approach vs what?

From the pros/cons, it seems the objectives are:

1. Maintain compatibility with normal installation (without ZPM)
2. Make side effects from installation/uninstallation auditable by putting them in module.xml

I'd suggest as one approach to accomplish both objectives:

- Suppress the projection side effects when running in a package manager installation/uninstallation context (either by checking $STACK or using some trickier under-the-hood things with singletons from the package manager - regardless, be sure to unit test this behavior!).
- Add "Resource Processor" classes (specified in module.xml with Preload="true" and not included in normal WebTerminal XML exports used for non-ZPM installation) - that is, classes extending %ZPM.PackageManager.Developer.Processor.Abstract and overriding the appropriate methods - to handle your custom installation things. You can then use these in your module manifest, provided that such inversion of control still works without bootstrapping issues following changes made in https://github.com/intersystems-community/zpm.
- Generally-useful things like creating a %All namespace should probably be pushed back to zpm itself.
Announcement
Anastasia Dyubaylo · Dec 18, 2019

New Video: InterSystems IRIS Roadmap - Analytics and AI

Hi Community,

The new video from Global Summit 2019 is already on InterSystems Developers YouTube:

⏯ InterSystems IRIS Roadmap: Analytics and AI

This video outlines what's new and what's next for Business Intelligence (BI), Artificial Intelligence (AI), and analytics within InterSystems IRIS. We will present the use cases that we are working to solve, what has been delivered to address those use cases, as well as what we are working on next.

Takeaway: You will gain knowledge of current and future business intelligence and analytics capabilities within InterSystems IRIS.

Presenters:
🗣 @Benjamin.DeBoe, Product Manager, InterSystems
🗣 @Thomas.Dyar, Product Specialist - Machine Learning, InterSystems
🗣 @Carmen.Logue, Product Manager - Analytics and AI, InterSystems

You can find additional materials for this video in this InterSystems Online Learning Course.

Enjoy watching this video! 👍🏼
Announcement
Olga Zavrazhnova · Dec 24, 2019

New Global Masters Reward: InterSystems Certification Voucher

Hi Community,

Great news for all Global Masters lovers! Now you can redeem a Certification Voucher for 10,000 points! A voucher gives you one exam attempt for any exam available in the InterSystems exam system. We have a limited edition of 10 vouchers, so don't hesitate to get yours!

Passing the exam allows you to claim the electronic badge that can be embedded in social media accounts to show the world that your InterSystems technology skills are first-rate.

➡️ Learn more about the InterSystems Certification Program here.

NOTE: The reward is available to Global Masters members of Advocate level and above; InterSystems employees are not eligible to redeem this reward. Vouchers are non-transferable. Good for one attempt for any exam in the InterSystems exam system. Can be used for a future exam. Valid for one year from the redemption date (the Award Date).

Redeem the prize and prove your mastery of our technology! 👍🏼

Already 9 available ;)

How to get this voucher?

@Kejia.Lin This appears to be a reward on Global Masters. You can also find a quick link in the light blue bar at the top of this page. Once you join Global Masters, you can get points for interacting with the community and ultimately use these points to claim rewards, such as the one mentioned here.