Article
Yuri Marx · Feb 4, 2021
Gartner defined DataOps as: "A collaborative data management practice focused on improving the communication, integration and automation of data flows between data managers and data consumers across an organization. The goal of DataOps is to deliver value faster by creating predictable delivery and change management of data, data models and related artifacts. DataOps uses technology to automate the design, deployment and management of data delivery with appropriate levels of governance, and it uses metadata to improve the usability and value of data in a dynamic environment."
DataOps was first introduced by Lenny Liebmann, Contributing Editor, InformationWeek, in a blog post on the IBM Big Data & Analytics Hub titled "3 reasons why DataOps is essential for big data success" on June 19, 2014. The term was later popularized by Andy Palmer at Tamr. DataOps is a moniker for "Data Operations." 2017 was a landmark year for DataOps, with significant ecosystem development, analyst coverage, increased keyword searches, surveys, publications, and open source projects. Gartner named DataOps on the Hype Cycle for Data Management in 2018. (source: https://en.wikipedia.org/wiki/DataOps)
The DataOps manifesto established the following DataOps principles: (https://www.dataopsmanifesto.org/dataops-manifesto.html)
Continually satisfy your customer: Our highest priority is to satisfy the customer through the early and continuous delivery of valuable analytic insights from a couple of minutes to weeks.
Value working analytics: We believe the primary measure of data analytics performance is the degree to which insightful analytics are delivered, incorporating accurate data, atop robust frameworks and systems.
Embrace change: We welcome evolving customer needs, and in fact, we embrace them to generate competitive advantage. We believe that the most efficient, effective, and agile method of communication with customers is face-to-face conversation.
It's a team sport: Analytic teams will always have a variety of roles, skills, favorite tools, and titles. A diversity of backgrounds and opinions increases innovation and productivity.
Daily interactions: Customers, analytic teams, and operations must work together daily throughout the project.
Self-organize: We believe that the best analytic insight, algorithms, architectures, requirements, and designs emerge from self-organizing teams.
Reduce heroism: As the pace and breadth of need for analytic insights ever increases, we believe analytic teams should strive to reduce heroism and create sustainable and scalable data analytic teams and processes.
Reflect: Analytic teams should fine-tune their operational performance by self-reflecting, at regular intervals, on feedback provided by their customers, themselves, and operational statistics.
Analytics is code: Analytic teams use a variety of individual tools to access, integrate, model, and visualize data. Fundamentally, each of these tools generates code and configuration which describes the actions taken upon data to deliver insight.
Orchestrate: The beginning-to-end orchestration of data, tools, code, environments, and the analytic teams work is a key driver of analytic success.
Make it reproducible: Reproducible results are required and therefore we version everything: data, low-level hardware and software configurations, and the code and configuration specific to each tool in the toolchain.
Disposable environments: We believe it is important to minimize the cost for analytic team members to experiment by giving them easy to create, isolated, safe, and disposable technical environments that reflect their production environment.
Simplicity: We believe that continuous attention to technical excellence and good design enhances agility; likewise simplicity--the art of maximizing the amount of work not done--is essential.
Analytics is manufacturing: Analytic pipelines are analogous to lean manufacturing lines. We believe a fundamental concept of DataOps is a focus on process-thinking aimed at achieving continuous efficiencies in the manufacture of analytic insight.
Quality is paramount: Analytic pipelines should be built with a foundation capable of automated detection of abnormalities (jidoka) and security issues in code, configuration, and data, and should provide continuous feedback to operators for error avoidance (poka yoke).
Monitor quality and performance: Our goal is to have performance, security and quality measures that are monitored continuously to detect unexpected variation and generate operational statistics.
Reuse: We believe a foundational aspect of analytic insight manufacturing efficiency is to avoid the repetition of previous work by the individual or team.
Improve cycle times: We should strive to minimize the time and effort to turn a customer need into an analytic idea, create it in development, release it as a repeatable production process, and finally refactor and reuse that product.
When you analyze these principles, it is possible to see several points where InterSystems IRIS can help:
Continually satisfy your customer: you can deliver new integration productions, orchestrations, IRIS cubes, reports, BI visualizations, and ML models in short sprints or iterations.
Value working analytics: IRIS helps you deliver quality data (using productions, adapters, and class methods in persistent classes) and enables data exploration in IRIS BI pivot tables (Analysis Designer) and IRIS NLP (text analysis).
Self-organize: IRIS simplifies self-organization because, with a unified data platform, you collect, process, analyze, and publish insights with one tool.
Reflect: in the User Portal you can interact with users and collect feedback to improve the delivered products.
Analytics is code: in IRIS, data models, cubes, and dashboards are source code, with version control and governance.
Orchestrate: IRIS is a data platform that orchestrates data ingestion, enrichment, analytical work, data visualization, and ML in a single tool.
Make it reproducible: IRIS embraces Docker, Kubernetes (IKO), and DevOps to reproduce results.
Disposable environments: IRIS supports creating disposable Docker environments for integrations, data models, BI cubes, and visualizations.
Simplicity: IRIS data cube creation is very simple and eliminates the need for ETL scripts; the creation of analyses, cubes, and dashboards is visual, web-based, and can be done by the users themselves, not only the developer team. And IntegratedML allows you to create ML for common scenarios without writing source code.
Monitor quality and performance: IRIS uses SAM to monitor performance and provides a web Management Portal.
Reuse: in IRIS, DataOps artifacts are classes, and classes are extensible and reusable by default.
Improve cycle times: users can create dashboards, analyses, and reports, then publish and share their work in self-service.
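To make the IntegratedML point above concrete, here is a hedged sketch of a typical IntegratedML workflow in SQL. The table and column names (Patients, Outcome) are hypothetical, not from the article:

```sql
-- Define a model that predicts the Outcome column from the other
-- columns of the (hypothetical) Patients table
CREATE MODEL OutcomeModel PREDICTING (Outcome) FROM Patients;

-- Train it on the data currently in the table
TRAIN MODEL OutcomeModel;

-- Use the trained model in an ordinary SELECT
SELECT TOP 10 ID, PREDICT(OutcomeModel) AS PredictedOutcome
FROM Patients;
```

The point for DataOps is that model definition, training, and scoring stay inside the same SQL pipeline as the rest of the data work.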
ODSC (https://opendatascience.com/maximize-upstream-dataops-efficiency-through-ai-and-ml-to-accelerate-analytics/) indicates the following DataOps strategy:
InterSystems IRIS helps with the points above:
Self-service provisioning: users can create and publish cubes and dashboards.
Share, tag, annotate: the User Portal can be used to share dashboards, and the IRIS analytics web portal allows users to create, document, organize into folders, and tag their work.
Enrichment: BPL can be used to enrich data.
Preparation: BPL, DTL, adapters, and ObjectScript logic can help with data preparation.
Data marketplace: data assets can be published as REST APIs and monetized with IRIS API Manager.
Data Catalog: data in IRIS is organized into classes, and these classes are stored in the class catalog system (%Dictionary).
Profile & Classify: groups and folders can be created for analytical artifacts in the User Portal and Admin Portal.
Quality: IRIS has utility classes to generate sample data and run unit tests.
Lineage: in IRIS, all data assets are connected: from data models you build cubes, from cubes you build dashboards, and all data assets can be controlled by data curators (the IRIS permission system).
Mastering: the Admin Portal allows you to manage all aspects of analytical projects.
DB Data, File Data, SaaS API, streams: IRIS is multi-model and supports persistence and analysis of data and text (NLP). It supports SaaS APIs using IRIS API Manager and works with streams using integration adapters and PEX (with Kafka).
Applications, BI Tools, Analytics Sandboxes: with IRIS you can create DataOps apps in your preferred language (Java, Python, .NET, Node.js, ObjectScript). IRIS is itself a BI tool, but you can also use connectors with Power BI or the MDX bridge, and IRIS serves as an analytics sandbox, all in a single tool.
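As a hedged illustration of the "Data marketplace" point, here is a minimal sketch of publishing a data asset as a REST endpoint with a %CSP.REST dispatch class. The class name and payload are hypothetical:

```objectscript
/// Hypothetical REST dispatcher exposing a data asset as JSON
Class demo.DataAssetAPI Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/assets/:id" Method="GET" Call="GetAsset"/>
</Routes>
}

/// Return one asset as JSON; a real project would read it from a
/// persistent class instead of building a literal dynamic object
ClassMethod GetAsset(id As %String) As %Status
{
    Set asset = {"id": (id), "name": "sample asset"}
    Write asset.%ToJSON()
    Quit $$$OK
}

}
```

Such an endpoint can then be fronted by IRIS API Manager for access control and monetization.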
See my summary mapping IRIS and DataOps:
Great article, thank you! I hadn't seen the DataOps concept before but it makes a lot of sense.
Discussion
Eduard Lebedyuk · Apr 2, 2021
Images for other languages are often built using a multi-stage build process.
What about InterSystems IRIS?
Consider this Dockerfile:
FROM irishealth-community:2020.4.0.524.0 AS builder
# Load code into USER and compile
# Adjust settings, etc.
FROM irishealth-community:2020.4.0.524.0
# replace in standard kit with what we modified in first stage
COPY --from=builder /usr/irissys/iris.cpf /usr/irissys/.
COPY --from=builder /usr/irissys/mgr/IRIS.DAT /usr/irissys/mgr/.
COPY --from=builder /usr/irissys/mgr/user/IRIS.DAT /usr/irissys/mgr/user/.
The advantage of this approach is the image size. The disadvantage is that, in the final stage, the developer must know/remember all the places modified in the builder image. But otherwise, is this approach OK for InterSystems IRIS? Has anyone tried to build IRIS images this way? We build IRIS with ZPM this way, and the reason was to shrink the final size
But you get the best result if you don't change anything in the system database, which is quite big, so in the end it makes little sense. Thank you!
Why do you map %ZLANGC00/%ZLANGF00 to %ALL? They are available everywhere by default. The reason was to place them in a particular database, outside the system database. You may use this at init time https://openexchange.intersystems.com/package/helper-for-objectscript-language-extensions to add your ZPM command and function; it does a unique insert of the label. For a simple case (adding a couple of classes to the USER namespace), what would the space savings be? Interesting question!
Here are my findings for `store/intersystems/irishealth-community:2020.4.0.524.0` and a simple app in the `USER` namespace.
| | Uncompressed (MB) | Compressed (MB) |
|--------------------|--------------------|-----------------|
| IRIS | 3 323 | 897 |
| IRIS Squashed | 3 293 | 893 |
| App | 8 436 | 1 953 |
| App MSB | 3 526 | 937 |
| App Squashed | 3 576 | 931 |
| App MSB + Squashed | 3 363 | 904 |
Notes:
- MSB means [Multi Stage Build](https://docs.docker.com/develop/develop-images/multistage-build/)
- Squashed means that an image was built with a [--squash option](https://github.com/docker/docker-ce/blob/master/components/cli/docs/reference/commandline/build.md#squash-an-images-layers---squash-experimental)
- Uncompressed size is calculated via `docker inspect -f "{{ .Size }}" $image`
- Compressed size is calculated via `(docker manifest inspect $image | ConvertFrom-Json).layers | Measure-Object -Property size -Sum`
- More info on calculating image sizes is available [here](https://gist.github.com/MichaelSimons/fb588539dcefd9b5fdf45ba04c302db6)
**Conclusion**: either MSB or Squashed works fine, but MSB alone would be better for storage, as it can have shared layers (squash always produces one unique layer). Squashed is easier to implement. Super benchmark, it's very interesting. Personally, I prefer the squash method because it is easy to implement (no change in the Dockerfile). I won't recommend using squashing, as it makes caching of image layers useless.
Have a look at this nice tool, which may help to discover why your final image is so big.
By the way, one thing I noticed working on macOS is that the final image is much bigger than expected, while the same Dockerfile on Linux produces an image of a reasonable size. You might want to add /usr/irissys/csp/bin/CSP.ini to the list of files to copy. Without it, chances are the CSP Gateway would not be authorized to communicate with IRIS.
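Following that comment, the multi-stage example could be extended with one more COPY line in the final stage. This is a sketch based on the path mentioned in the discussion, not a verified complete list of files:

```dockerfile
# Additional COPY for the final stage of the multi-stage build above,
# so the CSP Gateway stays authorized to communicate with IRIS
COPY --from=builder /usr/irissys/csp/bin/CSP.ini /usr/irissys/csp/bin/.
```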
Announcement
Anastasia Dyubaylo · Jul 21, 2023
Hi Community,
Watch this video to learn the basics of how to use the InterSystems® command line interface to execute commands, including starting the Terminal, executing methods and routines, and exiting the Terminal.
⏯ Using the InterSystems Terminal
Subscribe to InterSystems Developers YouTube to stay up to date!
Announcement
Anastasia Dyubaylo · Aug 1, 2023
Hi Community,
Enjoy watching the new video on InterSystems Developers YouTube:
⏯ Introduction to InterSystems Products
Get an introduction to the InterSystems IRIS data platform and InterSystems IRIS for Health, InterSystems Caché and Ensemble, the HealthShare family of products, and InterSystems TrakCare. See what each product has to offer and how they relate to each other.
Enjoy the video and stay tuned for more! 👍
Announcement
Preston Brown · Jul 20, 2023
SCOPE OF SERVICES: The Database Developer's responsibilities will include, but are not limited to:
• Perform support and maintenance of existing Cache code
• Analysis and migration of existing Cache code to Microsoft SQL and Web Services (SOA)
REQUIRED SKILLS/EXPERIENCE:
A minimum of 5 proven years of experience in computer applications development planning, design, troubleshooting, integration, maintenance, and enhancement of Cache Database applications.
Related required skills:
• Object Script
• Cache Routines, Queries, Class Methods
• SQL
• Globals
• RedHat Linux
Excellent communication skills and experience in handling confidential information
US Citizenship only: Yes
Job Type: Contractor
Hybrid Position. 4 Days in the Kew Gardens NY office and one day remote.
$$$: DOE ($60-$70/hour)
Work hours: 9AM to 5PM (~35 hours weekly).
For consideration, email your resume (must have) to pbrown@cybercodemasters.com
Announcement
Olga Zavrazhnova · Sep 13, 2023
The InterSystems team is heading to MIT's largest hackathon this weekend, where we will introduce a tech challenge for hackers. We invite hackers to use IntegratedML or InterSystems Supply Chain Orchestrator in their projects to compete for some really cool prizes!
If you are in Boston and would be interested to be an InterSystems mentor at the event - drop me a line.
UPDATE: We have our amazing winners! Check out these projects:
First Place – Fluxus: Description | GitHub
Second Place – Carta: Description | GitHub
Third Place – Manifest: Description | GitHub
Thanks to everyone who took part in the InterSystems Challenge! Don't forget to join our Discord Server to stay in touch and receive notifications on upcoming InterSystems programming contests!
We are looking forward to seeing new creative projects! Stay tuned for the winners announcement on September 18.
Learn more about this hackathon at the HackMIT official website. Looking forward to seeing who the winners of the InterSystems awards are 😀 Thank you, Sylvain! We now have the 3 winning teams! I updated the post with links to their projects 🤩
Announcement
Vadim Aniskin · Nov 16, 2022
Hello Community,
Welcome to our first InterSystems Ideas News!
The most important piece of news is the result of our very first and very successful Idea-A-Thon Contest. We've received 75 new interesting ideas.
Some general stats of the Ideas Portal:
✓ 42 new ideas published last month
✓ 147 new users joined last month
✓ 142 ideas posted all time
✓ 273 users joined all time
The top 5 most-voted ideas of the month:
IRIS and ZPM(Open Exchange) integration
Move users, roles, resources, user tasks, Mappings (etc) to a seperate Database, other than %SYS, so these items can be mirrored
RPMShare - Database solution for remote patient monitoring (RPM) datasets of high density vitals
Create front-end package based on CSS and JS to be used in NodeJS and Angular projects
PM platform
And to round up this newsletter, here is a list of all ideas posted last month
Add IRIS as a supported database for Apache Superset
For community articles, let admins (and possibly article authors) pin particular comments to the top
Add address standardization to Normalization (using Project US@ standards)
PM platform
Tool to convert legacy dot syntax code to use bracket syntax
TTTC
PDF reports for IRIS BI
Sample code share opportunity
Add basic tutorial of Docker or point to a Docker tutorial in Documentation
The ability to export current settings to a %Installer Manifest
Move users, roles, resources, user tasks, Mappings (etc) to a seperate Database, other than %SYS, so these items can be mirrored
Common Data Modelling
CI/CD support
String format to numeric values in ZEN.proxyObject
Patient Initiated Follow Up - Adding a document to an ROPWL
I service Flags
Linking I service to JIRA system
Linux: iris session [command line] get commands from a file
Journal file analysis/visualization
Add the option to call class parameters in Embedded Python
Create query builder
Colour Background on Ward / Clinical Areas Floorplans
A Dynamic Cheat Sheet to lookup for Common Core Functions for Reusability
Version History for Classes
Add wizard to create class with its properties
RPMShare - Database solution for remote patient monitoring (RPM) datasets of high density vitals
Better handle whitespace in Management Portal Text entry
IRIS and ZPM(Open Exchange) integration
Visual programming language
Backup button before importing
Adapting tools for people with special needs and/or disabilities 🌐🔰🦽🦼♿
Reserve licenses
Interoperability Wizard
Improve journal display in IRIS admin portal
Create front-end package based on CSS and JS to be used in NodeJS and Angular projects
Mirror Async Member Time Delay in Applying Journals
Cache Journal Switch Schedule
Monitoring and Programatic way of Starting/Stoping Gateways
Embedded Python: Add a built-in variable to represent class
LDAP Authentication method by default on Web Applications
Please add google oauth authorization to login to the management portal
Data Analyzer
That's it for now.
Visit our InterSystems Ideas portal, suggest your ideas and vote for the existing ones!
Look out for the next announcement!
Announcement
Anastasia Dyubaylo · Nov 13, 2022
Hi Developers,
We're pleased to announce that InterSystems is hosting its partner days in Germany – InterSystems Partnertage DACH 2022!
During this time you will be able to exchange product innovations and practical tips and tricks between InterSystems experts and your colleagues in Darmstadt. And of course, a lot of networking, because there is a lot to catch up on!
🗓 Dates: November 23-24, 2022
📍 Venue: Wissenschafts- und Kongresszentrum darmstadtium, in the heart of Darmstadt, Schloßgraben 1, 64283 Darmstadt
The agenda of the two-day partner days consists of a mix of keynotes, informative sessions, and masterclasses. Read on for more details.
The agenda at a glance:
November 23 (focus on healthcare):
Innovations in healthcare
Transferring innovative ideas into concrete projects – with InterSystems technology
Outlook on the next development steps for InterSystems HealthShare and InterSystems IRIS for Health
Masterclass sessions with our Sales Engineering team (InterSystems SAM, Embedded Python for ObjectScript developers)
November 24:
Keynotes on the successful cross-industry use of InterSystems technologies
Our innovative data management concept „Smart Data Fabric“ – what is it all about?
Round Table „Migration to InterSystems IRIS" – our partners share their experiences
Presentations and live demos focusing on the following key topics: Columnar Storage, IntegratedML, Container-Support, InterSystems Reports
Masterclass sessions on „social selling“
✅ REGISTER HERE
We look forward to seeing you in Darmstadt!
Announcement
Anastasia Dyubaylo · Mar 6, 2023
We're electrified to invite all our clients, partners, developers, and community members to our in-person InterSystems Global Summit 2023!
Our Global Summit user conference is your opportunity to connect with trailblazing product developers, fellow users pushing our technology into new frontiers, and the people whose out-of-box thinking is rocking our universities and board rooms. All in the same space. And registration is now open!
➡️ InterSystems Global Summit 2023
🗓 Dates: June 4-7, 2023
📍 Location: The Diplomat Beach Resort, Hollywood, Florida, USA
Join us this year for content on how customers like you use our technology for innovation and what trends affect our future innovations, including new and enhanced products and product offerings.
Here is a short glimpse at the agenda.
Sunday, June 4
Golf Outing or Morning Social Activities
Badge Pick-up
Technology Bootcamp
Certification Exams
Women's Meetup
Welcome Reception
Monday, June 5
Welcome and Keynotes
Breakout Sessions
Healthcare Leadership Conference
Partner Pavilion 1:1 Meetings
Certification Exams
Focus Groups
Tech Exchange
Affinity Sessions
Evening Drinks & Demos
Tuesday, June 6
Keynotes
Breakout Sessions
Partner Pavilion 1:1 Meetings
Certification Exams
Focus Groups
Tech Exchange
Affinity Sessions
Evening Social Event
Wednesday, June 7
Keynotes
Breakout Sessions
Partner Pavilion 1:1 Meetings
Certification Exams
Focus Groups
Tech Exchange
Farewell Reception
For more information on the agenda please visit this page.
We look forward to seeing you at the InterSystems Global Summit 2023!
Question
Evgeny Shvarov · May 13, 2023
Hi folks!
Those who actively use unit tests with ObjectScript know that test methods are instance methods, not classmethods.
Sometimes this is not very convenient. What I do now, if some test method fails, is COPY(!) the method somewhere else as a classmethod and run/debug it there.
Is there a handy way to call a particular unittest method in the terminal? And, more importantly, a handy way to debug the test method?
Why do we have unittest methods as instance methods?
VSCode has a way to help with running tests, but it requires implementation on our side. A handy way to call a UnitTest case method in the terminal:
Do ##class(%UnitTest.Manager).DebugRunTestCase("", "[ClassName]", "", "[MethodName]")
Run all Test methods for a TestCase:
Do ##class(%UnitTest.Manager).DebugRunTestCase("", "[ClassName]", "", "")
Placing a "break" line within a method can be useful when iterating on a test: inspect the variables, run other code, and then type "g" + [Enter] to continue.
The instance gives context to the current test run when raising assertions and other functionality. Thanks, Dima! Is there a related issue listed? Thanks, Alex.
See the following:
USER>Do ##class(%UnitTest.Manager).DebugRunTestCase("", "dc.irisbi.unittests.TestBI", "", "TestPivots")
(root) begins ...LogStateStatus:0:Finding directories: ERROR #5007: Directory name '/usr/irissys/mgr/user/u:/internal/testing/unit_tests/' is invalid <<==== **FAILED** (root)::
In fact there is a handy way to run all the tests via:
zpm "test module-name"
But I'd love to see debugging of it. @Alex.Woodhead, do you know by any chance why unittest methods are instance methods and not classmethods? Could they be converted to classmethods, or could an option be provided to do that? Hi Evgeny,
The global ^UnitTestRoot needs to be set to a real directory to satisfy the CORE UnitTest Runner.
As the first argument "Suite" is not specified, then no sub-directories are needed.
Set ^UnitTestRoot="\tmp\"
... may be sufficient for your purposes. It is common to need to run unit tests for other modules that have overlapping/breaking functionality.
This is where the value of loading and running multiple TestSuites comes into its own. There is no reason for them to be classmethods; UnitTest is quite a complex thing, and there are use cases where it needs to be this way. With zpm you can use an additional parameter for it:
zpm "test module-name -only -D UnitTest.Case=Test.PM.Unit.CLI:TestParser"
Use the class name only to run all tests there, or add the method name to test only that method in the class. Thanks! This is helpful. @Evgeny.Shvarov I have a detailed writeup here (although Dmitry already hit the important point re: IPM): https://community.intersystems.com/post/unit-tests-and-test-coverage-objectscript-package-manager
A few other notes:
Unit test class instances have a property (..Manager) that refers to the %UnitTest.Manager instance, and may be helpful for referencing the folder from which unit tests were loaded (e.g., to load additional supporting data or do file comparisons without assuming an absolute path) or "user parameters" / "user fields" that were passed in the call to run tests (e.g., to support running a subset of tests defined in unit test code). Sure, you could do the same thing with PPGs or % variables, but using OO features is much better.
I'll also often write unit tests that do setup in OnBeforeAllTests and cleanup in %OnClose, so that even if something goes very horribly wrong it'll have the best chance of actually running. Properties of the unit test are useful to store state relevant to this setup - the initial $TLevel (although that should always be 0), device settings, global configuration flags, etc. Thanks Tim! How do you debug a particular unit test in VSCode? Unit testing from within VS Code is now possible. Please see https://community.intersystems.com/post/intersystems-testing-manager-new-vs-code-extension-unittest-framework thanks, @John.Murray ! Will give it a try! Why are unit test methods instance methods? Since a running unit test is an instantiated object (%RegisteredObject), the unit test class itself can have custom properties, and the instance methods can use those properties as a way to share information. For example, initialize the properties in %OnBeforeAllTests(), and then access/change the properties in the test methods. 💡 This question is considered a Key Question. More details here. @Evgeny.Shvarov my hacky way of doing this is to create an untracked mac file with ROUTINE debug defined.
I just swap in the suite or method I want to run per Alex's instructions, set my breakpoints in VS code and make sure the debug configuration is in my VSCode settings:
"launch": {
"version": "0.2.0",
"configurations": [
{
"type": "objectscript",
"request": "launch",
"name": "debugRoutine",
"program": "^debug"
}
]
} That's indeed hacky! Thanks @Michael.Davidovich! Thanks @Joel.Solon!
But all this could be achieved without instance methods, right? Anyway, I'm struggling to find an easy way to debug a failed unittest. @Michael.Davidovich suggested the closest way to achieve it, but I still want to find something really handy, e.g. an additional "clickable button" in VSCode above the test method that invites "debug this test method". Similar to what we have for classmethods now: debug this classmethod and copy invocation.
That'd be ideal. That's one of the things the new extension is designed to achieve. I am utilizing properties on class methods to good effect.
I would not use a classmethod only approach for normal development.
There is nothing stopping a community parallel UnitTest.Manager re-implementation that follows a ClassMethod pattern.
Some have reimplemented UnitTest.Manager:
1) Without deleting the test classes at the end of the run (with Run instead of DebugRun)
2) Without needing ^UnitTestRoot to be defined. Looking forward! Anything can be achieved without instance methods. The point here is that instance methods exist in object-oriented systems because they are considered a good, straightforward way to achieve certain things. In the case of unit tests sharing information using properties, that approach saves you from having to pass info around as method arguments, or declaring a list of public variables. I understand the argumentation, makes sense. Just curious how you debug those unittests that fail?
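The pattern discussed in this thread (state shared via properties, initialized in OnBeforeAllTests) can be sketched as follows; the class name and values are hypothetical:

```objectscript
/// Hypothetical test case illustrating shared state via properties
Class demo.unittests.TestSample Extends %UnitTest.TestCase
{

/// State initialized once and shared by all test methods
Property SetupValue As %String;

Method OnBeforeAllTests() As %Status
{
    // Runs once before any Test* method; store state on the instance
    Set ..SetupValue = "ready"
    Quit $$$OK
}

Method TestSetupWasRun()
{
    // Instance methods can read the property without resorting to
    // globals or public variables
    Do $$$AssertEquals(..SetupValue, "ready", "Setup ran before tests")
}

}
```

This is exactly the kind of sharing that a classmethod-only design would force through arguments or public variables.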
Announcement
Vadim Aniskin · Mar 29, 2023
Hi Developers!
Welcome to the 5th issue of the InterSystems Ideas News! This time you can read about:
✓ Hall of Fame - a new page on the Ideas Portal
✓ Integration with Global Masters - get points for your ideas
✓ List of ideas that are planned for implementation
11 developers have already implemented ideas from the Ideas Portal. We created a new dedicated page on InterSystems Ideas to pay tribute to these heroes. The Hall of Fame lists:
names of implemented ideas,
developers who implemented ideas,
implementation names with links to more information.
You can implement one of Community Opportunity ideas and your name will be in the Hall of Fame!
About a month ago, developers who submitted product ideas started getting points for these ideas.
We would like to share that since 22 February, the authors have received a total of 18,200 Global Masters points for the following ideas:
15 product ideas that were posted, promoted, or implemented:
Cross-production Interoperability Messages, Service and Operation by @Stefan.Cronje1399
Additional Data Types for ISC Products by @Stefan.Cronje1399
Change data capture from IRIS to kafka using SQL commands by @Yuri.Gomes
Allow graphical editing of Interoperability components BPL, DTL and Business Rules in VS Code by @Steve.Pisani
Examples to work with IRIS from Django by @Evgeny.Shvarov
Install python and java libraries from ZPM and Installation Manifest (%Installer) by @Yuri.Gomes
Set password through environment variable by @Dmitry.Maslennikov
Add a project that helps to generate unittests for an ObjectScript class by @Evgeny.Shvarov
Create a UI for convenient and easy transfer of projects (classes, globals, applications, users, roles, privileges, grants, namespace mapping, SQLgateways, libraries, etc.) to other system instances for fast deployment. by @MikhailenkoSergey
Add a wizard similar to the SOAP wizard to generate a REST client from OpenAPI specification by @Jaime.Lerga
Public API for access to shared memory by @Alexey.Maslov
Fold code on loops and If's on studio by @Heloisa.Paiva
Chat bot to help with TrakCare customization/settings by Sumana Gopinath
Iterative build of TrakCare configuration/code tables utilising FHIR and HL7 Messaging. by Linda McKay
BPL, DTL, Business Rule Editor in VSCode by @Cristiano.Silva
Post your great ideas and get points for them!
And to round up this newsletter, here is the list of ideas that are planned for implementation
Publish the InterSystems IRIS Native SDK for Node.js on npm by @John.Murray
Move users, roles, resources, user tasks, Mappings (etc) to a seperate Database, other than %SYS, so these items can be mirrored by @Sean.O'Connor1391
Please add google oauth authorization to login to the management portal by @Aleksandr.Kolesov
InterSystems Ideas - Long Term by @Vinay.Purohit3109
BPL, DTL, Business Rule Editor in VSCode by @Sawyer.Butterfield
Add Favorites in GM by @Irène.Mykhailova
LIMIT OFFSET support for IRIS SQL by @Dmitry.Maslennikov
Introduce WITH into IRIS SQL engine by @Evgeny.Shvarov
Security settings for mirror configurations by @Evgeny.Shvarov
A modern management portal to manage InterSystems IRIS by @Evgeny.Shvarov
copy/sync system configurations and user accounts between IRIS instances by @Evgeny.Shvarov
Jupyter Notebook by Guest
Stay creative, post your great ideas on InterSystems Ideas, vote and comment on existing ideas! Hi Community!
I want to say special thanks to the developers who implemented the ideas from InterSystems Ideas:
@Lorenzo.Scalese@Robert.Cemper1003 @MikhailenkoSergey @Dmitry.Maslennikov @Evgeniy.Potapov @henry @Henrique @José.Pereira @Yuri.Gomes @Evgeny.Shvarov @Guillaume.Rongier7183
Your names are on the special Hall of Fame page!  Many thanks
Announcement
Bob Kuszewski · Jun 30, 2023
When IRIS 2023.2 reaches general availability, we’ll be making some improvements to how we tag and distribute IRIS & IRIS for Health containers.
IRIS containers have been tagged using the full build number format, for example 2023.1.0.235.1. Customers have been asking for more stable tags, so they don’t need to change their dockerfiles/Kubernetes files every time a new release is made. With that in mind, we’re making the following changes to how we tag container images.
Major.Minor Tags: Containers will be tagged with the year and release, but not the rest of the full build number. For example, an image currently accessed as
containers.intersystems.com/intersystems/iris:2023.2.0.606.0
will now be accessed as
containers.intersystems.com/intersystems/iris:2023.2
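In scripts, the new tag can be derived from a full build number by taking its first two dot-separated components; a minimal shell sketch (the helper name is ours, not an InterSystems tool):

```shell
# Derive the Major.Minor container tag from a full IRIS build number,
# e.g. 2023.2.0.606.0 -> 2023.2
major_minor_tag() {
  echo "$1" | cut -d. -f1-2
}

major_minor_tag 2023.2.0.606.0   # prints 2023.2
```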
latest-em and latest-cd Tags: The most recent extended-maintenance and continuous-delivery releases will be tagged with latest-em and latest-cd, respectively. This provides a shorthand notation that can be used in documentation, examples, and development environments. We do not advise using these tags for production environments.
Preview: Developer preview releases will all be clearly tagged with -preview so you can easily separate out pre-release containers from production-ready containers. The most recent preview release will helpfully be tagged with latest-preview.
If you’re looking for the full build number for an InterSystems container, that’s available as a label, which you can view with the docker inspect command. For example:
docker inspect --format '{{ index .Config.Labels "com.intersystems.platform-version"}}' containers.intersystems.com/intersystems/iris:2023.1.0.235.1
Containers will no longer be distributed via the WRC download site. If you’re one of the few customers downloading containers from the WRC download site, now’s the time to switch to the InterSystems Container Registry (docs).
While we’re here, a reminder that we have been publishing multi-architecture manifests for IRIS containers. This means that pulling the IRIS container tagged 2022.3 will download the right container for your machine’s CPU architecture (Intel/AMD or ARM).
If you need to pull a container for a specific CPU architecture, tags are available for architecture-specific containers. For example, 2022.3-linux-amd64 will pull the Intel/AMD container and 2022.3-linux-arm64v8 will pull the ARM container.
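If you want to select the suffix programmatically, here is a small sketch mapping `uname -m` output to the suffixes above (the mapping of machine names to suffixes is our assumption for common Linux values, not an official InterSystems utility):

```shell
# Map a machine architecture name (as reported by `uname -m`)
# to the suffix used in the architecture-specific image tags.
arch_suffix() {
  case "$1" in
    x86_64)        echo "linux-amd64" ;;
    aarch64|arm64) echo "linux-arm64v8" ;;
    *)             echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

# Build a full architecture-specific image reference:
echo "containers.intersystems.com/intersystems/iris:2022.3-$(arch_suffix "$(uname -m)")"
```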
We’ll stop posting to the iris-arm64 repositories soon, since multi-architecture manifests streamline the process of getting the right containers on the right machines.
If you have questions or concerns, please reach out to the WRC.
Hi Bob,
That's good news and I like new names much more than old ones. I hope old tags for old releases will still be available?
A couple of entries from my container tagging wishlist (one can dream, you know)
- provide :latest tag for all containers, but especially for community ones, which will just pull the latest working release without having to rebuild dockerfiles every year when license expires
- provide 2023.1.x tag which will follow the latest minor version
🎉 Great news!
I join Sergei on the latest tag, in addition to latest-cd and latest-em.
Awesome! Please add latest too.
Great improvements! Looking forward to it, Bob.
- provide 2023.1.x tag which will follow the latest minor version
It's not clear to me whether or not this is already the plan for the new Major.Minor tags. For instance, when IRIS 2024.1.1 is released, will it be at containers.intersystems.com/intersystems/iris:2024.1 (where IRIS 2024.1.0 images would likely have existed previously) or containers.intersystems.com/intersystems/iris:2024.1.1? @Robert.Kuszewski can you please clarify?
The plan for new IRIS containers is to use the YYYY.R format only. Let's walk through an example of a few releases for a hypothetical IRIS 2024.1 release.
| Release | Old Tag | New Tag |
| --- | --- | --- |
| Initial GA | 2024.1.0.123.0 | 2024.1 |
| First maintenance | 2024.1.1.234.0 | 2024.1 |
| Security Fix | 2024.1.1.234.1 | 2024.1 |
This new scheme greatly simplifies your work to keep up with whatever mix of maintenance and security releases we provide. All you need to do is reference iris:2024.1 and you'll pick up the latest bug and security fixes without making any changes to your code.
As for the latest tag, we like the idea so much that we're giving you two. One that lets you get the latest release (iris:latest-cd) and one that lets you get the latest long-term-support release (iris:latest-em). You also can use intersystemsdc/iris-community:latest or just intersystemsdc/iris-community for the latest InterSystems IRIS Community Edition release.
And intersystemsdc/iris-community:preview for the latest preview build.
intersystemsdc/irishealth-community and intersystemsdc/irishealth-community:preview for InterSystems IRIS For Health Community Edition
Well, you gave me two, so I should have one spare wish :)
Make iris:latest-cd just iris:latest; this way we can skip the "latest" bit altogether and just use iris without any tag at all.
Already some time since the last post... I appreciate this way of dealing with the tags, although I'm used to how it's done e.g. on Docker Hub, where every provided image is listed with its actual build number, so you can see exactly which image is "latest" or which version is delivered when you only ask for a major-release tag.
If there is a bug in a certain minor version, I cannot easily see whether a newer minor version is provided as a Docker image. I either have to search through the release notes to see if/when an image of the new version was released, or pull the latest image of the major version (some GB!) and inspect it. Perhaps this scenario seems a little "constructed"? We just had to deal with it.
In short: with only major-version tags we have to rely on InterSystems that
1. the latest release of an application is always also provided as a Docker image, and
2. there are no new bugs in a newer version (because we can't pin a certain build).
For production environments with high availability, I recommend using a private registry for IRIS etc. and tagging the images, so we can determine when to upgrade.
How do others deal with this (or am I just paranoid?)
I recently got an answer from the WRC confirming that the YYYY.R Docker image is always the latest release of the given version. So we only have to look up the latest release of a version to know which exact version to expect when pulling an image.
With these changes, having just latest and latest-preview, it's hard to tell how old an image on the InterSystems Container Registry is.
It would be nice to see the date an image was uploaded, and to have sorting by date somewhere.
Hi,
Pulling the irishealth:latest-em fails with dangling manifests (looks like).
docker pull containers.intersystems.com/intersystems/irishealth:latest-em
latest-em: Pulling from intersystems/irishealth
a271f97708e3: Pull complete
6d17179b85a7: Downloading [==================================================>] 272.8MB/272.8MB
0a3eeb0be045: Waiting
ae9eda793928: Waiting
cda3cca218dc: Waiting
7858487e277f: Waiting
unexpected EOF
Did any get this error?
Thanks,
JB
Announcement
Anastasia Dyubaylo · Oct 10, 2024
Hey Community,
We’re excited to introduce a whole new way for you to showcase your creativity and skills! This time, we’re inviting you to participate in our first-ever video challenge:
📹 InterSystems Tech Video Challenge 📹
Submit a video on any topic related to InterSystems IRIS products or services from October 21 - November 10, 2024.
🎁 Gifts for everyone + main prizes!
🔍 What’s the challenge?
Create a short video (up to 15 minutes) demonstrating a unique use of InterSystems technology. Whether it’s an innovative solution, a creative project, or a cool use case, we want to see it all!
📢 How to Enter:
To participate, fill out the form, where you will be required to upload your video. After we upload it to the Developer Community YouTube channel, we will post the link in the comments to this post for you to use everywhere.
Who can participate: Any Developer Community member, except for InterSystems employees (contractors are welcome to participate). Create an account!
General Requirements:
The video must focus on InterSystems products or services and be technical in nature.
All content must be in English (incl. code, screenshots, etc.).
Videos must be 100% original and cannot be translations of any previously submitted videos for contests in any community.
All information presented must be accurate and reliable.
Videos should be less than 15 minutes long.
Different authors can submit videos on the same topic with distinct examples.
NB. Our panel of experts will make the final decision on whether a video qualifies for the contest based on criteria such as quality and relevance. Their decision is final and cannot be appealed.
Contest Timeline
📝 October 21 - November 10: create a video and fill out the form throughout this period. DC members can vote for published videos with Likes – votes in the Community award.
📝 November 11-17: experts voting time
📝 November 18: winners' announcement
Prizes
1. Everyone Wins! All participants will receive a special gift:
🎁 Nike Golf Dri-FIT Swoosh Perforated Cap
2. Expert Awards – videos will be judged by the InterSystems experts:
🥇 1st place: 10.9-inch iPad Wi‑Fi 64GB / Sony WH-1000XM5 Wireless Headphones
🥈 2nd place: Moleskine Vertical Device Bag 15" / LEGO NASA Artemis Space Launch System
🥉 3rd place: AirPods 4 with Active Noise Cancellation / LEGO Hogwarts™ Castle: The Great Hall
As an alternative, any winner can choose a prize from a lower prize tier than their own.
3. Developer Community Award – video with the most likes. The winner will have the option to choose one of the following prizes:
🎁 AirPods 4 with Active Noise Cancellation / LEGO Hogwarts™ Castle: The Great Hall
Note: Authors can only win once per category (up to two total prizes: one from Expert and one from Community). In case of a tie, the number of expert votes will serve as the tie-breaking criterion.
Note: Each winning video will be awarded a single prize, regardless of the number of authors.
🎯 Extra bonuses
Here is the list of additional bonuses to help you win the prize! Please welcome:
| Bonus | Nominal | Details |
| --- | --- | --- |
| Topic bonus | 3 | Choose a topic from the list of the proposed topics below to get this bonus. |
| Article bonus | 3 | Write a brand new explanatory article to support your video and use the tag #Video in it. |
| Application bonus | 5 | Upload a new application (or an improved version of an existing one) from your video to Open Exchange. |
| Translation bonus | 2 | Translate your video into one of the languages of our regional communities (ES, PT, JP, CN, FR) and upload it using the same form; mention in the description that it is a translation, and provide a YouTube link to the original. |
| YouTube Shorts bonus | 2 | Create a YouTube Short for your video (vertical video, up to 60 seconds). |
| LinkedIn bonus | 1 | Share the video on your LinkedIn, mentioning that you’re participating in the InterSystems Tech Video Challenge, and tagging the Developer Community LinkedIn page. |
Proposed topics
Here's a list of proposed topics that will give your video extra bonuses:
✔️ Using AI / GenAI / RAG
✔️ Using Embedded Python in Interoperability
✔️ Using External Language Gateways (C#, Java, Python)
✔️ Using Data Fabric / Data Lake / Data Warehouse / Data Mesh
✔️ Using FHIR
✔️ Using REST
✔️ IKO common deployments
---
Get ready to shine and inspire others with your tech skills! We can't wait to see what you come up with. ✨
Note 1: By participating in the contest, you agree to have your video uploaded to the Developer Community YouTube.
Note 2: Delivery of prizes varies by country and may not be possible for some of them. A list of countries with restrictions can be requested from @Liubka Zelenskaia
Wow! Something new and very exciting! Can we use a project we have submitted for a different developer contest if we make a completely new video about it and the project has been updated since the contest?
Yes, you can 😊
Hi Devs, the "Tech Video Challenge" has begun! We can't wait to see all the amazing entries. 🤩
Wow! The first video appeared. 🤩
Using Character Slice Index in InterSystems IRIS by @Robert.Cemper1003
Who's next?)
Hello! I am currently attempting to submit my video, but every time I fill out the form it tells me something went wrong.
Hello, please check direct messages.
Hi,
Where are we supposed to upload the YouTube Short and the article? The form has a field only for the video.
Hello, you need to publish an article on the Developer Community and use the tag #Video. To upload a short, please send it using the same form but add in the description that it's a short and name the video. Thanks and good luck!
Thanks for your reply. What if we need to submit a long video and a short as well? Do we need to submit another form with the short?
Yes, submit the form twice.
Was the 10th also the deadline for the shorts and the articles?
Is there a particular @ or # you would like us to use when tagging the developer community on LinkedIn?
You can collect bonuses until the end of the voting period (November 17). Please tag @InterSystems Developer Community; no specific tags are required. Thanks!
Announcement
Anastasia Dyubaylo · Oct 21, 2024
Hello Community!
We are pleased to invite all our clients, partners and members of the community to take part in the InterSystems DACH Symposium 2024. Registration is already open!
For all those who were unable to travel to the Global Summit in the USA this year, there is once again the opportunity to get all the important information at our InterSystems DACH Symposium in November.
➡️ InterSystems DACH Symposium 2024
🗓 November 5 - 7, 2024
📍 Scandic Frankfurt Museumsufer | Wilhelm-Leuschner-Straße 44 | 60329 Frankfurt am Main
The event kicks off with a welcome session followed by keynotes addressing critical trends in data platforms and AI. Attendees will explore the latest innovations in healthcare, finance, and logistics, with industry experts sharing success stories and practical applications of InterSystems IRIS and related technologies. The day includes breakout sessions focusing on AI, cloud strategies, interoperability, and real-world case studies, concluding with a networking event, giving participants a chance to engage and share insights with peers.
We would be delighted to have your presence to enrich this event with your experience and expertise. Save the Date and register today using our registration form.
Check out the full agenda of the event here.
We look forward to seeing you at InterSystems DACH Symposium 2024!
Looking forward to it!
Article
Daniel Cole · Feb 14
InterSystems has been at the forefront of database technology since its inception, pioneering innovations that consistently outperform competitors like Oracle, IBM, and Microsoft. By focusing on an efficient kernel design and embracing a no-compromise approach to data performance, InterSystems has carved out a niche in mission-critical applications, ensuring reliability, speed, and scalability.
A History of Technical Excellence
During its earlier years, InterSystems distinguished itself through its groundbreaking database architecture, which addressed inefficiencies in legacy relational database systems. While competitors like Oracle relied on rigid relational designs, InterSystems introduced a multi-model database powered by its proprietary kernel. This innovation allowed data to be handled as tables, objects, multidimensional arrays, or key-value pairs—all within a unified database engine. The result was a significant performance improvement for transactional workloads compared to conventional databases. These capabilities laid the foundation for InterSystems' dominance in industries such as healthcare and finance, where systems must handle vast amounts of data without compromising speed or accuracy.
Staying Competitive Against Industry Giants
Despite competition from publicly traded companies with vast resources, such as AWS and Microsoft, InterSystems remains a trusted provider for mission-critical applications. This enduring trust stems from its technical superiority and a laser-focused commitment to its customers.
A Kernel Built for Reliability and Low Latency At the core of InterSystems IRIS is its kernel, a high-performance engine designed to maximize speed, reliability, and scalability. Unlike competitors who rely on external layers or middleware for performance scaling, the IRIS kernel integrates key features like data indexing, journaling, and caching natively, ensuring low-latency performance under extreme workloads.
Journaling and Crash Recovery: Advanced journaling techniques in the IRIS kernel minimize recovery times, enabling systems to restore to a consistent state within seconds, even in catastrophic scenarios. Competing solutions often suffer from slower recovery times, which can result in costly downtime.
Scalability with ECP and Sharding: InterSystems IRIS outperforms its rivals by enabling horizontal scalability with Enterprise Cache Protocol (ECP) and data sharding. This allows organizations like Epic Systems, a healthcare software giant, to manage millions of daily healthcare transactions reliably.
Trusted for Mission-Critical Applications Industries like healthcare and finance, where failure is not an option, continue to rely on InterSystems. For example, the U.S. Department of Veterans Affairs and Credit Suisse trust IRIS to deliver consistent, high-performance data handling. This trust is underpinned by InterSystems’ ability to process millions of transactions per second with deterministic latency, far outperforming AWS’s Aurora in transaction-heavy use cases.
Customer-Centric Innovation Unlike publicly traded competitors, InterSystems reinvests heavily in research and development instead of focusing on shareholder returns. This focus has driven innovations like IntegratedML, which embeds machine learning directly into the database, and Multi-Model Data Access, which eliminates the need for costly ETL pipelines.
Positioned for the Age of AI
As artificial intelligence reshapes industries, the demand for high-performance, scalable data systems has reached unprecedented levels. InterSystems IRIS, with its optimized kernel and inherent scalability, is uniquely positioned to meet these demands.
Speed and Low-Latency Performance In AI applications, where milliseconds matter, InterSystems IRIS excels by leveraging in-memory processing and intelligent indexing to minimize latency. Benchmarks reveal that InterSystems IRIS was between 2.7 and 3.1 times more efficient than AWS Aurora MySQL, and the efficiency advantage grew larger as the number of nodes in the cluster increased.
Effortless Scalability for Compute-Intensive Workloads InterSystems IRIS supports vertical scaling for high-core systems and horizontal scaling through sharding and ECP. This architecture allowed Mass General Brigham to scale its healthcare data platform, enabling seamless integration of over 100 million patient records across its network of hospitals and clinics. By leveraging InterSystems IRIS, Mass General Brigham achieved rapid query performance and real-time insights, critical for clinical decision support and population health analytics.
Co-Located Data for AI and Analytics By integrating operational and analytical workloads within the same environment, InterSystems eliminates the need for ETL processes that introduce latency and complexity. This capability allows AI models to access live datasets instantly, accelerating workflows in industries like supply chain and manufacturing.
AI-Ready Integration InterSystems IRIS integrates seamlessly with Python, IntegratedML, and popular AI frameworks like TensorFlow and PyTorch. These integrations enable organizations to deploy AI models directly within the database environment, reducing overhead and latency compared to traditional approaches that require moving data to external systems.
Hard Numbers That Matter
Performance Benchmarks: Test results demonstrate a large advantage in performance for InterSystems IRIS when compared to AWS Aurora MySQL, MariaDB, Microsoft SQL Server, Oracle and PostgreSQL. The insert rate for InterSystems IRIS was between 1.7 times and 9 times faster than for the other systems. The data query rate for InterSystems IRIS was between 1.1 times and 600 times faster than for the other systems.
Scalability: The IRIS kernel supports the ingestion of millions of transactions per second, making it ideal for global enterprises.
Real-World Impact: Over 1 billion health records globally are managed using InterSystems technology, demonstrating unmatched scalability and reliability.
Conclusion
InterSystems’ unique kernel design and multi-decade commitment to innovation have cemented its place as a leader in mission-critical systems. Whether handling healthcare transactions for Epic, powering financial systems for Harris Associates, or enabling real-time AI applications, InterSystems IRIS consistently delivers unmatched speed, reliability, and scalability. As industries embrace the computational demands of AI, InterSystems remains the trusted partner for organizations that cannot afford to compromise.
Daniel,
Excellent article both from a historical and current perspective!
Rich
Very interesting insights. Thank you, a great reminder of why I've loved this technology and company for the past 22 years :)
In a historical context:
- the Kernel is essentially MUMPS
- ECP (DCP), the Caché Object model (NextGen), and the concept of Namespaces, Databases and Mappings came originally from DataTree
'Beating' Oracle/SQL Server et al. is a piece of cake for IRIS. What I am more worried about is 'new' technologies such as "Eventually Consistent" synchronization on large clusters. I think IRIS could do more on clustering techniques.