Clear filter
Announcement
Anastasia Dyubaylo · Oct 30, 2019
Hi Community,
Please join the upcoming InterSystems Israel Meetup in Herzliya, which will be held on November 21st, 2019!
It will take place at Spaces Herzliya Oxygen Ltd from 9:00 a.m. to 5:30 p.m.
The event will focus on InterSystems IRIS and will be divided into IRIS for Healthcare and IRIS Data Platform tracks. A joint lunch will also be included.
Please check the draft of the agenda below:
09:00 – 13:00 Non-Healthcare Sessions:
API Management
Showcase: InterSystems IRIS Directions
Adopting InterSystems IRIS
REST at Ease
IRIS Containers for Developers
13:00 – 14:00 Joint lunch for both morning and afternoon groups
14:00 – 17:30 Healthcare Sessions:
API Management
Build HL7 Interfaces in a Flash
FHIR Update
Showcase: InterSystems IRIS Directions
Note: The final agenda will be published closer to the event.
So, remember:
⏱ Time: November 21st, 2019, from 9:30 a.m. to 5:30 p.m.
📍 Venue: Spaces Herzliya Oxygen Ltd, 63 Medinat HaYehudim St., Herzliya, Israel
✅ Registration: Just send an email to ronnie.greenfield@intersystems.com*
We look forward to seeing you!
---
*Space is limited, so register today to secure your place. Admission free, registration is mandatory for attendees. Please check out the final agenda of the event:
Non-Healthcare Sessions:
📌 09:00 – 09:30 Gathering
📌 09:30 – 10:15 IRIS Data Platform Overview
📌 10:15 – 11:00 Adopting InterSystems IRIS
In this session, we will introduce the InterSystems IRIS Adoption Guide and describe the process of moving from Caché and/or Ensemble to InterSystems IRIS. We will also briefly touch on the conversion process for existing installations of Caché/Ensemble-based applications.
Takeaway: InterSystems helps customers as they adopt InterSystems IRIS.
📌 11:00 – 11:45 API Management
This session will introduce the concept of API management and outline the InterSystems IRIS features that enable you to manage, monitor, and govern your APIs with full confidence.
Takeaway: InterSystems IRIS includes comprehensive capabilities for API management.
📌 11:45 – 12:15 Resources and Services for InterSystems Developers. ObjectScript Package Manager Introduction
Takeaway: Attendees will learn about the Developer Community, Open Exchange, and other resources and services available for developers on InterSystems data platforms, and will learn about the InterSystems Package Manager and how it can help in developing InterSystems IRIS solutions.
📌 12:45 – 13:00 REST at Ease
This session provides an overview of how to build REST APIs. Topics will include: using the %JSON adapter to expose and consume JSON data for REST endpoints, code-first and spec-first approaches for REST development, and a brief discussion of proper API management.
Takeaway: Attendees will learn how to efficiently build, document, and manage REST APIs.
Healthcare Sessions:
📌 13:00 – 14:00 Welcome and Lunch
📌 14:00 – 14:45 InterSystems IRIS for Health Overview
📌 14:45 – 15:00 Showcase: InterSystems IRIS Directions
This session provides additional information about the new and future directions for InterSystems IRIS and InterSystems IRIS for Health.
Takeaway: InterSystems IRIS and IRIS for Health have a compelling roadmap, with real meat behind it.
📌 15:00 – 15:45 API Management
This session will introduce the concept of API management and outline the InterSystems IRIS features that enable you to manage, monitor, and govern your APIs with full confidence.
Takeaway: InterSystems IRIS includes comprehensive capabilities for API management.
📌 15:45 – 16:15 Resources and Services for InterSystems Developers. ObjectScript Package Manager Introduction
Takeaway: Attendees will learn about the Developer Community, Open Exchange, and other resources and services available for developers on InterSystems data platforms, and will learn about the InterSystems Package Manager and how it can help in developing InterSystems IRIS solutions.
📌 16:15 – 17:00 Build HL7 Interfaces in a Flash
This session introduces our new HL7 productivity toolkit. We will give an overview and demonstrate some key features, such as the Production Generator and Message Analyzer. We will also discuss how you can cost-effectively move from another interface engine to InterSystems technology.
Takeaway: You can build HL7 interfaces more efficiently with the new productivity toolkit in InterSystems IRIS for Health.
The agenda is full of interesting stuff. Join the InterSystems Israel Meetup in Herzliya! 👍🏼 I'll participate in the meetup with this session:
📌 11:45 – 12:15 Resources and Services for InterSystems Developers. ObjectScript Package Manager Introduction
Takeaway: Attendees will learn about the Developer Community, Open Exchange, and other resources and services available for developers on InterSystems data platforms, and will learn about the InterSystems Package Manager and how it can help in developing InterSystems IRIS solutions.
Come join the InterSystems Developers Meetup in Israel!
Announcement
Anastasia Dyubaylo · Nov 1, 2019
Hi Everyone,
Please welcome the new Global Summit 2019 video on InterSystems Developers YouTube Channel:
⏯ InterSystems IRIS and Intel Optane Memory
Optane is a new class of memory from Intel that can accelerate the performance of hard drives. In this video, we will review the performance benefits and show high-level cost comparisons of using Intel Optane memory with InterSystems IRIS. We will also outline various use cases for Optane memory and storage.
Takeaway: Attendees will learn the benefits of using Intel's Optane technology with InterSystems IRIS.
Presenter: @Mark.Bolinsky, Senior Technology Architect, InterSystems
And...
Don't forget to subscribe to our InterSystems Developers YouTube Channel.
Enjoy and stay tuned!
Article
Eduard Lebedyuk · Oct 21, 2019
InterSystems API Management (IAM), a new feature of the InterSystems IRIS Data Platform, enables you to monitor, control, and govern traffic to and from web-based APIs within your IT infrastructure. In case you missed it, here is the link to the announcement. And here's an article explaining how to start working with IAM.
In this article, we will use InterSystems API Management to load balance an API.
In our case, we have two InterSystems IRIS instances with the /api/atelier REST API that we want to publish for our clients.
There are many different reasons why we might want to do that, such as:
Load balancing to spread the workload across servers
Blue-green deployment: we have two servers, one "prod", the other "dev", and we might want to switch between them
Canary deployment: we might publish the new version only on one server and move 1% of clients there
High availability configuration
etc.
Still, the steps we need to take are quite similar.
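Several of these scenarios boil down to weighted target selection: an even spread, or a 99/1 canary split. The sketch below illustrates the idea only; IAM performs target selection internally with its own algorithms, and the host names here are invented.

```python
import random

def pick_target(targets):
    """Pick a backend in proportion to its weight.
    Illustrative model only; IAM/Kong balances targets internally."""
    total = sum(weight for _, weight in targets)
    roll = random.uniform(0, total)
    for host, weight in targets:
        roll -= weight
        if roll <= 0:
            return host
    return targets[-1][0]  # guard against floating-point edge cases

# A 99/1 canary split between two hypothetical IRIS backends
targets = [("iris-a:52773", 99), ("iris-b:52773", 1)]
counts = {"iris-a:52773": 0, "iris-b:52773": 0}
for _ in range(10_000):
    counts[pick_target(targets)] += 1
# Roughly 99% of requests land on iris-a
```

The same mechanism covers plain load balancing (equal weights) and canary deployment (a tiny weight on the new version).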
Prerequisites
2 InterSystems IRIS instances
InterSystems API Management instance
Let's go
Here's what we need to do:
1. Create an upstream.
Upstream represents a virtual hostname and can be used to load balance incoming requests over multiple services (targets). For example, an upstream named service.v1.xyz would receive requests for a Service whose host is service.v1.xyz. Requests for this Service would be proxied to the targets defined within the upstream.
An upstream also includes a health checker, which can enable and disable targets based on their ability or inability to serve requests.
To start:
Open IAM Administration Portal
Go to Workspaces
Choose your workspace
Open Upstreams
Click on "New Upstream" button
After clicking the "New Upstream" button, you will see a form where you can enter some basic information about the upstream (there are many more properties):
Enter a name: this is the virtual hostname our services will use. It is unrelated to DNS records; I recommend setting it to a non-existent value to avoid confusion. If you want to read about the rest of the properties, check the documentation. On the screenshot, you can see how I imaginatively named the new upstream myupstream.
2. Create targets.
Targets are the backend servers that execute the requests and send results back to the client. Go to Upstreams and click on the name of the upstream you just created (and NOT on the update button):
You will see all the existing targets (none so far) and the "New Target" button. Press it:
And in the new form, define a target. Only two parameters are available:
target - host and port of the backend server
weight - the relative priority given to this server (the higher the weight, the more requests are sent to this target)
I have added two targets:
3. Create a service
Now that we have our upstream, we need to send requests to it. We use a Service for that.
Service entities, as the name implies, are abstractions of each of your upstream services. Examples of Services would be a data transformation microservice, a billing API, etc.
Let's create a service targeting our IRIS instance. Go to Services and press the "New Service" button:
Set the following values:
field | value | description
name | myservice | the logical name of this service
host | myupstream | the upstream name
path | /api/atelier | the root path we want to serve
protocol | http | the protocol we want to support
Keep the default values for everything else (including port: 80).
After creating the service, you'll see it in the list of services. Copy the service ID somewhere; we're going to need it later.
4. Create a route
Routes define rules to match client requests. Each Route is associated with a Service, and a Service may have multiple Routes associated with it. Every request matching a given Route will be proxied to its associated Service.
The combination of Routes and Services (and the separation of concerns between them) offers a powerful routing mechanism with which it is possible to define fine-grained entry-points in IAM leading to different upstream services of your infrastructure.
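As a simplified mental model of that routing mechanism, you can picture each request being matched to the Route with the longest matching path prefix, which then forwards to its Service. This is only a sketch: IAM's real matching also considers hosts, methods, and protocols, and the names below are invented.

```python
# Simplified model of Route -> Service matching: longest matching
# path prefix wins. Route and service names are illustrative only.
routes = [
    {"path": "/api/atelier", "service": "myservice"},
    {"path": "/api", "service": "generic-api"},
]

def match_route(request_path):
    """Return the service name for the best-matching route, or None."""
    candidates = [r for r in routes if request_path.startswith(r["path"])]
    if not candidates:
        return None  # the gateway would answer with "no route matched"
    return max(candidates, key=lambda r: len(r["path"]))["service"]

print(match_route("/api/atelier/v1/docs"))  # -> myservice
print(match_route("/api/billing"))          # -> generic-api
```

The separation lets you move a path prefix to a different Service without touching the backends themselves.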
Now let's create a route. Go to Routes and press the "New Route" button.
Set the values in the Route creation form:
field | value | description
path | /api/atelier | the root path we want to serve
protocol | http | the protocol we want to support
service.id | GUID from step 3 | the service ID value (GUID from the previous step)
And we're done!
Send a request to http://localhost:8000/api/atelier/ (note the slash at the end), and it will be served by one of our two backends.
Conclusion
IAM offers a highly customizable API Management infrastructure, allowing developers and administrators to take control of their APIs.
Links
Documentation
IAM Announcement
Working with IAM article
Question
What functionality do you want to see configured with IAM?
I have a question regarding productionized deployments. Can the internal IRIS web server be used, i.e. port 52773? Or should there still be a web gateway between IAM and the IRIS instance?
Regarding Kubernetes: I would think that IAM should be the ingress, is that correct?
Hi Stefan,
The short answer is you still need a web gateway between IAM and IRIS.
The private web server (port 52773) is a minimal build of the Apache web server, supplied for the purpose of running the Management Portal, not for production-level traffic.
I would think that IAM should be the ingress, is that correct?
Agreed. Calling @Luca.Ravazzolo.
Announcement
Jeff Fried · Nov 4, 2019
The 2019.3 versions of InterSystems IRIS, InterSystems IRIS for Health, and InterSystems IRIS Studio are now Generally Available!
These releases are available from the WRC Software Distribution site, with build number 2019.3.0.311.0.
InterSystems IRIS Data Platform 2019.3 has many new capabilities including:
Support for InterSystems API Manager (IAM)
Polyglot Extension (PeX) available for Java
Java and .NET Gateway Reentrancy
Node-level Architecture for Sharding and SQL Support
SQL and Performance Enhancements
Infrastructure and Cloud Deployment Improvements
Port Authority for Monitoring Port Usage in Interoperability Productions
X12 Element Validation in Interoperability Productions
These are detailed in the documentation:
InterSystems IRIS 2019.3 documentation and release notes
InterSystems IRIS for Health 2019.3 includes all of the enhancements of InterSystems IRIS. In addition, this release includes FHIR searching with chained parameters (including reverse chaining) and minor updates to FHIR and other health care protocols.
FHIR STU3 PATCH Support
New IHE Profiles XCA-I and IUA
These are detailed in the documentation:
InterSystems IRIS for Health 2019.3 documentation and release notes
InterSystems IRIS Studio 2019.3 is a standalone development image supported on Microsoft Windows. It works with InterSystems IRIS and InterSystems IRIS for Health version 2019.3 and below, as well as with Caché and Ensemble.
See the InterSystems IRIS Studio Documentation for details
2019.3 is a CD release, so InterSystems IRIS and InterSystems IRIS for Health 2019.3 are only available in OCI (Open Container Initiative), a.k.a. Docker, container format. The platforms on which this is supported for production and development are detailed in the Supported Platforms document.
Having gone through the pain of installing Docker for Windows, installing the InterSystems IRIS for Health 2019.3 image, and getting hold of a copy of the 2019.3 Studio, I was pleased when I saw this announcement and excitedly went looking for my 2019.3 .exe, only to find out there is none, and a small note at the end of the announcement saying that 2019.3 InterSystems IRIS and InterSystems IRIS for Health will only be released in CD form.
Yours
Nigel Salm
Nigel, just want to be sure that you read CD as Containers Deployment: it will be available on every delivery site (WRC, download, AWS, GCP, Azure, Dockerhub), but in container form.
InterSystems Docker Images: https://wrc.intersystems.com/wrc/coDistContainers.csp
Announcement
David Reche · Nov 13, 2018
Hi Everyone!
We are pleased to invite you to the InterSystems Iberia Summit 2018 on 27th of November in Madrid, Spain!
The New Challenges of Connected Health Matter
Date: November 27, 2018
Place: Hotel Meliá Serrano, Madrid
Please check the original agenda of the event.
REGISTER NOW and hope to see you at the Iberia Healthcare Summit 2018 in Madrid!
Announcement
Anastasia Dyubaylo · Nov 26, 2018
Hi Community!
We're pleased to welcome @Sean.Connelly as our new Moderator in the Developer Community Team! Let's greet Sean with big applause and take a closer look at his bio!
Sean about his experience:
— I help healthcare organisations solve complex integration problems using products such as Ensemble, HealthShare and Mirth.
With 20 years of experience, Sean has worked with over 20 NHS Trusts, Scottish Boards, NHS Digital, NHS Scotland and Primary Care system providers. This has included many large-scale integration solutions such as replacing or implementing brand new integration engines, PAS replacements, OrderComms, and electronic document handling.
Some words from Sean:
— I'm also an InterSystems product specialist with deep knowledge of Caché, Ensemble, HealthShare and IRIS. I'm a moderator and active contributor on the InterSystems official community site, where you can find many examples of my technical writing. I also actively write open source frameworks and tools for these products and regularly use them to accelerate development services. [CHECK OUT SEAN'S DC PROFILE]
— I also specialize in web application development. I've written dozens of SPA applications over the years, including large-scale solutions for single-record patient portals, document management, Read code submissions, a dental claim system across Scotland, and the modernization of a Trust's legacy green-screen PAS system.
Some facts about Sean's business:
— I currently run my own successful consultancy business called MemCog Ltd, which has been going for over 5 years. Some of my customers include the Manchester University Trust, where I have helped implement large-scale OrderComms solutions and merged hospitals and systems, as well as developing an electronic document solution that has delivered millions of electronic letters between the Trust and GP practices every year.
Welcome aboard and thanks for your great contribution, Sean!
Announcement
Benjamin De Boe · Jan 8, 2019
Hi,
As we announced at our Global Summit in October, we are developing dedicated connectors for a number of third-party data visualization tools for InterSystems IRIS. With these connectors, we want to combine an excellent user experience with optimal performance when using those tools to visualize data managed on the InterSystems IRIS Data Platform. If you are already using Tableau or Power BI products to access our data platform through their respective generic ODBC connectors today, we're interested in learning more about your experiences thus far and would be very grateful if you could spend a few minutes on our survey:
survey for Tableau users
survey for Microsoft Power BI users
Thanks,
Benjamin @Benjamin.DeBoe
Have you made any progress on this?
We just started playing around with using Web Data Connectors in Tableau to call APIs into Caché.
Best,
Mike
@Mike.Davidovich I am working with Benjamin on an Alpha version of the Tableau connector. I'm interested in your experience with using ODBC or JDBC connections to Tableau. Have you used that, and what else would you like to see with a Tableau connector?
Feel free to post here or email directly -- carmen.logue@intersystems.com
@Carmen.Logue Thanks, Carmen! I'm not sure I can add too much to your Alpha development; I personally haven't been using ODBC to connect into Caché. What I do know is that some other groups have used ODBC to connect into Caché with SQL projections.
My team wants to avoid projections specifically (at this point at least) because we tend to only use them to get data from Caché to our data warehouse (Oracle). Other than that, we still traverse our database via globals and good old MUMPS programming. The project I'm working on is taking the many, many routines we have that traverse globals for reporting and transforming the data to JSON streams. A %CSP.REST API will call those routines and provide the data to a Tableau web data connector so we can get instant, live data without projecting our whole database.
I'm just getting started with Tableau and Caché, so I may have some more input in the future.
Best,
Mike
Announcement
Anastasia Dyubaylo · Apr 17, 2019
Hi Community!
Please welcome a new video on InterSystems Developers YouTube Channel: Implementing vSAN for InterSystems IRIS
Specific examples of using VMware and vSAN will illustrate practical advice for deploying InterSystems IRIS, whether on premises or in the cloud.
Takeaway: I know how to deploy InterSystems IRIS using VMware and vSAN.
Presenter: @Murray.Oldfield
And...
Additional materials for the video can be found in this InterSystems Online Learning Course.
Don't forget to subscribe to our InterSystems Developers YouTube Channel.
Enjoy and stay tuned!
Article
Sergey Kamenev · May 23, 2019
PHP, from the beginning, has been renowned (and criticized) for supporting integration with a lot of libraries, as well as with almost every database on the market. However, for some mysterious reason, it did not support hierarchical databases built on globals.
Globals are structures for storing hierarchical information. They are somewhat similar to a key-value database, with the only difference being that the key can be multi-level:
Set ^inn("1234567890", "city") = "Moscow"
Set ^inn("1234567890", "city", "street") = "Req Square"
Set ^inn("1234567890", "city", "street", "house") = 1
Set ^inn("1234567890", "year") = 1970
Set ^inn("1234567890", "name", "first") = "Vladimir"
Set ^inn("1234567890", "name", "last") = "Ivanov"
In this example, multi-level information is saved in the global ^inn using the built-in ObjectScript language. The global ^inn is stored on the hard drive (this is indicated by the "^" sign at the beginning).
In order to work with globals from PHP, we will need new functions that will be added by the PHP module, which will be discussed below.
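Conceptually, a global behaves like a persistent multi-level associative array. The sketch below models that idea with nested Python dictionaries purely as an analogy for readers unfamiliar with globals; it is not how the module or the storage engine actually works.

```python
# A toy in-memory model of a global: a tree of nested dicts, where each
# node can hold both its own value and child subscripts (as globals can).
tree = {}

def global_set(path, value):
    """Walk/create the subscript chain and store the node's value."""
    node = tree
    for subscript in path:
        node = node.setdefault(subscript, {})
    node["="] = value  # "=" marks the node's own value

def global_get(path):
    """Return the node's value, or None if it was never set."""
    node = tree
    for subscript in path:
        if subscript not in node:
            return None
        node = node[subscript]
    return node.get("=")

# Mirrors Set ^inn("1234567890", "city") = "Moscow" from the example above
global_set(("^inn", "1234567890", "city"), "Moscow")
global_set(("^inn", "1234567890", "year"), 1970)
print(global_get(("^inn", "1234567890", "city")))  # -> Moscow
```

Note that in this model a node can carry a value while still having children, which is exactly the property that makes globals hierarchical rather than flat key-value storage.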
Globals support many functions for working with hierarchies: tree traversal at a fixed level and in depth, and deleting, copying, and pasting entire trees and individual nodes. They also support ACID transactions, as in any quality database. All this happens extremely quickly (about 10^5-10^6 inserts per second on a regular PC) for two reasons:
Globals are a lower-level abstraction when compared to SQL,
Databases built on globals have been in production for decades, and during this time they have been polished and their code thoroughly optimized.
Learn more about globals in the series of articles titled "Globals Are Magic Swords For Managing Data.":
Part 1. Trees. Part 2. Sparse arrays. Part 3.
Today, globals are primarily used in storage systems for unstructured and sparse information, such as medical, personal, and banking data.
I love PHP (and I use it in my development work), and I wanted to play around with globals. There was no PHP module for IRIS and Caché. I contacted InterSystems and asked them to create it. InterSystems sponsored the development as part of an educational grant and my graduate student and I created the module.
Generally speaking, InterSystems IRIS is a multi-model DBMS, so from PHP you can already work with it via ODBC using SQL, but I was interested in globals, and there was no such connector.
So, the module is available for PHP 7.x (was tested for 7.0-7.2). Currently it can only work with InterSystems IRIS and Caché installed on the same host.
Module page on OpenExchange (a directory of projects and add-ons for developers working with InterSystems IRIS and Caché).
There is a useful DISCUSS section where people share their related experiences.
Download here:
https://github.com/intersystems-community/php_ext_iris
Download the repository from the command line:
git clone https://github.com/intersystems-community/php_ext_iris
Installation instructions for the module in English and Russian.
Module Functions:
PHP function
Description
Working with data
iris_set($node, value)
Setting a node value.
iris_set($global, $subscript1, ..., $subscriptN, $value); iris_set($global, $value);
Returns: TRUE or FALSE (in the case of an error). All parameters of this function are strings or numbers. The first one is the name of the global, then the subscripts, and the last parameter is the value.
iris_set('^time',1);
iris_set('^time', 'tree', 1, 1, 'value');
ObjectScript equivalent:
Set ^time = 1
Set ^time("tree", 1, 1) = "value"
iris_set($arrayGlobal, $value);
There are just two parameters: the first one is the array in which the name of the global and all its indexes are stored, and the second one is the value.
$node = ['^time', 'tree', 1, 1];
iris_set($node,'value');
iris_get($node)
Getting a node value.
Returns: a value (a number or a string), NULL (the value is not defined), or FALSE (in the event of an error).
iris_get($global, $subscript1, ..., $subscriptN); iris_get($global);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. The global may have no subscripts.
$res = iris_get('^time');
$res1 = iris_get('^time', 'tree', 1, 1);
iris_get($arrayGlobal);
The only parameter is the array in which the name of the global and all its subscripts are stored.
$node = ['^time', 'tree', 1, 1];
$res = iris_get($node);
iris_zkill($node)
Deleting a node value.
Returns: TRUE or FALSE (in the event of an error).
It is important to note that this function only deletes the value in the node and does not affect lower branches.
iris_zkill($global, $subscript1, ..., $subscriptN); iris_zkill($global);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. The global may have no subscripts.
$res = iris_zkill('^time'); // Lower branches are not deleted.
$res1 = iris_zkill('^time', 'tree', 1, 1);
iris_zkill($arrayGlobal);
The only parameter is the array in which the name of the global and all its subscripts are stored.
$a = ['^time', 'tree', 1, 1];
$res = iris_zkill($a);
iris_kill($node)
Deleting a node and all descendant branches.
Returns: TRUE or FALSE (in the case of an error).
iris_kill($global, $subscript1, ..., $subscriptN); iris_kill($global);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. The global may have no subscripts, in which case it is deleted in full.
$res1 = iris_kill('^example', 'subscript1', 'subscript2');
$res = iris_kill('^time'); // The global is deleted in full.
iris_kill($arrayGlobal);
The only parameter is the array in which the name of the global and all its subscripts are stored.
$a = ['^time', 'tree', 1, 1];
$res = iris_kill($a);
iris_order($node)
Traverses the branches of the global at a given level.
Returns: an array containing the full name of the next node of the global at the same level, or FALSE (in the case of an error).
iris_order($global, $subscript1, ..., $subscriptN);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. Form of usage in PHP and ObjectScript equivalent:
iris_order('^ccc','new2','res2'); // $Order(^ccc("new2", "res2"))
iris_order($arrayGlobal);
The only parameter is the array in which the name of the global and the subscripts of the initial node are stored.
$node = ['^inn', '1234567890', 'city'];
for (; $node !== NULL; $node = iris_order($node))
{
echo join(', ', $node).'='.iris_get($node)."\n";
}
Returns:
^inn, 1234567890, city=Moscow
^inn, 1234567890, year=1970
iris_order_rev($node)
Traverses the branches of the global at a given level, in reverse order.
Returns: an array containing the full name of the previous node of the global at the same level, or FALSE (in the case of an error).
iris_order_rev($global, $subscript1, ..., $subscriptN);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. Form of usage in PHP and ObjectScript equivalent:
iris_order_rev('^ccc','new2','res2'); // $Order(^ccc("new2", "res2"), -1)
iris_order_rev($arrayGlobal);
The only parameter is the array in which the name of the global and the subscripts of the initial node are stored.
$node = ['^inn', '1234567890', 'name', 'last'];
for (; $node !== NULL; $node = iris_order_rev($node))
{
echo join(', ', $node).'='.iris_get($node)."\n";
}
Returns:
^inn, 1234567890, name, last=Ivanov
^inn, 1234567890, name, first=Vladimir
iris_query($node)
Depth-first traversal of the global.
Returns: an array containing the full name of the lower node (if available) or of the next node of the global (if there is no nested node).
iris_query($global, $subscript1, ..., $subscriptN);
All parameters of this function are strings or numbers. The first one is the name of the global, and the rest are subscripts. Form of usage in PHP and ObjectScript equivalent:
iris_query('^ccc', 'new2', 'res2'); // $Query(^ccc("new2", "res2"))
iris_query($arrayGlobal);
The only parameter is the array in which the name of the global and the indexes of the initial node are stored.
$node = ['^inn', 'city'];
for (; $node !== NULL; $node = iris_query($node))
{
echo join(', ', $node).'='.iris_get($node)."\n";
}
Returns:
^inn, 1234567890, city=Moscow
^inn, 1234567890, city, street=Req Square
^inn, 1234567890, city, street, house=1
^inn, 1234567890, name, first=Vladimir
^inn, 1234567890, name, last=Ivanov
^inn, 1234567890, year=1970
The order differs from the order in which we inserted the values because everything in a global is automatically sorted in ascending subscript order during insertion.
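This sorted, depth-first order can be mimicked in a few lines of Python. The sketch below is a model for intuition only (subscript tuples in a flat map, sorted on demand), not the module's implementation:

```python
# Model of $Query-style depth-first traversal: nodes keyed by their
# subscript tuples. Sorting the keys reproduces the ascending subscript
# order that the database maintains automatically on insertion.
nodes = {
    ("1234567890", "year"): 1970,                  # inserted first...
    ("1234567890", "city"): "Moscow",              # ...but sorts first
    ("1234567890", "name", "first"): "Vladimir",
    ("1234567890", "name", "last"): "Ivanov",
}

for path in sorted(nodes):
    print("^inn, " + ", ".join(path), "=", nodes[path])
# "city" is visited before "name", and "name" before "year",
# regardless of insertion order; a parent node is always visited
# before its children because of tuple prefix ordering.
```

In the real database this ordering is maintained in the B-tree-like storage itself, which is why iris_query can walk the whole global without sorting anything.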
Service functions
iris_set_dir($FullPath)
Sets the database directory
Returns: TRUE or FALSE (in the case of an error).
iris_set_dir('/InterSystems/Cache/mgr');
This must be performed before connecting to the database.
iris_exec($CmdLine)
Execute database command
Returns: TRUE or FALSE (in the case of an error).
iris_exec('kill ^global(6)'); // The ObjectScript command for deleting a global
iris_connect($login, $pass)
Connect to database
iris_quit()
Close connection with DB
iris_errno()
Get error code
iris_error()
Get text description of error
If you want to play around with the module, check out, for example, the Docker container implementation:
git clone https://github.com/intersystems-community/php_ext_iris
cd php_ext_iris/iris
docker-compose build
docker-compose up -d
Test the demo page at localhost:52080 in the browser.
PHP files that can be edited and played with are in the php/demo folder, which is mounted inside the container.
To test IRIS use the admin login with the SYS password.
To get into the IRIS settings, use the following URL: http://localhost:52773/csp/sys/UtilHome.csp
To get into the IRIS console of this container, use the following command:
docker exec -it iris_iris_1 iris session IRIS
Especially for the Developer Community and those who want to try the module, we have set up a virtual machine with the Caché PHP module installed.
Demo page in English. Demo page in Russian. Login: habr_test Password: burmur#@8765
For self-installation of the module for InterSystems Caché
You need Linux. I tested on Ubuntu; the module should also compile and work under Windows, but I didn't test it.
Download the free version:
InterSystems Caché (registration required). On Linux, Red Hat and SUSE are supported out of the box, but you can also install it on other distributions.
Install the cach.so module in PHP according to the instructions.
Just out of interest, I ran two primitive tests to check the speed of inserting new values into the database in the docker container on my PC (AMD FX-9370@4700Mhz 32GB, LVM, SATA SSD).
Insertion of 1 million new nodes into the global took 1.81 seconds or 552K inserts per second.
Updating a value in the same global 1,000,000 times took 1.98 seconds, or 505K updates per second. An interesting fact is that insertion is faster than updating. Apparently, this is a consequence of the database's initial optimization for fast insertion.
Obviously, these tests cannot be considered 100% accurate or useful, since they are primitive and are done in the container. On more powerful hardware with a disk system on a PCIe SSD, tens of millions of inserts per second can be achieved.
What else could be done, and the current state
Useful functions for working with transactions can be added (you can still use them with iris_exec).
The function of returning the whole global structure is not implemented (so that the global would not have to be traversed from PHP).
The function of saving a PHP array as a subtree is not implemented.
Access to local database variables is not implemented; it is only possible via iris_exec, although iris_set would be better.
Global traversal in depth in the opposite direction is not implemented.
Access to the database via an object using methods (similar to current functions) is not implemented.
The current module is not quite ready for production: it has not been tested for high loads and memory leaks. However, should someone need it, please feel free to contact me at any time (Sergey Kamenev, sukamenev@gmail.com).
Bottom line
For a long time, the worlds of PHP and hierarchical databases on globals practically did not overlap, even though globals provide strong and fast functionality for specific data types (medical, personal).
I hope that this module will motivate PHP programmers to experiment with globals, and ObjectScript programmers to easily develop web interfaces in PHP.
P.S. Thank you for your time!
Nice! Just tried this with the Docker container on my local machine and got 1 million insertions (1,000,000) in 1.45 sec on my Mac Pro. Cool!
Announcement
Anastasia Dyubaylo · Aug 26, 2019
Hi Developers!
InterSystems Developers Community today unites more than 7,000 developers from all over the world. Since 2016, our community has been growing and improving for you, our dear developers!
Together we've done a lot over these years, and much more is planned for the future!
So, who makes our community better every day? Who tries for all of us and improves the space for developers?
Let's warmly greet our team:
@Evgeny.Shvarov – founder of InterSystems Developers community, Startups and Community manager at InterSystems.
@David.Reche – founder & manager of Spanish Developers Community & Senior Sales Engineer at InterSystems.
@Anastasia.Dyubaylo – our Community & Social Media Manager at InterSystems. She leads Global Masters Advocacy Hub and all InterSystems Developers social networks. Anastasia also reviews many of InterSystems' events on the community and on social media.
@Olga.Zavrazhnova2637 – our Global Masters Advocacy Hub Manager at InterSystems. She has been managing Global Masters since its launch in 2016. Now Olga creates engagement campaigns and explores new reward ideas for Global Masters.
@Julia.Fedoseeva – Educational and Logistics manager at InterSystems and also our Global Masters Advocacy Hub Manager. She organizes the delivery of GM Rewards around the whole world.
Gamification and community management – that's what these folks are about. They're supporting you on your way with the InterSystems Global Masters Advocacy Hub!
And...
Of course, our remarkable Moderators in Developers Community team.
Please welcome:
@Eduard.Lebedyuk – Sales Engineer at InterSystems in Moscow, Russia.
@Dmitry.Maslennikov – Developer Advocate, co-founder of CaretDev corp.
@Sean.Connelly – Managing Director / Software Engineer at MemCog LTD.
@John.Murray – Senior Product Engineer at George James Software.
@Scott.Roth – Senior Applications Development Analyst at the Ohio State University Wexner Medical Center.
@Jeffrey.Drumm – Vice President and Chief Operating Officer at Healthcare Integration Consulting Group (HICG).
@Henrique – System Management Specialist, Database Administrator at Sao Paulo Federal Court.
And our Moderators of the Spanish Community Team:
@Francisco.López1549 – Project Manager & Head of Interoperability at Salutic Soluciones, S.L.
@Nancy.Martinez – Solution Consultant at Ready Computing.
So!
Now you know all InterSystems Developer Community heroes!
Stay tuned with us! 🚀 Awesome!!! I've always got superb guidance and direction for all my questions. Thank you, guys!! Nice to put faces to the people :) Great job, guys!! Thanks, @Eric.David! Thank you! Happy to work with such people! Great team, perfect Community! A nice overview – I like those "cheat sheets". Applause for you all! Keep up the good work, all! Like the community very much! Thanks to all of you, guys, for your effort and help!! Great job!! (And remember, with great power comes great responsibility.) Thanks, Udo! Another version of KYC – Know Your Community :) Thanks, Marco! See you on GS2019!
Discussion
Nikita Savchenko · Dec 12, 2019
Hello, InterSystems community!
Lately, you have probably heard of the new InterSystems Package Manager, ZPM. If you're familiar with it, or with package managers such as NPM, Dep, pip/PyPI, etc., or just know what it is all about – this question is for you! The question I want to raise is actually a system design question or, in other words, a question of how ZPM should implement it.
In short, ZPM (the new package manager) lets you install packages/software into your InterSystems product in a very convenient, manageable way. Just open the terminal, run the ZPM routine, and type install samples-objectscript: you will have a new package installed and ready to use! In the same way, you can easily delete and update packages.
From the developer's point of view, much as in other package managers, ZPM requires the package/software to have a package description, represented as a module.xml file. Here's an example of it. This file describes what to install, which CSP applications to create, which routines to run once installed, and so on.
Now, straight to the point. You've probably also heard of InterSystems WebTerminal, one of my projects, which is quite widely used (over 500 installs over the last couple of months). We are trying to bring WebTerminal to ZPM.
So far, anyone could install WebTerminal just by importing an XML file with its code – no further actions were needed. During class compilation, WebTerminal runs its projection and performs all required setup on its own (web application, globals, etc. – see here). In addition, WebTerminal has its own self-update mechanism, which allows it to update itself when a new version comes out, again built on projections. Apart from that, I have 2 more projects (ClassExplorer, Visual Editor) that use the same convenient import-and-install mechanism.
But it was decided that ZPM won't accept projections as a paradigm and that everything should be described in the module.xml file. Hence, to publish WebTerminal for ZPM, the team tried to remove the Installer.cls class (the WebTerminal class that did all the install/update magic via projections) and manually replaced it with module.xml metadata. That turned out to be enough for WebTerminal to work, but staying 100% compatible with ZPM potentially leads to unexpected incompatibilities (see below). Thus, source code changes are needed.
So the question is: should ZPM really refuse projection-enabled classes in its packages? The decision to avoid projections might be changed via the open discussion here. It's not a question of whether I can rewrite WebTerminal's code, but rather: why not just accept the original software code, even if it uses projections?
My opinion is quite strongly against avoiding projection-enabled classes in ZPM modules, for multiple reasons. First of all, because projections are part of how the programming language works, and I see no constructive argument against using them for whatever the software/package is designed for. Avoiding them and cutting the Installer.cls class from the release is exactly the same as patching a working module. I agree that packages shipped specifically for ZPM should try to use all the installation features module.xml provides; however, WebTerminal is also shipped outside of ZPM, and maintaining 2 versions of WebTerminal (at the very least because of the self-update feature) makes me think that something is wrong here.
I see the following pros of keeping projection-enabled classes in ZPM:
The package/software remains compatible both with ZPM and with the regular installation method used for years (XML/classes import)
No original package/software source code changes needed to bring it to ZPM
All designed functions work as expected and cause no problems (for instance, WebTerminal's self-update: upon update, it loads the XML file with the new version and imports it, including the projection-enabled Installer.cls file anyway)
Cons of keeping all projection-enabled classes in ZPM:
Side effects performed during installation/uninstallation by projection-enabled classes won't be statically described in the module.xml file and hence are "less auditable". There is an opinion that any side effect must be described in the module.xml file.
Please indicate any other pros/cons if this isn't the full list. What do you think?
Thank you! Exactly – not for installation purposes, you're right, I agree. But what do you think about the WebTerminal case in particular?
1. It's already developed and bound to projections: installation, its own update mechanism, etc.
2. It's also shipped outside of ZPM.
3. It would work as usual if only ZPM supported projections.
I see you're pointing to "It might need to support Projections eventually because, as you said, it's a part of the language" – that's mostly what my point is about. Why not just allow them? Thanks! Exactly – I completely agree about simplicity, transparency, and an installation standard. But see my reply to Sergey's answer: what to do with WebTerminal in particular?
1. Why would I need to rewrite the update mechanism I developed years ago (for example)?
2. Why would I need to maintain 2 code bases for ZPM and regular installations (or automate it in a quite crazy way, or just drop the self-update feature when ZPM is detected)?
3. Why are all these source code changes needed at all, if everything "just works" normally without ZPM complications (which is simply how ObjectScript works)?
I think this leads to either a "make the package ZPM-compatible" or a "make ZPM ObjectScript-compatible" discussion, doesn't it? The answer to all this could be "To make the world a better place" :)
Because if you do all 3 you get:
the same wonderful WebTerminal, but with a simple, transparent, and standard installation mechanism, and yet another channel for distribution, because ZPM seems to be a very handy and popular way to install and try the stuff.
Maybe yet another channel of clear and handy app distribution is argument enough to change something in the application too?
True points. For sure, developers can customize it. I could make another version of WebTerminal specifically for ZPM, but it would involve additional coding and support:
1. A need to change how the self-update mechanism works, or to shut it down completely. Right now, the user gets a message in the UI suggesting an update of WebTerminal to the latest version; quite a lot happens under the hood.
2. Thus, a need to create an additional pipeline (or split the codebase) for 2 WebTerminal versions: ZPM's one and a regular one, with all the tests and so on.
I am wondering whether it is worth doing from WebTerminal's perspective, or whether it is better to make WebTerminal a kind of exception for ZPM. Because inserting a couple of if (isZPMInstalled) { ... } else { ... } conditions into WebTerminal (even on the front-end side) still looks like an anti-pattern to me. Thanks! Considering the points others mention, I agree that projections should not be the way to install things, but rather an acceptable exception for WebTerminal and other complex packages. Another option, rather than having two versions of the whole codebase, could be a wrapper module around WebTerminal (i.e., another module that depends on webterminal) with hooks in WebTerminal that allow the wrapper to turn off projection-based, installation-related features. I completely agree, and to get to
standard installing mechanism
for USERS, we need to zpm-enable as many existing projects as possible. To enable these projects, we need to simplify zpm-enabling by leveraging existing code where possible (or at least not preventing developers from leveraging existing code). I think allowing developers to use already existing installers (whatever form they may take) would help with this goal. This is very wise, thanks Ed!
For zpm-enabling, we plan to add a boilerplate module.xml generator for the repo – stay tuned. Hi Nikita,
> A need to change how the self-update mechanism works or shut it down completely.
If a package is distributed via a package manager, its self-update should be removed entirely. It should be the package manager's responsibility to alert the user that a new version of the package is available and to install it.
> Thus, create an additional pipeline (or split the codebase) for 2 WebTerminal versions: ZPM's one and a regular one with all the tests and so on.
Some package managers allow patches to be applied to software before packaging it, but I don't think that's the case for ZPM at the moment. I believe you will need to do a separate build for the ZPM and non-ZPM versions of your software. You can either apply some patches during the build, or refactor the software so that it can run without the auto-updater when that isn't installed. Hi Nikita!
Do you want the ZPM exception for WebTerminal only, or for all your InterSystems solutions? :) The whole purpose of a package manager is to get rid of individual installer/updater scripts written by individual developers and to replace them with a package management utility, so that you have a standard way of installing, removing, and updating your packages. So I don't quite understand why this question is raised in this context – of course a package manager shouldn't support custom installers and updaters. It might need to support projections eventually because, as you said, they are part of the language, but definitely not for installation purposes. I completely support the inclusion of projections.
ObjectScript Language allows execution of arbitrary code at compile time through three different mechanisms:
Projections
Code generators
Macros
All these instruments are entirely unlimited in their scope, so I don't see why we should prohibit one particular way of executing code at compile time.
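For readers without an ObjectScript background, the key property of projections is that code runs as a side effect of compiling a class. A loose Python analogy is a metaclass, which runs code at class-definition time; this sketch is an analogy only, not ZPM or WebTerminal code:

```python
# Loose analogy: ObjectScript projections run code when a class is
# compiled or removed; a Python metaclass similarly runs code the
# moment a class body is defined. Names here are made up.
installed = []

class ProjectionMeta(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        if namespace.get("register"):
            # Side effect at "compile" (definition) time, e.g. creating
            # a web app or setting globals in the ObjectScript case.
            installed.append(name)

class Installer(metaclass=ProjectionMeta):
    register = True

# Merely defining Installer triggered the side effect:
# installed == ['Installer']
```

The debate above is precisely about such side effects: they happen implicitly, which is powerful but leaves nothing statically declared for a package manager to audit.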
Furthermore, ZPM itself uses projections to install itself, so closing this avenue to other projects seems strange. Hi Nikita!
Thanks for the good question!
The answer to "why module.xml vs. Installer.cls with projections" is quite obvious, IMHO.
Compare a module.xml and an Installer.cls that do the same thing.
Examining module.xml, you can clearly see what the installation does, and you can easily maintain/support it.
In this case, the package installs:
1. classes from WebTerminal package:
<Resource Name="WebTerminal.PKG" />
2. creates one REST Web app:
<CSPApplication
Url="/terminal"
Path="/build/client"
Directory="{$cspdir}/terminal"
DispatchClass="WebTerminal.Router"
ServeFiles="1"
Recurse="1"
PasswordAuthEnabled="1"
UnauthenticatedEnabled="0"
CookiePath="/"
/>
3. creates another REST Web app:
<CSPApplication
Url="/terminalsocket"
Path="/terminal"
Directory="{$cspdir}/terminalsocket"
ServeFiles="0"
UnauthenticatedEnabled="1"
MatchRoles=":%DB_CACHESYS:%DB_IRISSYS:{$dbrole}"
Recurse="1"
CookiePath="/"
/>
I cannot say the same for Installer.cls on projections - what does it do to my system?
Simplicity, transparency, and an installation standard with the ZPM module.xml approach – versus what?
From the pros/cons, it seems the objectives are:
Maintain compatibility with normal installation (without ZPM)
Make side effects from installation/uninstallation auditable by putting them in module.xml
I'd suggest as one approach to accomplish both objectives:
Suppress the projection side effects when running in a package manager installation/uninstallation context (either by checking $STACK or using some trickier under-the-hood things with singletons from the package manager - regardless, be sure to unit test this behavior!).
Add "Resource Processor" classes (specified in module.xml with Preload="true" and not included in normal WebTerminal XML exports used for non-ZPM installation) - that is, classes extending %ZPM.PackageManager.Developer.Processor.Abstract and overriding the appropriate methods - to handle your custom installation things. You can then use these in your module manifest, provided that such inversion of control still works without bootstrapping issues following changes made in https://github.com/intersystems-community/zpm.
Generally-useful things like creating a %All namespace should probably be pushed back to zpm itself.
Question
Alex Van Schoyck · Jan 30, 2019
Problem
I'm working on exporting data from an InterSystems Caché database through the Caché ODBC driver. There is a particular table that gives me an error message: the ODBC driver crashes and reports an error from the Caché system. I think I was able to trace down where the error comes from, but I do not know how to debug or fix it. The table I am trying to extract is called SEDMIHP. Here's the error:
[Cache Error: <<UNDEFINED>%0AmBd16^%sqlcq.PRD.3284 ^SEDMIHP(4,77)>]
[Location: <ServerLoop - Query Fetch>]
Research/Trial & Error
I was able to open up Caché Studio and find the class that matches the table name. I should mention that this is my very first time working with InterSystems Caché, so I apologize if I sound dumb or inexperienced here.
Within the SQLMap, I found this code:
<Data name="DESCRIP_2">
<RetrievalCode> S {DESCRIP_2}=$P($G(^PHPROP({L1},"DESC_CODES")),"\",2) S {DESCRIP_2}=$S($L({DESCRIP_2}):^SEDMIHP($P({DESCRIP_2},","),$P({DESCRIP_2},",",2)),1:{DESCRIP_2})
S {DESCRIP_2}=$E({DESCRIP_2},1,80)
</RetrievalCode>
</Data>
I'm thinking that the code in here is causing the issue. With my very limited understanding of ObjectScript, I believe this code manipulates strings, and if there's an undefined or bad value in the data, it may be causing those functions to throw an error.
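For reference, the string functions in that RetrievalCode are $P ($PIECE), $G ($GET), $S ($SELECT), $L ($LENGTH), and $E ($EXTRACT). $PIECE(string, delim, n) extracts the n-th delimited piece of a string; a rough Python analogy of its behavior (1-based index, empty string when the piece is missing – an illustration, not runnable ObjectScript):

```python
# Rough analogy of ObjectScript's $PIECE(string, delim, n): pieces are
# 1-indexed, and a missing piece yields "" rather than an error.
def piece(s: str, delim: str, n: int = 1) -> str:
    parts = s.split(delim)
    return parts[n - 1] if 1 <= n <= len(parts) else ""

first = piece("4,77", ",", 1)    # "4"
second = piece("4,77", ",", 2)   # "77"
missing = piece("4", ",", 2)     # "" - no error from $PIECE itself
```

So the $PIECE calls are safe on bad input; the crash comes from the bare global reference ^SEDMIHP(...) whose node may be undefined, as the accepted answer below shows.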
I have limited access to the Caché Management Portal, but I am able to find the table in the SQL schema and run a query on it. About 300 rows of data load before the same error as above shows up and stops any more rows from loading. This is why I suspect bad data.
I tried using ISNULL() and IFNULL() in the SELECT statement to try to skip any bad data, but hit the same error in the same spot every time.
Questions
Is there an easy solution on the SQL side that can avoid this error?
Is there anything I can do with the class code in Studio to debug or get more information about this error?
Any and all help is greatly appreciated!
Additional Info
Caché version: Cache for OpenVMS/IA64 V8.4 (Itanium) 2012.1.5 (Build 956 + Adhoc 12486) 17-APR-2013 19:49:58.07
Dmitry, thank you so much! That worked perfectly and solved the issue! I'd been coming up empty-handed for hours trying to figure it out. I really appreciate your help!
If you can edit this code, you can try changing it to this:
<Data name="DESCRIP_2">
<RetrievalCode> S {DESCRIP_2}=$P($G(^PHPROP({L1},"DESC_CODES")),"\",2) S {DESCRIP_2}=$S($L({DESCRIP_2}):$Get(^SEDMIHP($P({DESCRIP_2},","),$P({DESCRIP_2},",",2))),1:{DESCRIP_2})
S {DESCRIP_2}=$E({DESCRIP_2},1,80)
</RetrievalCode>
</Data>
But I'm not sure if this is correct.
What I did there is wrap the retrieval of data from the global ^SEDMIHP in the function $Get().
Or this way, with a default value:
<Data name="DESCRIP_2">
<RetrievalCode> S {DESCRIP_2}=$P($G(^PHPROP({L1},"DESC_CODES")),"\",2) S {DESCRIP_2}=$S($L({DESCRIP_2}):$Get(^SEDMIHP($P({DESCRIP_2},","),$P({DESCRIP_2},",",2)),{DESCRIP_2}),1:{DESCRIP_2})
S {DESCRIP_2}=$E({DESCRIP_2},1,80)
</RetrievalCode>
</Data>
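For readers new to ObjectScript: $Get(ref) returns "" when the referenced global node is undefined, and $Get(ref, default) returns the supplied default, much like Python's dict.get. That is exactly why the fix stops the <UNDEFINED> error. A small sketch of that behavior, with the global simulated by a dict and made-up data:

```python
# Simulating $Get over a global: an undefined node returns "" in the
# one-argument form, or a caller-supplied default in the two-argument
# form, instead of raising an <UNDEFINED> error. Data is illustrative.
sedmihp = {(4, 77): "Some description text"}

def get_node(subscripts, default=""):
    """Analogy of $Get(^SEDMIHP(a,b)[, default])."""
    return sedmihp.get(subscripts, default)

get_node((4, 77))              # "Some description text"
get_node((9, 9))               # "" - no error for an undefined node
get_node((9, 9), "fallback")   # "fallback"
```

The second variant above corresponds to the two-argument form: when ^SEDMIHP(...) is undefined, the original {DESCRIP_2} value is kept rather than an empty string.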
Announcement
Andreas Dieckow · Jan 9, 2019
InterSystems has completed the verification process for OpenJDK 8. Customers now have the option to use either the Oracle JDK or OpenJDK with all InterSystems products and versions that support Java 8. Future versions will continue to be supported on both of these Java Development Kits. Nice – another step to much more visibility.
Announcement
Anastasia Dyubaylo · May 22, 2019
Hi Community!
Please welcome a new video on InterSystems Developers YouTube Channel:
InterSystems IRIS from Spark to Finish
This video demonstrates spinning up a cluster that combines InterSystems IRIS' powerful data management with Apache Spark's unique approach to parallelizing complex data engineering and analytics. Together, they let you make the best use of your distributed environment.
Takeaway: The combination of InterSystems IRIS and Apache Spark enables me to build powerful analytical solutions.
Presenter: @Amir.Samary
And...
You can find additional materials for this video in this InterSystems Online Learning Course.
Don't forget to subscribe to our InterSystems Developers YouTube Channel.
Enjoy and stay tuned!
Announcement
Anastasia Dyubaylo · May 13, 2019
Hey Community!
It's time again for good tidings for you!
For the first time, InterSystems will be part of the WeAreDevelopers World Congress in Berlin, Germany, which brings together developers, IT experts and digital innovators to discuss and shape the future of application development.
From 6 to 7 June, we’re ready to welcome you at our booth #A5 and show you how InterSystems technologies enable intelligent interoperability and accelerate the creation of powerful, data-driven applications.
Schedule your individual meeting with InterSystems @ WeAreDevelopers in Berlin by quickly filling out the form on our event page [https://dach.intersystems.de/WeAreDevelopers2019] – the first three applicants will receive a FREE ticket!
To make it more challenging for you, we’ve decided to provide the webpage & form in German only.
So...Do not miss your chance!
Join the World’s Largest Developers Congress with InterSystems!