
I am very pleased to announce that tomorrow (Dec 3) at 9 AM Cambridge time we plan to enable the new CCR UI for all users. No downtime should be required for the go-live. Existing beta testers will not see any change, but for non-beta testers the new Frost-based Angular UI will replace the legacy CSP-based application for the home page, navigation, System Details, and several other parts of the application.


InterSystems has corrected a defect that may cause Windows Telnet processes that are secured using SSL/TLS to hang indefinitely; this may then cause an instance to become unresponsive. This defect is present only on Windows platforms.

This defect affects:


Hi Community,

We're pleased to invite you to the online meetup with the winners of the InterSystems Interoperability Contest!

Date & Time: Friday, November 27, 2020 – 10:00 EDT

What awaits you at this virtual Meetup?

  • Our winners' bios.
  • Short demos on their applications.
  • An open discussion about the technologies used, bonuses, and questions, as well as plans for the next contests.


I am getting the date 20201121090000 in the HL7 message. How do I convert it to 2020-11-21 09:00:00 in an easy way?

I am currently doing it by extracting the first 7 values, splitting them into date and time, and then adding hyphens using substring.

Is there an easier way, using $ZDATE or something like that?
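
For what it's worth, a minimal sketch of one possible approach, assuming the value is always a contiguous HL7-style YYYYMMDDHHMMSS stamp: $ZDATETIMEH with dformat 8 should parse it into internal $HOROLOG form, and $ZDATETIME with dformat 3 then renders it in ODBC format:

    // dformat 8 = contiguous yyyymmddhhmmss, dformat 3 = ODBC yyyy-mm-dd hh:mm:ss
    set hl7ts = "20201121090000"
    set odbc = $ZDATETIME($ZDATETIMEH(hl7ts, 8), 3)
    write odbc  // should print 2020-11-21 09:00:00

If the incoming field can arrive without the time portion or with a timezone offset, the dformat/tformat choices would need to be checked against the $ZDATETIMEH documentation first.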


I'm running Windows 10 x64 Pro 20H2 with the InterSystems Caché ODBC driver v2018.01.00.184. I've set up a System DSN using the 64-bit version of ODBC Administrator.

I've been getting inconsistent results using my regular application (Microsoft Power BI) which I use through ODBC to query my hosted TrakCare T2017 instance.

Using the Microsoft ODBC Test Tool (part of the MDAC 2.8 SDK), I can verify that I get the same errors as in Power BI. They seem to be ODBC driver related, but I can't pin it down.

ODBC Test Tool GUI shows:


I am looking for any pointers on how InterSystems IRIS for Health can monitor a filesystem/folder into which users or applications drop CSV files via FTP, and load those files into the IRIS database. I understand that I will need to create a record map for the CSV files. I am looking for any configuration references on how to process files using file inbound adapters, with the intent of picking up the CSV files as they are dropped in the target location, passing them to a business process, and ingesting them into the IRIS database.
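
Not a definitive recipe, but a minimal sketch of the kind of production item usually used for this, with hypothetical item, class, and path names: EnsLib.RecordMap.Service.FileService couples the file inbound adapter with a generated record map, polls the configured folder, and passes the parsed records on to a target business process:

    Class Demo.CSV.Production Extends Ens.Production
    {
    XData ProductionDefinition
    {
    <Production Name="Demo.CSV.Production">
      <!-- Hypothetical names throughout: the service polls FilePath for files matching
           FileSpec, parses each file with the record map generated from the CSV layout,
           and forwards the records to the target business process for ingestion -->
      <Item Name="CSV.FileService" ClassName="EnsLib.RecordMap.Service.FileService">
        <Setting Target="Adapter" Name="FilePath">/data/ftp/in/</Setting>
        <Setting Target="Adapter" Name="FileSpec">*.csv</Setting>
        <Setting Target="Adapter" Name="ArchivePath">/data/ftp/archive/</Setting>
        <Setting Target="Host" Name="RecordMap">Demo.CSV.Record</Setting>
        <Setting Target="Host" Name="TargetConfigNames">CSV.IngestProcess</Setting>
      </Item>
      <Item Name="CSV.IngestProcess" ClassName="Demo.CSV.IngestProcess"/>
    </Production>
    }
    }

The record map class (Demo.CSV.Record above) would be generated from a sample CSV via the Record Map wizard, and the business process would then write the parsed records to persistent classes/tables.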

Any help would be greatly appreciated ...

10000 clients simultaneously

How do InterSystems solutions handle C10k connections (https://en.wikipedia.org/wiki/C10k_problem)?
For example, I want to create a social network on an InterSystems platform.
In the scenario client (browser) => CSP Gateway => Caché, can Caché handle a large number of clients at the same time?
A quick analysis shows that, for each request, the CSP Gateway opens a new TCP connection to the Caché SuperServer (port 1972), allocating a CSP session and a license slot.


Dear Community,

I need to collect a file from an FTP server over FTPS.

I have the following:

  • credentials saved correctly
  • correct host and port information
  • correct file path I am trying to collect from
  • SSL configuration set up in System > Security Management > SSL/TLS Configurations, and this tests successfully

When I run the business service, it connects, but soon after connecting it fails to open the directory:


I have a project to only filter certain pathology results into a downstream system.

Within an HL7 router and business rule, I was planning on using a lookup table with either Exists() or Lookup(), but am having issues when using them with repeating fields or segments.

For example, if I perform the analysis per stated segment using {} brackets, this will work, as each stated repeat is assessed:


As we all well know, InterSystems IRIS has an extensive range of tools for improving the scalability of application systems. In particular, much has been done to facilitate the parallel processing of data, including the use of parallelism in SQL query processing and the most attention-grabbing feature of IRIS: sharding. However, many mature developments that started back in Caché and have been carried over into IRIS actively use the multi-model features of this DBMS, which are understood as allowing the coexistence of different data models within a single database. For example, the HIS qMS database contains both semantic relational (electronic medical records) as well as traditional relational (interaction with PACS) and hierarchical data models (laboratory data and integration with other systems). Most of the listed models are implemented using SP.ARM's qWORD tool (a mini-DBMS that is based on direct access to globals). Therefore, unfortunately, it is not possible to use the new capabilities of parallel query processing for scaling, since these queries do not use IRIS SQL access.

Meanwhile, as the size of the database grows, most of the problems inherent to large relational databases also hold true for non-relational ones. This is a major reason why we are interested in parallel data processing as one of the tools that can be used for scaling.

In this article, I would like to discuss those aspects of parallel data processing that I have been dealing with over the years when solving tasks that are rarely mentioned in discussions of Big Data. I am going to be focusing on the technological transformation of databases, or, rather, technologies for transforming databases.
