
Hi Community,

We're pleased to invite you to the online meetup with the winners of the InterSystems Interoperability Contest!

Date & Time: Friday, November 27, 2020 – 10:00 EST

What awaits you at this virtual Meetup?

  • Our winners' bios.
  • Short demos of their applications.
  • An open discussion of the technologies used, bonuses, and questions, plus plans for the next contests.


I am getting the date 20201121090000 in the HL7 message. How do I convert it to 2020-11-21 09:00:00 in an easy way?

I am currently doing it by extracting the leading date digits, splitting them into date and time, and then adding hyphens using substring.

Is there an easier way, using $ZDATE or something like that?
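
Yes. Assuming the value is always the full 14 digits, you can let $ZDATEH/$ZDATE handle the date portion (dformat 8 parses YYYYMMDD, dformat 3 formats it ODBC-style) and only splice colons into the time digits. A minimal sketch in ObjectScript:

    Set ts = "20201121090000"
    // dformat 8 parses YYYYMMDD; dformat 3 formats back as ODBC-style YYYY-MM-DD
    Set date = $ZDATE($ZDATEH($EXTRACT(ts, 1, 8), 8), 3)
    // the time digits only need colons spliced in
    Set time = $EXTRACT(ts, 9, 10)_":"_$EXTRACT(ts, 11, 12)_":"_$EXTRACT(ts, 13, 14)
    Write date_" "_time    // => 2020-11-21 09:00:00

If this runs inside a DTL, the ConvertDateTime() utility function with %-style patterns may also do it in one call, depending on your version.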


I'm running Windows 10 x64 Pro 20H2 with the InterSystems Caché ODBC driver v2018.01.00.184. I've set up a System DSN using the 64-bit version of the ODBC Administrator.

I've been getting inconsistent results in my regular application (Microsoft Power BI), which I use through ODBC to query my hosted TrakCare T2017 instance.

Using the Microsoft ODBC Test Tool (part of the MDAC 2.8 SDK), I can verify that I get the same errors as in Power BI. They seem to be ODBC driver related, but I can't pin it down.

ODBC Test Tool GUI shows:


I am looking for any pointers on how InterSystems IRIS for Health can monitor a filesystem/folder into which users or applications drop CSV files via FTP, and load the files into the IRIS database. I understand that I will need to create a record map for the CSV files. What I am after is configuration references on how to process files using file inbound adapters, with the intent of picking up the CSV files as they are dropped in the target location, passing them to a business process, and ingesting them into the IRIS database.

Any help would be greatly appreciated ...
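
For reference, one plausible shape for the pickup stage, sketched as a production item; the class names, paths, and targets below are hypothetical placeholders, and EnsLib.RecordMap.Service.FileService (which wraps EnsLib.File.InboundAdapter) is the usual starting point:

    Class Demo.CSV.Production Extends Ens.Production
    {
    XData ProductionDefinition
    {
    <Production Name="Demo.CSV.Production">
      <!-- polls the drop folder and parses each CSV through the record map -->
      <Item Name="CSV.FileService" ClassName="EnsLib.RecordMap.Service.FileService" PoolSize="1" Enabled="true">
        <Setting Target="Adapter" Name="FilePath">/data/ftp/inbound/</Setting>
        <Setting Target="Adapter" Name="FileSpec">*.csv</Setting>
        <Setting Target="Adapter" Name="ArchivePath">/data/ftp/archive/</Setting>
        <Setting Target="Host" Name="RecordMap">Demo.CSV.PatientRecord</Setting>
        <Setting Target="Host" Name="TargetConfigNames">Demo.CSV.Process</Setting>
      </Item>
    </Production>
    }
    }

The business process named in TargetConfigNames then receives one record object per CSV row (or a batch object, if you use the batch variant of the service) and can persist it to your IRIS classes from there.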

10000 clients simultaneously

How do InterSystems solutions handle C10k connections? ( https://en.wikipedia.org/wiki/C10k_problem )
For example, I want to create a social network on the InterSystems platform.
In the scenario client (browser) => CSP Gateway => Caché, can Caché handle a large number of clients at the same time?
A quick analysis shows that the CSP Gateway opens a new TCP connection to the Caché SuperServer (port 1972) for each request, allocating a CSP session and a license slot.


Dear Community,

I need to collect a file from an FTP server over FTPS.

I have the:

  • credentials saved correctly
  • correct host and port information
  • correct file path I am trying to collect from
  • SSL configuration set up in System > Security Management > SSL/TLS Configurations, and it tests successfully

When I run the business service, it connects, but soon after connecting it fails to open the directory:
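
A common culprit when an FTPS login succeeds but the directory then cannot be used is the passive-mode data channel: the control connection is up, but the data connection is blocked. One way to isolate it is to drive %Net.FtpSession by hand in a terminal; a minimal sketch, with host, port, credentials and path as placeholders:

    Set ftp = ##class(%Net.FtpSession).%New()
    Set ftp.SSLConfiguration = "MyFTPSConfig"  // the SSL/TLS configuration you tested
    Set ftp.UsePASV = 1                        // toggle passive mode between attempts
    If 'ftp.Connect("ftp.example.com", "user", "password", 21) {
        Write "connect failed: ", ftp.ReturnMessage, !
        Quit
    }
    If 'ftp.SetDirectory("/outbound") {
        Write "cd failed: ", ftp.ReturnCode, " ", ftp.ReturnMessage, !
        Quit
    }
    // the listing is the first call that needs the TLS data channel
    If 'ftp.NameList("/outbound", .files) {
        Write "list failed: ", ftp.ReturnCode, " ", ftp.ReturnMessage, !
    }

If the hand-driven session only works with one UsePASV setting, that points at a firewall/NAT rule on the data ports rather than at your credentials or SSL configuration.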


I have a project to filter only certain pathology results into a downstream system.

Within an HL7 router business rule I was planning on using a lookup table with either Exists() or Lookup(), but I am having issues when using them with repeating fields or segments.

For example, if I perform the analysis per stated segment using {} brackets, this works, as each stated repeat is assessed:
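
When the check cannot be written per repeat like that, one workaround is a small custom function set that walks the repetitions itself and can be called from the rule expression. A hypothetical sketch (class, path, and table names are placeholders; it leans on the convention that a * repetition index in GetValueAt() returns the repeat count):

    Class Demo.Rules.Functions Extends Ens.Rule.FunctionSet
    {

    /// Returns 1 if any repetition matches an entry in the lookup table.
    /// pCountPath is a path whose * index yields the repeat count, e.g. "PID:3(*)";
    /// pPathTemplate contains %1 where the index goes, e.g. "PID:3(%1).1".
    ClassMethod AnyInLookup(pMsg As EnsLib.HL7.Message, pCountPath As %String, pPathTemplate As %String, pTable As %String) As %Boolean
    {
        Set count = +pMsg.GetValueAt(pCountPath)
        For i=1:1:count {
            Set value = pMsg.GetValueAt($Replace(pPathTemplate, "%1", i))
            // Exists() is the same lookup-table test the rule editor exposes
            If ##class(Ens.Util.FunctionSet).Exists(pTable, value) Return 1
        }
        Return 0
    }

    }

Once compiled, AnyInLookup(Document, "PID:3(*)", "PID:3(%1).1", "PathologyFilter") becomes available in the rule expression editor alongside Exists() and Lookup().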


As we all know, InterSystems IRIS has an extensive range of tools for improving the scalability of application systems. In particular, much has been done to facilitate the parallel processing of data, including parallelism in SQL query processing and the most attention-grabbing feature of IRIS: sharding. However, many mature developments that started out on Caché and have been carried over to IRIS actively use the multi-model features of this DBMS, which allow different data models to coexist within a single database. For example, the HIS qMS database contains semantic relational (electronic medical records), traditional relational (interaction with PACS), and hierarchical (laboratory data and integration with other systems) data models. Most of these models are implemented using SP.ARM's qWORD tool, a mini-DBMS based on direct access to globals. Therefore, unfortunately, the new parallel query processing capabilities cannot be used for scaling, since these queries do not use IRIS SQL access.

Meanwhile, as the size of the database grows, most of the problems inherent to large relational databases arise for non-relational ones as well. This is a major reason why we are interested in parallel data processing as one of the tools that can be used for scaling.

In this article, I would like to discuss those aspects of parallel data processing that I have dealt with over the years while solving tasks that are rarely mentioned in discussions of Big Data. I will be focusing on the technological transformation of databases, or, rather, technologies for transforming databases.


Developing a full-stack JavaScript web app with Caché requires you to bring together the right building blocks. Previously, I outlined the basic steps to install and connect Node.js to Caché and make its powerful multi-model database capabilities available for use with Node.js. You can use Caché as a NoSQL, document (with unique key-level access!), SQL, and object database with Node.js. When developing JavaScript applications, you'll see how powerful this combination is and why Caché is a perfect fit for Node.js.

In the first part of this article series, I will show how to get started with React, one of the most popular frameworks currently taking over front-end development. In the next parts, you'll learn how to connect a basic web app to a Caché back-end.

You'll see it's very easy to get started with this technology; the amount of basic knowledge you need is comparable to COS, because you only need to know a few basic concepts to start!


We have some code, written by an earlier team, that reads flat-file data, creates SDAs, and pushes them to the ECR Input Service. Currently, for every test cycle, we have to load the files into SDAs, check the error log, fix the errors, clear the ECR, and try new, fixed flat files. I'd like to know if there is a method to validate SDA messages prior to pushing them to the ECR Input operations.

That way, every time, we would simply load the flat files, create SDAs, and validate them. If the files all look good, with no SDA errors, then we actually load them into the ECR only once.
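
I am not aware of an official pre-flight validator, but since the SDA3 classes are XML-enabled, one assumption-laden sketch is to correlate each generated container stream with %XML.Reader and call %ValidateObject() on the result before anything is sent to the ECR (stream and variable names are placeholders, and whether this catches the same errors as the ECR input service is an open question):

    Set reader = ##class(%XML.Reader).%New()
    Set sc = reader.OpenStream(sdaStream)  // the stream your loader already builds
    If $System.Status.IsError(sc) { Do $System.Status.DisplayError(sc) Quit }
    Do reader.Correlate("Container", "HS.SDA3.Container")
    While reader.Next(.container, .sc) {
        // %ValidateObject flags required-property and datatype violations
        Set sc = container.%ValidateObject()
        If $System.Status.IsError(sc) Do $System.Status.DisplayError(sc)
    }

Note that %ValidateObject() does not cascade into embedded objects by default, so deeper checks would still need to walk the container's lists.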


Hi,

I seem to be able to execute an SQL procedure using the convention defined here https://docs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=GSQL_q... provided the package/folder is a single level. As soon as I have a nested folder structure, I get an error when trying to execute it.

For instance,

select id, Utils.Users_getRole(id) roles from users.users

works fine, while the same call against a class in a nested package errors out.
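
One hedged guess at the cause: when a class sits in a nested package, every dot in the package name becomes an underscore in the SQL schema name, and only the final Class_method pair names the procedure. Assuming, hypothetically, the class moved to myapp.Utils.Users, the call would be:

select id, myapp_Utils.Users_getRole(id) roles from users.users

Alternatively, the SqlName keyword on the class method lets you give the procedure an explicit name that sidesteps the default mapping.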
