I have a Task whose output I am trying to write to a file. How do I check whether an Output File has been included in the Task Schedule, so that I can direct the output of my class to that file?
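For what it's worth: as far as I know, when an Output File is set on the schedule, anything the task WRITEs to the principal device during OnTask is captured in that file. To check whether one was configured, here is a sketch of one option (it assumes the schedule entry is a %SYS.Task row and that the property is named OutputFilename; please verify both in your version, and the helper name is mine):

```objectscript
/// Return the Output File configured for a scheduled task, or "" if none.
/// Hypothetical helper: verify the OutputFilename property name in your version.
ClassMethod GetOutputFile(taskClass As %String) As %String
{
    New $Namespace
    Set $Namespace = "%SYS"  // task schedule entries live in %SYS
    Set stmt = ##class(%SQL.Statement).%New()
    Set sc = stmt.%Prepare("SELECT OutputFilename FROM %SYS.Task WHERE TaskClass = ?")
    If $$$ISERR(sc) Quit ""
    Set rs = stmt.%Execute(taskClass)
    If rs.%Next() Quit rs.%Get("OutputFilename")
    Quit ""
}
```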
I'm in the process of setting up InterSystems IRIS for HealthShare with database mirroring, and I'd appreciate some guidance on how to handle an existing ODBC connection in this new setup.
An extension “extends” or enhances a FHIR resource or a data element in a custom way. An extension can be added to the root of a resource, such as “Patient.ethnicity” in the US Core profile, and it can also be added to individual elements such as HumanName, Address, or Identifier.
Did you know that you can also add an extension to a primitive data type?
Primitives usually store a single item and are the most basic elements in FHIR. For example: "Keren", false, 1234, 12/08/2024, etc.
For example, the Patient resource might look like this:
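In JSON, an extension on a primitive such as birthDate goes in a sibling property prefixed with an underscore. A minimal hand-written sketch (using the standard US Core ethnicity extension at the root and HL7's patient-birthTime extension on the primitive; the name and dates are invented):

```json
{
  "resourceType": "Patient",
  "extension": [
    {
      "url": "http://hl7.org/fhir/us/core/StructureDefinition/us-core-ethnicity",
      "extension": [
        { "url": "text", "valueString": "Not Hispanic or Latino" }
      ]
    }
  ],
  "name": [ { "given": ["Keren"] } ],
  "birthDate": "2024-08-12",
  "_birthDate": {
    "extension": [
      {
        "url": "http://hl7.org/fhir/StructureDefinition/patient-birthTime",
        "valueDateTime": "2024-08-12T09:30:00+03:00"
      }
    ]
  }
}
```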
Is anyone using DICOM interoperability in IRIS for Health in a mirrored configuration?
I'm asking because I'm not sure how to handle where the DICOM messages are stored.
For some reason DICOM uses the filesystem to store raw messages; the directory used can be configured in the StorageLocation production setting. Obviously this is a big issue if/when a mirror failover occurs.
Unfortunately, in IRIS it's not possible to change the DICOM storage from a file stream to a global stream.
RabbitMQ is a message broker that allows producers (those who send a data message) and consumers (those who receive a data message) to establish asynchronous, real-time, high-performance flows of massive amounts of data. RabbitMQ supports AMQP (Advanced Message Queuing Protocol), an open-standard application-layer protocol. The main reasons to employ RabbitMQ include the following (a minimal send/receive sketch in Java follows the list):
You can improve the performance of your applications by using an asynchronous approach.
It lets you decouple and reduce dependencies between services, microservices, and applications with the help of a data message mediator, meaning that producers and consumers of the exchanged data do not need to know each other.
It allows the results of long-running processing of sent data to be delivered later via a response queue.
It helps you migrate from a monolith to microservices, with the microservices exchanging data through RabbitMQ in a decoupled, asynchronous way.
It offers reliability and resilience by making it possible for messages to be stored and forwarded; a message can be redelivered until it is processed.
Message queueing is key to scaling your application: as the workload increases, you only have to add more workers to consume the queues faster.
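To make the producer/consumer idea concrete, here is a minimal sketch using the official RabbitMQ Java client; the host, queue name, and payload are invented for illustration, and error handling is omitted:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import com.rabbitmq.client.MessageProperties;

import java.nio.charset.StandardCharsets;

public class RabbitSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed local broker

        // Producer: declare a durable queue and publish a persistent message.
        try (Connection producerConn = factory.newConnection();
             Channel producer = producerConn.createChannel()) {
            producer.queueDeclare("work", true, false, false, null);
            producer.basicPublish("", "work",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "hello".getBytes(StandardCharsets.UTF_8));
        }

        // Consumer: acknowledge only after processing, so an unacknowledged
        // message is redelivered if this consumer dies mid-processing.
        Connection consumerConn = factory.newConnection();
        Channel consumer = consumerConn.createChannel();
        consumer.queueDeclare("work", true, false, false, null);
        DeliverCallback onMessage = (tag, delivery) -> {
            System.out.println("received: "
                    + new String(delivery.getBody(), StandardCharsets.UTF_8));
            consumer.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        consumer.basicConsume("work", false, onMessage, tag -> { });
    }
}
```

The ack-after-processing pattern is what provides the redelivery guarantee mentioned above: if a consumer crashes before acknowledging, the broker requeues the message for another worker.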
If one of your packages on OEX receives a review, you get notified by OEX only for YOUR own packages. The rating reflects the reviewer's experience with the package in the state they found it at the time of the review. It is a kind of snapshot and might have changed since. Reviews by other members of the community are marked by * in the last column.
In a project I'm working on, we need to store some arbitrary XML in the database. This XML does not have any corresponding class in IRIS; we just need to store it as a string (it's relatively small and fits in a string). Since there are MANY (millions!) of records in the database, I decided to reduce the size as much as possible without compressing. I know that some of the XML to be stored is indented and some is not; it varies.
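One way to handle the indented documents before saving (a sketch of mine, not the project's actual code; it assumes whitespace between elements is insignificant for your XML, which is not true for mixed content):

```objectscript
/// Strip indentation between XML elements before storage.
/// Assumes inter-element whitespace carries no meaning for these documents.
ClassMethod MinifyXML(xml As %String) As %String
{
    // ICU regex: collapse any whitespace run between '>' and '<',
    // e.g. "</a>\n  <b>" becomes "</a><b>"
    Set matcher = ##class(%Regex.Matcher).%New(">\s+<", xml)
    Quit matcher.ReplaceAll("><")
}
```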
One of the challenges of creating a DICOM message is putting the data in the correct place. Part of it is inserting data into the specific DICOM tags, while the other part is inserting binary data, such as a picture. In this article I will explain both.
To create a DICOM message, you can either use the EnsLib.DICOM.File class (to create a DICOM file) or the EnsLib.DICOM.Document class (to create a message that can be sent to PACS directly). In either case, the SetValueAt method will allow you to add your data to the DICOM tags.
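For instance, a sketch of both cases (the tag paths are standard DICOM attributes, but the document variable, values, and file path are placeholders of mine): scalar values go straight into DataSet tags, while binary data such as pixels can, as far as I know, be supplied as a stream:

```objectscript
// Assume tDoc is an EnsLib.DICOM.Document obtained earlier in the production.
// Scalar values go into DataSet tags by name:
Set tSC = tDoc.SetValueAt("DOE^JOHN", "DataSet.PatientName")
If $$$ISERR(tSC) Quit tSC
Set tSC = tDoc.SetValueAt("12345", "DataSet.PatientID")
If $$$ISERR(tSC) Quit tSC

// Binary data (e.g. image pixels) supplied from a binary stream:
Set tStream = ##class(%Stream.FileBinary).%New()
Set tSC = tStream.LinkToFile("C:\images\picture.raw")  // placeholder path
If $$$ISERR(tSC) Quit tSC
Set tSC = tDoc.SetValueAt(tStream, "DataSet.PixelData")
```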
Hello, we are currently using the Caché database released in January 2018.
There are no specific classes or SQL mappings in the namespaces where data is stored.
Only a few routines (.mac) and stored data are in the global area.
In this case, when accessing the Caché DB through Java, is it essential to write SQL mappings or wrapper classes? The reason I ask is that our team doesn't have advanced developers who specialize in Caché.
Alternatively, is it possible to directly call pre-existing routines from Java?
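On the routines question: I believe you don't strictly need wrapper classes just to invoke existing routines. On InterSystems IRIS, the Native SDK for Java can call routine labels directly over a JDBC connection; below is a sketch under that assumption (host, namespace, credentials, and the routine/label names are all placeholders). On Caché 2018 the equivalent mechanism is the older Caché Java binding, whose API differs:

```java
import com.intersystems.jdbc.IRIS;
import com.intersystems.jdbc.IRISConnection;

import java.sql.DriverManager;

public class CallRoutine {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details
        IRISConnection conn = (IRISConnection) DriverManager.getConnection(
                "jdbc:IRIS://localhost:1972/USER", "_SYSTEM", "SYS");
        IRIS iris = IRIS.createIRIS(conn);

        // Equivalent of: write $$MyFunc^MyRoutine("input")
        String result = iris.functionString("MyFunc", "MyRoutine", "input");
        System.out.println(result);

        // Equivalent of: do MyLabel^MyRoutine("input")
        iris.procedure("MyLabel", "MyRoutine", "input");

        iris.close();
        conn.close();
    }
}
```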
Here at InterSystems, we often deal with massive datasets of structured data. It's not uncommon to see customers with tables spanning >100 fields and >1 billion rows, each table totaling hundreds of GB of data. Now imagine joining two or three of these tables together, with a schema that wasn't optimized for this specific use case. Just for fun, let's say you have 10 years' worth of EMR data from 20 different hospitals across your state, and you've been tasked with finding….