
Question
· Oct 22

Using the Workflow Engine to capture Data Quality Issues

I am looking for a way to capture Data Quality issues with the source data that is populating HealthShare Provider Directory. One way is to use Managed Alerts, but since there could be multiple Providers and different messages, it seems silly to alert on every message that has the error. Instead, I was thinking of using the Workflow Engine so it could populate a worklist for someone to review and work.

Looking over the Demo.Workflow engine example, I am not seeing how to send a task to the Workflow manager to populate the worklist from a DTL.

Does anyone have any examples on how this could be done? Do I need to do it in a Process or can it be sent via a DTL?
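
For illustration, here is roughly what I picture, assuming the task has to be sent from a business process rather than from the DTL itself. EnsLib.Workflow.TaskRequest is the standard workflow task message; the process class and the "DQReviewers" operation name are hypothetical:

Class Demo.DataQuality.Process Extends Ens.BusinessProcess
{

Method OnRequest(pRequest As Ens.Request, Output pResponse As Ens.Response) As %Status
{
    // Build a workflow task describing the data quality problem
    Set task = ##class(EnsLib.Workflow.TaskRequest).%New()
    Set task.%Subject = "Provider Directory data quality issue"
    Set task.%Message = "Source record failed validation; please review."
    Set task.%Actions = "Corrected,Ignored"

    // "DQReviewers" would be an EnsLib.Workflow.Operation in the production;
    // its configured name is the workflow role whose worklist gets the task
    Quit ..SendRequestAsync("DQReviewers", task, 0)
}

}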

Question
· Oct 22

Help in Creating an Index in a DTL Based on a Variable Value

Hello. I need to transform a message

FROM:

MSH|^~\&|
SCH||61490||
PID|1||
RGS|1||1
AIS|1||
AIS|2||
AIS|3||
AIL|1||
AIP|1||

TO:

MSH|^~\&|
SCH||61490||
PID|1||
RGS|1||1
AIS|1||
AIL|1||
AIP|1||
RGS|1||1
AIS|2||
AIL|1||
AIP|1||
RGS|1||1
AIS|3||
AIL|1||
AIP|1||

The RGS, AIS, AIL and AIP segments are all under the RGS group. The one RGS segment that comes in must be copied across the groups: if 3 AIS segments come in I need 3 RGS groups, if 2 I need 2 RGS groups, and so on.

In my DTL (screenshot below) I have currently hardcoded the RGS index (1, 2, 3), but this will not be sufficient in case 4 or more AIS segments are sent in. I need this index to be variable, based on the number of AIS segments that are sent in.

I have seen posts about using an increment and a while loop, and I did try that, but it didn't work. In the DTL above, instead of setting the index to 1, 2, 3, I had set it to "trgs"; "tais" is the count of the AIS segments sent in.

Schema:

<?xml version="1.0"?>
<Category name="TEST_Schema_2.3.1"
          description="Orders scheduling."
          base="2.3.1">
    <MessageType name="SIU_S12" structure="SIU_ALL" returntype="base:ACK_S12"/>
    <MessageType name="SIU_S13" structure="SIU_ALL" returntype="base:ACK_S13"/>
    <MessageType name="SIU_S14" structure="SIU_ALL" returntype="base:ACK_S14"/>
    <MessageType name="SIU_S15" structure="SIU_ALL" returntype="base:ACK_S15"/>
    <MessageStructure name="SIU_ALL"
          definition="base:MSH~base:SCH~base:PID~{~base:RGS~base:AIS~base:AIL~base:AIP~}"
          description="Custom structure with repeating RGS group"/>
</Category>

 

Any leads please?
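
For reference, this is the shape I have been attempting, using a foreach whose key doubles as the target group index. The virtual property paths (RGSgrp and the segment names inside it) are my guesses from the schema above and may not match the generated group names exactly:

<foreach property='source.{RGSgrp(1).AIS()}' key='k1'>
    <!-- copy the single incoming RGS into each output group -->
    <assign value='source.{RGSgrp(1).RGS}' property='target.{RGSgrp(k1).RGS}' action='set' />
    <assign value='source.{RGSgrp(1).AIS(k1)}' property='target.{RGSgrp(k1).AIS}' action='set' />
    <assign value='source.{RGSgrp(1).AIL(1)}' property='target.{RGSgrp(k1).AIL}' action='set' />
    <assign value='source.{RGSgrp(1).AIP(1)}' property='target.{RGSgrp(k1).AIP}' action='set' />
</foreach>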

Article
· Oct 22 2m read

Tips on Handling Large Data

Hello community,

I wanted to share my experience of working on large-data projects. Over the years, I have had the opportunity to handle massive patient data, payor data and transactional logs while working in the hospital industry. I have had the chance to build huge reports that had to be written with advanced logic, fetching data across multiple tables whose indexing was not helping me write efficient code.

Here is what I have learned about managing large data efficiently.

Choosing the Right Data Access Method

As we are all aware here in the community, IRIS provides multiple ways to access data. Choosing the right method depends on the requirement.

  • Direct Global Access: Fastest for bulk read/write operations. For example, if I have to traverse indexes and fetch patient data, I can loop through the globals to process millions of records. This will save a lot of time.
; Walk the date index from yesterday up to today, then each patient under each date
Set ToDate=+$H
Set FromDate=+$H-1
For  Set FromDate=$Order(^PatientD("Date",FromDate)) Quit:(FromDate="")!(FromDate>ToDate)  Do
. Set PatId="" For  Set PatId=$Order(^PatientD("Date",FromDate,PatId)) Quit:PatId=""  Do
. . Write $Get(^PatientD("Date",FromDate,PatId)),!
  • Using SQL: Useful for reporting or analytical requirements, though slower for huge data sets (see the sketch below).
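
For the SQL route, a dynamic-statement sketch (the table and column names are hypothetical):

// Dynamic SQL: convenient for reports, slower than direct global access for bulk work
Set sql = "SELECT TOP 100 ID, Name FROM My_App.Patient WHERE AdmitDate = CURRENT_DATE"
Set rs = ##class(%SQL.Statement).%ExecDirect(, sql)
While rs.%Next() {
    Write rs.%Get("ID"), ": ", rs.%Get("Name"), !
}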

Streamlining Bulk Operations

Handling millions of records one by one is slow and heavy. To optimize, I have found that saving in batches, using temporary globals for intermediate steps and breaking large jobs into smaller chunks make a huge difference; see the sketch below. Turning off non-essential indices during bulk inserts will also speed things up.
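
A rough sketch of the batching idea (the My.PatientLoad class and the ^||staging global are hypothetical, and the batch size of 5,000 is arbitrary):

// Load rows from a process-private staging global in committed batches
Set batch = 0
TSTART
Set id = ""
For {
    Set id = $Order(^||staging(id), 1, data)
    Quit:id=""
    Set obj = ##class(My.PatientLoad).%New()   // hypothetical persistent class
    Set obj.Payload = data
    Set sc = obj.%Save()
    Quit:$System.Status.IsError(sc)            // bail out; inspect sc afterwards
    Set batch = batch + 1
    If batch#5000=0 { TCOMMIT  TSTART }        // commit every 5,000 saves
}
TCOMMIT
// For very large loads, the class's %SortBegin()/%SortEnd() can defer index filling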

Using Streams

For large text, XML or JSON payloads, stream objects prevent memory overload. Dealing with huge files can consume a lot of memory if we load everything at once. I prefer stream objects to read or write the data in chunks. This keeps things fast and efficient.

// Link to the file on disk, then copy it into a global stream
Set file = ##class(%Stream.FileCharacter).%New()
Do file.LinkToFile("C:\Desktop\HUGEDATA.json")
Set stream = ##class(%Stream.GlobalCharacter).%New()
Do stream.CopyFrom(file)
Write "Size: "_stream.Size,!

This is a simple way of handling data safely without slowing down the system.
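
And reading in chunks rather than all at once might look like this (file path reused from above; the 32,000-character chunk size is arbitrary):

Set stream = ##class(%Stream.FileCharacter).%New()
Do stream.LinkToFile("C:\Desktop\HUGEDATA.json")
While 'stream.AtEnd {
    Set chunk = stream.Read(32000)   // read up to 32,000 characters at a time
    // ... process chunk here ...
}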

So: handling huge data is not just about making things fast; it's about choosing the right way to access and store it, and keeping the system balanced.

From migrating millions of patient records to building APIs that handle very large datasets, these approaches have made a real difference in performance and maintainability.

If you are working with similar concepts and want to swap ideas, please feel free to reach out; I am always happy to share what has worked for me. Open to feedback and your opinions too.

 

Thanks!!! :-)

Article
· Oct 22 4m read

Speed up your text searches with %iFind indexes

Hello to all community members!

Many of you no doubt remember the NLP features that were available in IRIS under the name iKnow and were removed not long ago. But has everything been removed? NO! One small village still holds out against the removal: the iFind indexes!
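
To give a flavour of what remains, a minimal sketch of an %iFind index (the class, property and data are illustrative):

Class Demo.Article Extends %Persistent
{

Property Text As %String(MAXLEN = "");

Index TextIdx On (Text) As %iFind.Index.Basic;

}

Once populated, it can be searched from SQL with the %FIND predicate:

SELECT %ID FROM Demo.Article WHERE %ID %FIND search_index(TextIdx, 'village')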

Question
· Oct 21

Routing Rule with a variable target

Hi all,

Recently we were experimenting with having a variable target on a routing rule and noticed some interesting behaviour, code below.

<rule name="My Rule" disabled="false">
<constraint name="docCategory" value="Generic.2.3.1"></constraint>
<when condition="((Document.{MSH:MessageType.triggerevent}=&quot;A43&quot;))">
<send transform="" target="To&quot; _(pContext.Document.GetValueAt(&quot;MSH:SendingFacility&quot;))_&quot;FromServiceTCPOpr"></send>
</when>
</rule>


This code compiles and works. If SendingFacility is "TOM" the message will route to ToTOMFromServiceTCPOpr, and if it is "BOB" it will route to ToBOBFromServiceTCPOpr.

However, if we remove the space between the first &quot; and the first underscore, as below, the rule will not compile, giving this error:

<send transform="" target="To&quot;_(pContext.Document.GetValueAt(&quot;MSH:SendingFacility&quot;))_&quot;FromServiceTCPOpr"></send>

 

Invalid target name: To"_(pContext.Document.GetValueAt("MSH:SendingFacility"))_"FromServiceTCPOpr. Target name should not contain "_ and _"

Is this an error with the compiler or an issue with the rule (or neither)?

Thanks.
