Question
· Jul 8, 2022

Bulk Testing Business Process (Rules)

We are upgrading from Health Connect 2018.1.3 to IRIS Health Connect 2022.1, and one thing we are particularly hesitant about is whether our Business Rules will work in the new version.

I am trying to come up with a process for bulk testing our rules, and wanted to know if this could be done programmatically instead of having to modify all the Business Operations to write the HL7 data to a file. I caught Orlando Health's presentation at GS2022, but I am not sure that approach will work for my team.

We already capture all the raw data from the Business Services, so I was looking to write a process that runs those messages through the Business Rules and writes the output of the Business Rules to files, so I can then compare how our Production results look vs. how the Test results look.

So this is what I am thinking:

Service Raw Data file --> Business Rule --> Multiple Files based on Business Rule

Would this be possible using Caché ObjectScript?
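
Roughly, I was picturing something like this (an untested sketch: the rule name and directories are placeholders, and I'm assuming an HL7 routing rule can be evaluated outside a running production via Ens.Rule.Definition with a routing-engine context):

// Untested sketch: run captured raw HL7 files through a routing rule and
// write each message to a folder named after the rule's return value.
ClassMethod BulkTestRule(pInDir As %String, pOutDir As %String, pRuleName As %String) As %Status
{
    set tSC=$$$OK
    set tRS=##class(%ResultSet).%New("%File:FileSet")
    do tRS.Execute(pInDir,"*.hl7")
    while tRS.Next() {
        set tFile=tRS.Get("Name")
        // argument order of ImportFromFile may vary by version; check the class reference
        set tMsg=##class(EnsLib.HL7.Message).ImportFromFile(tFile,,,.tSC)
        quit:$$$ISERR(tSC)
        // for HL7 routing rules, the rule context is the routing engine with Document set
        set tContext=##class(EnsLib.HL7.MsgRouter.RoutingEngine).%New()
        set tContext.Document=tMsg
        set tSC=##class(Ens.Rule.Definition).EvaluateRules(pRuleName,,tContext,"",.tReturn,.tReason)
        quit:$$$ISERR(tSC)
        // bucket the message by what the rule returned (send target / transform)
        set tDir=pOutDir_$translate(tReturn,":\/ ","----")_"/"
        do ##class(%File).CreateDirectoryChain(tDir)
        set tStream=##class(%Stream.FileCharacter).%New()
        do tStream.LinkToFile(tDir_##class(%File).GetFilename(tFile))
        set tSC=tMsg.OutputToLibraryStream(tStream)
        quit:$$$ISERR(tSC)
        do tStream.%Save()
    }
    quit tSC
}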

Product version: IRIS 2022.1
Discussion (10)

A few questions about UnitTest, as we have not used it before...

  1. Does your application only test the message you define, as in Example.cls?
  2. What if we want to test more than one ADT message? Can this be modified to use a file instead?
  3. Is it also possible to test the DTL at the same time and send that result to a file based on the Rule within the Business Process?

I just would like to understand the process a bit more to see whether it is valuable for us to use. Like I said before, ideally I would like to automate a way to feed in a file, send it through all the necessary routers (to make sure the rules still work), and collect the output from the DTLs so that I can compare it to our Production server.

If I have the proof, or can report on it, I can use it to show that the testing works and put the rest of my team's minds at ease about moving from 2018.1.3 to 2022.1.
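
For what it's worth, the closest built-in hook I've found for pushing a message through the live routers is the testing service; something like this (a sketch, assuming the production has Testing Enabled, and "ADT_Router" stands in for one of our routing processes):

// Sketch: push a captured HL7 message through the running production's
// routing process using the built-in testing service.
set tMsg=##class(EnsLib.HL7.Message).%OpenId(1218511)
set tSC=##class(EnsLib.Testing.Service).SendTestRequest("ADT_Router",tMsg,.tResponse,.tSessionId,1)
write "Session: ",tSessionId,!  // trace this session to see which rule fired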

Great questions to consider.

1. The single message.
  The principle with the example is a single base message for maximum reuse.
  However, you can define many methods with the prefix "Test", and these automatically get run by the UnitTest runner.
  Within each method you can send one or more variations of the base test message.
  For example, consider RoutingRules with different behaviour for Age and Gender.
  Within the same TestMethod you can implement small updates to the base message to ensure full coverage of your routing rule logic;
  i.e., a TestAgeGender method could send adjusted messages for various age ranges and genders (see the first sketch after this list).
2a. Yes, the automatic correlation of one base message in BeforeAllTests was done as a convenience.
  It is trivial to add a helper method to pull messages from different XData sources as needed. I will add this. Good suggestion.
 b. Yes. I could also implement a much simpler utility that simply reads HL7 files from a directory and outputs a CSV report of:
   * Source filename
   * Routing Result: which rule number was applied
   * Routing Return Value: where it was sent and what transform was employed
   This doesn't need the UnitTest runner or the corresponding web report to be generated.
   I expect you could run the "Test Reporting" class on both the pre-migration and the post-migration platform and compare the output reports.
3. Yes, I have experience with UnitTests for DTLs.
   The particular challenge I find with these is generated fields, like some dates and the Episode Number;
   i.e., you want to compare most things you transform about a message, but wish to exclude certain fields from UnitTest validation as they always have a different / unique value.
   A non-intelligent comparison tool will just say the output files are always different.
   For this I have some other code that analyses all DTLs in a namespace and generates corresponding UnitTest classes with targeted "Compare" methods, generated by parsing the DTL XML.
   This allows you to easily go in after generation and comment out the few specific HL7 fields you wish to exclude from comparison between the expected and the actual HL7 transform result (see the second sketch after this list).
   The reuse here is in adding new XData message pairs, for the source and expected transformed message, to the class definition.
   The UnitTest automatically picks up the new messages to transform and validate, so you could actually employ an analyst rather than a developer to enhance DTL UnitTests for edge cases.
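
As a rough sketch of the pattern in point 1 (illustrative only, not the actual package code; GetBaseMessage, DOBForAge, and ExpectedReason are placeholder helpers, and I'm assuming the HL7 routing rule context class EnsLib.HL7.MsgRouter.RoutingEngine):

// One Test* method exercising several variations of a base message
// against the same routing rule, asserting the routing reason each time.
Method TestAgeGender()
{
    for tAge="2","25","70" {
        for tGender="M","F" {
            set tMsg=..GetBaseMessage()                    // fresh copy of the base XData message
            do tMsg.SetValueAt(tGender,"PID:8")            // vary administrative sex
            do tMsg.SetValueAt(..DOBForAge(tAge),"PID:7")  // vary date of birth
            set tContext=##class(EnsLib.HL7.MsgRouter.RoutingEngine).%New()
            set tContext.Document=tMsg
            set tSC=##class(Ens.Rule.Definition).EvaluateRules("MyRules.ADTRouter",,tContext,"",.tReturn,.tReason)
            do $$$AssertStatusOK(tSC,"Rule evaluated for "_tAge_"/"_tGender)
            do $$$AssertEquals(tReason,..ExpectedReason(tAge,tGender),"Routing reason for "_tAge_"/"_tGender)
        }
    }
}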
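
And for point 3, a simplified sketch of the shape of a generated Compare method (field paths are just examples):

// One assert per transformed field, with volatile fields commented out
// so the comparison ignores values that differ on every run.
Method CompareADT(pExpected As EnsLib.HL7.Message, pActual As EnsLib.HL7.Message)
{
    do $$$AssertEquals(pActual.GetValueAt("PID:3(1).1"),pExpected.GetValueAt("PID:3(1).1"),"Patient identifier")
    do $$$AssertEquals(pActual.GetValueAt("PID:5(1).1"),pExpected.GetValueAt("PID:5(1).1"),"Family name")
    // Excluded: these are regenerated on every transform, so always differ
    //do $$$AssertEquals(pActual.GetValueAt("MSH:7"),pExpected.GetValueAt("MSH:7"),"Message date/time")
    //do $$$AssertEquals(pActual.GetValueAt("MSH:10"),pExpected.GetValueAt("MSH:10"),"Message control ID")
}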

The main value of UnitTests is for continuous integration and / or preventing future changes from breaking code behaviour in unexpected ways.
It also allows you to validate parts of a production in isolation.
It automates regression tests of code.
It can save time on rework. For example, I had a routing rule for routing and filtering messages based on their SNOMED classifications. The rules were quite complex and would take hours to test manually.
However, with one Routing Rule UnitTest I could add an enhancement and retest in minutes, with the confidence that previous behaviour was preserved.
There is a judgement call on which approach brings the most immediate value, but in the longer term, incrementally adding UnitTests can enforce more robust and controlled transformation and routing behaviour.

Alex, can you explain a bit more about how

>set tSC=##class(UnitTest.Test.DTL.TestTrans.TransformSource2).AddTestFromMessageBody("EnsLib.HL7.Message",1218511,1,.sourceXdataName,.targetXdataName)
 

is supposed to work? I tried the call above with the 1218515 message ID and got an error.

DEVCLIN>set tSC=##class(UnitTest.Test.DTL.TestTrans.TransformSource2).AddTestFromMessageBody("EnsLib.HL7.Message",1218515,1,.sourceXdataName,.targetXdataName)

SET tSC=##CLASS(UnitTest.Test.DTL.TestTrans.TransformSource2).AddTestFromMessage
^
Body("EnsLib.HL7.Message",1218515,1,.sourceXdataName,.targetXdataName)
<CLASS DOES NOT EXIST> *UnitTest.Test.DTL.TestTrans.TransformSource2

Hi Scott,

Many thanks for reporting this. Yes, this is unintuitive.

I have updated the implementation so:

1. Method GenerateTestCases now has an additional parameter, autocompile, which defaults to yes.

2. Method AddTestFromMessageBody now has an additional parameter, recompile, which defaults to no.

Now when you generate new TestCases, by default they will be auto-compiled and ready to accept newly attached HL7 messages for testing.

For clarity, the message id "1218515" is an EnsLib.HL7.Message RowId. This is the same ID seen in the Management Portal's Integration Messages view as MessageBodyId.
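
As an aside, if it helps to locate candidate body IDs, you can query the HL7 message body table directly (a convenience query, not part of the utility):

// List the five most recent HL7 message body IDs in this namespace
set tRS=##class(%SQL.Statement).%ExecDirect(,"SELECT TOP 5 ID FROM EnsLib_HL7.Message ORDER BY ID DESC")
while tRS.%Next() { write tRS.ID,! }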

To have added messages immediately ready (compiled) for the next test run, you can use:

set recompile=1

set tSC=##class(UnitTest.Test.DTL.TestTrans.TransformSource2).AddTestFromMessageBody("EnsLib.HL7.Message",1218515,1,.sourceXdataName,.targetXdataName,recompile)

The class "UnitTest.Test.DTL.TestTrans.TransformSource2" was generated from an an HL7 DTL class "UnitTest.DTL.TestTrans.TransformSource2" in my test environment.

I have also added a reference test class, UnitTest.DTL.TestTrans.TransformSource2.xml, to GitHub.

# Example Generate from reference class:
Do ##class(UnitTest.DTL.HL7TestCase).GenerateTestCases("UnitTest.DTL.TestTrans.TransformSource2", "UnitTest.DTL.Test2.TestTrans.", 0 , , 0,1, 1, .pStatus)

Skipping class UnitTest.DTL.TestTrans.seg.MSH not matched
Skipping class UnitTest.DTL.TestTrans.seg.PID not matched
Created TestCase: UnitTest.DTL.Test2.TestTrans.TransformSource2
Compilation started on 08/18/2022 19:56:23 with qualifiers 'ckf'
Compiling class UnitTest.DTL.Test2.TestTrans.TransformSource2
Compiling routine UnitTest.DTL.Test2.TestTrans.TransformSource2.1
Compilation finished successfully in 0.031s.

# Add a new test message pair from an existing HL7 message with rowid 189
set tSC=##class(UnitTest.DTL.Test2.TestTrans.TransformSource2).AddTestFromMessageBody("EnsLib.HL7.Message",189,1,.sourceXdataName,.targetXdataName,1)

# Run UnitTest
do ##class(UnitTest.DTL.Test2.TestTrans.TransformSource2).Debug()

# There will be one intentional error, as the original template for manually adding messages is still present, but it does demonstrate attempting to process both pairs of message blocks.
# Perhaps this placeholder template should be suppressible when generating new TestCases, if messages are always added programmatically.

XData TESTMessageSource
{
<test><![CDATA[
<!-- Your Source HL7 Message or Segment content goes here -->
]]></test>
}

XData TESTMessageTarget
{
<test><![CDATA[
<!-- Your Expected output for HL7 Message or Segment content goes here -->
]]></test>
}

Thank you for trying out the utility.

I'm a QA Engineer; we send HL7 messages through Ensemble to test them against the business rules and validate that the data in the messages is correct. Is this a tool I could utilize to try to automate some of this testing? Going into production, viewing messages, and then trying to find a message to use as an example, or altering one, is very tedious. I'm trying to figure out how to create some automated testing through an API, and/or using Caristix and Ensemble. Any info from either of you on your experience would be great.

Yes, these three tools are all for test automation and are shared on Open Exchange:

1. For stressing Routing Rules: https://openexchange.intersystems.com/package/UnitTest-RuleSet
Each UnitTest holds a common base message.
Each UnitTest method modifies the base message, sends it to the router, and ASSERTs that the routing reason was as expected.
This was useful on a project where there were over 100 different variations of a message to validate for correct routing.
After adding an additional routing variation, it was quick (about 2 minutes) to automatically retest that previous behavior had not broken; it would have taken a day of manual testing otherwise.

2. For validating DTL output (originally for HL7 v2):
https://openexchange.intersystems.com/package/UnitTest-DTL-HL7
For productivity it:
* Scans the namespace for existing DTLs, filtered by a given search path.
* Generates a corresponding DTL UnitTest class for each source DTL.
* Generates a reusable compare method, which lets you comment out asserts to ignore volatile content like generated identifiers and dates. It also has some HL7-oriented UnitTest Assert extensions for reuse.
* Provides a method for automatically attaching an existing HL7 message to a generated UnitTest class as the base message.
This isolates / focuses the testing of DTL behavior from the rest of end-to-end testing.

3. BulkProfile is a newer tool variation, built for Scott's requirement of comparing the routing of the same source messages between different software versions.
https://openexchange.intersystems.com/package/BulkProfile-HL7RoutingRules
It provides confirmation that a platform version migration did not affect existing routing behavior, with zero code expected.