I have just published my third and final Tech Article. This article describes how you can develop a single Interface Code Base and then, using Package and Global Mappings, generate a multitude of Interfaces that either have data PUSHED into them or PULL data from a Source Application or Database and, in my example, transform that data into either HL7 or FHIR messages. I then use HTTP to send the message to a Target Server.

I have attached the article as a PDF which might make it a bit easier to read and I will be releasing the actual software on OEX in the near future.

Hi

I have had a quick look at the SS_User table in L2016. The problem with the SS_User table is that it tends to be customised to the needs of the customer, and in my version I have no column "OtherUserLoc". I checked the class to see if it exists in the class definition but is not projected to SQL, but it is not in the class either.

The SS_User table is not a child table. It does have some child Relationships. The Episode and Test Set Tables will reference the SS_User table for the user who registers the Episode and the Laboratory Technician who performs the Tests. They will point to the SS_User Table like this:

Property EPVISUserIDDR As SSUser [ SqlColumnNumber = 31, SqlFieldName = EPVIS_UserID_DR ];

Let me investigate and I will come back to you. As a matter of interest, what exactly do you mean by TrakCare - Components - Items?

Do you mean Episode -> TestSet -> TestItem?

Nigel

That is the most likely cause. Check the Message Type field in the MSH segment. It should show you the Message Code, the Trigger Event and the Message Structure. The Message Code would be ADT, the Trigger Event A31 and the base Message Structure, which in this example is ADT_A05.

Another approach is to capture the contents of the message, paste it into Notepad and save it as a .txt file. Then, from Ensemble -> Interoperate -> HL7 V2.x -> HL7 Message Viewer, select File from the Document Source drop-down, locate your file, ignore the Document Number in File field, then select "Use Content Declared Version Name". Your HL7 message should be displayed and, provided it is structurally sound, the fields will all be in blue and hyperlinked; if you hover over any field you will get its name displayed.

Hi

I started writing a response to your questions yesterday afternoon and I wrote a lengthy explanation using LabTrak as the basis for my response. I wrote about 400 lines of explanation and then my laptop rebooted to install some updates and I lost everything I wrote. So I am going to answer but I am going to abbreviate the explanation as I don't have enough time to rewrite the whole document I wrote yesterday.

1) Different database Storage Models

  • DefaultStorage
    • Is the Storage model used when you create a new Class/Table in Ensemble/IRIS.
    • Stores data in a Global whose name ends with a "D". It stores Indices for the Class/Table in a Global ending with an "I" and puts Stream data into a Global where the Name ends with an "S"
    • Data is stored in a $list structure. $list prefixes each field with a couple of bytes that record how long the field is and the base datatype of the field: String, Number, Boolean or Bit.
    • By default, the RowId is a Sequential Contiguous Integer starting from 1. The RowId is referenced by the column named "ID"
    • The RowId can be overridden by creating an Index on a property of the class and assigning it the attributes [Unique, PrimaryKey, IDKey]. PrimaryKey says that the property value is the RowId for each row in the Table. IDKey says that the property Value is the Object.%Id() and is the Object Identifier. Unique is implicit in that you cannot have two different records with the same RowId value.
    • The Global reference for the Data Global looks like this:
    • ^GlobalD(RowId)=$lb(Property1,Property2,...,PropertyN)
    • An index on Property X will generate a Global Reference: ^GlobalI("{IndexName}",{Property_Value},{RowId_Value})=""
    • The RowId, irrespective of whether it is system generated or based on the value of a specific property or properties, is the "ID" in an SQL statement
  • SQLStorage
    • This model was implemented to allow InterSystems customers who had developed applications that used direct Global Access to create a Mapping of the Global Structures in such a way that the data in the global can be exposed as Class Objects or Table Rows.
    • The RowId can be referenced as "ID" in an SQL statement. The RowId has to be generated by the application even if it is a sequential integer, though many applications, including LabTrak, typically subscript their application globals with a data value generated by the application. For example, the Episode Number in LabTrak is based on the 2-character Site Code followed by a string of numerics, e.g. AA1345363
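To make the DefaultStorage layout described above concrete, here is a minimal sketch; the class, property and index names are hypothetical, and the exact node layout depends on the generated storage definition:

```objectscript
/// Hypothetical class using the DefaultStorage model
Class Demo.Person Extends %Persistent
{

Property Name As %String;

Property DOB As %Date;

Index NameIdx On Name;

}
```

After "set p=##class(Demo.Person).%New(), p.Name="Smith" do p.%Save()", the data lands in the "D" global as ^Demo.PersonD(1)=$lb("","Smith",...) and the index in the "I" global as ^Demo.PersonI("NameIdx"," SMITH",1)="" (the default SQLUPPER collation uppercases the value and prepends a space).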

2) Links between Tables

  • A Designative Reference is the term we use for a Property/Column whose value is the RowId of a row in another Class/Table. The most common examples are code tables. If you have a code table CT_Country where the Country_Code is the RowId of the table, then a column called CountryDR on the table EP_VisitNumber (a Trak Episode) is linked to a row in CT_Country.
  • The CT_Country global will look like this: ^User.CTCountryD({Country_Code})="{Country_Name}\{Country_Currency}\{Country_Telephone_Code}", and an example would be: ^User.CTCountryD("ZA")="South Africa\ZAR\+27"
  • The Episode record will look like this: ^TEPI({Episode_Number})="Field1\Field2\...\ZA\...\FieldN". The SQL query "SELECT Country_Code FROM SQLUser.EP_VisitNumber WHERE EP_VisitNumber='AA12347690'" will display ZA in the Country_Code column. You can use the "->" (points into) construct supported in Ensemble/IRIS SQL to access the Country_Name without having to specify the CT_Country table name in the FROM clause or add "WHERE EP_VisitNumber.Country_Code=CT_Country.ID" to the WHERE clause.
  • The SELECT statement looks like this: "SELECT EP_VisitNumber, EP_VisitSurname, Country_Code->Country_Name FROM SQLUser.EP_VisitNumber"
  • Designative References do not support Referential Integrity. That means you could delete the "ZA" row in CT_Country even though there may be thousands of Episodes that point to that row. In order to enforce Referential Integrity, you declare a ForeignKey definition on CountryDR in the Class Definition. Ensemble will generate an index of Country Codes and Episode Numbers, and if there are any entries in this index for the Country Code "ZA", i.e. Episodes linked to the code "ZA", then an error status is generated indicating that the ForeignKey constraint prevents the delete of the "ZA" row in CT_Country, as there are rows in other tables that reference that row.
  • LabTrak doesn't use ForeignKeys (generally speaking), the application does, however, prevent you from Deleting rows in Code Tables if those Codes are referenced by other LabTrak Tables.
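The arrow syntax can also be used from ObjectScript via dynamic SQL. A quick sketch, reusing the hypothetical CT_Country example above (the table and column names are illustrative, not real LabTrak definitions):

```objectscript
// Dynamic SQL using the "points into" arrow syntax
set sql = "SELECT EP_VisitNumber, Country_Code->Country_Name AS CountryName "_
          "FROM SQLUser.EP_VisitNumber WHERE EP_VisitNumber = ?"
set rs = ##class(%SQL.Statement).%ExecDirect(, sql, "AA12347690")
while rs.%Next() {
    write rs.%Get("EP_VisitNumber"), ": ", rs.%Get("CountryName"), !
}
```

Behind the scenes the SQL engine performs the same implicit left outer join against CT_Country that you would otherwise have to write out by hand.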

3) Relationships

There are two types of relationships most commonly used in Ensemble/IRIS Classes/Tables: Parent-Child and One-Many. Many-Many relationships can be defined. Refer to the documentation on Relationships to see how Many-Many relationships are defined. I can't think of any examples of Many-Many relationships in LabTrak so instead, I will just focus on the two main relationship Types.

Parent-Child Relationships

If you look at the TEPI global in LabTrak, which is the global that stores Episodes, you will see something like this:

^TEPI({Episode_Number})="{Episode Fields}"

^TEPI({Episode_Number},1,{Test_Set_Code},{Test_Set_Counter})="{Test_Set_Fields}"

^TEPI({Episode_Number},1,{Test_Set_Code},{Test_Set_Counter},"DATA",{Test_Item_Code})="{Test_Item_Fields}"

for example:

^TEPI("AA00000401")="SALM\NIGEL\M\44453\\\S2\61839\646\61838\480\61839\0DR\\R\\\1039444\\41C028W000\\\\\40\\\F\\fred.smith\\61839\646\N\\AA\,690425\\\\\\N\0\\\\N\H\38768\\\\41C028W000\\\\\\\\\\\\\\\Z03.9\52ZETH000003" - This is the Episode Record

^TEPI("AA00000401",1,"H065",1)="61842\594\jane.doe\61842\597\jane.doe\\N\\\\\\\\\\\\\61839\646\\\N\\\Y\\\A\~~~\ \N\\fred.smith\\\\\\N\\61838\28800\\\1\\\\\Y\\\\61839\38760\\\N\\\\\\\N\61843\8122\\\P\Z03.9\" - This is a Test Set Record within the Episode

^TEPI("AA00000401",1,"H065",1,"DATA","H0440")="4.14\\\\\AAH02\\\\\\\\\" - This is a Test Item Record within the Test Set.

As you can see, this global is a multidimensional array with many subscript levels. The ability to create arrays like this is one of the features that differentiates Caché/Ensemble/IRIS from other database technologies. It is one of the reasons why Ensemble/IRIS creates databases that are, on average, half the size of other databases on a like-for-like basis. Partly this is because the data records do not use fixed-length fields: by using delimiters or $List we do not need to pad out field values with leading zeroes, trailing spaces or NULL characters. Secondly, storing the Test Sets within the Episode means that when Ensemble/IRIS loads the data page that contains an Episode into memory, it brings the Test Sets and Test Items with it. When you walk through the Test Sets, the Test Set records are already in memory; we haven't had to go to another data structure in the database to fetch the Test Set records from a Test Set table, as you would need to do in other relational database technologies.

If you delete an Episode the Test Sets and Test Items are deleted as well. You do not need to delete the Test Items first, then delete the Test Sets and then, finally, delete the Episode.

Nor do you need to maintain Indices to link Test Items to Test Sets or Test Sets to Episode.

In LabTrak the property in the Parent table that references the child rows in the Child table is prefixed with the word "Child", and the corresponding property in the Child table that points to the row in the Parent table is suffixed with "ParRef".

One-Many Relationships

In the One to Many relationship, there will be two tables, a Row in the "One" table is linked to Many rows in the "Many" table. From an Objects point of view, there is a Property in the "One" class that points to the related rows in the "Many" table. In LabTrak the property name is prefixed with the word "child". There is a corresponding property in the "Many" Table and that property name is suffixed with the word "ParRef".
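In class terms, both relationship types are declared with the Relationship keyword. A minimal sketch (class names are hypothetical, not the actual LabTrak definitions):

```objectscript
/// Parent side of a Parent-Child relationship
Class Demo.Episode Extends %Persistent
{

Relationship ChildTestSets As Demo.TestSet [ Cardinality = children, Inverse = ParRef ];

}

/// Child side: rows are stored under the parent's subscripts in the parent's global
Class Demo.TestSet Extends %Persistent
{

Relationship ParRef As Demo.Episode [ Cardinality = parent, Inverse = ChildTestSets ];

}
```

For a One-Many relationship the Cardinality keywords become many and one instead of children and parent, and the "Many" rows are stored in their own global with their own RowIds rather than under the parent's subscripts.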

Your Question

When I read your question and I looked at the Schema Diagram I was a bit confused as there is no SQLUser.CT_Loc table in LabTrak. There is however a table called SQLUser.CT_UserLocation.

If you use the Management Portal -> System Explorer -> SQL in the LabTrak namespace and select the SQLUser schema in the Schema drop-down list on the left of the SQL form, you will be presented with a list of all the tables in the SQLUser schema. If you scroll down, find the table "CT_UserLocation" and drag it into the "Execute SQL" text area on the form, it will display the following SQL statement:

SELECT 
CTUSL_RowId, CTUSL_Code, CTUSL_Desc, CTUSL_Destination_DR, CTUSL_UniqueSite, CTUSL_MoveDirectory, CTUSL_AccreditationNumber, CTUSL_WebReportFooter, CTUSL_DisplaySequence, CTUSL_RegionCode, CTUSL_Address1, CTUSL_Address2, CTUSL_Address3_Suburb, CTUSL_Address4_State_DR, CTUSL_Address5_PostCode, CTUSL_Phone, CTUSL_Fax, CTUSL_email, CTUSL_StructuredCode, CTUSL_StructuredCodeLength, CTUSL_StructuredCodeLevel, CTUSL_Address, CTUSL_DocCourierRun_DR, CTUSL_LocCourierRun_DR, CTUSL_ActiveFlag, CTUSL_DefaultNID, CTUSL_UnixMoveDirectory, CTUSL_PFforBatchEntry
FROM SQLUser.CT_UserLocation

Note the Columns  CTUSL_Address4_State_DR, CTUSL_DocCourierRun_DR, CTUSL_LocCourierRun_DR. These are all Designative References to rows in other Tables.

The EP_VisitTestSet table contains the following Columns:

SELECT 
VISTS_ParRef, VISTS_RowId, VISTS_TestSetCounter, VISTS_TestSet_DR, VISTS_DateOfEntry, VISTS_TimeOfEntry, VISTS_UserEntered_DR, VISTS_DateOfAuthorisation, VISTS_TimeOfAuthorisation, VISTS_UserAuthorised_DR, VISTS_PathologistID_DR, VISTS_ExcludeFromCCR, VISTS_Rule3Exempt_Sequence, VISTS_Priority_DR, VISTS_Rule3Exempt_Max, VISTS_TherapeutDosage, VISTS_TimeOfDosage, VISTS_24HUVolume, VISTS_24HUTimePeriod, VISTS_BB_TransfEvents_DR, VISTS_Rule3Exempt_Date, VISTS_DateOfPathologistAtt, VISTS_TimeOfPathologistAtt, VISTS_StaffNotes, VISTS_DateOfCreation, VISTS_TimeOfCreation, VISTS_StandardLettersChecked, VISTS_BB_DateRequired, VISTS_ExcludeFromPatientMean, VISTS_UserSite_DR, VISTS_Machine, VISTS_Printed, VISTS_SuperSet_DR, VISTS_StatusResult, VISTS_DFT_TimeOfFirstCollection, VISTS_HISTO_Extra, VISTS_HISTO_BillingItem, VISTS_SupressBilling, VISTS_HospitalRefNumber, VISTS_UserCreated_DR, VISTS_MoveToReferralLab_DR, VISTS_MoveToUserSite_DR, VISTS_DFT_DR, VISTS_DFT_Position, VISTS_DFT_DateOfFirstCollection, VISTS_StatusEntry, VISTS_Confidential, VISTS_SpecimenType_DR, VISTS_SpecimenNo, VISTS_DateOfCollection, VISTS_TimeOfCollection, VISTS_CollectedBy_DR, VISTS_DisplaySequence, VISTS_SupressReason, VISTS_UserSupress_DR, VISTS_DateOfSupressBilling, VISTS_Rule3Exempt_Comments, VISTS_AnatomicalSite_DR, VISTS_Reason_DR, VISTS_DateOfReason, VISTS_TimeOfReason, VISTS_UserReason_DR, VISTS_DateOfReceive, VISTS_TimeOfReceive, VISTS_PaymentCode_DR, VISTS_SpecimenGroup_DR, VISTS_Document_DR, VISTS_BB_Neonatal, VISTS_RR_Date, VISTS_RR_Time, VISTS_RR_User_DR, VISTS_RV_Date, VISTS_RV_Time, VISTS_RV_User_DR, VISTS_BB_DoNotFile, VISTS_DateOfLastChange, VISTS_TimeOfLastChange, VISTS_UserOfLastChange_DR, VISTS_DateOfSentSTM, VISTS_TimeOfSentSTM, VISTS_DateOfReceivedSTM, VISTS_TimeOfReceivedSTM, VISTS_MovementStatus, VISTS_AddedByAction, VISTS_PricingStatus, VISTS_ICD10List, VISTS_LongTerm, VISTS_RequestBy_DR, VISTS_LongTermReason, VISTS_PairedSeraQueue, VISTS_DoctorDR, VISTS_BB_TimeRequired, 
VISTS_BB_Tags, VISTS_DTOfResultChange, VISTS_DateOfFirstAuthorisation, VISTS_TimeOfFirstAuthorisation, VISTS_AnatomicalSiteFT, VISTS_ReasonReportedTo, VISTS_ReasonTelephone, VISTS_ReasonObservations, VISTS_ManualPCEntry, VISTS_ReasonClearResults, VISTS_ReasonClearSN, VISTS_ReasonResultsNR
FROM SQLUser.EP_VisitTestSet

If I add a WHERE clause: WHERE VISTS_ParRef %STARTSWITH 'AA', it will return records from the site whose Site Code is "AA".

If you run this query you will see rows of data that look like this

AA00000005 AA00000005||H065||1 1 H065 04/19/2010 1209

The ParRef is the Episode Number. The RowId for the Test Set is AA00000005||H065||1, {Episode}||{TestSetCode}||{TestSetCounter}.

I created an ODBC connection to a LabTrak namespace on our UAT server. I specified the server IP address, the Super Server port number, the namespace and my credentials. I tested the connection, which was successful. The ODBC connection can be used by Excel and other reporting tools to create reports and charts.

I then use an application called DBeaver (available on the Open Exchange site) to create ERD diagrams of the LabTrak classes. DBeaver uses the InterSystems JDBC driver, and you set up the connection in DBeaver itself.


Hi

If you are using the HS FHIR Resource classes in IRIS for Health then you will see that attributes such as Patient.Picture are BinaryStreams. If you are using a BinaryStream you need to convert it into Base64 encoding, which generates a stream of characters that does not include any control characters that might cause a JSON or XML parser to think it has encountered a line terminator. If you look at the FHIR-to-SDA and SDA-to-FHIR data transformations, you will see how they transform the inbound stream from Base64 to binary, and from binary to Base64 encoding for the outbound SDA-to-FHIR JSON or XML.
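A sketch of the outbound direction (binary stream to Base64), reading the stream in chunks whose length is a multiple of 3 so that "=" padding can only appear at the very end; the stream variable names are my own:

```objectscript
// pBinary is an opened binary stream (e.g. %Stream.GlobalBinary) such as Patient.Picture
set pOut = ##class(%Stream.GlobalCharacter).%New()
do pBinary.Rewind()
while 'pBinary.AtEnd {
    set chunk = pBinary.Read(12000)                          // 12000 is a multiple of 3
    do pOut.Write($SYSTEM.Encryption.Base64Encode(chunk, 1)) // 1 = suppress CR/LF insertion
}
// pOut now holds Base64 text that is safe to embed in JSON or XML
```

The inbound direction is the mirror image, using $SYSTEM.Encryption.Base64Decode() on each chunk of the character stream.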

I felt the need to remind us that we were very lucky that InterSystems came along and did away with all those peculiarities. I should add it to the thread on the future of ObjectScript. Imagine if we had to sell that to a customer. Maybe it's because I first learned to write code like that and it is no wonder why I love ObjectScript so much.

I agree with Steve

You can use ^%GOF, ^GBLOCKCOPY, or even Merge ^Global=^[{Old_Namespace_Name}]Global, as long as you know which globals you need and that the globals are not already mapped from some other namespace.

When it comes to your code you have to take into account your classes (.cls), include files (.inc) and routines (.int). Routines are rarely used in new Ensemble/IRIS applications, but there are applications written entirely in routines that use old-style "nested dot" syntax of the form:

TestList xx (xx,TestList)=""
s sort="" f  s sort=$o(^Global($zn,$j,"Literal",dh1,SORT,dh2,Master,report,sort),-1) q:sort=""  d
. s epis="" f  s epis=$o(^Global($zn,$j,"Literal",dh1,SORT,dh2,Master,report,sort,epis)) q:epis=""  d
. . s depseq="" f  s depseq=$o(^Global($zn,$j,"Literal",dh1,SORT,dh2,Master,report,sort,epis,depseq)) q:depseq=""  d
. . . s dep="" f  s dep=$o(^Global($zn,$j,"Literal",dh1,SORT,dh2,Master,report,sort,epis,depseq,dep)) q:dep=""  d
. . . . s sectseq="" f  s sectseq=$o(^Global($zn,$j,"Literal",dh1,SORT,dh2,Master,report,sort,epis,depseq,dep,sectseq)) q:sectseq=""  d
. . . . . s sect="" f  s sect=$o(^Global($zn,$j,"Literal",dh1,SORT,dh2,Master,report,sort,epis,depseq,dep,sectseq,sect)) q:sect=""  d
. . . . . . s tsseq="" f  s tsseq=$o(^Global($zn,$j,"Literal",dh1,SORT,dh2,Master,report,sort,epis,depseq,dep,sectseq,sect,tsseq)) q:tsseq=""  d
 

I kid you not... 

There are .mac routines (the same as .int but with embedded SQL; they compile down to .int), BPLs, DTLs, Business Rules, modified HL7 schemas, CSP pages in /csp/{namespace}, and the corresponding .js and .css files in the /csp/broker/ directory (though not necessarily; it depends on how they are referenced in the HTML header).

So, in my opinion, Steve is correct: either dismount the current database or shut down the instance. (You can tell that the database is not mounted if there is no .lck file in the database directory.) If there is a .lck file and you proceed with copying the .dat database file, there is a good chance that when you try to assign the new .dat to a namespace it will fail to mount. Better to shut down the instance.

It is probably easier to delete the globals from the original database than to delete the logic, as there are so many extensions that you need to check for (as listed above), and if you didn't write the original application it is easy to forget about the .inc, .dtl, .bpl, .hl7, .mac and .int entities (if there are any, of course).

Export the globals you plan to delete from the original database first. Frankly, I would use the Management Portal, as it is easier to check the checkboxes next to the global names you want to delete (and export before you delete) than to type the global names into a terminal utility such as ^%GOF. One typo and you think you exported ^Customer but actually exported ^Kustomer, yet you killed ^Customer; then you realise that ^Customer was actually mapped into the namespace. Now you've deleted it in the source database and you need to restore it, but you don't have an export because of that typo. The Management Portal allows you to exclude system globals and mapped globals and avoid an accidental kill.

It is also a good idea to adopt a meaningful naming convention to indicate the difference between your DEV, QC, UAT and PRD databases and whether the database is for data (DT) or code (RT or RTN). It is also worth considering separating your stream data into a third database using global mapping. The reason for this is that if you have, for example, a Patient class and one of the properties is Picture As %BinaryStream, the binary stream content will be stored in the 'S' global in the Storage Definition. Pictures of your patient may be useful to display in your Patient Registration UI. Still, unless you are exporting that Patient as a FHIR Patient Resource, it is unlikely that you will reference the picture very often. Even with objects, IRIS won't load the Picture into memory (the Global Buffer Pool or your user process) unless it is specifically referenced, i.e. "Patient.Picture". IRIS creates a "stream" directory, though I am not sure what its purpose is.

MyDatabase-DT-PRD (Data, Production)

MyDatabase-RT-PRD (Code, Production)

and, for the Namespace:

MyNamespace-PRD

Nigel

The equivalent for Date/Times is $zdt($h,3), which will give you YYYY-MM-DD HH:MM:SS

If you want greater precision you can use

$zdt($now(),3,,6)

If $now() is 65954,68889.600788 then

$zdt($now(),3,,6) returns 2021-07-29 19:08:09.600788

This can be useful where you have data flowing in where there are many records per second and you want to be more accurate when assigning a Date/Time to the data.

The 6 in $zdt($now(),3,,6) indicates 6 digits of fractional seconds, i.e. microsecond precision
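A small sketch of the difference in precision (the example values are illustrative):

```objectscript
// Two timestamps taken in the same second are only distinguishable
// with the fractional-seconds form:
set ts1 = $zdt($now(),3,,6)
set ts2 = $zdt($now(),3,,6)
write $zdt($h,3),!   // second precision, e.g. 2021-07-29 19:08:09
write ts1,!,ts2,!    // microsecond precision; ts1 and ts2 differ in the final digits
```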

Nigel

Nigel

I would have responded with a similar option:

ClassMethod ABC(ByRef p1 As %String(MAXLEN="")) As %Status

set tSC=$$$OK

try {

     if $g(p1("{some_attribute_name}"))'="" set .....

    set p1("{a_new_node}")="{a_value}"

    }

...

But if you wanted to keep it in proper objects, pass p1 in, ByRef, as a %ListOfDataTypes, %ListOfObjects, %ArrayOfDataTypes or %ArrayOfObjects, depending on whether you are passing in an ordered list or a subscripted array.
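A sketch of the %ArrayOfDataTypes variant (the key and value names are placeholders, as in the example above):

```objectscript
/// Same idea, but with a collection object instead of a raw subscripted array
ClassMethod ABC(ByRef p1 As %ArrayOfDataTypes) As %Status
{
    set tSC = $$$OK
    try {
        // Read an existing entry, if present
        if p1.IsDefined("some_attribute_name") {
            write p1.GetAt("some_attribute_name"), !
        }
        // Add or replace an entry (SetAt takes the value first, then the key)
        do p1.SetAt("a_value", "a_new_node")
    }
    catch ex {
        set tSC = ex.AsStatus()
    }
    quit tSC
}
```

The %ListOf... classes work the same way but are keyed by position (Insert()/GetAt(index)) rather than by subscript name.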

Hi

I have spent the last 4 months developing a generic HL7 and FHIR Codebase. The way it works is as follows:

1) I have a Codebase Namespace that consists of a number of common classes:

a) A Message Queue Service that processes messages in a Message Queue. The Message Queue class has a number of standard properties including: CreateTS, ProcessTS, CompletedTS, Target Message Type (HL7 or FHIR), fields for the generated request HL7 or FHIR message, fields for the response HL7 or FHIR message, fields for the HL7 ACK/NACK code and text, the HTTP response status and the overall message status.

b) There are a number of standard methods for every Message Queue: CreateMessage(), UpdateMessage(), GetNextMessage(), CompleteMessage(), ResendMessage(), ResendMessageRange() and PurgeMessages(). These methods are called by the Business Service that processes the Message Queue and by the Business Process to resend messages or complete a message; the UpdateMessage() method is used to update the fields in the message with data collected as the message passes through the Production.

c) Code tables to convert some application field values into HL7 or FHIR equivalents.

d) The following Business Operations: HL7HTTPOperation, HL7HTTPSOperation, FHIRHTTPOperation, FHIRHTTPSOperation, HL7FileOperation and FHIRFileOperation 

e) A Business Service that calls the GetNextMessage() method and if it finds a message it puts the MessageId into an Ensemble Request Message and sends it to the Business Process.

f) The Business Process uses the MessageId to open the Message Queue object which, apart from the standard fields mentioned in (a), contains any fields derived from the source application that identify the source data you want to transform into an HL7 or FHIR message. In the OnRequest() method of the Process you use the fields in the message to retrieve the data from the data source and then invoke a DTL to transform the data into an HL7 or FHIR message. The resultant message is written to the Message Queue object. Then, depending on how the interface is configured, the Ensemble request message is sent on to any of the Business Operations I listed. The Business Operations do the work of writing the HL7 or FHIR message into the HTTP request message. The HTTP/S Operations then send the HTTP request to the target server. When a response is received, the response status is noted and the Message Queue object updated. If there is content in the response then that too is written into the Message Queue object, and an Ensemble response message is instantiated and sent back to the Business Process.

g) The Business Process OnResponse() method processes the response it receives from the HTTP Operation and updates the Message Queue Object with the overall status of the transaction. It also calls the CompleteMessage() Method which tidies everything up and sets the CompletedTS to the current Date/Time. There are two calculated fields: TimeFromCreateToComplete and TimeFromProcessToComplete. The second one is  based on the time from when the message was picked up by the Business Service through to completion. The first time field records the time it took from when the message was originally created through to completion. 

h) There is a housekeeping Business Service that purges the Message Queue, Ensemble messages, Ensemble log files ($$$TRACE etc.) and the inbound and outbound file directories. The purges are based on configurable "Number of Days to Keep {messages/logs/etc...}" settings.

i) There is a Debug Logger that you can call anywhere in your code to record some debug information. I did this because you can't use $$$TRACE or any of the $$$LOG... calls in non-Production classes. It also encourages developers to use a standard debugging approach with a standard format, rather than having lots of set ^Nigel($i(^Nigel),$zdt($h,3))="The value of variable x is "_$g(x) scattered through the code.

j) There is an Alert Notification System. I have the concept of Error Alert Notifications, such as a comms failure in the HTTP Operation or an exception caught in a Try/Catch. Then there are Condition Notifications. They are intended for a different audience; an example would be when the interface discovers that there is no entry in a code table for a specific code. An email will be sent to the appropriate person/people to update the code table, and if the code isn't there a day later a follow-up email will be sent. The Alert Notification Service also checks the number of items in the Production queues and will send emails to notify someone if the queues are empty (which could mean that something has broken in the source database/application and it is no longer creating messages for the message queue).

k) There is a single configuration object. There is not a single hard-coded value in the entire codebase; every element of the interface gets its context from the configuration object. The configuration object has about 40 properties, including the name of the Message Queue class, the Business Process Production item name and Business Process class name, the inbound and outbound file directory names, and the HL7 or FHIR request and response file names (if you are using the File Operation, which is intended mainly for testing purposes but can also be used if you are using FTP to send a message on to some other system).

l) The Codebase sits in one database along with the code tables and code table mapping classes and their contents.

m) When you want to create a new Interface you create a new Namespace and map the Codebase classes into that Namespace. You also map the code tables and Code table mapping globals into the new Interface Namespace.

n) In the source application database you import one class: the Interface Mapping class. If you have an application or Ensemble interface in your source, then in some appropriate class where you have the details of a Patient or other entity that you want to send into the new interface, you add two lines of code that populate an array of name/value pairs corresponding to the message-queue-specific fields in the Message Queue class created for that interface. As mentioned earlier, the Message Queue class has a number of standard properties and methods. When you create a new interface you create a new Message Queue class, inherit the standard properties and methods, and then add properties for any identifiers that the interface will use to access the data that will be used in the Data Transformation to create your HL7 or FHIR message. The name/value pairs in the array you pass to the Interface Mapping class method correspond to those identifying property names.

o) The alternative is that an Interface will access the source database and walk through a given table and for every entry it finds in that table it will call the Interface Mapping class method to create a new Message Queue Object with the identifier for that item in the table you are processing. This is really useful for Bulk Loads of data from one source into a target destination.

p) There is also a testing module that will take data from any source. You can define test conditions, for example: scramble the data in one of the source table fields, generate random data for any field, insert or remove items from any collection, or remove values from properties. This module uses a combination of the Populate utility methods and a whole host of patient names, addresses and identifiers, and you can write custom methods to manipulate the data from the data source. Tests are conducted on one or more source records which are grouped into a "Manifest". You can tell the Manifest how many records you want to process and where to get the records from. The Manifest records are then processed by the Test Interface, which will run some SQL or object code to get the source data for each Manifest record. It then calls the DTL you have created to convert that source data into an HL7 or FHIR message, runs through the test conditions and applies them to each generated message before the message is sent to the HTTP Operations and, optionally, the File Operations. Details of the request and response messages are captured in the Manifest records. You can also specify documentation on what you expect to happen when the transformed data is processed by the target application you are testing, and once completed there is a manifest report that will generate a .csv file containing the data from every record in the manifest. Having analysed this, you update the manifest with the outcome. If the outcome differs from the expected outcome then you either send the whole manifest back to the developers and tell them to fix the bug, or you look at why the tests produced a different outcome and whether that is due to issues in the source database or application. Tests are defined in a Tests table and are then grouped into a Test Set which is linked to a Manifest, which means that tests can be reused for other test scenarios.
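As a rough sketch of the standard Message Queue shape described in (a) and (b); every name here is mine, not the actual codebase:

```objectscript
/// Hypothetical base class that each interface's Message Queue extends
Class Demo.MessageQueueBase Extends %Persistent
{

Property CreateTS As %TimeStamp [ InitialExpression = {$zdt($now(),3,,6)} ];

Property ProcessTS As %TimeStamp;

Property CompletedTS As %TimeStamp;

Property TargetMessageType As %String(VALUELIST = ",HL7,FHIR");

Property RequestMessage As %Stream.GlobalCharacter;

Property ResponseMessage As %Stream.GlobalCharacter;

Property AckCode As %String;

Property HTTPResponseStatus As %String;

Property OverallStatus As %String;

/// Sketch: hand the oldest unprocessed message to the Business Service
ClassMethod GetNextMessage() As Demo.MessageQueueBase
{
    set id = ""
    &sql(SELECT TOP 1 ID INTO :id FROM Demo.MessageQueueBase
         WHERE ProcessTS IS NULL ORDER BY CreateTS)
    quit:SQLCODE'=0 $$$NULLOREF
    quit ..%OpenId(id)
}

}
```

An interface-specific subclass would add the identifier properties mentioned in (f) and (n), e.g. an Episode Number or Patient ID.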

The entire collection of classes is extensively documented. I have determined that, once I have monitored the first interface that is going into testing in the very near future and I have resolved any bugs that haven't been picked up by the various unit tests I have done so far, then assuming it is stable (lol, I wrote it, how could it not be) it will take 1- days to generate any new interface. I wrote this architecture because I have 14 HL7 and FHIR interfaces to write and they are all almost identical.

The whole framework comes with a few CSP pages that I created to maintain the code tables, code table mappings and the configuration class. I am busy learning Angular at the moment and once I have that under my belt then I will create a REST API that any UI can call so that anyone who uses the Framework can build their own UI on top of it. 

If anyone is interested in this framework please contact me. If there are enough people interested then I will do the work to publish this on Open Exchange.

If anyone wants the framework but doesn't want to do any of the development to make it work in their environment, then I will do the implementation, create the interfaces and whatever else is required on a contract basis.

There are some aspects of this framework that could make better use of some features in Ensemble, such as code tables, uploading CSV files, and the Managed Alert Monitoring System (which I rather like but have not built into this framework, as my Alert Notification Module does pretty much the same thing, though it does not have the "Help Desk" feel of the Ensemble Alert Monitoring System).

If there are other adapters that are required then I can add them into the framework codebase.

Nigel

Hi Ben

I would be very happy to participate. I have a number of Patient HL7 Interfaces and a lot of LabTrak Interfaces as well, so I will need a little bit of hands-on help to get my IRIS for Health 2021 Python Foundation completed and configured, and then I will add an operation into any of the interfaces to send a steady stream of HL7 messages into the FHIR Server.

$$$ASSERT(condition) writes an entry of type Assert if the argument is false. condition is an ObjectScript expression that evaluates to true or false.

$$$LOGASSERT(condition) writes an entry of type Assert for any value of the argument. condition is an ObjectScript expression that evaluates to true or false.

So, in the same way that you would have a $$$LOGERROR() or $$$TRACE() statement in your code, $$$ASSERT and $$$LOGASSERT() will write an Assert entry to the Event Log (stored in ^Ens.Util.LogD); in the case of $$$ASSERT(), only when the condition is false.

The exact code for each is defined in the Ensemble.inc include file:

#define ASSERT(%condition) If '(%condition) { $$$LOGASSERT("Assert Condition Failed: "_##quote(%condition)_$char(13,10)) BREAK }

#define LOGASSERT(%arg) Do ##class(Ens.Util.Log).LogAssert($$$CurrentClass,$$$CurrentMethod,%arg)

Interesting: the $$$ASSERT macro has a BREAK statement in it.

So, if BREAK L/L+/S/S+ is in effect and the task is running in the foreground, then it will automatically break at that point.

If the Production item is not running in the foreground the BREAK will be ignored.

It's basically the same as saying 'If condition $$$LOGINFO("message")'. I suspect it is probably most useful in BPLs, where writing an IF construct is tedious compared with a single $$$... statement in ObjectScript.
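As a quick sketch of how the two macros might be used together (the method signature and segment check here are hypothetical, not taken from any specific production), consider a Business Operation method:

```objectscript
/// Hypothetical Business Operation method illustrating both macros.
/// $$$LOGASSERT always writes an Assert entry to the Event Log;
/// $$$ASSERT writes one (and BREAKs when running in the foreground)
/// only when its condition is false.
Method OnMessage(pRequest As EnsLib.HL7.Message, Output pResponse As Ens.Response) As %Status
{
    // Always logged, whether the condition is true or false
    $$$LOGASSERT($IsObject(pRequest))
    // Logged (and BREAKs in the foreground) only if no MSH segment is found
    $$$ASSERT(pRequest.FindSegment("MSH")'="")
    Quit $$$OK
}
```

Because the BREAK inside $$$ASSERT is ignored in background jobs, it is safe to leave these in production code; they only pause execution when you are debugging in the foreground.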

I tend to fall somewhere in the middle. I think that Python has a great future within ISC products, but I also feel that there is a place for ObjectScript as well. I would hate to lose ObjectScript, but I guess I'll only know once I have become fluent in Python, with a little bit of R and Julia, maybe some Rust, and don't forget Angular, React, Vue and Node.js. The main point is that as an IRIS developer you have so many choices for specific use cases, and I can't think of another environment that offers that flexibility with such elegance and simplicity.

Nigel

Hi

Just one additional comment. If the databases/namespaces are in different instances on the same or different servers, then you need to create a remote database connection to the databases on the secondary server (the instance which contains the tables you want to use in queries on your primary instance). Once you have made the secondary databases available to your primary server, you can create a namespace on the primary server that includes the database or databases (if you split your globals into one database and your routines into another), and then use Package Mapping in the primary server namespace to map the class/table packages into the primary instance. What you don't want to do is map the globals associated with the secondary server namespace into the primary server namespace, as this will cause the globals to be created in the primary namespace's database. That means that if you ever disconnect the servers, the data that rightfully belongs to the secondary server will be orphaned in the databases of the primary server.
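Once the package mapping is in place, the mapped classes and tables behave as though they were local to the primary namespace. A minimal sketch, assuming a hypothetical mapped package Hospital.Data containing an Episode table (the package and table names are mine, for illustration only):

```objectscript
// Hypothetical: the Hospital.Data package has been mapped from the
// secondary server's database into the primary namespace, so its
// tables can be queried as if they were local.
Set tStatement = ##class(%SQL.Statement).%New()
Set tSC = tStatement.%Prepare("SELECT COUNT(*) AS Total FROM Hospital_Data.Episode")
If $$$ISOK(tSC) {
    Set tResult = tStatement.%Execute()
    If tResult.%Next() {
        Write "Episodes on secondary server: ", tResult.%Get("Total"), !
    }
}
```

The rows themselves still live in the secondary server's database; the mapping only makes the class definitions and their storage visible to the primary namespace, which is exactly why you map packages rather than globals.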

Nigel

Yours

Nigel

Hi Andre,

I would like to apologise if the tone of my reply came across as vengeful; that was certainly not my intention. An hour ago I started writing a response in which I took two examples from documentaries that I had watched during these long Covid lockdown hours. One was about the production of a single Rolls-Royce car and the amount of time, craftsmanship, quality control, pride and perfection that goes into the manufacture of each car. Each component is crafted by an individual who has practised his art over decades of trial and error. If the component has the slightest 'fault', quite often some imperfection that 99% of the population would not notice but which the man who crafted the component and the quality control manager did notice, the item would be scrapped and the process would start all over again. I guess it is for that reason that the best Rolls-Royce cars sell for around $12 million; probably half of that cost goes into those discarded items that did not meet the standard of quality that Rolls-Royce prides itself on.

The second documentary was about the ice cream machines that franchise owners have to buy when they buy a McDonald's restaurant, and how those machines have a very complicated cleaning program which, should it fail, renders the machine unusable until a certified mechanic is called out to fix it. The mechanics are certified by the manufacturer of the ice cream machines and the rates they charge are high, and in the food business, where profit margins are slim, it is easy to understand why roughly one-third of McDonald's restaurants in the USA are not serving ice cream because the machine is 'broken'. 25% of the manufacturer's revenue is derived from the 'services' of their mechanics. The company could probably make an ice cream machine that doesn't break down, but to do so would eliminate 25% of their revenue. It just so happens that the company that makes the ice cream machines is located in the same city as McDonald's headquarters, and the two have had a 50-year relationship in which McDonald's earns a certain amount of revenue through the sales of the ice cream machines and the manufacturer earns 25% of its revenue by supplying machines that, in a sense, are designed to fail periodically. It also turns out that both McDonald's and the ice cream machine manufacturer are owned by a nameless holding company.

In the original version of this reply I went into far greater detail about each documentary, and then two things happened. For the last month I have been using a program called Grammarly which, in the free version, will do basic spelling checks and catch other grammatical errors, but in the paid version will analyse your text and, using fairly sophisticated algorithms, score it against 10 different criteria and make suggestions, very good suggestions, as to how a paragraph could be rephrased because the original is too verbose, too passive, too aggressive and so on. I suspect that if I had run my original message through it, it would have detected that the tone was a bit 'vengeful' and would have suggested ways I could express the concepts I was trying to convey in a more palatable manner. The other thing that happened is that I accidentally hit the back button and lost all of the text that I had written, and given that I didn't have another hour to write out all of the detail of my original, I ended up writing this text instead. Grammarly tells me that I score 5/5 for informal, 4/5 for optimistic and 3/5 for confidence. I guess the last one, confidence, is due to the fact that I haven't yet linked the messages behind the documentaries to the subject of software.

I have worked in software companies that have been in business for 30 years or so, and there were people and practices within those companies that led to a certain sense of 'we do it like this because we have always done it like this'. That can work in two ways. If you happen to have worked out a formula where all of the constituent parts are tried and tested and produce a certain level of excellence, then that software company is likely to produce good applications and will continue in business for many years to come. Somewhere in those companies you are likely to find individuals who take great pride in their work, and that sense of excellence influences those around them and challenges them to aim for the same levels of excellence. On the other hand, it can lead to a company that started off with an innovative product that sold well, applied the same standard to everything it did thereafter irrespective of changes in technology or fresh ideas brought in by new employees, and eventually fails because the software it produces is no longer innovative and probably fails periodically, requiring a 'software expert' to come on site and fix it. It is a fine balance to maintain. Companies cannot just change the way they do things at the say-so of some fresh employee with bright ideas. Nor can they remain stuck in a certain well-trodden path, because eventually they will be left behind as younger and more adventurous companies enter the market with their innovative products and steal the limelight.

Excellence comes at a price. Those individuals who have taken their craft seriously and become masters of their trade do not come cheap. Companies that try to produce excellence using people with fewer skills and a willingness to work for lower wages are likely to produce poor software. The art is in matching the right people to the right tasks: investing in the areas that demand excellence whilst not ignoring the role of the often overlooked managers who hold the whole enterprise together with their stoic reliance on repetitive tasks.

At this point Grammarly is telling me that I have said enough. My ratings are now showing 5/5 for optimistic and confident, and 5/5 for formal, as opposed to the 5/5 for informal that it scored me halfway through the text.

Yours respectfully

Nigel