Julian Matthews · Feb 20, 2019

Hi Scott.

I have just taken a look, and it doesn't seem to appear in anything I have (Highest being v2.7.1). Looking around online, ORU^R40 was apparently introduced in HL7 V2.8.

Julian Matthews · Feb 19, 2019

Could you generate a message to your Ens.Alert (or equivalent) from the BO, and then immediately call ##class(Ens.Director).EnableConfigItem to disable the business process?
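As a rough sketch of that idea (the config item name and alert text here are placeholders, not from the original thread), the BO code could look something like this:

```objectscript
 // Sketch only: raise an alert, then disable a named business process.
 // "MyBusinessProcess" and the alert text are hypothetical placeholders.
 Set tAlert = ##class(Ens.AlertRequest).%New()
 Set tAlert.AlertText = "Disabling MyBusinessProcess following repeated failures"
 Set tSC = ..SendAlert(tAlert)
 // Second argument 0 = disable; third argument 1 = update the running production
 Set tSC = ##class(Ens.Director).EnableConfigItem("MyBusinessProcess", 0, 1)
```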

Julian Matthews · Feb 18, 2019

Hi Andrew.

I don't think the operation to a downstream system would be the appropriate place for adjusting the HL7 content.

Using transforms within a router will be the best approach for this, and while it might seem a little cumbersome to create a transform per message type, you should be able to remove some of the legwork by using sub-transforms.

For example: if you were looking to adjust the datestamp used in an admission date within a PV1, rather than repeating the desired transform work in every message transform that contains a PV1, you create a sub-transform for the PV1 segment once and then reference it in your transform for each message type. If you then have additional changes required to the PV1 which are message-specific (say, a change for an A01 but not an A02), you can make those changes in the message-specific transformation.

As far as transforming datetimes in HL7 goes, I have faced my share of faff when it comes to this with suppliers. The best approach from within the transforms is to use the built-in Ensemble utility function "ConvertDateTime" listed here
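For instance, a DTL "set" action could call it along these lines (the field path and format strings here are illustrative assumptions, not from the original post):

```objectscript
 // Illustrative only: convert PV1 Admit Date/Time from HL7's %q format
 // to a dd/mm/yyyy hh:mm style. Check the Class Reference for the exact
 // format tokens your version supports.
 Set target.{PV1:AdmitDateTime} = ..ConvertDateTime(source.{PV1:AdmitDateTime}, "%q", "%d/%m/%Y %H:%M")
```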

Julian Matthews · Jan 25, 2019

Just to add to this - I have had a play with this new function within the 2019 Preview release and it works really well.

Julian Matthews · Jan 10, 2019

You might be hitting a hard limit on the performance of the hardware you're using. Are you able to add more resources and try again?

Julian Matthews · Jan 9, 2019

What type of business service are you using? If you are using a single job on the inbound, I would guess you're hitting a limit on how fast the adapter can handle each message (in your case, around 15ms per message).

You could look at increasing the pool size and jobs per connection if you're not worried about the order in which the messages are received into your process.

Julian Matthews · Dec 27, 2018

Hi Alexandr.

If you are looking to run a task at specific times, you could create a new task which extends %SYS.Task.Definition to then be selectable as an option from the task manager.

For example, I have a folder from which I need to periodically delete files older than x days.

To achieve this, I have a class that looks like this:

Class DEV.Schedule.Purge Extends %SYS.Task.Definition
{

Parameter TaskName = "Purge Sent Folder";

Property Directory As %String;

Property DaysToKeep As %Integer(VALUELIST = ",5,10,15,20,25,30") [ InitialExpression = "30" ];

Method OnTask() As %Status
{
    Quit ..PurgeSentFolder(..Directory, ..DaysToKeep, "txt")
}

Method PurgeSentFolder(Directory As %String, DaysToKeep As %Integer, Extension As %String) As %Status
{
    Set tSC = $$$OK

    // Calculate the cutoff date; files modified before this date are deleted
    Set BeforeThisDate = $zdt($h-DaysToKeep_",0",3)

    // Gather the list of files in the specified directory
    Set rs = ##class(%ResultSet).%New("%File:FileSet")
    Set ext = "*."_Extension
    Do rs.Execute(Directory, ext, "DateModified")

    // Step through the files in DateModified order
    While rs.Next() {
        Set DateModified = rs.Get("DateModified")
        If BeforeThisDate]DateModified {
            // File is older than the cutoff, so delete it
            Set Name = rs.Get("Name")
            Do ##class(%File).Delete(Name)
        }
        // Stop once we reach files modified on or after the cutoff
        If DateModified]BeforeThisDate Quit
    }
    Quit tSC
}

}

Then I created a new task in the scheduler, selected the namespace where the new class exists, and then filled in the properties and times I want the task to run.

Julian Matthews · Dec 24, 2018

Hi Eric.

My first check would be looking at the console log for that instance to see if there's anything wobbling in the background. Specifically checking for any entries around the time the monitor thinks it has gone down.

Failing that, it's probably worth going to WRC. The last thing I think you need this close to Christmas is the Primary dropping and you needing the Mirror to be working.

Julian Matthews · Dec 24, 2018

If you go to the management portal for the "down" mirror, are there any errors that might point to the issue?

I recently saw this happen where the mirror had run out of space to store the journal files, so the mirror stopped functioning and was showing as "down".

Julian Matthews · Nov 29, 2018

Hi Stephen.

Are you able to select the specific queue from the Queues page and press the abort all button, or does it return an error?

Julian Matthews · Nov 15, 2018

Yes, a new schema can do this along with a transformation.

If you have an existing schema, probably best to clone it and then edit the clone to speed things up.

Julian Matthews · Nov 2, 2018

So I found that it is possible to save single messages using the "HL7 V2.x Message Viewer", which might not be suitable for you if you're looking to export loads of messages.

One option could be to add a new HL7 file-out operation, search for your desired messages from the router you wish to "export" from, and then resend them to the new target, which can be selected from the Resend Messages page.

Julian Matthews · Oct 18, 2018

Sorry John, I hadn't had my coffee when I read your post.

When you look at the first message heading info within the Trace, does the Time Processed come before or after the Time Created of Message 2?

Julian Matthews · Oct 1, 2018

Hi all, I have answered my own question.

Per number, I will need to create a new HS.SDA3.PatientNumber and set the required information, and then insert each HS.SDA3.PatientNumber into the HS.SDA3.PatientNumbers list that exists within the HS.SDA3.Patient object.

For the benefit of anyone else that stumbles across this post (and myself when I forget this in a few weeks' time and end up finding my own post): to achieve this in Terminal as a testing area, these are the steps I followed:

Set Patient = ##class(HS.SDA3.Patient).%New()

Set PatNum1 = ##class(HS.SDA3.PatientNumber).%New()
Set PatNum1.Number = "123456"
Set PatNum1.NumberType = "MRN"

Set PatNum2 = ##class(HS.SDA3.PatientNumber).%New()
Set PatNum2.Number = "9999991234"
Set PatNum2.NumberType = "NHS"

Do Patient.PatientNumbers.Insert(PatNum1)
Do Patient.PatientNumbers.Insert(PatNum2)

Julian Matthews · Sep 21, 2018

Hi Akio.

Generally speaking, the password for the "_SYSTEM" account is set by the user during the install process, so I don't think there will be a default.

Julian Matthews · Jul 17, 2018

Is the Business Process custom? If so, it's possible that there is a bit of bad code that is returning an error state and then continuing to process the message as expected.

It might help if you provide some more detail on the BP itself.

Julian Matthews · Jun 15, 2018

Hi Guilherme.

I think your best starting point will be providing your system specifications, the OS you're running Studio on, and the version of Studio/Cache you are running.

Depending on the issue it could be anything that is causing your problems.

Julian Matthews · Jun 15, 2018

You could run the compile command you would normally use in Terminal from within Studio, using the Output window. However, the only "speed boost" would be the time saved from not launching and logging in to Terminal.

If you're suffering from really slow compile times, it might just be that your namespace is huge, or a sign of a need to upgrade your hardware.

Get in touch with either WRC or your sales engineer, and they'll happily help out. 

Julian Matthews · Jun 4, 2018

Thanks for the response.

It sounds like I should be fine with the machine I'm running (Win7, 120GB SSD, 8GB RAM, i5 CPU (dual core)).

My biggest hits are at start up, but once up and running it's pretty snappy. I should probably try to be more patient!

Julian Matthews · May 29, 2018

The accepted answer would probably be your best shot.

Say for example you wanted a count of all messages that have come from a service called "TEST Inbound", you could use the SQL query option (System Explorer>SQL) to run the following:

SELECT count(*)
FROM Ens.MessageHeader WHERE SourceConfigName = 'TEST Inbound'

If you wanted to put a date range in as well (which is advisable if what you're searching is a high throughput system and your retention is large):

SELECT count(*) Total
FROM Ens.MessageHeader where SourceConfigName = 'TEST Inbound' AND TimeCreated >= '2018-04-30 00:00:00' AND TimeCreated <= '2018-04-30 23:59:59'
Julian Matthews · May 29, 2018

Hi Gadi.

Glad to hear you have something in place now.

I guess a task is better for specifying the exact time for the event to run rather than every x number of seconds, as the timing could go out of sync if the service was restarted at any point.

For the setup I have, because I want to check the status of the drive free space (and I also check a specific folders files to alert if any of them have existed for more than 5 minutes) it makes sense to just let the Call interval run every x seconds.

Julian Matthews · May 24, 2018

I use a similar approach to Edward's answer (using a BS that runs every x seconds), but for alerting when free space on my drives falls below a specified value.

I'm not sure if my approach is the most efficient to your requirement, but I would do something like this in the Service:

  • Query the table and sort it so you get the most recent result at the top (I'm assuming the dateTime field is suitable for this)
  • Take a snapshot of the results, and get the first datetime result.
  • Compare it to the system datetime
  • Send an alert to Ens.Alert if the difference is more than 24 hours
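As a very rough sketch of that logic in the service's OnProcessInput (the table and column names here are hypothetical; substitute your own):

```objectscript
Method OnProcessInput(pInput As %RegisteredObject, pOutput As %RegisteredObject) As %Status
{
    Set tSC = $$$OK
    // Hypothetical table/column names - substitute your own
    Set tRS = ##class(%SQL.Statement).%ExecDirect(,
        "SELECT TOP 1 dateTime FROM My.Table ORDER BY dateTime DESC")
    If tRS.%Next() {
        // Seconds between the newest row and the current system datetime
        Set tDiff = $SYSTEM.SQL.DATEDIFF("ss", tRS.%Get("dateTime"), $ZDATETIME($HOROLOG,3))
        If tDiff > 86400 {
            // No new rows in over 24 hours: raise an alert
            Set tAlert = ##class(Ens.AlertRequest).%New()
            Set tAlert.AlertText = "No new rows received in over 24 hours"
            Set tSC = ..SendAlert(tAlert)
        }
    }
    Quit tSC
}
```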
Julian Matthews · May 18, 2018

Thanks Joyce, I made contact with them instead of support, and after a webex the solution was found!

It turns out the performance issues I had been getting when adding the entire namespace was because I had included all of the system folders (ens, enslib, EnsPortal, etc). So on each launch, Eclipse was then reindexing the entirety of the core files in each namespace.

Julian Matthews · May 17, 2018

I tried this as a way of moving everything so far into our source control system, and the performance impact on Eclipse/Atelier was soul destroying.