Julian Matthews · Aug 9, 2021

Hey Muhammad.

This should happen automatically when the message schema is applied.

So, if this is based on an inbound HL7 service, then the schema must be set in the service using the "MessageSchemaCategory" setting:

Alternatively, if you are generating the message using the test option for a process/operation, then you can set the HL7 Document Property from the test options:

Julian Matthews · Aug 6, 2021

It's been a short while since I threw this together, and I ended up adding in two important changes.

  1. OS Authentication
  2. Use of ##Class(Backup.General).ExternalSetHistory()

As Global Masters reminded me of this post, I thought I should at least update it with the latest version:

@echo off
rem VMTools should pass in either freeze or thaw.
if "%1" == "freeze" goto doFreeze
if "%1" == "thaw" goto doThaw
echo.
echo Nothing Matched. Exiting...
EXIT /b

:doFreeze
rem Call external freeze. OS Authentication negates need for login credentials.
c:\InterSystems\HealthShare\bin\irisdb -s"C:\InterSystems\HealthShare\Mgr" -U%%SYS ##Class(Backup.General).ExternalFreeze()
echo.
echo.
rem Check errorlevel from highest to lowest here.
if errorlevel 5 goto FreezeOK
if errorlevel 3 goto FreezeFAIL

rem If here, errorlevel did not match an expected output.
rem Assume Failure.
echo errorlevel returned unexpected value
goto FreezeFAIL

:FreezeOK
echo SYSTEM IS FROZEN
rem Error levels from freeze do not match standard convention, so we return 0 when successful.
EXIT /b 0

:FreezeFAIL
echo SYSTEM FREEZE FAILED
EXIT /b 1

:doThaw
c:\InterSystems\HealthShare\bin\irisdb -s"C:\InterSystems\HealthShare\Mgr" -U%%SYS ##Class(Backup.General).ExternalThaw()
echo.
echo SYSTEM IS THAWED
echo.
c:\InterSystems\HealthShare\bin\irisdb -s"C:\InterSystems\HealthShare\Mgr" -U%%SYS ##Class(Backup.General).ExternalSetHistory()
echo.
echo BACKUP RECORDED
EXIT /b 0
Julian Matthews · Aug 5, 2021

Hey Yone.

In the first part of your question, you're converting the hexadecimal to ASCII and then converting the ASCII string to Base64 (giving you "TFgDBAEBAgF5wyYjMTQ1Oz7DunxMWAcIAQEBAhgkWE1M").

The reason you're getting a different result from the Base64 encoding in ObjectScript is that you're encoding the hexadecimal string "4C5803040101020179C3913EC3BA7C4C580708010101021824584D4C" into Base64 without converting it to ASCII first. If you try the hexadecimal string in an online Base64 encoder, you'll see the same output:
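To Base64-encode the decoded bytes rather than the hex text in ObjectScript, the hex string needs converting to raw characters first. A minimal terminal sketch (variable names are mine):

```objectscript
    // Decode each hex pair into its character, then Base64-encode the result
    Set hex = "4C5803040101020179C3913EC3BA7C4C580708010101021824584D4C"
    Set bin = ""
    For i=1:2:$LENGTH(hex) {
        Set bin = bin_$CHAR($ZHEX($EXTRACT(hex, i, i+1)))
    }
    Write $SYSTEM.Encryption.Base64Encode(bin)
```

Note that for bytes above 0x7F the result depends on how the intermediate string is encoded, which is exactly why different tools can disagree on strings like this one.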

Julian Matthews · Jul 31, 2021

Hey Virat.

It's difficult to point you in the direction of documentation that isn't supplied by InterSystems, as the material is generally good and available online. 

The sidebar on the forum has some great links to various resources, and even has a link to a docker image for the community edition (which is free).

If you do look at the community edition, you may find it useful to install the EnsDemo namespace to be able to try a few prebuilt productions and get a feel for things.

Julian Matthews · Jul 30, 2021

It will behave as if it was unauthenticated in the sense that you'll log in automatically; however, the trust is based on a successful login to the operating system, whereas actual unauthenticated access just lets anyone in.

When reviewed within my organisation, it was certainly preferred when compared to leaving user/pass in plaintext in a script.

Julian Matthews · Jul 30, 2021

Would OS Authentication be of use to you?

When enabled, you should be able to automatically log in based on the OS user account running the script, however you will need an IRIS account with the exact same username as the OS account, and it will need the appropriate permissions in IRIS for what you're looking to do.

Julian Matthews · Jul 28, 2021

Hey Jay.

I appreciate this probably doesn't help you, but thought I would share in case it's of interest to anyone.

I had this same issue with Kaspersky a few years ago when installing a preview of HealthConnect on my local machine to review some upcoming features (does anyone else remember FHIR?).

As each attempted install would result in an internal virus response and some light-hearted ribbing from colleagues, I was quite keen to get this resolved.

I worked with WRC as well as Kaspersky, and we found that the "threat" detected was oddly tied to the build number of Windows 10: we would not get a detection with the same version of Kaspersky running on different builds of Windows 10, or on any builds of Windows Server Edition we had in operation and free to test.

At the time, Kaspersky did state that they had updated their definitions, which I confirmed worked; however, that could easily have been tied to the specific build of the HealthConnect install exe, or could just be something reintroduced over the 3 years since I reported this issue to Kaspersky.

FWIW, if there were something in the installer that was a red flag to all AV suppliers, I would expect it to be addressed (especially as it would probably flag up with whatever AV InterSystems uses). However, behavior detection isn't an exact science, and I wouldn't be surprised if adjusting the installer to appease Kaspersky were then detected by another supplier as an attempt to avoid AV detection.

Julian Matthews · Jun 18, 2021

Just to add an alternative to what has already been offered: you could also use the Data Import Wizard and provide the data in a CSV. You could then insert it into your table quite easily and without trying to build up a query with 100 UNIONs :)

Julian Matthews · Jun 15, 2021

I'm also quite interested in this. If there is an example being shared, I'd love to see it.

Julian Matthews · Jun 11, 2021

I forgot to mention the article from @Murray Oldfield, and I really should have, considering it was a big help when implementing this myself.

I will say that I was caught out when trying the example scripts kindly provided in the comments, as they are shown as two distinct scripts, and I found that VMware would run every script in the folder on the freeze, and then run them all again in reverse order for the thaw. This meant that VMware was effectively freezing and thawing the environment in a single hit, and then trying to back up before thawing and then freezing the environment.

Julian Matthews · Jun 11, 2021

Hey Nigel.

The key take away from any attempts to backup a running environment is to use freeze/thaw scripts.

The idea being that the backup solution will prompt the IRIS system to freeze the DBs for the backup to take place, and then thaw after the fact.

I recently embarked on this myself, and posted an article showing my journey based on using VMWare and a windows environment. However this should be easily adaptable to other backup solutions.

The only change I have made since that article is that I am no longer passing the login credentials via a separate file, and instead have OS Authentication enabled within my IRIS system so that the user account running the script can authenticate automatically at runtime.

Julian Matthews · May 27, 2021

Hey Otto.

If you extend your class with "Ens.Rule.FunctionSet", the ClassMethods contained within will be available as a function for the DTL:
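A minimal sketch of such a class (class and method names here are hypothetical):

```objectscript
/// Custom functions made available to rules and DTLs
Class Demo.Util.MyFunctions Extends Ens.Rule.FunctionSet
{

/// Callable from a DTL or rule as ReverseString(value)
ClassMethod ReverseString(pInput As %String) As %String
{
    Quit $REVERSE(pInput)
}

}
```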

Julian Matthews · May 13, 2021

Hey ED Coder.

You should be able to just use the EnsLib.HL7.Service.HTTPService class for your Service, and then the request can be sent to the port you define as you would for any other HL7 inbound. You can also then specify the message schema in the same way. 

The "url" you provide can be the IP of the machine with the port afterwards. For example, I just set up an inbound on a dev environment and then sent a message from Insomnia.

I sent it as a post to http://IP:PORT with the HL7 message as the content of the HTTP body:
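For anyone without a REST client to hand, the same POST can be sketched in ObjectScript with %Net.HttpRequest (the IP, port, and message content below are placeholders):

```objectscript
    // Build a plain HTTP POST with the raw HL7 message as the body
    Set req = ##class(%Net.HttpRequest).%New()
    Set req.Server = "192.168.1.10"   ; machine running the HTTPService (placeholder)
    Set req.Port = 9981               ; port configured on the service (placeholder)
    Do req.EntityBody.Write("MSH|^~\&|SENDAPP|SENDFAC|RECVAPP|RECVFAC|20210513||ADT^A01|123|P|2.4")
    Set sc = req.Post("/")
    If sc Write req.HttpResponse.StatusCode
```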

Julian Matthews · Apr 20, 2021

Hey Yone.

In "C:\InterSystems\HealthShare_2\" there should be a file called iris.cpf. Have you tried editing the entry for DefaultPort to a known open port?
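For orientation, the entry lives in the [Startup] section of the CPF; a minimal sketch (the port number is illustrative):

```
[Startup]
DefaultPort=51773
```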

Take a copy of this file before editing it in case this makes things worse :)

Julian Matthews · Apr 14, 2021

Just as a little addition to this.

Be careful when implementing ##Class(Backup.General).ExternalSetHistory() and then running the script independently of actually taking backups from your external system.

Default journal retention is usually 2 days OR 2 backups. If this is called a few times without a backup, you will have effectively deleted your journals past the point of your most recent backup.

Julian Matthews · Apr 1, 2021

Hey Jeffrey.

That's done the trick - I wrongly assumed that not specifying the locale in the outformat would assume the current locale.

Thank you for your help!

Julian Matthews · Mar 19, 2021

I've cobbled the following together from other posts that come close to this (for example, here and here) and running each line in terminal should disable all the services from the production in the namespace you run it in:

zn "NAMEOFNAMESPACEHERE"
Set tRS = ##class(%ResultSet).%New("Ens.Config.Production:EnumerateConfigItems")
Set tStatus = tRS.%Execute("Production.Name.Here", 1)
While tRS.%Next(.tStatus) {set sc = ##class(Ens.Director).EnableConfigItem(tRS.%Get("ConfigName"), 0, 1)}

Line 1 sets your namespace, and lines 2 and 3 bring back the list of services (the flag set to 1 on the third line specifies services; setting it to 3 will bring back all operations instead).

Line 4 is a while loop that iterates through the result set and uses Ens.Director.EnableConfigItem to disable each config item by name (the second argument, 0, is the enable/disable flag, and the third tells it to update the production).

This could probably be made nicer and more efficient (e.g. disabling all of the config names without updating the production, and then updating the production once using "##class(Ens.Director).UpdateProduction()" to avoid doing it once per entry), but I hope it works as a starting point.
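A sketch of that more efficient variant, disabling each item without updating the production and then applying the changes once at the end:

```objectscript
    zn "NAMEOFNAMESPACEHERE"
    Set tRS = ##class(%ResultSet).%New("Ens.Config.Production:EnumerateConfigItems")
    Set tStatus = tRS.%Execute("Production.Name.Here", 1)
    While tRS.%Next(.tStatus) {
        // Third argument 0 = don't update the production per item
        Set sc = ##class(Ens.Director).EnableConfigItem(tRS.%Get("ConfigName"), 0, 0)
    }
    // Apply all of the changes in one go
    Set sc = ##class(Ens.Director).UpdateProduction()
```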

Julian Matthews · Mar 19, 2021

I believe that the HL7 Router which you are using to send to the operation should have a configuration item for setting the response target (called ResponseTargetConfigNames).

So the route the message would take is:

Julian Matthews · Mar 17, 2021

Ahh that explains where I went wrong, thanks!

In my mind, the write action was writing the data to the object while it was still server-side, and then it was the save that actually commits it to the destination, which is why I was only looking at the status of the save method.

Julian Matthews · Feb 23, 2021

Hey Scott.

If you were open to having a Service in your production to which your function sends its two variables (and which then passes them on to your Operation), you could have something like this:

ClassMethod SendPage(PagerNumber As %String, Message As %String) As %Status
{
    //The String passed to Ens.Director must match a service name within the active production
    set tsc = ##class(Ens.Director).CreateBusinessService("Pager From Function Service",.tService)
    
    if ($IsObject(tService))
    {
        set input = ##class(osuwmc.Page.DataStructures.Page).%New()
        set input.PagerNumber = PagerNumber
        Set input.Message = Message
        
        set tsc = tService.ProcessInput(input)
        Quit tsc
    }
    else
    {
        Quit 0
    }
}

and then you have a custom service that looks a little like this:

Class osuwmc.Services.PageService Extends Ens.BusinessService
{
Property TargetConfigName As Ens.DataType.ConfigName;

Parameter SETTINGS = "TargetConfigName";

Method OnProcessInput(pRequest As osuwmc.Page.DataStructures.Page) As %Status
{
    set tsc=..SendRequestAsync(..TargetConfigName, pRequest)
    
    Quit tsc
}

}

Then when you add the service to your production (remembering to match it to the name declared in the service code), you can select your target operation as a config item, and when the function is triggered it should go Function --> Service --> Operation.

Edit: my Service Class example had an error in the SETTINGS parameter; I have corrected it.

Julian Matthews · Feb 23, 2021

Hey Scott.

I think you can achieve this by setting the second parameter to a comma-delimited list of the form item names, and then passing each value afterwards (in the same order as the names).

For example:

Method Sample(pReq As osuwmc.Page.DataStructures.Page, Output pResp As %Net.HttpResponse) As %Status
{
    Set FormItems = "PNo,PMsg"

    set tSC = ..Adapter.Post(.tResponse,FormItems,pReq.PagerNumber,pReq.Message)

    if ('tSC)
    {
        $$$LOGERROR($System.Status.GetErrorText(tSC))
    }

    quit tSC
}
Julian Matthews · Feb 18, 2021

Hey Mufsi, I had this happen to me and according to WRC this is a known issue fixed in 2020.1.1.

It is caused by an attempt to get an exclusive lock on a specific node for the mirrored database (which is read only) where the task is being scheduled.

There is an alternative workaround to the steps you took by using the ^TASKMGR interface from the %SYS namespace in a Terminal session as it doesn't try to perform any write operations on the read-only mirror databases.

Julian Matthews · Feb 16, 2021

Just to expand on David's response: the File Outbound adapter will create a new file per message, assuming the filename settings are configured so that the filenames it produces are unique per message.

If you were to set the Filename property to a specific value rather than using the timestamp specifiers (for example, setting it so that the filename is output.txt), then each message should append its data to the end of the file, giving you a single file with all of the entries.

Julian Matthews · Jan 27, 2021

You won't be able to do this using the built-in viewer, however you can query the SQL tables directly and then interrogate the results using your preferred method.

For example, I had a spike in activity on the 20th of Dec which made a disk fill a lot more than usual, but the purges meant I couldn't just check the messages as they had since been deleted. So I ran the following in the SQL option in the management portal:

SELECT *
FROM Ens_Activity_Data.Days
Where TimeSlot = '2020-12-20 00:00:00'

I then used the print option to export to CSV, and then used a simple pivot table to work through each host name to see what had a dramatically higher number of messages than I would usually expect. (I actually exported a few days' worth of data to compare between days, but hopefully you get the idea.)

You could always explore using Grafana to produce a nice visual representation of the data that you've surfaced using a method like this.

Julian Matthews · Jan 20, 2021

Hey Werner.

I know I have ignored your request on how to call a class method (Jeffrey has you covered by the looks of things), but you could use $PIECE to break the string apart and then insert what you need.

For example if "source.{PhoneNumberHome(1).Emailaddress}" is equal to "myemail@myemaildomain.co.uk" then 

"test"_$PIECE(source.{PhoneNumberHome(1).Emailaddress},".",1)_"test"_"."_$PIECE(source.{PhoneNumberHome(1).Emailaddress},".",2,*)

will return: 

"testmyemail@myemaildomaintest.co.uk"

The idea being that we

  • Start the new string with "test"
  • Take everything before the first period with $PIECE(source.{PhoneNumberHome(1).Emailaddress},".",1)
  • Add "test" in again
  • Add back the period that $PIECE drops when using it as the delimiter
  • Add everything after the first period with $PIECE(source.{PhoneNumberHome(1).Emailaddress},".",2,*)
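The steps above can be tried in a terminal with a plain variable standing in for the DTL source property:

```objectscript
    Set email = "myemail@myemaildomain.co.uk"
    Set new = "test"_$PIECE(email, ".", 1)_"test"_"."_$PIECE(email, ".", 2, *)
    Write new  ; testmyemail@myemaildomaintest.co.uk
```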
Julian Matthews · Nov 24, 2020

Hey ED Coder.

There are built-in classes to manage this in a nicer way.

##class(Ens.Util.Time).ConvertDateTime() is a good starting point. Here is the filled-in classmethod call for easy copy/pasting:

Set NewDate = ##class(Ens.Util.Time).ConvertDateTime(HL7Date,"%Y%m%d%H%M%S","%Y-%m-%d %H:%M:%S")
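With a concrete value, assuming an inbound HL7 timestamp such as 20201124153000:

```objectscript
    Set HL7Date = "20201124153000"
    Set NewDate = ##class(Ens.Util.Time).ConvertDateTime(HL7Date, "%Y%m%d%H%M%S", "%Y-%m-%d %H:%M:%S")
    Write NewDate  ; 2020-11-24 15:30:00
```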

The values for each section of the date are defined by the following: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cl…

Julian Matthews · Nov 24, 2020

Hey Ahmad.

If this is all happening in the one production, then you will have an inbound service for each port.

You could set up a router with a specific rule for each destination, and then use each rule's constraint to restrict on the source service. That way, if you point all three services at the one router, it will only apply the rule that has that service listed as a constraint.

For example:
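As a rough sketch of what such a rule class might look like when exported (all class and config item names here are hypothetical, and only two of the three sources are shown):

```objectscript
Class Demo.Rules.PortRouter Extends Ens.Rule.Definition
{

Parameter RuleAssistClass = "EnsLib.HL7.MsgRouter.RuleAssist";

XData RuleDefinition [ XMLNamespace = "http://www.intersystems.com/rule" ]
{
<ruleDefinition alias="" context="EnsLib.HL7.MsgRouter.RoutingEngine">
<ruleSet name="" effectiveBegin="" effectiveEnd="">
<rule name="From Port A" disabled="false">
<constraint name="source" value="HL7.In.PortA"></constraint>
<when condition="1">
<send transform="" target="HL7.Out.DestinationA"></send>
<return></return>
</when>
</rule>
<rule name="From Port B" disabled="false">
<constraint name="source" value="HL7.In.PortB"></constraint>
<when condition="1">
<send transform="" target="HL7.Out.DestinationB"></send>
<return></return>
</when>
</rule>
</ruleSet>
</ruleDefinition>
}

}
```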

I hope that helps!