Hey Ashok.

I think there's a slight correction needed for your code snippet.

Specifically, for the line "do object.Organizations.GetAt(key).%DeleteId(key)", passing key to %DeleteId will not delete the intended object, as key is the position within the collection rather than the object's ID.

Should the code block instead be:

ClassMethod %OnDelete(oid As %ObjectIdentity) As %Status [ Private, ServerOnly = 1 ]
{
    Set object = ..%Open(oid,,.status)
    If $$$ISERR(status) Quit status
    If $ISOBJECT(object.Organizations) {
        Set org = object.OrganizationsGetSwizzled()
        While org.GetNext(.key) {
            Set tID = object.Organizations.GetAt(key).%Id()
            Do object.Organizations.GetAt(key).%DeleteId(tID)
        }
    }
    Quit $$$OK
}

Hey David.

I'll give XPath a try and see how I get on and update accordingly.

Edit:

It seems that trying to use XPath for the elements within the CDATA block fails, and attempting to use XPath for <jkl> returns the content of the entire CDATA as a stream object. However, I think this is a limitation of the XML I am working with rather than of XPath itself.

Hey Julius, thank you for this. 

Unfortunately, there is a risk of hitting the maxstring lengths, because one of the fields within the CDATA blocks will be a base64 encoded document or two. But this does look interesting for other use cases.

Hey Jainam.

I don't get a lot of hands-on time with ASTM messaging but, from what you've shared, what you're receiving is using carriage returns.

The TCP Adapter does have a "Terminators" property that is set to ASCII character 10 (line feed) for this, but this value is not configurable from the management portal (as stated in the documentation).

I'm not sure if this is what you have tried where you stated "I've modified the class file", and I can't say for sure if this will work as I have no way to test it locally. However, you could try creating a custom adapter which extends the built-in adapter and overrides the property, and then creating a custom service which extends the built-in service and uses the custom adapter.

Something like:

Class Custom.EDI.ASTM.Adapter.TCPAdapter Extends EnsLib.EDI.ASTM.Adapter.TCPAdapter
{
/// Overwriting the original value of $C(10) (line feed) with $C(13) (carriage return)
Property Terminators As %String [ InitialExpression = {$C(13)} ];
}

and 

Class Custom.EDI.ASTM.Service.TCPService Extends EnsLib.EDI.ASTM.Service.TCPService
{

Parameter ADAPTER = "Custom.EDI.ASTM.Adapter.TCPAdapter";

}

And finally, use the custom service in your production.

If this doesn't work, you might need to look into what your sending system is actually outputting for the carriage returns/line feeds.

Hey Joel.

Absolutely - there are a couple of levels of ambiguity for the IDE without throwing a #dim in to say what that response object could be.

The biggest is that the target of a ..SendRequestSync/Async call is something that can, and for good practice should, be configured within the Interoperability Production. The IDE therefore has nothing to work from when identifying what the response object will be, and so cannot provide code completion. Even if the target isn't configured within the Interoperability Production directly and a developer has simply declared the Target Dispatch Name as a string within the code, the link between that name and the target class is still held within the Production, and that target may not even exist in the Production at the time of development.

It's great to see the effort put into improving the Language Server, which has massively reduced the need to use #dim, but I'm not sure we will see the last of it for some time.

I did post an example last year of how I traditionally would use #DIM, and it was (rightly) pointed out that the scenario was no longer required due to how VSCode behaves these days (well, the Language Server extension used in VSCode).

However today I have found a more common example where #DIM seems to be needed for code completion, based around using Business Services/Processes within an Interoperability Production.

Class Some.Business.Process Extends Ens.BusinessProcess
{

Property OnwardTarget As Ens.DataType.ConfigName;

Parameter SETTINGS = "OnwardTarget";

Method OnRequest(pRequest As Ens.Request, Output pResponse As Ens.Response) As %Status
{
    //Some code above
	
	Set someReqMsg = ##Class(Some.Req.Msg).%New()
	Set someReqMsg.abc = "123"
	Set tSC = ..SendRequestSync(..OnwardTarget,someReqMsg,.someRespMsg)
	#; No autocomplete on someRespMsg.xyz without #dim before ..SendRequestSync
	
	//Some code below
	
	Quit $$$OK
}

}

Without the #dim in the above, there is no context for the IDE to know what someRespMsg returned from the ..SendRequestSync() call will be.

That said, its use is certainly dwindling compared to pre-VSCode times, and its continued usage is often more from habit than necessity (I'm certainly guilty of this).

.NET 6 and IRIS 2022.1.

As "w gw.invoke("System.Convert","ToBoolean(System.UInt64)",123)" also doesn't work for me, I have to assume there's an issue with my older versions.

Thanks again for your help!

Ahh, I see. So I was trying to call a static method as if it was a constructor. Thank you for showing me this!

Interestingly, if I try your examples, I can recreate the ambiguity error, but then get an error when attempting to specify the full param specification:

Method not found: System.Convert.ToBoolean(int)

However this + the reply from Enrico has got me on a good footing, so will try a few things and see where I land.

Julian Matthews · Dec 31, 2025

This is really helpful to know. I think I have seen it not work like this in the past and picked it up as a habit, but can't say for sure.

To test this for myself, I have written out the above pseudocode into real classes, and found that the code completion doesn't kick in when attempting it within the $$$TRACE, but does when attempting it outside of the macro params:

I'll try to get this logged via GitHub later today, assuming this isn't intended behaviour for the Language Server.

EDIT:

Logged here and I realise that my habit of using #Dim for Intellisense 100% comes from my historic use of Studio.

Julian Matthews · Dec 30, 2025

3. vscode connect failed.

It looks like your images failed to upload so it's not clear what (if any) errors you are getting when the connection failed.

Julian Matthews · Dec 29, 2025

Hey Scott.

The call to ##class(%SYS.OAuth2.AccessToken).IsAuthorized() will return the most recent active token (if available). So a token generated in a BS running every hour would be the same token returned when your BP makes a call to ##class(%SYS.OAuth2.AccessToken).IsAuthorized(). 

If you were to do this and still keep the code in the BP per your codeblock, you'd find that your BP would then almost never hit the condition of if 'isAuth unless the token happened to expire between the time of your next BS run and the BP attempting to use that token.

That all said, would it be much of a time save having the BS running the task every hour vs checking the validity of the token at the point of using it and then only reauthorising when it's no longer valid?
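To sketch the check-at-point-of-use approach (the application name and scope here are placeholders, and this is an outline of the pattern rather than a drop-in implementation):

```objectscript
// Placeholder application name and scope - substitute your own.
// Check for a valid token only at the point of actually using it...
Set isAuth = ##class(%SYS.OAuth2.AccessToken).IsAuthorized("MyOAuthApp",,"my_scope",.accessToken,,,.error)
If 'isAuth {
    // ...and only reauthorise when no valid token is held.
    Set tSC = ##class(%SYS.OAuth2.Authorization).GetAccessTokenClient("MyOAuthApp","my_scope",,.error)
}
```

This way there's no hourly task to schedule, and the 'isAuth branch is only hit when the token has genuinely expired.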

Julian Matthews · Dec 29, 2025

I tend to break up my usage between SET when an object is being created within the class, and using #DIM when working with responses from another ClassMethod. For example:

Set SearchParams = ##Class(DM.EmployeeSearch).%New()
Set SearchParams.EmployeeID = 042114040518

#DIM Employee As DM.Employee
Set tSC = ##class(DM.Employee).EmployeeSearch(SearchParams,.Employee)
If '$$$ISERR(tSC){
    $$$TRACE(Employee.Name) //Should return "Jim Halpert"
}
Julian Matthews · Dec 8, 2025

Additional question:

When using the EnsLib.File.PassthroughService, I seem to be seeing a lot of orphan Ens.StreamContainer entries.

By all accounts, the Ens.StreamContainer entries are being sent from a service to a process, and do gain an entry in the Ens.MessageHeader table on creation.

When attempting to manually delete these Ens.StreamContainer entries, I do get an error:

SQLCODE: <-412>:<General stream error>
[%msg: <Error attempting to delete stream object for field StreamFC: ERROR #5019: Cannot delete file '\\original\path\to\file.here'>]

I'm assuming that this is due to the behaviour of EnsLib.File.PassthroughService, which deletes the file once processed: the delete of the Ens.StreamContainer then attempts to remove a file that no longer exists and fails, while the Ens.MessageHeader entry is successfully deleted, creating the orphan.

Should this be the case?

Edit: This does not seem to be directly limited to EnsLib.File.PassthroughService as it is also present with the EnsLib.HL7.Service.FileService Service.

Julian Matthews · Dec 5, 2025

Thanks Jeffrey - it's certainly highlighted some hidden pockets of orphaned messages for me. I'm going to track this in a separate thread, but basically it seems there are a few edge cases where ACKs are being saved but don't end up in the IO log and don't get an entry in the Message Header table.

Julian Matthews · Dec 5, 2025

Based on this, is it fair to assume that the following alteration to the SQL query in Scott's above post would give a truer reflection of orphaned messages?

SELECT HL7.ID, HL7.DocType, HL7.Envelope, HL7.Identifier, HL7.MessageTypeCategory, HL7.Name, HL7.OriginalDocId, HL7.ParentId, HL7.TimeCreated
FROM EnsLib_HL7.Message HL7
LEFT JOIN Ens.MessageHeader hdr
ON HL7.Id = hdr.MessageBodyId
LEFT JOIN Ens_Util.IOLogObj ack
ON HL7.Id = ack.InObjectId
WHERE hdr.MessageBodyId IS NULL AND ack.InObjectId IS NULL
Julian Matthews · Jun 24, 2025

If you're looking to simply convert timezone codes to the UTC offset, I'd set up a simple utility that returns the desired string based on a case statement.

Something like:

ClassMethod DisplayTimezoneToUTCOffset(pInput As %String) As %String
{
    Quit $CASE($ZCONVERT(pInput,"U"),
    "CEST":"+02:00",
    "CET":"+01:00",
    "BST":"+01:00",
    :"")
}

If you then want to build the entire string, you could do similar to above for the months, and then have the following:

Class Demo.Utils.ExampleUtils.TimeConvert
{

ClassMethod DisplayDateTimeToLiteral(pInput As %String) As %String
{
    // Deconstruct the input string into its logical parts.
    // Variables starting with d are "display" values that will be converted later.
    Set DayOfWeek = $P(pInput," ",1) //Not used
    Set dMonth = $P(pInput," ",2)
    Set Day = $P(pInput," ",3)
    Set Time = $P(pInput," ",4)
    Set dTimeZone = $P(pInput," ",5)
    Set Year = $P(pInput," ",6)

    //Return final timestamp
    Quit Year_"-"_..DisplayMonthToLiteral(dMonth)_"-"_Day_" "_Time_..DisplayTimezoneToUTCOffset(dTimeZone)
}

ClassMethod DisplayTimezoneToUTCOffset(pInput As %String) As %String
{
    Quit $CASE($ZCONVERT(pInput,"U"),
    "CEST":"+02:00",
    "CET":"+01:00",
    "BST":"+01:00",
    :"")
}

ClassMethod DisplayMonthToLiteral(pInput As %String) As %String
{
    Quit $CASE($ZCONVERT(pInput,"U"),
    "JAN":"01",
    "FEB":"02",
    "MAR":"03",
    //etc. etc.
    "JUL":"07",
    :"")
}

}

Which then gives the following:

Julian Matthews · Jun 19, 2025

Hi Nezla.

As the URL starts out as "ac1.mqtt.sx3ac.com", suggesting that the API is using MQTT, is the %Net.HttpRequest adapter the right way to go?

I think it'd be good to take a look at Using the MQTT Adapters to see how the adapter can be used if you haven't already.

Julian Matthews · Jun 5, 2025

Have you tried adding backticks to your double-quotes to escape them?

PS C:\Users\Colin\Desktop> irissession healthconnect -U user 'Say^ImplUtil(`"Your String Here`")'
Julian Matthews · Jun 3, 2025

Hey everyone!

  • Name - Julian
  • Where you’re from / based - United Kingdom
  • What you do (your role / company / areas of interest) - On paper, I'm a developer in the NHS. However like so many here, I'm very much a jack of all trades.
  • Expertise
    • HL7 (V2/FHIR)
    • ObjectScript
    • OpenEHR
    • Poking around so many different clinical systems' front and back ends
  • Fun Fact or Hobbies – I do enjoy flying FPV drones, but live in a country where the weather is constantly fighting against that enjoyment.
  • LinkedIn - it's here however I must warn that my network is an eclectic mix of our industry, and then people in the space of event and crowd safety, with some drones thrown into the mix.
Julian Matthews · Jun 2, 2025

It is technically possible, but you are somewhat limited by your receiving endpoint's capability to receive these in batch.

It relies on the use of FHS (File Header) and BHS (Batch Header) segments at the start of your batch message, followed by all of the content you wish to send in that batch, finalised with BTS (Batch Trailer) and FTS (File Trailer) segments.

You may want to look first at how IRIS will display a batch message within the system here and look here as to how you can work with them. But I do want to stress again that you will want to ensure first and foremost that the system you want to send this to can actually support it so that you don't lose time to building a solution using this method only to find it will never work.
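To illustrate the shape described above, a minimal sketch of a batch containing two messages (all field values are placeholders, and your receiving system's expectations may differ):

```
FHS|^~\&|SendingApp|SendingFac|ReceivingApp|ReceivingFac|20250101120000
BHS|^~\&|SendingApp|SendingFac|ReceivingApp|ReceivingFac|20250101120000
MSH|^~\&|SendingApp|SendingFac|ReceivingApp|ReceivingFac|20250101120000||ADT^A08|0001|P|2.3
PID|1||123456||SMITH^JOHN
MSH|^~\&|SendingApp|SendingFac|ReceivingApp|ReceivingFac|20250101120100||ADT^A08|0002|P|2.3
PID|1||654321||BEESLY^PAM
BTS|2
FTS|1
```

Note that BTS:1 carries the count of messages in the batch, and FTS:1 the count of batches in the file.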

Julian Matthews · May 20, 2025

Out of interest:

  • Are both instances running on the same version of Windows including architecture?
  • Are both instances using the same license type for IRIS?
    • I'm not sure if one using a community/temp license vs a "full" license could have this effect, but thought it's worth asking.
  • If you call $System.Util.CreateGUID() without writing it to the database, does the CPU still hit 100% on the single core (just thinking this could point to the CreateGUID call being the bottleneck instead of the DB write)
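For the last bullet, a quick sketch you could paste into a terminal session to exercise the call in isolation (the iteration count is arbitrary):

```objectscript
// Generate GUIDs in a tight loop without any database writes,
// then watch CPU usage to see whether CreateGUID alone
// saturates a single core.
For i=1:1:1000000 {
    Set guid = $System.Util.CreateGUID()
}
```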
Julian Matthews · Apr 22, 2025

It's slightly dependent on the HL7 version and message type, but assuming you're working with ORM_O01 messages using HL7 V2.3, you have a few options:

Example 1, Multiple Ifs, one for each variation (I only did 2 in the screenshot to save time) :

Example 2, If/Else where we assume that we only want "I" in OBR:18 when PV1:2 is "I", and otherwise set to "O": 

Example 3, use the $CASE function:

All of these can be combined with the response you just got on the same question (HL7 DTL formatting | InterSystems Developer Community | Business Process), assuming you want to put them into a subtransform, but I'd start with what I have shared here first, and venture into subtransforms and auxiliary-based configs after you get this nailed down.

Julian Matthews · Apr 22, 2025

To give an example of where I have done something similar:

I have a Service that will check if the active mirror has changed since the last poll, and will trigger a message to the "Ens.Alert" component in the production if it has changed.

To do this, I have a service with the following code:

Class Demo.Monitoring.SystemMonitor Extends Ens.BusinessService
{

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject, ByRef pHint As %String) As %Status
{
	Set tsc = ..CheckMirroring()
	Quit tsc
}

Method CheckMirroring() As %Status
{
	Set triggered = 0
	//Get the current server
	Set CurrServer = $PIECE($SYSTEM,":")

	/**
	Check Global exists, and create it if it does not.
	(Should really only ever happen once on deployment, but this is a failsafe)
	**/
	Set GBLCHK = $DATA(^$GLOBAL("^zMirrorName"))
	If GBLCHK = 0 {
		Set ^zMirrorName = CurrServer
		Quit $$$OK //No need to evaluate on first run
	}

	If ^zMirrorName = CurrServer {
		/*Do not Alert*/
		Quit $$$OK
	}
	Else {
		/*Alert*/
		Set AlertMessage = "The currently active server has changed since the last check, suggesting a mirror fail over."
		Set AlertMessage = AlertMessage_" The previous server was "_^zMirrorName_" and the current server is "_CurrServer_"."
		Set ^zMirrorName = CurrServer

		Set req = ##class(Ens.AlertRequest).%New()
		Set req.SourceConfigName = "System Monitor"
		Set req.AlertText = AlertMessage
		Set req.AlertTime = $ZDATETIME($HOROLOG,3)_".000"
		Set tSC = ..SendRequestSync("Ens.Alert", req)
		Quit tSC
	}
}

}

Then, from within my production, I have the service that uses this class configured with "CallInterval" set to the desired frequency of running:

Julian Matthews · Apr 16, 2025

Assuming you are iterating through the OBXs, you can simply add a variable that you increment for each OBX you are processing.

Assuming your code is still similar to your code from your other account, you should have an incrementing variable of i. 

Try sticking in

Do pOutput.SetValueAt(i,"ORCgrp(1).OBRuniongrp.OBXgrp("_i_").OBX:4")
Julian Matthews · Apr 14, 2025

Setup 2 is my preferable option for a few reasons:

  1. If I need to pause messaging to a specific endpoint and replay a few messages, I can pause the traffic at the Router and then work with the Operation without impacting the traffic to the other endpoints.
  2. If a receiving system's flow needs to pass through a more complex process that is synchronous, I can enforce the synchronicity at that system's router/process, again without any impact on the other messaging flows.
  3. Any errors returned from an Operation to the Router are sent only to that system's specific router.
  4. System-specific transformations and routing rules can be held within the system-specific router rather than in one mega-router.

So in effect, I would suggest something like:

Any common transforms you need to apply can then go in at the System A router (for example, I had a system with a bug in the outputs we would get, so worked around it once earlier in the flow rather than trying to fix it in every transform for the downstream systems).

Beyond that, I'd recommend the routing rules in the System A router be minimal, but still apply a base level of filtering based on the message types the downstream routers will be receiving. There's no point sending System D A04 messages if that router will never process them. However, if System D will receive A08s when PV1:2 is "E", I'd allow all A08s from System A's router to go to System D's router, and then do the explicit check for A08s when PV1:2 is "E" within the rules of System D's router.