Hey David.

I'll give XPath a try and see how I get on and update accordingly.

Edit:

It seems that trying to use XPath for the elements within the CDATA block fails, and attempting to use XPath for <jkl> returns the content of the entire CDATA block as a stream object. However, I think this is a limitation of the XML I am working with rather than of XPath itself.
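As a rough sketch of what I mean, using %XML.XPATH.Document (the XML and element names here are invented stand-ins for my real document):

```objectscript
Set xml = "<abc><jkl><![CDATA[<inner>value</inner>]]></jkl></abc>"
Set tSC = ##class(%XML.XPATH.Document).CreateFromString(xml,.tDoc)
// Trying to match an element inside the CDATA block finds nothing -
// the parser treats the CDATA contents as plain text, not elements
Set tSC = tDoc.EvaluateExpression("/abc/jkl","inner",.tResults)
// Matching <jkl> itself returns the whole CDATA content as a stream
Set tSC = tDoc.EvaluateExpression("/abc","jkl",.tResults)
```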

Hey Jainam.

I don't get a lot of hands-on time with ASTM messaging but, from what you've shared, what you're receiving is using carriage returns as terminators.

The TCP adapter does have a "Terminators" property, which is set to ASCII character 10 (line feed), but this value is not configurable from the Management Portal (as stated in the documentation).

I'm not sure if this is what you tried when you said "I've modified the class file", and I can't say for sure that this will work as I have no way to test it locally. However, you could try creating a custom adapter which extends the built-in adapter and overrides the property, and then a custom service which extends the built-in service and uses the custom adapter.

Something like:

Class Custom.EDI.ASTM.Adapter.TCPAdapter Extends EnsLib.EDI.ASTM.Adapter.TCPAdapter
{
/// Overriding the original value of $C(10) (line feed) with $C(13) (carriage return)
Property Terminators As %String [ InitialExpression = {$C(13)} ];
}

and 

Class Custom.EDI.ASTM.Service.TCPService Extends EnsLib.EDI.ASTM.Service.TCPService
{

Parameter ADAPTER = "Custom.EDI.ASTM.Adapter.TCPAdapter";

}

And finally, use the custom service in your production.

If this doesn't work, you might need to look into what your sending system is actually outputting for the carriage returns/line feeds.

Hey Joel.

Absolutely - there are a couple of levels of ambiguity for the IDE without throwing in a #dim to tell it what that response object could be.

The biggest is that the target of the ..SendRequestSync/Async call is something that can be (and, as good practice, should be) configured within the Interoperability Production. The IDE therefore has nothing to work from when identifying what the response object will be, and so cannot provide code completion. Even if a developer declares the target dispatch name as a string within the code rather than configuring it in the Production directly, the link between that name and the target class still lives in the Production, and may not even exist there at the time of development.

It's great to see the effort put into improving the Language Server, which has massively reduced the need to use #dim, but I don't think we'll see the last of it for some time.

I did post an example last year of how I traditionally would use #DIM, and it was (rightly) pointed out that the scenario was no longer required due to how VSCode behaves these days (well, the Language Server extension used in VSCode).

However today I have found a more common example where #DIM seems to be needed for code completion, based around using Business Services/Processes within an Interoperability Production.

Class Some.Business.Process Extends Ens.BusinessProcess
{

Property OnwardTarget As Ens.DataType.ConfigName;

Parameter SETTINGS = "OnwardTarget";

Method OnRequest(pRequest As Ens.Request, Output pResponse As Ens.Response) As %Status
{
    //Some code above
	
	Set someReqMsg = ##Class(Some.Req.Msg).%New()
	Set someReqMsg.abc = "123"
	Set tSC = ..SendRequestSync(..OnwardTarget,someReqMsg,.someRespMsg)
	#; No autocomplete on someRespMsg.xyz without #dim before ..SendRequestSync
	
	//Some code below
	
	Quit $$$OK
}

}

Without the #dim in the above, there is no context for the IDE to know what the someRespMsg returned from the ..SendRequestSync() call will be.

That said, its use is certainly dwindling compared to pre-VSCode times, and its continued usage is often more from habit than necessity (I'm certainly guilty of this).

Ahh, I see. So I was trying to call a static method as if it was a constructor. Thank you for showing me this!

Interestingly, if I try your examples, I can recreate the ambiguity error, but I then get an error when attempting to specify the full parameter specification:

Method not found: System.Convert.ToBoolean(int)

However this + the reply from Enrico has got me on a good footing, so will try a few things and see where I land.

This is really helpful to know. I think I have seen it not work like this in the past and picked it up as a habit, but I can't say for sure.

To test this for myself, I have written out the above pseudocode into real classes and found that code completion doesn't kick in inside the $$$TRACE macro's parameters, but does outside of them:
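A minimal reproduction along these lines (inside a business host method, with the class names from my earlier example):

```objectscript
Set Employee = ##class(DM.Employee).%New()
// Code completion works here when typing "Employee."
Set name = Employee.Name
// ...but not here, inside the macro's parameter list
$$$TRACE(Employee.Name)
```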

I'll try to get this logged via GitHub later today, assuming this isn't intended behaviour for the Language Server.

EDIT:

Logged here and I realise that my habit of using #Dim for Intellisense 100% comes from my historic use of Studio.

Hey Scott.

The call to ##class(%SYS.OAuth2.AccessToken).IsAuthorized() will return the most recent active token (if available). So a token generated in a BS running every hour would be the same token returned when your BP makes a call to ##class(%SYS.OAuth2.AccessToken).IsAuthorized(). 

If you were to do this and still keep the code in the BP per your code block, you'd find that your BP would almost never hit the if 'isAuth condition, unless the token happened to expire between your BS's next run and the BP attempting to use that token.

That all said, would it save much time to have the BS running the task every hour, versus checking the validity of the token at the point of use and only reauthorising when it's no longer valid?
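A rough sketch of the check-at-point-of-use approach (the application name and scope here are placeholders, and this assumes a client_credentials grant):

```objectscript
// Check for a valid access token for this OAuth2 client application
Set isAuth = ##class(%SYS.OAuth2.AccessToken).IsAuthorized("MyOAuthApp",,"my.scope",.accessToken,,,.error)
If 'isAuth {
    // No valid token - request a new one before making the outbound call
    Set tSC = ##class(%SYS.OAuth2.Authorization).GetAccessTokenClient("MyOAuthApp","my.scope",,.error)
    If $$$ISERR(tSC) Quit tSC
    // Fetch the newly stored token
    Set isAuth = ##class(%SYS.OAuth2.AccessToken).IsAuthorized("MyOAuthApp",,"my.scope",.accessToken,,,.error)
}
```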

Hey Jonathan.

The utility function ..CurrentDateTime() should return a value based on the server's local timezone. As you're in EST, I'm guessing the server is set to UTC/GMT. You might want to address this early rather than coding around it, as the timezone could later be corrected at an OS level and you'd suddenly be subtracting 50000 from an already-correct result.
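A quick way to check what the server thinks local time is, versus UTC, is to compare $HOROLOG and $ZTIMESTAMP in a terminal session:

```objectscript
// $HOROLOG is the server's local date/time; $ZTIMESTAMP is always UTC
Write $ZDATETIME($HOROLOG,3),!
Write $ZDATETIME($ZTIMESTAMP,3),!
// If these two match but your wall clock disagrees, the OS timezone is off
```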

I tend to break up my usage between SET when an object is being created within the class, and #DIM when working with responses from another ClassMethod. For example:

Set SearchParams = ##Class(DM.EmployeeSearch).%New()
Set SearchParams.EmployeeID = "042114040518" ; quoted so the leading zero is kept
#DIM Employee As DM.Employee
Set tSC = ##class(DM.Employee).EmployeeSearch(SearchParams,.Employee)
If '$$$ISERR(tSC){
    $$$TRACE(Employee.Name) //Should trace "Jim Halpert"
}

Additional question:

When using the EnsLib.File.PassthroughService, I seem to be seeing a lot of orphan Ens.StreamContainer entries.

By all accounts, the Ens.StreamContainer entries are being sent from a service to a process, and each does gain an entry in the Ens.MessageHeader table on creation.

When attempting to manually delete these Ens.StreamContainer entries, I do get an error:

SQLCODE: <-412>:<General stream error> [%msg: <Error attempting to delete stream object for field StreamFC: ERROR #5019: Cannot delete file '\\original\path\to\file.here'>]

I'm assuming this is due to the behaviour of EnsLib.File.PassthroughService, which deletes the file once processed; the delete of the Ens.StreamContainer then attempts to remove a file that no longer exists and fails. The Ens.MessageHeader entry, however, is successfully deleted, creating the orphan.

Should this be the case?

Edit: This does not seem to be limited to EnsLib.File.PassthroughService, as it is also present with the EnsLib.HL7.Service.FileService service.

Based on this, is it fair to assume that the following alteration to the SQL query in Scott's above post would give a truer reflection of orphaned messages?

SELECT HL7.ID,HL7.DocType,HL7.Envelope,HL7.Identifier,HL7.MessageTypeCategory,HL7.Name,HL7.OriginalDocId,HL7.ParentId, HL7.TimeCreated
FROM EnsLib_HL7.Message HL7
LEFT JOIN Ens.MessageHeader hdr
ON HL7.Id=hdr.MessageBodyId
LEFT JOIN Ens_Util.IOLogObj ack
ON HL7.Id = ack.InObjectId
WHERE hdr.MessageBodyId IS NULL AND ack.InObjectId IS NULL

If you're looking to simply convert timezone codes to UTC offsets, I'd set up a simple utility that returns the desired string based on a $CASE statement.

Something like:

ClassMethod DisplayTimezoneToUTCOffset(pInput As %String) As %String
{
    Quit $CASE($ZCONVERT(pInput,"U"),"CEST":"+02:00","CET":"+01:00","BST":"+01:00",:"")
}

If you then want to build the entire string, you could do similar to above for the months, and then have the following:

Class Demo.Utils.ExampleUtils.TimeConvert
{
ClassMethod DisplayDateTimeToLiteral(pInput As %String) As %String
{
    // Deconstruct the input string into its logical parts.
    // Variables starting with d are "display" values that will be converted later.
    Set DayOfWeek = $P(pInput," ",1) //Not used
    Set dMonth = $P(pInput," ",2)
    Set Day = $P(pInput," ",3)
    Set Time = $P(pInput," ",4)
    Set dTimeZone = $P(pInput," ",5)
    Set Year = $P(pInput," ",6)

    //Return the final timestamp
    Quit Year_"-"_..DisplayMonthToLiteral(dMonth)_"-"_Day_" "_Time_..DisplayTimezoneToUTCOffset(dTimeZone)
}

ClassMethod DisplayTimezoneToUTCOffset(pInput As %String) As %String
{
    Quit $CASE($ZCONVERT(pInput,"U"),"CEST":"+02:00","CET":"+01:00","BST":"+01:00",:"")
}

ClassMethod DisplayMonthToLiteral(pInput As %String) As %String
{
    // Remaining months omitted for brevity
    Quit $CASE($ZCONVERT(pInput,"U"),"JAN":"01","FEB":"02","MAR":"03","JUL":"07",:"")
}

}

Which then gives the following:
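For example, a hypothetical invocation (the input string here is invented for illustration):

```objectscript
Write ##class(Demo.Utils.ExampleUtils.TimeConvert).DisplayDateTimeToLiteral("Tue Jul 22 14:30:00 CEST 2025")
// 2025-07-22 14:30:00+02:00
```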