go to post Julian Matthews · Nov 27, 2024 Hey Brad. Apologies, I'm not entirely sure why I typed unicode in full upper-case when that's not present in the helper dialog or the drop down. How confident are you that what you're receiving is actually unicode? By default the adapter will look at what's in MSH:18 and will only use the selection in the adapter setting if this is blank in the message. Firstly, try setting this to "!latin1" (without the quotes) to force it to operate as latin1, as per the support info for DefCharEncoding: Putting ! before the encoding name will force the use of the named encoding and will ignore any value found in MSH:18. If that fails, I'd then cycle through the other options, starting with "!utf-8" and then one of the variants of Unicode available in the drop down. Be careful - there are some overlaps when it comes to some encodings where things look fine until certain symbols come into play, at which point you end up with some interesting outputs.
go to post Julian Matthews · Nov 26, 2024 Hey Brad. The adapter has two sets of options here, which can lead to confusion: the Charset setting for the File adapter elements, and the Default Char Encoding for the HL7 adapter elements. As a starting point, I would try changing the Charset setting to Binary, and then setting the DefCharEncoding to UNICODE to match what is in your header.
go to post Julian Matthews · Nov 18, 2024 Please look here for the solution: https://community.intersystems.com/post/base-64-message-taking-consideri...
go to post Julian Matthews · Nov 15, 2024 So to do what you're trying to do in your DTL, add in a code block and paste in the following:

Set CHUNKSIZE = 2097144
Set outputStream=##class(%Stream.TmpCharacter).%New()
Do source.EncodedPdf.Rewind()
While ('source.EncodedPdf.AtEnd) {
    Set tReadLen=CHUNKSIZE
    Set tChunk=source.EncodedPdf.Read(.tReadLen)
    Do outputStream.Write($SYSTEM.Encryption.Base64Encode(tChunk,1))
}
Do outputStream.Rewind()
Set Status = target.StoreFieldStreamRaw(outputStream,"OBXgrp(1).OBX:5.5")

Yours is almost doing the same thing but, as Enrico points out with your code sample, you have the "Set tSC = tStream.Write($C(10))" line adding in the line breaks, whereas my example excludes this. Separately, as alluded to by Scott, when adding the base 64 encoded PDF stream to the HL7, you'll want to use the StoreFieldStreamRaw method for the HL7. Trying to do a traditional set with a .Read() risks the input being truncated.
go to post Julian Matthews · Nov 15, 2024 Hey Smythe. Your Base64 has line breaks, so it wraps onto new lines which are then being read as new lines in the HL7. Depending on what method you are using to convert the PDF to Base64, you should have a setting to not use line breaks.
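For example, if the Base64 encoding is being done in ObjectScript, $SYSTEM.Encryption.Base64Encode takes a flag to suppress the inserted line breaks. A minimal sketch, assuming the PDF is already open as a binary stream (pInput is just an illustrative name):

// Minimal sketch: base64-encode a binary stream without inserted line breaks.
// pInput is assumed to be an already-opened %Stream.FileBinary (hypothetical name).
Set tEncoded=##class(%Stream.TmpCharacter).%New()
Do pInput.Rewind()
While ('pInput.AtEnd) {
    // chunk size kept to a multiple of 3 so the encoded chunks concatenate cleanly
    Set tLen=2097144
    Set tChunk=pInput.Read(.tLen)
    // the second argument of 1 tells Base64Encode not to insert CR/LF line breaks
    Do tEncoded.Write($SYSTEM.Encryption.Base64Encode(tChunk,1))
}
Do tEncoded.Rewind()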
go to post Julian Matthews · Oct 22, 2024 It's a bodge, but can you create a new namespace with the same name, delete the task, and then delete the namespace again?
go to post Julian Matthews · Oct 21, 2024 Hey Anthony. Depending on your version of IRIS, I would recommend swapping out your use of %GlobalCharacterStream with %Stream.GlobalCharacter, as the former is deprecated. Additionally, I would recommend swapping them out for their temp counterparts so you're not inadvertently creating loads of orphaned global streams, especially where you're dealing with files of this size.
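For illustration, a minimal sketch of the swap, assuming the stream only needs to live for the duration of the processing:

// Deprecated / persistent approach:
//   Set tStream=##class(%GlobalCharacterStream).%New()
// Temporary counterpart - the data lives in IRISTEMP, so nothing is left
// behind as an orphaned global stream if it's never explicitly saved:
Set tStream=##class(%Stream.TmpCharacter).%New()
Do tStream.Write("...large content...")
Do tStream.Rewind()
While ('tStream.AtEnd) {
    Write tStream.Read(32000)
}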
go to post Julian Matthews · Oct 18, 2024 It's a wild shot in the dark, but looking here: https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=ESQL_adapter_methods_creating#ESQL_transactions has a try/catch where the catch has the following:

catch err {
    if (err.%ClassName(1)="common.err.exception") && ($$$ISERR(err.status)) {
        set tSC = err.status
    } else {
        set tSC = $system.Status.Error(err.Code,err.Name,err.Location,err.InnerException)
    }
}

If you try to recreate this, does the code you're looking for appear in either err.Code, err.Name, err.Location, or err.InnerException?
go to post Julian Matthews · Sep 24, 2024 I thought that this would be a case of the tilde being a special character for your target document, due to its common use in HL7 for repeating fields. However, I ran a test to see what I got when trying this. I created a transform for a PV1 segment and attempted to set the value of PV1:1 to the output of the replace function, with an input string that contained a few commas. When I ran this, not only did it successfully replace the commas with tildes, but the virtual document now sees it as a repeating field (even though the field is not repeating in its specification). I know this doesn't directly help you, but wanted to share my results in case it helped lead you to finding a solution. (For ref, this is from version 2022.1 of IRIS for Health.)
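For reference, the equivalent of what I tested expressed as a plain code action would be something like this (the field and values are just from my test):

// Replace commas with tildes in PV1:1 of an EnsLib.HL7.Message
Set tValue=source.GetValueAt("PV1:1")
Do target.SetValueAt($REPLACE(tValue,",","~"),"PV1:1")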
go to post Julian Matthews · Sep 10, 2024 Ahh, I see what you mean. I have not had to work in this way with IRIS and OpenEHR, so unfortunately I wouldn't be able to provide much insight.
go to post Julian Matthews · Sep 9, 2024 Hey Joost. What do you mean when you say "handle"? Our interactions with OpenEHR for reading and creating/updating compositions are generally via traditional HTTP-based APIs, which is relatively simple to set up with IRIS.
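As a rough sketch of that (the host, path and IDs are placeholders, and this assumes a JSON response from the openEHR REST API):

// Minimal sketch: fetch an openEHR composition over HTTP from ObjectScript.
// Server, port, path and IDs below are placeholders - adjust for your CDR.
Set tRequest=##class(%Net.HttpRequest).%New()
Set tRequest.Server="openehr.example.org"
Set tRequest.Port=8080
Do tRequest.SetHeader("Accept","application/json")
Set tSC=tRequest.Get("/ehrbase/rest/openehr/v1/ehr/<ehr_id>/composition/<uid>")
If $$$ISOK(tSC) {
    Write tRequest.HttpResponse.StatusCode,!
    Write tRequest.HttpResponse.Data.Read(32000)
}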
go to post Julian Matthews · Sep 9, 2024 I 100% agree with Eduard. Even back when I had two mirrored instances sat running in the same physical location, we were saved many times by mirroring when there had been issues with either IRIS or the server/OS itself. It's also very helpful for managing upgrades, and even server migrations (by adding in the new servers as async members, and then demoting a failover member on an old server and promoting a new server from async to failover).
go to post Julian Matthews · Sep 4, 2024 Jumping off of the answer I gave here earlier today, and being in a country currently observing(?) BST, you'll want to use the following approach: Use $ZDATETIMEH with the dformat of 3 and tformat of 7 (note for the tformat that it's expecting the input as UTC, which is what you have with your source timestamp). Use $ZDATETIME with the dformat of 8 and tformat of 1. Realise quickly that the tformat includes separators between the date and time, and within the time itself, so wrap it all in $ZSTRIP to remove all punctuation and whitespace... Basically this: WRITE $ZSTRIP($ZDATETIME($ZDATETIMEH("2023-09-28T20:35:41Z",3,7),8,1),"*P") gives you the timestamp converted to local time with the separators stripped out. Demonstrating this with a date that isn't affected by BST, you'll note that the time stays the same as the input because BST isn't in effect in the winter, taking the timezone I'm in back to GMT. I hope this helps!
go to post Julian Matthews · Sep 4, 2024 The link in my last reply actually contains the answer, which is always useful. I have tweaked it slightly so that it's a single line, but the output is the same. To get the current date and time with milliseconds, you can do the following: WRITE $ZDATETIME($ZDATETIMEH($ZTIMESTAMP,-3),3,1,3) This is: starting with the output of $ZTIMESTAMP, converting to a $HOROLOG format adjusted for your local timezone using $ZDATETIMEH, and then converting to the desired format using $ZDATETIME. I hope this helps!
go to post Julian Matthews · Sep 3, 2024 This will be that caveat I warned of, which is detailed in the documentation. You could do something like: Write $ZDATETIME($h_"."_$P($NOW(),".",2),3,1,3) which takes the value of $H and appends the milliseconds from $NOW() to form your datestamp. However, the documentation I linked to warns that there can be a discrepancy between $H and $NOW(), so this approach could lead to your timestamp being off by up to a second. As you are trying to work to the level of milliseconds, I suspect accuracy is very important and therefore I would not recommend this approach. Take a look here and see if the example of comparing $h, $ZTIMESTAMP, and $NOW() helps, along with the example of converting from UTC to the local timezone.
go to post Julian Matthews · Sep 3, 2024 If you replace $h with $NOW(), this should do as you need. However, there is a caveat with regards to timezones mentioned in the online documentation that you may want to review to ensure it works as you'd expect and need.
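In other words, something along these lines (a quick sketch; the arguments 3,1,3 give an ODBC-format date, 24-hour time, and three decimal places of seconds):

// $NOW() returns the local date/time in $HOROLOG format with fractional seconds,
// so it can be passed straight into $ZDATETIME for millisecond precision
Write $ZDATETIME($NOW(),3,1,3)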
go to post Julian Matthews · Aug 30, 2024 Have you tried to open the pdf in a text editor like notepad++ to see what it looks like? It might be that the stream is incomplete, or you're writing the base64 to the file output without decoding?
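If it turns out the file still contains base64, decoding it back out to a binary PDF would look something like this (a sketch only; tEncodedStream and the output path are placeholders, and the read length is a multiple of 4 so no base64 quantum gets split across chunks):

// Sketch: decode a base64 text stream back into a binary PDF file.
// tEncodedStream is a placeholder for wherever the base64 text is held.
Set tFile=##class(%Stream.FileBinary).%New()
Set tSC=tFile.LinkToFile("/tmp/output.pdf")
Do tEncodedStream.Rewind()
While ('tEncodedStream.AtEnd) {
    // read in multiples of 4 characters so each chunk decodes cleanly
    Set tLen=32000
    Set tChunk=tEncodedStream.Read(.tLen)
    Do tFile.Write($SYSTEM.Encryption.Base64Decode(tChunk))
}
Set tSC=tFile.%Save()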
go to post Julian Matthews · Aug 1, 2024 Hi David. As Luis has stated, this doesn't allow you to make direct changes to the message. However, you can use this to set a variable that can then be referenced within a transformation. The Property variable can only be "RuleActionUserData". You set this in an action within the rule, and then within the DTL you can reference "aux.RuleActionUserData".
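As a rough sketch of what that reference looks like inside a DTL code action (the target field used here is just an example):

// aux carries the RuleActionUserData value set by the routing rule action
If aux.RuleActionUserData'="" {
    // copy the value set by the rule action into an arbitrary example field
    Do target.SetValueAt(aux.RuleActionUserData,"MSH:4")
}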
go to post Julian Matthews · Jul 26, 2024 Although I have seen environments where namespaces are used to separate Dev/Test/Prod etc., I have found that having the Prod namespaces on the same instance as the Non-Prod namespaces is a risk to Prod should an active piece of development take down the underlying machine (one example was a developer* making a mistake when working with Streams and creating an infinite loop in writing a stream; the server very quickly gained a 10GB pdf that filled the disk to capacity and the environment stopped working). A common use case for multiple namespaces for me would be instances where the activity within the namespace is significantly distinct from the others. For example, we have a namespace that is dedicated to DICOM activity. While we could have put this in a single "LIVE" themed namespace, the volume of activity we'd see for DICOM would have filled our server's disk if kept to the same retention period as our other namespaces. So we have a DICOM namespace that has a retention period of around 14 days, compared to others that are between 30 and 120 days. *It was me. I was that developer.
go to post Julian Matthews · Jul 18, 2024 Thanks Scott. I'm also not rushing to delete based on counts, but it's still interesting to review. I ran the "Complete Ensemble Message Body Report" from the Gist in Suriya's post against a namespace and it ran for about 8 hours, which now has me nervous to run the Delete option. Although, to be fair, this is a namespace that has been in operation for about 10 years, so I might start smaller and work my way up.