Julian Matthews · Apr 5, 2024

As a short-term approach, you may want to look into using stunnel in client mode to encrypt the traffic, and then set something up along the lines of the sketch below. This would mean that the traffic between your 2016 instance and stunnel is unencrypted but all on the same machine, and stunnel then handles the encryption between your machine and the external site using TLS 1.3. However, even if you go this route, I would still recommend getting the process started for upgrading to a newer version.
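To give a rough idea only (the host name and ports here are placeholders, not taken from the original thread), a minimal stunnel client-mode configuration might look something like this:

client = yes

[outbound-tls]
; local, unencrypted side that the 2016 instance connects to
accept = 127.0.0.1:8080
; remote, TLS-encrypted side
connect = external.example.com:443

The 2016 instance would then point its outbound connection at 127.0.0.1:8080 and let stunnel take care of the TLS negotiation.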
Julian Matthews · Feb 2, 2024

Increasing the pool value will have some effect on RAM and CPU usage, but no more than having other jobs running through the production. If you move all the components over to using the actor pool (by setting the individual pool settings to 0, as in the sketch below), it should be easy enough to raise the value bit by bit while keeping an eye on the performance of the production and the CPU/RAM usage of the machine to find a sweet spot.

If the API just needs a bit of extra resource when there's a small spike in inbound requests, this shouldn't be of too much concern, as it will calm down once it has processed what has been requested. If, however, there's a chance it could be overloaded by inbound requests and the worry is that the server won't cope, then maybe look at using InterSystems API Manager in front of the environment and make use of features like rate limiting.

You could go even further and begin caching responses, so that if the API is being queried for data that isn't changing much, there's less processing being done when the same information is requested multiple times in quick succession. You could build your own solution with Caché/IRIS, or look at something like Redis.
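As a hedged sketch (the class and item names here are invented for illustration), the relevant settings live in the production class definition: an item with PoolSize="0" runs on the shared actor pool, whose size is set at production level:

Class Demo.Production Extends Ens.Production
{

XData ProductionDefinition
{
<Production Name="Demo.Production" LogGeneralTraceEvents="false">
  <ActorPoolSize>4</ActorPoolSize>
  <!-- PoolSize="0" means this component uses the shared actor pool above -->
  <Item Name="Demo.API.Process" ClassName="Demo.API.Process" PoolSize="0" Enabled="true">
  </Item>
</Production>
}

}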
Julian Matthews · Feb 2, 2024

"I'm thinking to increase the pool parameter, but I'm not sure if it's a good idea."

If you are not concerned about the order in which you process the inbound requests, then upping the pool size to the number of parallel jobs you're looking to run should do what you need. However, you may then also need to apply this logic to the related components that the process interacts with, otherwise you will just end up moving the bottleneck to another component.

Alternatively, if it fits your use case, you could use the actor pool for your production components and increase it to the point where the bottleneck drops off. Paolo has provided the link to the documentation on pools, which has some info on the considerations for using the two different types of pool.
Julian Matthews · Dec 22, 2023

Hi Nimisha. I see what you mean now. It does seem like the code within a code block doesn't have access to the methods from Ens.BusinessProcess. I suspect your only option is to use the "Call" activity with a "Sync" after it.
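For reference, a rough fragment of what that might look like inside the BPL XData (the target, request and response class names here are hypothetical, and the sync only comes into play when the call is asynchronous):

<call name="CallOut" target="Demo.Operation" async="1">
  <request type="Demo.Request">
    <assign property="callrequest.Value" value="request.Value" action="set" />
  </request>
  <response type="Demo.Response">
    <assign property="context.Result" value="callresponse.Result" action="set" />
  </response>
</call>
<!-- Wait for the named call before continuing, so the response is available in context -->
<sync calls="CallOut" type="all" />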
Julian Matthews · Dec 22, 2023

Thanks Luis. The issue I'd have is that the clock for the poll interval starts at the point the service is started, so a restart of the server/production would shift the time of day it tries to run, which would not be ideal if I needed a single run at a specific time of day. I might try a combination of a large poll interval and defining a schedule (based on the other responses, along the lines of the sketch below) and see if that has the desired effect, but I may need to just concede and continue using the Task Manager. 🙂
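The schedule I have in mind would be a setting value along these lines (the times are only an example, and this is an untested sketch): a short daily window which, combined with a poll interval longer than the window itself, should give a single run per day.

START:*-*-*T02:00:00,STOP:*-*-*T02:10:00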
Julian Matthews · Dec 22, 2023

Are you able to share the full error you're seeing? I have just tested this, and I'm getting no such errors when compiling.
Julian Matthews · Dec 21, 2023

Hi Mary - thank you for your reply. Unfortunately the Schedule option isn't suitable where we need the job to run only once at a set time per day. Also, our current solution is very similar to the accepted answer from Ashok Kumar, which is what I was hoping to simplify in some way.
Julian Matthews · Nov 8, 2023

"This may cause some regions to opt for FHIR as a response while others opt for other solutions such as OpenEHR."

There's no reason for these to be seen as competing standards. A model where the data storage is OpenEHR and the data transfer is FHIR is seen by some as the best of both worlds 🙂
Julian Matthews · Oct 11, 2023

I'm not sure there's a way to select it when creating a production export. The data is stored within the global "Ens.Config.BusinessPartnerD", which you could export separately and then import into your new environment - something like the sketch below, perhaps?
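As a hedged example (the file path is just a placeholder), the global could be exported and re-imported using $SYSTEM.OBJ:

// On the source system: export the Business Partners global to an XML file
Do $SYSTEM.OBJ.Export("Ens.Config.BusinessPartnerD.GBL", "/tmp/BusinessPartners.xml")

// On the target system: load the exported file
Do $SYSTEM.OBJ.Load("/tmp/BusinessPartners.xml")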
Julian Matthews · Oct 11, 2023

I haven't seen anything "official", but there was this article from 2021 that provided a way to recreate it: VSCode Tips & Tricks - SOAP Wizard | InterSystems Developer Community
Julian Matthews · Sep 22, 2023

Hey Christine. If I'm reading your question and subsequent replies correctly, you're trying to take the value of PV1:7.1 and then use it in a SQL query. The answer has been given by Ashok when you put their replies together, but hopefully putting it all into a single response will make things easier to follow. If this is the case, then you will want to do the following three steps, sketched below.

Step 1: Set a variable to the value of PV1:7.1.

Step 2: Add a code block, and use this to run your SQL query.

Step 3: Do what you need to with the value of ID - for the sake of this response, I'm just setting the value of PV1:7.2 to the ID returned from the query, which was placed into the variable "Output".

It's worth knowing that, when working with embedded SQL, prefixing a variable with a colon is how you pass variables into and out of the embedded SQL section of code. However, it's a bit clearer when working directly with ObjectScript rather than a DTL. For example, if we had the following table:

ID  COL_A  COL_B
1   ABC    123
2   DEF    234

we could have the following in ObjectScript:

Set X = ""       // X is null
Set Y = "ABC"
&SQL( SELECT COL_B INTO :X FROM TestTable WHERE COL_A = :Y )
WRITE X          // X is 123
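Since the original screenshots haven't carried across, here is a rough sketch of how the three steps might read inside the DTL (the table and column names are made up for illustration, and the exact property paths will depend on your message structure):

// Step 1 (set action): copy PV1:7.1 into a variable, e.g.
//   set tDoctorCode = source.{PV1:7.1}

// Step 2 (code action): run the query with embedded SQL, reading
// tDoctorCode in and writing the result out to the variable Output
&SQL( SELECT ID INTO :Output FROM Demo.DoctorTable WHERE Code = :tDoctorCode )

// Step 3 (set action): use the returned value, e.g.
//   set target.{PV1:7.2} = Output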
Julian Matthews · Sep 21, 2023

I don't believe there is a way of increasing the system limit on string lengths. Even if there is, it's best to approach this by working with the data as a stream. Otherwise you could end up in a cat-and-mouse game of needing to increase the length again the next time you get a larger document.
Julian Matthews · Sep 20, 2023

The input is a string, so the max length will be your system max (which should be 3,641,144). Assuming you're trying to retrieve the stream from an HL7 message, you will probably want to use the built-in method GetFieldStreamBase64. So you could try something like:

Set tStream = ##class(%Stream.TmpBinary).%New()
Set tSC = pHL7.GetFieldStreamBase64(.tStream,"OBX:5")

And then your decoded file would be in the temp stream. (You may need to tweak this slightly depending on how you intend to then use the stream, and on the correct property path of the Base64 within the HL7 message.)
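If the end goal is a file on disk, one way (a sketch only, with a placeholder path) would be to copy the temporary stream into a file stream:

// Copy the decoded binary stream out to a file (path is just an example)
Set tFile = ##class(%Stream.FileBinary).%New()
Set tSC = tFile.LinkToFile("/tmp/decoded-document.pdf")
Do tFile.CopyFrom(tStream)
Set tSC = tFile.%Save()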
Julian Matthews · Sep 14, 2023

Stupidly, no. As the default was set to "TEST" in my function, it worked fine throughout testing. Once the upgrade to Prod occurred, the issue was spotted, and the simple solution was just to reset the values. As I was moving from an ad hoc to an official release, I chalked it up to that. Next time, WRC will be getting a call 🙂
Julian Matthews · Sep 14, 2023

Word of warning for using this - I have had this value be reset during an upgrade of HealthConnect. I was using this value in a function to control the output of a transform, and I hadn't accounted for the chance of it returning null. My code looked something like this:

ClassMethod WhatEnvAmI()
{
    Set Env = $SYSTEM.Version.SystemMode()
    If (Env = "LIVE")||(Env = "FAILOVER"){
        Quit "LIVE"
    }
    Quit "TEST"
}

So, post-upgrade, the transform suddenly began outputting values specific to the test system from the live environment.
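With hindsight, a more defensive version (just a sketch) would surface an empty or unexpected system mode rather than silently treating it as the test environment:

ClassMethod WhatEnvAmI() As %String
{
    Set tEnv = $SYSTEM.Version.SystemMode()
    If (tEnv = "LIVE") || (tEnv = "FAILOVER") Quit "LIVE"
    If (tEnv = "TEST") || (tEnv = "DEVELOPMENT") Quit "TEST"
    // An empty or unrecognised value (e.g. after an upgrade has reset it)
    // is returned as UNKNOWN so the caller can decide what to do
    Quit "UNKNOWN"
}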
Julian Matthews · Sep 6, 2023

This is rather subjective, based on the skill level of the intended audience. You could add a comment to the ClassMethod to provide context on what is being done and why. For example:

/// This ClassMethod takes a delimited String from System-X that consists of sets of Questions and Answers.
/// The Sets are delimited by a pipe "|" and then the questions and answers are delimited by a colon ":"
/// The response from this ClassMethod is a %Library.DynamicArray object containing the questions and answers
ClassMethod createResponse(data As %String(MAXLEN="")) As %Library.DynamicArray
{
    ;1.- Questions splitted by "|"
    Set listQuestions = $LISTFROMSTRING(data, "|")
    Set items = []
    Set questionNumber = 0
    ;2.- Iterate
    For i=1:1:$LISTLENGTH(listQuestions) {
        Set questionAnswer = $LISTGET(listQuestions, i)
        ;3.- Update variables
        Set questionNumber = questionNumber + 1
        Set question = $PIECE(questionAnswer, ":", 1)
        Set answer = $ZSTRIP($PIECE(questionAnswer, ":", 2), "<W") //Get rid of initial whitespace
        ;4.- Generate item
        Set item = {
            "definition": ("question "_(questionNumber)),
            "text": (question),
            "answer": [
                {
                    "valueString": (answer)
                }
            ]
        }
        Do items.%Push(item)
    }
    Quit items
}

Or you could go one step further and be more descriptive with your comment at each action within your code. So, instead of:

;2.- Iterate

you could write something like:

;2.- Iterate through the list of Questions and Answers

If your intended audience is not familiar with ObjectScript, then you may want to introduce them to features in stages. For example, you could use $ZSTRIP on both the question and the answer in your For loop, but only nest it for the answer, and use comments to describe it all. Something like:

// Retrieve the question from the delimited entry
Set tQuestion = $PIECE(questionAnswer, ":", 1)
// Strip any whitespace from the start of the question
Set question = $ZSTRIP(tQuestion, "<W")

// It is also possible to nest functions, so below we will retrieve the answer and remove the whitespace in a single line
Set answer = $ZSTRIP($PIECE(questionAnswer, ":", 2), "<W")
Julian Matthews · Jul 1, 2023

If you were to use the parenthesis () syntax in your routing rule, you could simply make your rule something like the condition sketched below. The reason this works is that using the parenthesis syntax to access repeating values from a message will return all of the entries in the repeating segment as a delimited string*, and you can then check whether the string contains the "-" character using the Contains function.

*Do make sure that the delimiter isn't the character you're looking for. Maybe throw in a trace when you first test this and check how it's returned.
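As a rough sketch of the condition (PID:13 is only an illustrative repeating field here - substitute the field from your own message):

Contains(HL7.{PID:13()},"-")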
Julian Matthews · Jun 26, 2023

Hey Guillaume. Funnily enough, it's one of your GitHub repos where I located the demo I'm trying to use as a jumping-off point (but from https://github.com/grongierisc/InstallEnsDemoHealth/blob/master/src/CLS/Demo/DICOM/Process/WorkList.cls).

Basically, I'm stuck trying to work out whether I should scrap the wakeup calls etc. and just call the external data when I get a C-FIND-RQ message, calling "CreateIntermediateFindResponse" for each result set entry, or whether it's necessary to use the wakeup calls and somehow hold the result set in context, moving to the next result set entry on each Ens.AlarmResponse received.

ETA: The approach taken was to use the initial message as a trigger to call off to an external DB and write the results into a local table, and then use the Ens.AlarmResponse as the trigger to grab the top entry from the local table and return it to the calling system. This then allows a cancel to come in and interrupt the process (the cancel triggers a deletion of the appropriate rows in the local table).
Julian Matthews · Jun 6, 2023

Hey Michael. A good use of a lookup table would be when working with an integration between two systems that use differing codes for the same values. For example, you could have System A that records Sex as 1 for Male, 2 for Female, 0 for Not Known, and 9 for Not Specified, whereas System B uses M for Male, F for Female, and O for Other.

You could have a winding If/Else in a transform, or you could simply reference a lookup table in your DTL using the ..Lookup() function, and then build up your lookup table to look something like this:

Key   Value
0     O
1     M
2     F
9     O

As you can see, System A has more values than System B, so the values for Not Known and Not Specified are being mapped to Other in my example.

Another example could be that you needed to filter messages in a router based on a code within the HL7 message. You could add the codes to a lookup table as the key and a description as the value, and then use the Exists() function within your routing rule. Rough sketches of both are below.
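A hedged sketch of both (the lookup table names are invented, and the field paths are only illustrative):

// In a DTL set action: translate System A's code into System B's,
// falling back to "O" when the incoming code isn't in the table
set target.{PID:8} = ..Lookup("SexCodes",source.{PID:8},"O")

And the routing rule condition could then be along the lines of:

Exists("AllowedEventCodes",HL7.{MSH:9.2})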