As a short-term approach, you may want to look into using Stunnel in client mode to encrypt the traffic, and then set something up similar to the following:
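A minimal sketch of a client-mode stunnel.conf (the service name, addresses, and ports are illustrative, and the exact TLS-version option varies between stunnel builds, so treat this as a starting point rather than a working config):

    client = yes

    [external-api]
    ; unencrypted local endpoint for the 2016 instance to call
    accept = 127.0.0.1:8443
    ; remote endpoint that expects TLS 1.3
    connect = external.example.com:443
    sslVersion = TLSv1.3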

This would mean that the traffic between your 2016 instance and stunnel is unencrypted, but all on the same machine; stunnel then handles the encryption between your machine and the external site using TLS 1.3.

However, even if you go this route, I would still recommend getting the process started for upgrading to a newer version.

Increasing the pool value will have some effect on RAM and CPU usage, but no more than having other jobs running through the production. If you move all the components over to using the actor pool (by setting the individual pool settings to 0), it should be easy enough to raise the value bit by bit, while keeping an eye on the performance of the production and the CPU/RAM usage of the machine, to find a sweet spot.

If the API just needs a bit of extra resource when there's a small spike in the inbound requests, then this should not be too much of a concern, as it will just calm down once it has processed what has been requested.

If, however, there's a chance that it could be overloaded by inbound requests and the worry is that the server won't cope, then maybe look at using InterSystems API Manager to sit in front of the environment and make use of features like rate limiting.

Or you could go even further and begin caching responses to return when the API is being queried for data that isn't changing much, so that there's less processing done per request when the same information is requested multiple times in quick succession. You could build your own solution with Caché/IRIS, or look at something like Redis.
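As a very rough illustration of the roll-your-own route, here's a sketch of caching responses in a global with a time-to-live (the global name ^APICache and the 60-second TTL are assumptions for the example):

ClassMethod GetCached(pKey As %String, Output pResponse As %String) As %Boolean
{
	// current wall-clock time in seconds, derived from $HOROLOG ("days,seconds")
	Set tNow = ($Piece($Horolog,",",1)*86400) + $Piece($Horolog,",",2)
	If $Data(^APICache(pKey)) {
		// each entry is $ListBuild(response, expiry)
		If tNow < $ListGet(^APICache(pKey),2) {
			Set pResponse = $ListGet(^APICache(pKey),1)
			Quit 1
		}
		// expired - remove the stale entry
		Kill ^APICache(pKey)
	}
	Quit 0
}

ClassMethod PutCached(pKey As %String, pResponse As %String)
{
	Set tNow = ($Piece($Horolog,",",1)*86400) + $Piece($Horolog,",",2)
	// keep the response for 60 seconds (an arbitrary TTL for the example)
	Set ^APICache(pKey) = $ListBuild(pResponse, tNow + 60)
}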

I'm thinking of increasing the pool parameter, but I'm not sure if it's a good idea.

If you are not concerned about the order in which you process the inbound requests, then upping the pool size to the number of parallel jobs you're looking to run should do what you need.
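For context, the pool size ends up as the PoolSize attribute on the item in the production definition (the names and value here are made up for illustration):

    <!-- item in the production XData; names and PoolSize value are illustrative -->
    <Item Name="Demo.MyProcess" ClassName="Demo.MyProcessClass" PoolSize="4" Enabled="true">
    </Item>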

However, you may then need to also apply this logic to related components that the Process interacts with; otherwise you will just end up moving the bottleneck to another component.

Alternatively, if it fits your use case, you could use the Actor Pool for your production components and then increase it to a point where you see the bottleneck drop off.

Paolo has provided the link to the documentation on Pools, which has some info on considerations for the use of the two different types of Pool.

Thanks Luis.

The issue I'd have is that the clock for the poll interval starts at the point the service is started, so a restart of the server/production would then shift the time of day it tries to run, which would not be ideal if I needed a single run at a specific time of day. I might try a combination of the large poll interval and defining a schedule (based on the other responses) and see if that has the desired effect, but I may need to just concede and continue using the task manager. 🙂
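(For reference, the schedule I'd be pairing with the long poll interval would be something along the lines of the following Schedule setting value - syntax from memory, so worth double-checking against the documentation:

    START:*-*-*T02:00:00,STOP:*-*-*T02:05:00

i.e. a short daily window, with a poll interval longer than the window so the service only fires once inside it.)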

Hey Christine.

If I'm reading your question and subsequent replies correctly, you're trying to take the value of PV1:7.1 and then use that in a SQL query. Ashok has already given the answer across a few replies, but hopefully pulling it all into a single response will make things easier to follow.

If this is the case, then you will want to do the following:

Step 1: Set a variable to the value of PV1:7.1:
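As a rough sketch, this would be a "set" action assigning the field to a temporary variable (the name IDValue is just for illustration):

    Set IDValue = source.{PV1:7.1}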

Step 2: Add a code block, and use this to run your SQL query:
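A hedged example of what the code action might contain, using Embedded SQL (the table and column names are assumptions for this example; the colon prefix passes variables in and out, as explained below):

    &SQL(
        SELECT ID
        INTO :Output
        FROM My_Schema.MyTable
        WHERE SomeColumn = :IDValue
    )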

Step 3: Do what you need to with the value of ID - for the sake of this response, I'm just setting the value of PV1:7.2 to the ID that the query returned into the variable "Output":
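Again as a "set" action (matching the names used above):

    Set target.{PV1:7.2} = Output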

It's worth knowing that, when working with Embedded SQL, prefixing a variable with a colon is how you pass variables into and out of the Embedded SQL section of code. However, it's a bit clearer when working directly in ObjectScript than in a DTL.

For example, if we had the following table:

ID  COL_A  COL_B
1   ABC    123
2   DEF    234

We could have the following in ObjectScript:

    Set X = ""    // X is null
    Set Y = "ABC"
    &SQL(
        SELECT COL_B
        INTO :X
        FROM TestTable
        WHERE COL_A = :Y
    )
    Write X       // X is now 123

The input is a string, so the max length will be your system max (which should be 3,641,144).

Assuming you're trying to retrieve the stream from an HL7 message, you will probably want to use the built-in method GetFieldStreamBase64.

So you could try something like:

Set tStream = ##class(%Stream.TmpBinary).%New()        // temporary stream to receive the decoded content
Set tSC = pHL7.GetFieldStreamBase64(.tStream,"OBX:5")  // decode the Base64 field into the stream

And then your decoded file would be in the temp stream.

(You may need to tweak this slightly depending on how you intend to then use the stream, and the correct property path of the Base64 within the HL7 message)
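For example, if the end goal is writing the decoded content to disk, a minimal sketch could look like this (the file path is an assumption):

    // link the temp stream content to a file on disk (path is illustrative)
    Set tFile = ##class(%Stream.FileBinary).%New()
    Set tSC = tFile.LinkToFile("C:\Temp\decoded-output.bin")
    Set tSC = tFile.CopyFrom(tStream)
    Set tSC = tFile.%Save()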

Stupidly, no.

As the default was set to "TEST" in my function, it worked fine throughout testing. Once the upgrade to Prod occurred, the issue was spotted, and the simple solution was to just reset the values. As I was moving from an ad hoc to an official release, I chalked it up to that.

Next time, WRC will be getting a call 🙂

Word of warning for using this - I have had this value reset during an upgrade on HealthConnect.

I was using this value in a function to control the output of a Transform, and I hadn't accounted for the chance of this returning null.

My code looked something like this:

ClassMethod WhatEnvAmI() As %String
{
	Set Env = $SYSTEM.Version.SystemMode()

	If (Env = "LIVE")||(Env = "FAILOVER") {
		Quit "LIVE"
	}
	Quit "TEST"
}

So, post upgrade, the transform suddenly began outputting values specific to the test system from the live environment.
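In hindsight, a version that treats an empty value explicitly rather than silently defaulting to "TEST" would have surfaced the problem straight away - a rough sketch:

ClassMethod WhatEnvAmI() As %String
{
	Set Env = $SYSTEM.Version.SystemMode()
	If (Env = "LIVE")||(Env = "FAILOVER") {
		Quit "LIVE"
	}
	// fail loudly if SystemMode has been wiped (e.g. by an upgrade)
	If (Env = "") {
		Throw ##class(%Exception.General).%New("SystemModeNotSet")
	}
	Quit "TEST"
}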