Just a quick stab, but I believe it's because you're extending %Persistent first rather than Ens.Response (and not specifying inheritance from the right), so the message viewer projection isn't being applied appropriately.

You normally wouldn't need to extend %Persistent if these are going to be transient objects that you want to purge via the normal Ensemble purge (be sure to add the appropriate logic to cascade-delete child objects that are not %Serial; there are lots of posts on this "cascade delete" topic).
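For illustration, a minimal sketch of what I mean (the class and property names here are hypothetical):

Class MyPkg.Msg.MyResponse Extends (Ens.Response, %JSON.Adaptor)
{

// Ens.Response is already persistent, so no separate %Persistent superclass is needed
Parameter %JSONIGNOREINVALIDFIELD As BOOLEAN = 1;

Property SomeField As %String(%JSONFIELDNAME = "someField");

}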

Hey Scott - While the order itself isn't important as long as your %JSONFIELDNAME values are proper, my guess is that the current error you are hitting is because one of the sub-class definitions we can't see here is missing this as well:

Parameter %JSONIGNOREINVALIDFIELD As BOOLEAN = 1;

For instance, I bet LastName is a field coming back in the object represented by this property:

Property AttendingPhysicians As list Of User.REST.Epic.dt.ArrayOfAttendingPhysician(%JSONFIELDNAME = "AttendingPhysicians");

So you would have to add that class param to that class as well (or wherever appropriate downstream) unless you intend to handle the LastName field appropriately.
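For example, a sketch of what that downstream class might look like with the parameter added (the ProviderID property is just a guess at its contents; the parameter line is the actual point):

Class User.REST.Epic.dt.ArrayOfAttendingPhysician Extends (%RegisteredObject, %JSON.Adaptor)
{

// Ignore incoming JSON fields (such as LastName) that have no matching property here
Parameter %JSONIGNOREINVALIDFIELD As BOOLEAN = 1;

Property ProviderID As %String(%JSONFIELDNAME = "ProviderID");

}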

Hey Scott; I haven't read through all of this, but use:

Set tSC = pResponse.%JSONImport(tHTTPResponse.Data)
Quit:$$$ISERR(tSC) tSC

That way you can capture why the import is failing.
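If tSC does come back as an error, you can print the reason from a terminal session (or write it to a log/trace) with:

Write $System.Status.GetErrorText(tSC)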

Generally speaking as well, for items like:

Property AppointmentSchedules As User.REST.Epic.dt.ArrayOfScheduleProviderReturn(%JSONFIELDNAME = "AppointmentSchedules");

which I believe references a JSON array of objects, you would need to do the following at a bare minimum:

Property AppointmentSchedules As list Of User.REST.Epic.dt.ArrayOfScheduleProviderReturn(%JSONFIELDNAME = "AppointmentSchedules");

I believe you have several of these situations.

As I mentioned to Bob in ISC Discord chat, I'm in the midst of writing a tech article on podman w/ SAM - there are a few tricks to know even beyond the container download issue you may have. Sorry for the delay on the article - switched companies in the last week or two and my ISC Developer Community account is in transition!

For that download issue, if you can grab the containers it's after on another machine and upload them to /tmp or wherever, podman lets you specify a local source to load them from - podman import is the command to look at - so podman has a copy, and when compose is run, as long as the tags match it'll pull them right in!

Edit: doc reference for podman-import: https://docs.podman.io/en/latest/markdown/podman-import.1.html

I believe you are looking for System Configuration->Security->Applications->Web Applications, then click on the link for the web app that looks like /csp/healthshare/devclin and adjust the Session Timeout setting.

Default is 900 seconds I believe but we often increase ours to 1800 (30 minutes).

However, I'm not sure this will address it for you, as I'm working with a rather large Production as well and my result still comes back within a few seconds. You mentioned an HTTP 500, which is a generic error - I would have expected a 408 (Request Timeout) or 504 (Gateway Timeout) if it were truly a session timeout issue.

I wonder if you can see any error in the Application Error log.

What version are you running where $EXTRACT is not available? Not sure I’ve heard of such a situation.  

Edit: I noted you referenced ..ReplaceStr, which is an Interoperability function. There are semi-equivalents of $EXTRACT and $FIND in there as well - $EXTRACT is ..SubString. But note that if you use the solution I or David presented, don't use .. in front of $EXTRACT, $FIND, or $PIECE, as these aren't Interoperability functions but pure ObjectScript functions.

My suggestion, which I know works because we do something similar, is as follows:

"("_$PIECE(input,"(",2)

Have to re-add the opening paren since we’re using that as our splitter, but some find it easier than chaining multiple functions together. David’s solution is certainly valid too.
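A quick terminal check of that expression (the input value here is just made up):

Set input = "Dr. Smith, John (Cardiology Clinic)"
Write "("_$PIECE(input,"(",2)
// prints: (Cardiology Clinic)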

https://docs.intersystems.com/iris20221/csp/docbook/Doc.View.cls?KEY=RCO...

Not to dogpile on, as the links provided above cover the information, but I want to highlight this one in particular: it was a big 'gotcha' for us and even for a long-time InterSystems consulting partner I was working with at the time.

You can do everything else right, but if you set up your URL prefix in IIS for dev to be, say, /dev, so URLs would be myhost.com/dev/csp/blahblahblah, but your instance name is IRIS or ANYTHING other than 'dev', you will fail and not understand why.

The link I shared is just a sub-section of the same link Alexander shared, but it addresses this gotcha. You must use that command in the terminal to set the prefix on the instance associated with the URL prefix you created, for it to be recognized appropriately. Without it, the only 'prefix' that will work by default is the name of the instance itself.

Hint: you can specify multiple prefixes (if desired) in comma-delimited format. The doc mentions that as well, but a reason I bring it up is that it cuts down tremendously on the number of separate IIS configs to maintain in mirroring situations: I can set up the main prefix for the VIPA (Virtual IP Address) but also add prefixes for the individual server identifiers, so if necessary we can go directly to the individual servers of the mirror, even if they're not the current primary, for maintenance tasks (upgrade prep, security/task syncs, etc.). In this way, it totally negates any need for the private web server to remain running on your instances (a big security win) and keeps the amount of maintenance relatively low on the IIS (or other web server) side as well.
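From memory, the terminal step referenced above is run in the %SYS namespace and looks roughly like this - verify the exact property name against the linked doc, and note that 'dev', 'devnode1', and 'devnode2' are hypothetical prefixes:

 zn "%SYS"
 set sys = ##class(Security.System).%OpenId("SYSTEM")
 // CSPConfigName holds the prefix (or comma-delimited list of prefixes) this instance answers to
 set sys.CSPConfigName = "dev,devnode1,devnode2"
 set sc = sys.%Save()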

Nice! This post is definitely a good example of a plethora of ways to solve the same challenge. :-)

We have scripts similar to yours above that use SQL on Ens_Config.Item to iterate through and audit component settings on a daily basis, to ensure our engineers don't do anything too crazy (on purpose or not!). It has worked well for us for several years now, so I'm confident in saying it should be a decent approach for you as well!
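As a rough illustration of the shape of that kind of audit script (the column names here are from memory, so double-check them against Ens.Config.Item on your instance):

 // List every configured item in every production along with a couple of key settings
 Set stmt = ##class(%SQL.Statement).%New()
 Set sc = stmt.%Prepare("SELECT Production, Name, ClassName, Enabled, PoolSize FROM Ens_Config.Item ORDER BY Production, Name")
 If $$$ISERR(sc) Quit
 Set rs = stmt.%Execute()
 While rs.%Next() {
     Write rs.%Get("Production"), " | ", rs.%Get("Name"), " | Enabled: ", rs.%Get("Enabled"), !
 }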

Best of luck!

That’s fair - another consideration then is something we also use - globals as “global parameters”. 

I.e. create a global like ^MyPkg("DowntimeActive") = <future $H value here> and then use a simple boolean check to see if the current $H is less than the global's $H. Thus it automatically expires for us.

We’ve evolved a bit beyond this now but for the source control reasons, it may be a simple solution. 
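A rough sketch of that pattern (the global name matches the hypothetical ^MyPkg above; here I store absolute seconds rather than the raw $H string so the comparison stays a one-liner):

 // Arm the downtime flag for the next 2 hours
 Set nowSecs = ($Piece($Horolog, ",", 1) * 86400) + $Piece($Horolog, ",", 2)
 Set ^MyPkg("DowntimeActive") = nowSecs + 7200

 // Later, wherever the check is needed - it expires automatically once the time passes
 Set nowSecs = ($Piece($Horolog, ",", 1) * 86400) + $Piece($Horolog, ",", 2)
 Set downtimeActive = (nowSecs < $Get(^MyPkg("DowntimeActive")))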

Hey @Jimmy Christian - I read through your responses to others to understand a bit more of what you're after. Short of creating a custom process class, there's no way to expose a setting on the Router in the Management Portal Production UX.

That said, if I understand what you are ultimately trying to achieve, might I suggest a simple Lookup Table named something like 'RouterDowntimeSettings' and then in that table, simply have some entries defined like:

Key                    Value
MyPkg.Rules.ClassName  1

Then inside your rules where you want to use this, you simply use the built-in Lookup function on that table and pass in the class name that you specified as the key. Might look something like this:


<rule name="Discard" disabled="false">
    <when condition="Lookup('RouterDowntimeSettings','MyPkg.Rules.ClassName')=1">
        <return></return>
    </when>
</rule>

Since Lookup returns an empty string if the key is not found, this would only evaluate to true if a key was explicitly specified in the table, making it easy to implement as needed without causing unexpected errors or behavior.
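You can also sanity-check the table from a terminal in the production's namespace using the same function the rule engine calls (assuming the table and key above exist):

 Write ##class(Ens.Util.FunctionSet).Lookup("RouterDowntimeSettings","MyPkg.Rules.ClassName")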

Just an alternate approach that may work for you and reduce the amount of effort (no custom class creation that would need to be maintained). 

Edit: Fixed the boolean, didn't see your original was using returns.

While technically this could be written using a custom class extending Ens.BusinessService, what you describe has you playing the 'Operation' role more than the 'Service' role. We do this with many integrations and have a design pattern that works well for us.

In short, you need:

  • Custom adapterless Trigger Service (extends Ens.BusinessService). Its only purpose is to send a simple Ens.Request to a Business Process (BPL or custom class that extends Ens.BusinessProcess) on a timed interval... either using a schedule or the call interval. See the sketch after this list.
  • Custom Business Operation likely extending EnsLib.HTTP.GenericOperation or something similar.
  • Custom Business Process to handle the business logic flow...
    • When the Ens.Request arrives from the trigger service, it formats a request object and sends it to the Business Operation, which executes your GET call against the web service to receive the JSON payload.
    • The JSON payload is returned by the Business Operation to the Business Process, ideally as a custom message object (no longer raw JSON), and from there any manner of normal Ensemble workflows can take place (data transforms, ID logic, call-outs to other business operations, and so forth).
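A bare-bones sketch of the trigger service piece (the class name, target setting, and interval handling are illustrative only, not a drop-in implementation):

Class MyPkg.Service.Trigger Extends Ens.BusinessService
{

// No real inbound source; the base adapter's CallInterval setting just drives OnProcessInput on a timer
Parameter ADAPTER = "Ens.InboundAdapter";

// Hypothetical setting so the target process can be chosen on the Production configuration page
Property TargetConfigName As Ens.DataType.ConfigName;

Parameter SETTINGS = "TargetConfigName:Basic";

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    // Each interval, fire a simple request at the business process that drives the HTTP GET
    Set tRequest = ##class(Ens.Request).%New()
    Quit ..SendRequestAsync(..TargetConfigName, tRequest)
}

}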

You appear to be on a very old version of Ensemble, so I'm not sure how much recent documentation will be relevant to your use case, and you will likely face a lot more difficulty using Ensemble 2014.1 than you would with something 2019.x or newer, but here are a few reference links to get the thought processes going:

Using the HTTP Outbound Adapter | Using HTTP Adapters in Productions | InterSystems IRIS for Health 2021.2

Creating REST Operations in Productions | Using REST Services and Operations in Productions | InterSystems IRIS for Health 2021.2

Sounds like the vendor is giving you bad information then. They need to be producing better error output on their side instead of just {'Message': 'An error has occurred.' }

Likely something in the formatting of one of the fields in the JSON package is wrong - an incorrect field name or a bad value - but without some better error messages or more guidance from their end, you're kind of stuck.

Apologies! I missed that line earlier! Your method of building the JSON is different than how I do it, though (and mine executes successfully), so you could try this:

set JsonArray = []
set SignaletiquePat = {}
        
set SignaletiquePat.internalID = "050522001"
set SignaletiquePat.lastName = "Tata"
set SignaletiquePat.firstName = "Silva"
set SignaletiquePat.dateOfBirth = "05/05/2022"
set SignaletiquePat.gender = "1"
set SignaletiquePat.clinicId = 22
set SignaletiquePat.mothersName = "Anne Dede"
set SignaletiquePat.address = "Rue des prés, 50"
set SignaletiquePat.postalCode = "4620"
set SignaletiquePat.place = "fléron"
set SignaletiquePat.telephone1 = "0499998855"

Do JsonArray.%Push(SignaletiquePat)
set JsonArrayOBJ = JsonArray.%ToJSON()

Your code:

Do HTTPRequestPat.EntityBody.Write(JsonArrayOBJ) // like this ? didnt work
          set ..Adapter.HTTPServer = "185.36.164.222"
          set ..Adapter.URL = UrlPats_"/api/patient/signaletic"
        set st = ..Adapter.SendFormDataURL(..Adapter.URL,.callResponsePat,"POST",HTTPRequestPat,,JsonArrayOBJ)

Try this instead:

Set json = JsonArray.%ToJSON()
Do HTTPRequestPat.EntityBody.Write(json)
set ..Adapter.HTTPServer = "185.36.164.222"
set ..Adapter.URL = UrlPats_"/api/patient/signaletic"
set st = ..Adapter.SendFormDataURL(..Adapter.URL,.callResponsePat,"POST",HTTPRequestPat)

I was able to get it going...

Only real difference was the addition of 

command:
      - --check-caps false

For the iris service (this probably should be added to the default release, per Using InterSystems IRIS containers with Docker 20.10.14+ | InterSystems).

Then adjusting start.sh's final line to look like this:

# old line: docker-compose -p sam up -d
podman-compose -p sam up -d

And stop.sh's final line:

podman-compose -p sam down

(Alternatively, you could alias docker-compose to podman-compose.)

I can get into SAM now and add a cluster, but for some reason it's not able to talk to it (no URL prefix or authentication involved, but it is going through a Web Gateway, and SAM doesn't seem to allow specifying HTTPS instead of HTTP - though maybe it does that in the background).

Are there plans to support URL prefixes and authenticated endpoints? Having an unauthenticated endpoint is a bit of a no-no around here, and to get around the URL prefix issue with some of my instances I'll have to start up the private web server just for SAM, which is also not desired.