No worries, OAuth is a complicated subject. If the OAuth server you're trying to define in IRIS doesn't support the "well-known" endpoint, then I think you'll have to enter the server description manually, as opposed to having IRIS fetch it by discovery. You can read about how to do that here:
https://docs.intersystems.com/iris20221/csp/docbook/DocBook.UI.Page.cls?...

It looks like the "Unexpected issuer claim" error is caused by the "issuer" property of the discovery response body not matching the "Issuer endpoint" value of the server description.  So my question then would be, when you submit a discovery request from your REST client of choice, does the response "issuer" value match the issuer endpoint?  If not, could it be an issue with the way the OAuth server is configured?  Or are you accessing the OAuth server through a proxy so that the endpoint you're hitting is not the actual URL of the OAuth server?

The URL that the discovery request gets sent to is:

[issuer-endpoint]/.well-known/openid-configuration

So you might try hitting that endpoint in Postman to see what comes back.
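To illustrate the check that seems to be failing, here's a rough Python sketch of comparing the discovery response's "issuer" claim against the configured issuer endpoint. The URL and response body are made up; substitute your actual server description values:

```python
import json

# Hypothetical values -- substitute your actual server description settings.
issuer_endpoint = "https://auth.example.com"
discovery_url = issuer_endpoint + "/.well-known/openid-configuration"

# Stand-in for the body returned by a GET to discovery_url.
response_body = '{"issuer": "https://auth.example.com", "token_endpoint": "https://auth.example.com/token"}'

claims = json.loads(response_body)
# This is essentially the comparison behind the "Unexpected issuer claim"
# error: the "issuer" claim must match the configured issuer endpoint exactly.
print(claims["issuer"] == issuer_endpoint)  # True
```

If that comparison fails against your real server, note that even a trailing slash or an http/https mismatch will cause it.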

You might also try turning on ISCLOG to log what's happening.  I'm not seeing anything in the doc specifically on ISCLOG, though it is mentioned in the context of other topics, for example:

https://docs.intersystems.com/iris20221/csp/docbook/Doc.View.cls?KEY=GRE...

The value of ^%ISCLOG corresponds to the verbosity of the logs, where higher is more verbose.  I'm not sure how high the scale goes, but I'm pretty sure it's less than 10, so I always just set it to 10.

You can skip setting the "Category" subscript to log all categories.

And don't forget to turn off ISCLOG when you're done!

kill ^%ISCLOG

or

set ^%ISCLOG=0

Just wanted to give an update on option "b" in this comment, in case anyone reads this going forward. 

I think at one time ^%ISCLOG was used to both set the log level and store the log data.  Now ^%ISCLOG is only used to set the log level.  The log data is stored in ^ISCLOG (no "%") in the %SYS namespace.  So instead of doing:

zwrite ^%ISCLOG    // This is no longer the log global!!

You would instead do:

zwrite ^ISCLOG

In the %SYS namespace. (Or look at it in the Management Portal, I find it much easier to read that way.)

I couldn't find anything in the docs on ISCLOG specifically, but it is mentioned in the context of other subjects, for example:

https://docs.intersystems.com/iris20212/csp/docbook/DocBook.UI.Page.cls?...

Can you be more specific about how your implementation isn't working?  For example, do you have a known signature for known inputs that you are comparing against?  Or is there some external system you are trying to authenticate to?

When I ran your IRIS code (hardcoding values for requestTimeStamp and nonce), the tSignature value matched the output of this tool: https://www.freeformatter.com/hmac-generator.html#ad-output

("to_hmac" is not a valid variable name in ObjectScript, but I assume that's a copy-paste error.)

I wasn't familiar with the CryptoJS library before looking into this, but in the examples I was able to find, it looks like the 2nd arg to HmacSHA256() is typically passed in as a string, rather than a byte array: https://www.jokecamp.com/blog/examples-of-creating-base64-hashes-using-h...

One thing I noticed when I tried running your JS code is that if I replace:

var signatureBytes = CryptoJS.HmacSHA256(signature, secretByteArray)

With:

var signatureBytes = CryptoJS.HmacSHA256(signature, APIKey)

Then the value of requestSignatureBase64String in your JS example matches the value of tSignatureBase64 in your IRIS example.

So my guess is that you're getting the correct raw signature value in IRIS; however, your IRIS example uses the hex representation of it while your JS example uses the base64 representation.  Also, if you are using your JS example for reference, you may want to look into the expected format of the 2nd arg to HmacSHA256().
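If it helps, here's a small Python sketch (the key and message are placeholders, not your real values) showing that the hex and base64 forms are just two text encodings of the same raw HMAC-SHA256 digest:

```python
import base64
import hashlib
import hmac

key = b"myApiKey"          # placeholder for your API key
message = b"mySignature"   # placeholder for your signature base string

digest = hmac.new(key, message, hashlib.sha256).digest()  # raw bytes
hex_form = digest.hex()                                   # what the IRIS example shows
b64_form = base64.b64encode(digest).decode()              # what the JS example shows

# Decoding both text forms recovers the same raw signature bytes.
print(bytes.fromhex(hex_form) == base64.b64decode(b64_form))  # True
```

So if the two sides disagree, compare the raw bytes (or convert one encoding to the other) before concluding the signatures themselves differ.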

Without being able to see your environment, it's difficult to say where the disconnect is or what would need to be tweaked to decode those characters correctly.  However, if you have an opportunity to manually process the HL7 data at any point as it flows through the system, then you may be able to call $ZConvert/$ZCVT on the encoded data to decode it:

USER>s str = $C(90,111,108,195,173,118,97,114,101,115)
 
USER>w str
ZolÃ­vares
USER>w $ZCVT(str, "I", "UTF8")
Zolívares
USER>

https://docs.intersystems.com/iris20212/csp/docbook/Doc.View.cls?KEY=RCO...
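For comparison, here's the same decode sketched in Python, using the byte values from the $C() example above:

```python
# The same bytes as $C(90,111,108,195,173,118,97,114,101,115):
raw = bytes([90, 111, 108, 195, 173, 118, 97, 114, 101, 115])

# Interpreted one byte per character (as before the $ZCVT call), the
# accented character appears as two mojibake characters (0xC3, 0xAD).
print(raw.decode("latin-1"))

# Interpreted as UTF-8 (the equivalent of $ZCVT(str, "I", "UTF8")):
print(raw.decode("utf-8"))  # Zolívares
```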

However, there should be a way to specify to your business service the encoding of input data so that it can decode the data for you.  I would have thought that this would be done with either the "Charset" or "Default Char Encoding" settings, but it sounds like you've already tried that.  I'm not sure why this wouldn't be working, but I'm fairly confident that this is how encoded data is supposed to be decoded, so it may be worth another look.

Hi Carl, to expand on Tim's answer:

Keys in the HS Configuration Registry can be set either by a user in the Management Portal or programmatically.  I am only seeing one place in the HS codebase where these keys get set, in HS.Util.Installer.Kit.IHE.XDM.Direct:AddHub():

Do ##class(HS.Registry.Config).AddNewKey("\ZipUtility\ZipCommand",$S($zv["Win":"""c:\program files\7-zip\7z"" a %1 . -r",1:"zip -rm %1 ."))
Do ##class(HS.Registry.Config).AddNewKey("\ZipUtility\UnZipCommand",$S($zv["Win":"""c:\program files\7-zip\7z"" x %1 -o. -r",1:"unzip %1 -d ."))

However, this class appears to be related to setting up support for the XDM IHE profile, so unless that is something that's important to you, you shouldn't worry about getting that code to run.  Rather, this is just an example of the values for these keys that are expected by the code that uses them.

Hi Clayton, I don't know how you would map from one set of units to another in the data in UCR (that would probably need to be done prior to ingestion into HealthShare), but there is a mechanism in the HS Clinical Viewer that enables you to define a "calculated" observation, the value of which is calculated from other observations. The value of this observation is calculated as the patient's data is loaded into the Viewer Cache and only exists in the Viewer Cache, not in the patient's data in UCR.  If you look at <install-dir>\distlib\trak\misc\HS-Default-ObservationItem.txt, you'll see an example of how this can be done:

[...]
8302-2^Height^^^^N
[...]
3141-9^Weight Measured^^^^N
// Body mass index calulations
// BMI = Weight (lb) / (Height (in) x Height (in)) x 703
39156-5^Body mass index^^^^C^([3141-9]/([8302-2]*[8302-2]))*703^2^
// Metric Calculation
// BMI = Weight (kg) / (Height (cm) / 100 x Height (cm) / 100) 
// 39156-5^Body mass index^^^^C^([3141-9]/([8302-2]/100*[8302-2]/100)) 

This file is used to pre-populate the "observation item" code table in the Viewer Cache when the Viewer namespace is reset.  What the file is doing here is creating a couple of standard observation items for weight and height.  Then it defines a "calculated" item for BMI that is defined as "(weight / height²) * 703", where the "height" and "weight" inputs are references to items that appear elsewhere in this file.
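As a sanity check on the arithmetic that calculated item encodes, here's the same formula in Python (the input values are arbitrary examples):

```python
def bmi_imperial(weight_lb, height_in):
    # BMI = weight (lb) / (height (in) * height (in)) * 703,
    # matching the calculated observation item above.
    return weight_lb / (height_in * height_in) * 703

def bmi_metric(weight_kg, height_cm):
    # BMI = weight (kg) / (height (m) * height (m))
    h_m = height_cm / 100
    return weight_kg / (h_m * h_m)

print(round(bmi_imperial(150, 65), 1))  # 25.0
print(round(bmi_metric(68, 170), 1))    # 23.5
```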

So you should be able to use this to convert from one set of units to another.  However, it sounds like your use case is to normalize multiple input formats into a single output format, and I'm not sure you can do that with this mechanism.  That is, while it should be possible to define an item that converts inches to feet/inches and another that converts centimeters into feet/inches, I don't know if there is a way to define a single item that can convert both inches and centimeters into feet/inches.

I haven't used this feature extensively, though, so maybe it can do some things (like conditional logic) that I'm not aware of.  Someone from Trak might know.  The HS-Default-ObservationItem.txt file is a uniquely HealthShare concept, but the code table that it populates (User.MRCObservationItem) belongs to Trak.

Hi Bill, I was facing a similar (but not identical) issue not too long ago where I was trying to set %response.Status in my REST handler class, only to have a different status code reflected in the response to the client.

The issue ended up being that the request to the REST handler class that I was working on was being forwarded from another REST handler class that extends EnsLib.REST.Service rather than %CSP.REST.  The EnsLib REST handler works a little differently.  The user code is supposed to write the output to a response stream, with the response headers being set as attributes of the stream, rather than setting headers of %response directly.

So my question is, is your REST handler downstream from an EnsLib.REST.Service REST handler?  I also see that your REST handler extends Ens.BusinessService.  I wonder if something in that class is overriding your %response headers.  Is there any way you can test your class with that superclass removed?

Assuming that tSC is a %Library.Status, I don't think this will work the way you intend it to.  The argument to a THROW has to be an oref to a class that extends %Exception.AbstractException:

https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=RCOS_cthrow

If you have a %Status that you need to throw as an exception, you can use $$$ThrowOnError or $$$ThrowStatus, both defined in %occStatus.inc. They both call ##class(%Exception.StatusException).ThrowIfInterrupt(), but $$$ThrowOnError only does so if the status is an error, so it's slightly more efficient if you haven't checked the status yet.

I'm not sure in which version %JSON.Adaptor was introduced, but if it's in your version, you can have the class modeling your object extend %JSON.Adaptor, which gives you access to the %JSONImport() method for de-serializing a JSON payload into an object.

One caveat to this is that I believe any serial objects referenced by the top-level object will also need to extend %JSON.Adaptor for the import to work properly.
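As a rough analogue in Python (the class and field names here are invented for illustration), the caveat amounts to this: nested objects need their own deserialization path, just as referenced serial objects need to extend %JSON.Adaptor:

```python
import json
from dataclasses import dataclass

@dataclass
class Address:   # plays the role of a referenced serial object
    city: str

@dataclass
class Person:    # plays the role of the top-level object
    name: str
    home: Address

payload = '{"name": "Ada", "home": {"city": "London"}}'
data = json.loads(payload)

# The nested object must be constructed explicitly -- if Address had no
# deserialization path, the import of Person would be incomplete.
person = Person(name=data["name"], home=Address(**data["home"]))
print(person.home.city)  # London
```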

Hi Cyriak, the HS_SDA3_Streamlet.Abstract table is used on *both* the Edge *and* the Access Gateway.  When a patient's data is fetched on the Access Gateway, it is stored in this table temporarily, until the CSP session that initiated the fetch has ended, plus however long it takes for the session cleanup process to fire (usually a few minutes).  So once you've loaded a patient's record into the Viewer and you have their Aggregation Key, you can use that to query for their data on the Access Gateway.

Hi Cyriak, you do not need to generate the Aggregation Key, as that is done automatically when the Access Gateway fetches a patient's data.

I believe the simplest way for custom code running in the Viewer to get the Aggregation Key value would be to first get the ID of the patient object in the Viewer Cache from the CSP session:

Set tViewerPatientId = $listget(%session.Data("lastPatientId"))

(For historical reasons the value of %session.Data("lastPatientId") is a $list, though it should only ever have a single value.)

Then you would call web.SDA3.Loader:GetAgKey(), passing it the ID of the patient object to get the Aggregation Key:

Set tAgKey = ##class(web.SDA3.Loader).GetAgKey(tViewerPatientId)

GetAgKey() is documented as an API method, so it should be safe to use.

You are correct that you can then use the Aggregation Key to query for the current patient's data in the Aggregation Cache.

Does that answer your question?

Hi, with regard to your second question, if your property "CODE" is unique and indexed, i.e., your class definition includes something like:

Index CodeIndex on CODE [Unique];

Then there is a generated method you can use to open the object that has CODE=Xparameter:

set myObject = ##class(User.MyPersistentClass).CodeIndexOpen(Xparameter, concurrency, .sc)

For any unique index, the method name is always "[index-name]Open".  You can read more about auto-generated methods here:

https://community.intersystems.com/post/useful-auto-generated-methods

With regard to your first question, I'm not aware of any system method that returns a list of all saved objects for a class, but you could implement that by creating a class method that creates and executes the appropriate query, then iterates over the results and copies them to a list.  I would be cautious about doing something like this with large tables though, since you would essentially be loading the entire table into memory.  This could lead to <STORE> errors:

https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RERR_system

A better solution might be to implement some kind of iterator method that only loads one object at a time.
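Here's a quick Python sketch of the difference (the data source is a stand-in, not an IRIS API): materializing every result at once versus yielding one object at a time:

```python
def load_all(source):
    # Everything in memory at once -- analogous to copying the whole
    # result set into a list, which risks <STORE> errors on large tables.
    return list(source)

def iterate(source):
    # One object at a time -- memory use stays constant regardless
    # of how large the underlying table is.
    for row in source:
        yield row

rows = range(5)  # stand-in for a large query result
it = iterate(rows)
print(next(it), next(it))   # 0 1
print(len(load_all(rows)))  # 5
```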

How about saving the values of a, b, and c in whatever way makes sense given the context (a global, a PPG, a %-variable, %session, or having the method return them), and also making them arguments to the method.  When calling the method again, check for them wherever they were saved, and if they're not found, initialize them to "".

If you do find them, you might have to "rewind" each subscript by one to make sure you don't miss anything, which is easy to do with $ORDER:

set a = $ORDER(^data(a), -1)

I tend to use "if" rather than postconditionals because it makes the code read more like spoken language, and I think it's easier to understand for those who are not seasoned Caché veterans.

However I make exceptions for "quit" and "continue":

quit:$$$ISERR(tSC)

continue:x'=y

I like to have these out in front, rather than buried in a line somewhere, so that when I'm scanning code I can easily see "this is where the execution will branch if a certain condition is met."