
Article
· Apr 16, 2023 4m read

Tuples ahead

Overview

Cross-skilling from IRIS ObjectScript to Python, it becomes clear that there are some fascinating differences in syntax.

One of these areas is how Python returns tuples from a function, with automatic unpacking.

Effectively this presents as a method that returns multiple values. What an awesome invention :)

out1, out2 = some_function(in1, in2)
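
For instance, with a plain Python function (a minimal illustrative example, not part of any IRIS library), a single call can hand back several values that the caller unpacks in one statement:

def divmod_pair(dividend, divisor):
    # return two values at once; Python packs them into a tuple
    return dividend // divisor, dividend % divisor

quotient, remainder = divmod_pair(7, 2)   # quotient == 3, remainder == 1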

ObjectScript has an alternative approach with ByRef and Output parameters.

Do ##class(some_class).SomeMethod(.inAndOut1, in2, .out2)

Where:

  • inAndOut1 is ByRef
  • out2 is Output

The leading dot (".") in front of the variable name is what passes it ByRef, and the same syntax is used for Output parameters.

The purpose of this article is to describe how the community PyHelper utility has been enhanced to give a pythonic way to take advantage of ByRef and Output parameters. It also gives access to %objlasterror and offers an approach for handling the Python None type.
 

    Example ByRef

    Normal invocation from embedded Python would be:

    oHL7=iris.cls("EnsLib.HL7.Message")._OpenId('er12345')

    When this method fails to open the record, the variable "oHL7" is an empty string.
    In the signature of this method there is a status parameter, available to ObjectScript, that explains the exact problem.
    For example:

    • The record may not exist
    • The record couldn't be opened in default exclusive concurrency mode ("1"), within timeout

    ClassMethod %OpenId(id As %String = "", concurrency As %Integer = -1, ByRef sc As %Status = {$$$OK}) As %ObjectHandle

    The TupleOut method can assist by returning the value of the sc argument back to a Python context.
     

    > oHL7,tsc=iris.cls("alwo.PyHelper").TupleOut("EnsLib.HL7.Message","%OpenId",['sc'],1,'er145999', 0)
    > oHL7
    ''
    > iris.cls("%SYSTEM.Status").DisplayError(tsc)
    ERROR #5809: Object to Load not found, class 'EnsLib.HL7.Message', ID 'er145999'1

    The list ['sc'] contains a single item in this case, but TupleOut can return multiple ByRef values, in the order specified, which is useful for automatically unpacking into the intended Python variables.
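
    For illustration only, assume a hypothetical class demo.MyClass with a classmethod Analyze(input, ByRef count, ByRef status); the names are invented and are not part of PyHelper. A call returning the method result plus both ByRef values might look like:

    > result, count, status = iris.cls("alwo.PyHelper").TupleOut("demo.MyClass","Analyze",['count','status'],1,"some input")

    The method's return value comes first (requested by the fourth argument, 1), followed by the ByRef values in the order they appear in the list.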

    Example Output parameter handling

    Python code:

    > oHL7=iris.cls("EnsLib.HL7.Message")._OpenId('145')
    > oHL7.GetValueAt('<%MSH:9.1')
    ''

    The returned string is empty, but is this because the element is actually empty or because something went wrong?
    In ObjectScript there is also an Output status parameter (pStatus) that can be checked to determine this.

    Object script code:

    > write oHL7.GetValueAt("<%MSH:9.1",,.pStatus)
    ''
    > Do $System.Status.DisplayError(pStatus)
    ERROR <Ens>ErrGeneral: No segment found at path '<%MSH'

    With TupleOut the equivalent functionality can be attained by returning and unpacking both the method return value AND the status output parameter.

    Python code:

    > hl7=iris.cls("EnsLib.HL7.Message")._OpenId(145,0)
    > val, status = iris.cls("alwo.PyHelper").TupleOut(hl7,"GetValueAt",['pStatus'],1,"<&$BadMSH:9.1")
    > val==''
    True
    > iris.cls("%SYSTEM.Status").IsError(status)
    1
    > iris.cls("%SYSTEM.Status").DisplayError(status)
    ERROR <Ens>ErrGeneral: No segment found at path '<&$BadMSH'1


    Special variable %objlasterror

    In ObjectScript, percent variables are accessible across method scope.
    There are scenarios where detecting or accessing the special variable %objlasterror is useful after calling a core or third-party API.
    The TupleOut method allows access to %objlasterror, as though it had been defined as an Output parameter, when invoking methods from Python.

    > del _objlasterror
    
    > out,_objlasterror=iris.cls("alwo.PyHelper").TupleOut("EnsLib.HL7.Message","%OpenId",['%objlasterror'],1,'er145999', 0) 
    
    > iris.cls("%SYSTEM.Status").DisplayError(_objlasterror)
    ERROR #5809: Object to Load not found, class 'EnsLib.HL7.Message', ID 'er145999'1

    When None is not a String

    TupleOut handles Python None references as ObjectScript undefined. This allows parameters to take their defaults and methods to behave consistently.
    This is significant, for example, with %Persistent::%OnNew, where the %OnNew method is not triggered when None is supplied for initvalue, but would be triggered if an empty string were supplied.

    In ObjectScript the invocation might look like:

    do oHL7.myMethod("val1",,,"val2")

    Note the lack of variables between commas.

    TupleOut facilitates the same behavior with:

    Python:

    iris.cls("alwo.PyHelper").TupleOut(oHL7,"myMethod",[],0,"val1",None,None,"val2")

    Another way to look at this is being able to write a one-line invocation that behaves flexibly depending on how the variables were set up beforehand:

    Object Script:

    set arg1="val1"
    kill arg2
    kill arg3
    set arg4="val2"
    do oHL7.myMethod(.arg1, .arg2, .arg3, .arg4)

    TupleOut facilitates the same behavior with:

    Python:

    arg1="val1"
    arg2=None
    arg3=None
    arg4="val2"
    iris.cls("alwo.PyHelper").TupleOut(oHL7,"myMethod",[],0,arg1,arg2,arg3,arg4)

    Lists and Dictionaries

    When handling input, ByRef and Output parameters, TupleOut uses PyHelper's automatic mapping between:

    • IRIS lists and Python lists
    • IRIS arrays and Python dictionaries

    It takes care to always use strings to represent dictionary keys when moving from IRIS arrays to Python Dict types.
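
    As a hedged sketch (the class and method names below are invented purely for illustration), an ObjectScript method that fills a multidimensional array passed by reference would, following the mapping described above, arrive in Python as a dict whose keys are strings:

    > # hypothetical method demo.Stats.CountByType(ByRef counts) populating counts("ADT")=3, counts("ORU")=5
    > ok, counts = iris.cls("alwo.PyHelper").TupleOut("demo.Stats","CountByType",['counts'],1)
    > counts
    {'ADT': 3, 'ORU': 5}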

    Conclusion

    I hope this article helps inspire new ideas and discussion around embedded Python.

    I also hope it encourages you to explore how flexibly IRIS can bend to meet new challenges.

    InterSystems Official
    · Apr 14, 2023

    IKO (InterSystems Kubernetes Operator) 3.5 Release Announcement

    InterSystems Kubernetes Operator (IKO) 3.5 is now Generally Available.  IKO 3.5 adds significant new functionality along with numerous bug fixes.  Highlights include:

    • Simplified setup of TLS across the Web Gateway, ECP, Mirroring, Super Server, and IAM
    • The ability to run container sidecars along with compute or data nodes – perfect for scaling web gateways with your compute nodes.
    • Changes to the CPF configmap and IRIS key secret are automatically processed by the IRIS instances when using IKO 3.5 with IRIS 2023.1 and up.
    • The initContainer is now configurable with both the UID/GID and image.
    • IKO supports topologySpreadConstraints to let you more easily control scheduling of pods
    • Compatibility Version to support a wider breadth of IRIS instances
    • Autoscale of compute nodes (Experimental)
    • IKO is now available for ARM

     

    Follow the Installation Guide for guidance on how to download, install, and get started with IKO.  The complete IKO 3.5 documentation gives you more information about IKO and using it with InterSystems IRIS and InterSystems IRIS for Health.  IKO can be downloaded from the WRC download page (search for Kubernetes).  The container is available from the InterSystems Container Registry.

    IKO simplifies working with InterSystems IRIS or InterSystems IRIS for Health in Kubernetes by providing an easy-to-use irisCluster resource definition. See the documentation for a full list of features, including easy sharding, mirroring, and configuration of ECP.

    Article
    · Apr 10, 2023 9m read

    Sending DICOM files between IRIS for Health and PACS software

    Welcome, community members, to a new article! This time we are going to test the interoperability capabilities of IRIS for Health when working with DICOM files.

    Let's configure a short workshop using Docker. At the end of the article you'll find the GitHub URL in case you want to run it on your own computer.

    Before any configuration, let's explain what DICOM is:

    • DICOM is the acronym for Digital Imaging and Communications in Medicine, and it is a standard for transmitting images and medical data. The standard covers both the DICOM file format and a communication protocol based on TCP/IP.
    • DICOM files support images and clinical documentation (a DICOM file can contain images or documents "dicomized" as images).
    • The DICOM protocol defines services/operations for DICOM files. You can request the storage of an image (C-STORE), execute queries (C-FIND) or move images among the systems of a medical organization (C-MOVE). You can review all the available services from this URL (and see the small connectivity sketch after this list).
    • All the systems involved in a DICOM-based communication expect a DICOM message as a response.
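
    As a quick, hedged illustration of these services in action (using pynetdicom, a third-party Python library that is not otherwise part of this article), a minimal C-ECHO "ping" against a DICOM node could look like this; the host and port are placeholders to adjust to your own environment:

    from pynetdicom import AE

    # request the Verification SOP Class (C-ECHO), identified by its standard UID
    ae = AE(ae_title="TEST")
    ae.add_requested_context("1.2.840.10008.1.1")

    assoc = ae.associate("127.0.0.1", 4242)   # placeholder host/port of the DICOM node
    if assoc.is_established:
        status = assoc.send_c_echo()          # status 0x0000 means success
        print(status)
        assoc.release()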

    You can see here a typical example of the architecture for a system designed to work with DICOM:

    General scheme of DICOM Network architecture

    We have some "modalities" (these could be machines such as scanners or MRIs, or just the software that will store the images) identified by an AE Title or AET (Application Entity Title). The AET must be unique for each modality and must be configured in the other modalities or systems that are going to communicate with it, so that communication between both modalities is allowed.

    As you can see in the graph, the modalities are configured to store their images in a DICOM file server that may or may not belong to a PACS (Picture Archiving and Communication System) that is later consulted from a PACS web interface. It is increasingly common to include a VNA (Vendor Neutral Archive) system in organizations that is responsible for centralized storage and viewing of all DICOM files used by the organization.

    In general, the most modern modalities let you configure the destination of the generated images, but on many occasions it may be necessary either to carry out some action on the DICOM image fields (modify the patient identifier, include the related clinical episode, etc.) or, when the modality cannot do it itself, to take charge of capturing and forwarding the generated image to the system responsible for archiving. It is in these cases that an integration engine providing such functionality becomes necessary, and there is none better than IRIS for Health!

    For our example we will consider the following scenario:

    • A certain modality is generating images that need to be sent to a PACS for registration.
    • Our DICOM or PACS server will receive these images and must forward them to a specific VNA.


    To simulate our PACS we will use Orthanc, an open-source tool that provides the basic functionality for archiving and viewing DICOM images (more information here). Orthanc is kind enough to offer a Docker image that we can run without any complications. Finally, we will deploy an IRIS for Health container (depending on when you read this article the license may have expired; in that case just update the docker-compose file in the code) in which we can run our production.

    Let's take a look at the docker-compose we've configured:

    version: '3.1'  # Secrets are only available since this version of Docker Compose
    services:
      orthanc:
        image: jodogne/orthanc-plugins:1.11.0
        command: /run/secrets/  # Path to the configuration files (stored as secrets)
        ports:
          - 4242:4242
          - 8042:8042
        secrets:
          - orthanc.json
        environment:
          - ORTHANC_NAME=orthanc
        volumes:
          - /tmp/orthanc-db/:/var/lib/orthanc/db/
        hostname: orthanc
      iris:
        container_name: iris
        build:
          context: .
          dockerfile: iris/Dockerfile
        ports:
        - "52773:52773"
        - "2010:2010"
        - "23:2323"
        - "1972:1972"
        volumes:
        - ./shared:/shared
        command:
          --check-caps false
        hostname: iris
    secrets:
      orthanc.json:
        file: orthanc.json

    Access to the Orthanc web viewer is through port 8042 (http://localhost:8042), the port destined to receive images via TCP/IP is 4242, and its configuration is done through the orthanc.json file. The management portal of our IRIS for Health will be on port 52773.

    Let's see what orthanc.json contains:

    {
        "Name" : "${ORTHANC_NAME} in Docker Compose",
        "RemoteAccessAllowed" : true,
        "AuthenticationEnabled": true,
        "RegisteredUsers": {
            "demo": "demo-pwd"
        },
        "DicomAssociationCloseDelay": 0,
        "DicomModalities" : {
            "iris" : [ "IRIS", "host.docker.internal", 2010 ]
          }
    }

     

    As you can see, we have defined a demo user with the password demo-pwd, and we have declared a modality called IRIS that will use port 2010 to receive images from Orthanc; "host.docker.internal" is the name used by Docker to access other deployed containers.

    Let's check that, after running docker-compose build and docker-compose up -d, we can access our IRIS for Health and Orthanc without problems:

    IRIS for Health is successfully deployed.

    Orthanc works too, so come on, get messy!
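
    If you prefer to check Orthanc from code rather than the browser, a small hedged sketch with Python's requests library (using the demo credentials defined in orthanc.json above; /system is part of Orthanc's REST API) would be:

    import requests

    # ask Orthanc's REST API for basic instance information to confirm it is reachable
    resp = requests.get("http://localhost:8042/system", auth=("demo", "demo-pwd"))
    print(resp.status_code)   # expect 200
    print(resp.json())        # name, version and other details of the Orthanc instance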

    Let's access the namespace called DICOM and open its production. We can see in it the following business components:

    For now we are going to review just the components needed to handle the first case we presented: a modality that generates DICOM images but cannot send them to our PACS by itself. To do this we will use a Business Service of the standard class EnsLib.DICOM.Service.File, configured to read all the .dcm files stored in the /shared/durable/in/ directory and send them to the Business Process of the Workshop.DICOM.Production.StorageFile class.

    Let's take a closer look at the main method of this Business Process:

    /// Messages received here are instances of EnsLib.DICOM.Document sent to this
    /// process by the service or operation config items. In this demo, the process is always
    /// in one of two states: the Operation is connected or not.
    Method OnMessage(pSourceConfigName As %String, pInput As %Library.Persistent) As %Status
    {
        #dim tSC As %Status = $$$OK
        #dim tMsgType As %String
        do {
            
            If pInput.%Extends("Ens.AlarmResponse") {
                
                #; We are retrying, simulate 1st call
                #; Make sure we have a document
                Set pInput=..DocumentFromService
                $$$ASSERT(..CurrentState="OperationNotConnected")
            }
                
            #; If its a document sent from the service
            If pSourceConfigName'=..OperationDuplexName {
                
                #; If the operation has not been connected yet
                If ..CurrentState="OperationNotConnected" {
                    
                    #; We need to establish a connection to the operation,
                    #; Keep hold of the incoming document
                    Set ..DocumentFromService=pInput
                    
                    #; We will be called back at OnAssociationEstablished()
                    Set tSC=..EstablishAssociation(..OperationDuplexName)
                    
                } elseif ..CurrentState="OperationConnected" {
                    
                    #; The Operation is connected
                    #; Get the CommandField, it contains the type of request, it should ALWAYS be present
                    Set tMsgType=$$$MsgTyp2Str(pInput.GetValueAt("CommandSet.CommandField",,.tSC))
                    If $$$ISERR(tSC) Quit
                    #; We are only handling storage requests at present
                    $$$ASSERT(tMsgType="C-STORE-RQ")
            		
            		// set patientId = pInput.GetValueAt("DataSet.PatientID",,.tSC)
            		// Set ^PatientImageReceived(patientId) = pInput.GetValueAt("DataSet.PatientName",,.tSC)
                    #; We can forward the document to the operation
                    Set tSC=..SendRequestAsync(..OperationDuplexName,pInput,0)
                }
                
            } elseif pSourceConfigName=..OperationDuplexName {
                
                #; We have received a document from the operation
                Set tMsgType=$$$MsgTyp2Str(pInput.GetValueAt("CommandSet.CommandField",,.tSC))
                If $$$ISERR(tSC) Quit
                #; Should only EVER get a C-STORE-RSP
                $$$ASSERT(tMsgType="C-STORE-RSP")
    
                #; Now close the Association with the operation, we will be called back at
                #; OnAssociationReleased()
                Set tSC=..ReleaseAssociation(..OperationDuplexName)
                
                #; Finished with this document
                Set ..DocumentFromService="",..OriginatingMessageID=""
            }
        } while (0)
        
        Quit tSC
    }

    As we can see, this class checks the origin of the DICOM document. If it does not come from the Business Operation defined in the OperationDuplexName setting, that means we must forward it to the PACS, and therefore the DICOM metadata located in the CommandSet section under the name CommandField must be of type C-STORE-RQ (store request) before the association is established. In this URL you can check the different values that this metadata can take (in hexadecimal).

    If the message comes from the indicated Business Operation, it corresponds to a DICOM response to the document we previously sent, so the process validates that the CommandField of that message is of type C-STORE-RSP.

    Let's analyze in a little more detail the key configuration of the Business Operation EnsLib.DICOM.Operation.TCP used to send our DICOM documents to our PACS via TCP/IP:

    As the IP address we have declared the hostname specified in the docker-compose for the container in which Orthanc is deployed, along with its port.

    We have configured two key elements for sending to PACS: the AET of our IRIS for Health (IRIS) and the AET of our PACS (ORTHANC). Without this configuration, no image sending is possible, as both IRIS and ORTHANC will validate that the sending/receiving modality has permission to do so.

    Where do we configure which modalities we can send images to from IRIS and which modalities can send images to us? It's very simple: we have access to the DICOM configuration functionality from the IRIS management portal:

    From this menu we can indicate not only which modalities can send us DICOM images and which ones we can send to, but also what types of images we will be able to send and receive, so that we can reject any image that falls outside this parameterization. As you can see in the image above, we have configured connections both from IRIS to Orthanc and from Orthanc to IRIS. By default Orthanc accepts any type of image, so we don't need to modify anything in its configuration.

    In order not to have problems with the images that we can send and receive from IRIS, we will configure the "Presentation Context", made up of the "Abstract Syntax" (the combination of a DICOM service (Store, Get, Find...) and an object (MR images, CT, etc.)) and the "Transfer Syntax", which defines how information is exchanged and how data is represented.

    Well, we now have all the necessary connections configured between IRIS and Orthanc and vice versa. Let's launch a test by placing a DICOM file in the path defined in our Business Service:


    Very good! Here we have registered our DICOM files and we can see how they have gone through our production until they are sent to Orthanc. Let's go into more detail by checking out a message.

    Here we have our message with its CommandField set to 1, corresponding to C-STORE-RQ. Now let's review the response we received from Orthanc:

    We can see that the value of CommandField, 32769, corresponds to 8001 in hexadecimal, which, as we have seen in this URL, is equivalent to type C-STORE-RSP. We can also see that the response is a DICOM message that contains only the values defined in the Command Set.
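
    To make the decimal/hexadecimal relationship explicit, here is a tiny Python check; 0x0001 and 0x8001 are the standard DICOM CommandField codes for the C-STORE request and response:

    # DICOM CommandField values for the C-STORE service
    C_STORE_RQ = 0x0001    # 1 in decimal, the request we sent
    C_STORE_RSP = 0x8001   # 32769 in decimal, the response Orthanc returned

    print(hex(32769))             # '0x8001'
    print(32769 == C_STORE_RSP)   # True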

    Let's check from Orthanc that we have received the messages correctly:

    Here are our messages successfully archived in our PACS. Goal achieved! We can now store the DICOM images of our modality in our PACS without any problem.

    In the next article we will deal with the opposite direction of communication, sending from the PACS to our modality configured in IRIS.

    Here you have available the code used for this article: https://github.com/intersystems-ib/workshop-dicom-orthanc

    Article
    · Apr 10, 2023 4m read

    About the global variable names where table definition data is stored

    This article is from the InterSystems FAQ site.

    Starting with version 2017.2, the naming rule for the global variables that store the data of table definitions created with a CREATE TABLE statement has changed: a hashed global variable name such as ^EPgS.D8T6.1 is now assigned. (This change was introduced to improve performance.)

    * For version 2017.1 and earlier, the rule is the same as for persistent class definitions. For details, see the related article "About the global variable names where persistent class definition data is stored".

    If you create the following table definition, a persistent class definition with the same name is created.

    CREATE TABLE Test.Product(
        ProductID VARCHAR(10) PRIMARY KEY,
        ProductName VARCHAR(50),
        Price INTEGER
    )

    The definition of the persistent class Test.Product is as follows. (The USEEXTENTSET parameter is set to 1.)

    Class Test.Product Extends %Persistent [ ClassType = persistent, DdlAllowed, Final, Owner = {SuperUser}, ProcedureBlock, SqlRowIdPrivate, SqlTableName = Product ]
    {
    Property ProductID As %Library.String(MAXLEN = 10) [ SqlColumnNumber = 2 ];

    Property ProductName As %Library.String(MAXLEN = 50) [ SqlColumnNumber = 3 ];

    Property Price As %Library.Integer(MAXVAL = 2147483647, MINVAL = -2147483648) [ SqlColumnNumber = 4 ];

    Parameter USEEXTENTSET = 1;

    /// Bitmap Extent Index auto-generated by DDL CREATE TABLE statement.  Do not edit the SqlName of this index.
    Index DDLBEIndex [ Extent, SqlName = "%%DDLBEIndex", Type = bitmap ];

    /// DDL Primary Key Specification
    Index PRODUCTPKEY1 On ProductID [ PrimaryKey, SqlName = PRODUCT_PKEY1, Type = index, Unique ];

    Storage Default
    {
    <Data name="ProductDefaultData">
    <Value name="1">
    <Value>ProductID</Value>
    </Value>
    <Value name="2">
    <Value>ProductName</Value>
    </Value>
    <Value name="3">
    <Value>Price</Value>
    </Value>
    </Data>
    <DataLocation>^CCar.Wt3i.1</DataLocation>
    <DefaultData>ProductDefaultData</DefaultData>
    <ExtentLocation>^CCar.Wt3i</ExtentLocation>
    <ExtentSize>0</ExtentSize>
    <IdFunction>sequence</IdFunction>
    <IdLocation>^CCar.Wt3i.1</IdLocation>
    <Index name="DDLBEIndex">
    <Location>^CCar.Wt3i.2</Location>
    </Index>
    <Index name="IDKEY">
    <Location>^CCar.Wt3i.1</Location>
    </Index>
    <Index name="PRODUCTPKEY1">
    <Location>^CCar.Wt3i.3</Location>
    </Index>

    <IndexLocation>^CCar.Wt3i.I</IndexLocation>
    <StreamLocation>^CCar.Wt3i.S</StreamLocation>

    <Type>%Storage.Persistent</Type>
    }

    }

     

    ExtentLocation: the hash value used to generate the global names for this class.

    DataLocation: the global variable name where record data is stored.

    Location: the global variable name specific to each index.

    IndexLocation: in this definition, this one is usually not used.

    StreamLocation: the global variable where stream property data is stored.
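
    If you want to look up which hashed global a given table actually uses, one option (a hedged sketch using embedded Python and the %Dictionary.StorageDefinition system table; this query is not part of the original FAQ) is:

    import iris

    # look up the data global generated for the class behind CREATE TABLE Test.Product
    rs = iris.sql.exec(
        "SELECT DataLocation FROM %Dictionary.StorageDefinition WHERE parent = 'Test.Product'")
    for row in rs:
        print(row[0])   # e.g. ^CCar.Wt3i.1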

     

    For more details about the information shown in the storage definition, see the documentation "Global naming method: when USEEXTENTSET=1".

    For the documentation for 2023.1 and earlier, see "Hashed global names".

    To use the same naming rule as in 2017.1 and earlier (global variable names of the form ^SchemaName.TableNameD, I, and S), specify the following class parameter when executing the CREATE TABLE statement.

    WITH %CLASSPARAMETER USEEXTENTSET = 0

    CREATE TABLE Test2.Product(
        ProductID VARCHAR(10) PRIMARY KEY,
        ProductName VARCHAR(50),
        Price INTEGER
    )
    WITH %CLASSPARAMETER USEEXTENTSET = 0

    The USEEXTENTSET parameter of the persistent class definition Test2.Product is defined as follows.

    Parameter USEEXTENTSET = 0;

     

    The storage definition of the persistent class definition Test2.Product is as follows.

    Storage Default
    {
    <Data name="ProductDefaultData">
    <Value name="1">
    <Value>ProductID</Value>
    </Value>
    <Value name="2">
    <Value>ProductName</Value>
    </Value>
    <Value name="3">
    <Value>Price</Value>
    </Value>
    </Data>
    <DataLocation>^Test2.ProductD</DataLocation>
    <DefaultData>ProductDefaultData</DefaultData>
    <IdFunction>sequence</IdFunction>
    <IdLocation>^Test2.ProductD</IdLocation>
    <IndexLocation>^Test2.ProductI</IndexLocation>
    <StreamLocation>^Test2.ProductS</StreamLocation>

    <Type>%Storage.Persistent</Type>
    }

     

    For more details about the table options specified with WITH, see the documentation "Table options".


    Related article: About the global variable names where persistent class definition data is stored

    Article
    · Apr 7, 2023 2m read

    Autoscaling IRIS Workloads. My adventure with IKO, HPA, and Traffic Cop

    This week I was able to demo to my team a proof of concept for our FMS interface on the traffic cop architecture. We are working on modernizing an Interoperability production running on mirrored Health Connect instances. We deploy IRIS workloads on Red Hat OpenShift Container Platform using the InterSystems Kubernetes Operator (IKO). We can define any number of replicas for the compute stateful set, where each compute pod runs our Interoperability production. We introduced the Horizontal Pod Autoscaler (HPA) to scale up the number of compute pods based on memory or CPU utilization, but IKO scaled them back down because it wanted to keep the defined number of replicas. When compute pods receive a shutdown signal while they are busy, messages in their queues do not get processed right away.
    We are transitioning to the "Traffic Cop" architecture to enable us to autoscale our workloads. Instead of deploying the interoperability production on multiple compute pods, we deploy it on a mirrored data pod, which functions as the traffic cop. We will create more REST interfaces where the message processing happens on stateless compute pods, which can be deployed without IKO; no interoperability production will run on the stateless computes.
    Compute and webgateway containers run as sidecar containers in one pod, where the webgateway receives requests to be processed by its paired compute.
    Along the way I created a REST service running on our stateless compute pods, starting from a Swagger API specification. InterSystems IRIS API Management tools generated the code for the REST interface instead of my coding it manually.
