<scope xpos='200' ypos='250' xend='200' yend='800' >
<code name='Transform' xpos='200' ypos='350' >
<![CDATA[ 
    
    /*
     If I use a <transform> activity with indirection (@$parameter(request,"%DTL")) and
     assign the returned object to the context, and an error occurs when the context is
     saved (because the returned object has, say, missing properties), IRIS only sees the
     error too late, and the <scope> we defined cannot help us handle the fault.

     So, to work around this, we call the transformation from a <code> activity instead
     and try to save the returned object before assigning it to the context and returning.
     If we can't save it, we assign the error to the status variable and leave the context
     alone. That way the <scope> captures the problem and takes us to the <catchall> activity.
    */
    
    Set tTransformClass=$parameter(request,"%DTL")
    Set status = $classmethod(tTransformClass, "Transform", request, .normalizedRequest)
    If $$$ISERR(status) Quit
    
    Set status = normalizedRequest.%Save()
    If $$$ISERR(status) Quit
    
    Set context.NormalizedData=normalizedRequest
]]>
</code>
<assign name="Done" property="context.Action" value="&quot;Done&quot;" action="set" xpos='200' ypos='550' />
<faulthandlers>
   <catchall name='Normalization error' xpos='200' ypos='650' xend='200' yend='1150' >
      <trace name='Normalization problem' value='"Normalization problem"' xpos='200' ypos='250' />
   </catchall>
</faulthandlers>
</scope>

"What files/directories should I keep track of from the durable %SYS directory? (e.g: I want to bind-mount those files/directories and import the code and see the Namespace and all the instance configuration )."

Answer: None.

Your configuration needs to be code. Right now, the best approach to have your namespaces/databases created and configured, along with CSP applications, security, etc., is to use %Installer manifests during the build process of the Dockerfile. I personally don't use Durable %SYS on my development machine. I prefer to use %Installer to configure everything, and if I need to pre-load tables with data, I load them from CSV files that are on GitHub together with the source code.

That also lets you source-control your code table contents. Test records can be inserted with the same procedure.
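
As a reference, a minimal %Installer manifest could look something like the sketch below (untested; the namespace name, database directories, source folder and the XData name are all illustrative). The Dockerfile build would then start a session that calls the setup() method of this class:

Class MyApp.Installer
{

XData MyAppSetup [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <Namespace Name="MYAPP" Code="MYAPP-CODE" Data="MYAPP-DATA" Create="yes" Ensemble="0">
    <Configuration>
      <Database Name="MYAPP-CODE" Dir="/usr/irissys/mgr/myapp-code" Create="yes" />
      <Database Name="MYAPP-DATA" Dir="/usr/irissys/mgr/myapp-data" Create="yes" />
    </Configuration>
    <Import File="/opt/myapp/src" Flags="ck" Recurse="1" />
  </Namespace>
</Manifest>
}

ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
    // The %Installer engine generates this method's implementation from the XData block above
    Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "MyAppSetup")
}

}

The exact invocation from the Dockerfile depends on your base image, but the idea is always the same: during the build, open a session and run MyApp.Installer's setup() so the namespace, databases and code are ready when the container starts.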

For an example of this, look at this demo (based on IRIS):

https://github.com/intersystems-community/irisdemo-demo-fraudprevention

Look at the normalized_datalake image. It loads CSV files into the tables as part of the Dockerfile build process. You will notice that this image is based on a base image that has some standard reusable code. The source for this base image is here:

https://github.com/intersystems-community/irisdemo-base-irisdb-community

I was using Atelier when I built this image, but the principle is the same. I am now doing the same with VS Code.

This is another demo based on IRIS for Health:

https://github.com/intersystems-community/irisdemo-demo-readmission

Look at the riskengine image. It loads data from JSON files into the data model as part of the build process. The JSON files are created by Synthea, an open-source tool for generating synthetic patient data.

If you use this method, any developer will be able to jump between versions of your software very quickly. If you need to fix a problem on an old version, you can just check out that version tag, build the image (which will load the tables) and make the changes you want while looking at that exact version with the exact data needed for it to work.

When you are done, you can go back to your previous branch, rebuild the image again using the current source code (and data in CSV/JSON files) and keep going with your new features.

Just to be clear: I don't mean you shouldn't use Durable %SYS. You must use Durable %SYS on Production!

But I have strong reservations about using it on your PC (developer environment). That's all. Even the central development environment (where Unit Tests could be run) wouldn't need it.

But your UAT/QA and Production should definitely use Durable %SYS and you should come up with your own DevOps approach to deploying your software on these environments so you can test the upgrade procedure.

Kind regards,

AS

Hi!

It is not hard to build a task that will search for your message headers from a specific service, process or operation in a date range and delete the headers and the messages. You can start by searching on Ens.MessageHeader. You will notice that there are indices that will allow you to search by date range and source config item pretty quickly.
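
A hedged sketch of what such a purge could look like (untested; the class/method name and parameters are illustrative, and it assumes the message bodies are persistent objects you are happy to delete with %DeleteId):

ClassMethod PurgeByConfigItem(pConfigItem As %String, pFrom As %TimeStamp, pTo As %TimeStamp) As %Status
{
    Set tSC = $System.Status.OK()
    Try
    {
        Set tStatement = ##class(%SQL.Statement).%New()
        Set tSC = tStatement.%Prepare("select ID, MessageBodyClassName, MessageBodyId from Ens.MessageHeader where SourceConfigName = ? and TimeCreated between ? and ?")
        Quit:$System.Status.IsError(tSC)

        Set tResult = tStatement.%Execute(pConfigItem, pFrom, pTo)
        While tResult.%Next()
        {
            // Delete the message body first (when there is one and it is a persistent class)...
            If (tResult.%Get("MessageBodyClassName")'="") && (tResult.%Get("MessageBodyId")'="")
            {
                Do $classmethod(tResult.%Get("MessageBodyClassName"), "%DeleteId", tResult.%Get("MessageBodyId"))
            }
            // ...then the header itself
            Do ##class(Ens.MessageHeader).%DeleteId(tResult.%Get("ID"))
        }
    }
    Catch (oException)
    {
        Set tSC = oException.AsStatus()
    }
    Quit tSC
}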

BUT: 

I always recommend that my customers create a production for each distinct business need. That allows you to have different backup, purging and even maintenance strategies (sometimes you want to touch one production and be sure that the others will not be touched). You are also able to move these productions to other servers when you need them to scale independently (oops! now this new service is receiving a big volume of messages and is disturbing the others!). On different servers, you may be able to patch one production (one server) while the others are running. This is especially feasible now with InterSystems IRIS. You can easily migrate from Ensemble to IRIS and have a production running on a VM with 2 cores, another on another VM with 4 cores, and so on. You don't have to start with 8 cores and scale by 4 like Ensemble. So you can pay the same as you are paying today, but with the possibility of having more governance.

PS: Mirroring on IRIS is paid for differently from mirroring with Ensemble: with IRIS you do have to pay for the mirrors as well, while with Ensemble you don't have to.

So I sincerely believe you will be happier having another production. It's a good start toward more governance.

Kind regards,

Amir Samary

Senior Sales Engineer - InterSystems

Hi!

Are you calling the workflow using an asynchronous <call> activity and a <sync> activity to wait on it? For how long is your <sync> activity waiting? Beware that a workflow task will only stay in the workflow inbox while that <sync> activity is not done. So, if you don't have a <sync> activity, or if the timeout on the <sync> activity is too short, the task will not appear in the workflow inbox, or will appear only momentarily.

The idea is that the BPL can put a timeout as an SLA on that task. If no one finishes the task before that timeout expires, you will receive whatever (incomplete) response is there and can, for example, escalate to other roles, send alerts, etc.
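
As a reference, the calling pattern in BPL looks roughly like this (an untested sketch; the target name, subject text and 24-hour timeout are illustrative, and the timeout is what acts as the SLA on the task):

<call name="Ask approval" target="MyWorkflowRole" async="1">
  <request type="EnsLib.Workflow.TaskRequest">
    <assign property="callrequest.%Subject" value="&quot;Please review this request&quot;" action="set" />
  </request>
  <response type="EnsLib.Workflow.TaskResponse" />
</call>
<sync name="Wait for approval" calls="Ask approval" type="all" timeout="86400" />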

Kind regards,

AS

Hi Eduard!

Without going too deep into your code and trying to enhance it, I can suggest that you:

  1. Put this method on a utility class
  2. Recode it so that it is a method generator. The method would use a variation of your current code to produce a $Case() that simply returns the position of a given property instead of querying the class definition at runtime.
  3. Make the class you want to use it on (e.g., Sample.Address) inherit from that utility class.

Of course, if this is a one-time thing, you could simply implement number 2. Or you could forget about my suggestion altogether if performance isn't an issue for this scenario. By using a method generator, you eliminate from runtime any inefficiency of the code used to (pre)compute the positions. A sketch of the idea follows.
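
Something along these lines might illustrate the idea (an untested sketch; the class name Util.PropertyPosition and the use of the compiled property order are illustrative, not your original logic):

Class Util.PropertyPosition [ Abstract ]
{

/// Method generator: at compile time this emits a single $Case() with the
/// precomputed property positions, so nothing is looked up at runtime.
ClassMethod GetPropertyPosition(pName As %String) As %Integer [ CodeMode = objectgenerator ]
{
    Set tCase = "pName"
    For i=1:1:%compiledclass.Properties.Count()
    {
        Set tProp = %compiledclass.Properties.GetAt(i)
        Set tCase = tCase_","""_tProp.Name_""":"_i
    }
    Do %code.WriteLine(" Quit $Case("_tCase_",:"""")")
    Quit $System.Status.OK()
}

}

A class such as Sample.Address would then inherit from Util.PropertyPosition, and calling ##class(Sample.Address).GetPropertyPosition("City") costs nothing at runtime beyond evaluating the generated $Case().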

Kind regards,

AS

Hi!

HL7 has an XML Schema that you can import as Ensemble classes. You can use these classes to implement your business services and operations web services. Here is the XML schema link.

Another way of using this XML Schema is not to import it as classes. Instead, use the XML Schema importing tool in the Ensemble Management Portal and use EnsLib.EDI.XML.Document with it. You could receive plain XML text and transform it into an EnsLib.EDI.XML.Document. It will be faster to process than transforming it into objects. If you are interested, I can share some code.

All that being said, I strongly suggest NOT TO DO ANY OF THIS.

HL7v2 XML encoding isn't used anywhere, and the usual reasoning behind adopting it doesn't hold up. If you are asking someone to support HL7v2 and they are willing to spend their time implementing the standard, then explain to them that it is best for them to simply implement it the way 99% of the world uses it: with the normal |^~\& separators. It's a text string that they can simply embed into their web service call.

There are these two XML encoding versions:

  •  EnsLib.HL7.Util.FormatSimpleXMLv2 - provided by Ensemble (example you have already on this post)
  • The XML encoding from HL7 itself (link I gave you above) 

Both encodings are horrible since they won't give you pretty field names that you can simply transform into an object and understand the meaning of the fields just by reading their names. Here is what a class generated from the standard HL7v2 XML Schema looks like (the PID segment):

Class HL7251Schema.PID.CONTENT Extends (%Persistent, %XML.Adaptor) [ ProcedureBlock ]
{

Parameter XMLNAME = "PID.CONTENT";

Parameter XMLSEQUENCE = 1;

Parameter XMLTYPE = "PID.CONTENT";

Parameter XMLIGNORENULL = 1;

Property PID1 As HL7251Schema.PID.X1.CONTENT(XMLNAME = "PID.1", XMLREF = 1);

Property PID2 As HL7251Schema.PID.X2.CONTENT(XMLNAME = "PID.2", XMLREF = 1);

Property PID3 As list Of HL7251Schema.PID.X3.CONTENT(XMLNAME = "PID.3", XMLPROJECTION = "ELEMENT", XMLREF = 1) [ Required ];

Property PID4 As list Of HL7251Schema.PID.X4.CONTENT(XMLNAME = "PID.4", XMLPROJECTION = "ELEMENT", XMLREF = 1);

You see? No semantics. Just structure. So, why not simply send the HL7v2 standard text through your Web Service or HTTP service? Here are the advantages of using the standard HL7v2 text encoding instead of the XML encoding:

  • You will all be working with the HL7 standard as 99% of the world that uses HL7v2 does instead of trying to use something that never caught up (XML encoding)
  • Ensemble brings Business Services and Business Operations out of the box to receive/send HL7v2 text messages through TCP, FTP, File, HTTP and SOAP. So you wouldn't have to code much in Ensemble to process them.
  • It takes less space (XML takes a lot of space)
  • It takes less processing (XML is heavier to parse)
  • Using the XML schema won't really help you with anything since both XML encoding systems provide just a dumb structure for your data, without semantics.

Kind regards,

AS

Hi!

I would add that they exist for compatibility with systems migrated from other databases. On other SQL databases, it's a common practice to use as the primary key a meaningful attribute of the tuple such as SSN, Code, Part Number, etc. I think this is a very bad practice even on normal SQL databases since a Primary Key/IdKey, once created, can't be easily changed. If you want to have one of your meaningful fields of your tuple as your primary key, use IdKey. I strongly advise against it though.

So, instead of having as Primary Key one of the attributes of the tuple (using IdKey), it's much better to have a sequential number. This is provided by different databases in different ways: Oracle has its SEQUENCEs, SQL Server has its IDENTITY columns. Continuing with the Oracle and SQL Server examples, many fields/columns on a table can be populated with values from a sequence (Oracle) or be auto-incremented (SQL Server), but only one of the fields can be the Primary Key. The Primary Key is used by other rows on the table or other tables to reference this row. It can't be changed precisely because there may be other rows relying on it, and that is fine.

In Caché, we have our ID field, which is an auto-increment kind of primary key. Other rows/objects from this or other tables will reference this object through its ID. You can create your own unique fields, as many as you want: Code, PartNum, SSN, etc. You can define unique indices for them all. Don't use IdKey to do that; that is not its purpose. IdKey will make that field THE Primary Key of the class: it will make the class's ID be the field you picked. That is very bad (IMHO).

On the other hand, there are cases where performance can be increased by using IdKey. It saves you a trip to the index global to get the real ID for that field before going to the data global to get the other fields you need. If we are talking about millions of accesses per second and you are pretty sure you won't need to change the value of that field once it is created, use IdKey; it will give you better performance. But if you do, beware that by choosing your own IdKey, you may not be able to use special kinds of indices such as bitmap indices or even iFind. It will depend on the type of IdKey you pick. If it's a string, for instance, iFind and bitmap indices won't work with your class anymore; they rely on having a numeric sequential ID to work.
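
To make the distinction concrete, here is an illustrative class (names made up) that keeps the default sequential ID and enforces uniqueness with a normal unique index:

Class Demo.Part Extends %Persistent
{

/// Meaningful business identifier: unique, but NOT the IdKey, so it can still
/// be changed later without breaking references from other objects/rows.
Property PartNum As %String;

Index PartNumIndex On PartNum [ Unique ];

}

If you really wanted PartNum to be the primary key, you would declare the index as [ IdKey, Unique ] instead, accepting that the ID would no longer be a sequential integer and that bitmap indices and iFind would then be unavailable for this class.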

Kind regards,

AS

If you have a flag on your database that you set after reading the records, and this flag has an index, you should be fine. Why bother? I mean: you could set this up to run every hour and process the records that have the right flag...

But if you really need to run your service on a specific schedule, I would suggest changing your architecture. Try creating a Business Service without an adapter (ADAPTER=Ens.InboundAdapter) so you can call it manually through Ens.Director.CreateBusinessService(). On the Business Service, call a Business Process that will call a Business Operation to grab the data for you. Then your Business Process will receive the data and do whatever needs to be done with it, maybe calling other Business Operations.

If that works, create a task (a class that inherits from %SYS.Task.Definition) that will create an instance of your Business Service using Ens.Director and call it. Configure the task to run on whatever schedule you need, such as "Once a day at 12PM", "Every hour" or "On Monday at 12AM", etc.
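
A minimal sketch of such a task class (untested; the class name, TaskName and the "MyApp.Service" config item name are illustrative):

Class MyApp.Task.RunMyService Extends %SYS.Task.Definition
{

Parameter TaskName = "Run MyApp Business Service";

Method OnTask() As %Status
{
    // Create the adapterless Business Service by its configured name and invoke it manually
    Set tSC = ##class(Ens.Director).CreateBusinessService("MyApp.Service", .tService)
    Quit:$System.Status.IsError(tSC) tSC
    Quit tService.ProcessInput(##class(Ens.Request).%New())
}

}

Once compiled, the class shows up when you create a new task in the Task Manager, and you set the schedule there.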

Kind regards,

AS

I think you are looking for the %MATCHES SQL extension of ours:

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...

But it looks like this uses a simplified pattern syntax, not real regular expressions. Still, it may be powerful enough to accomplish what you need. Another option is %PATTERN, which lets you use Caché's native pattern matching.
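
For instance (illustrative queries against the SAMPLES namespace; the exact patterns would depend on your data):

select Name from Sample.Person where Name %MATCHES 'S*'                    -- simplified wildcard match
select Name, SSN from Sample.Person where SSN %PATTERN '3N1"-"2N1"-"4N'    -- Caché native pattern matching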

I use GitHub Issues, which lets you create issues of all sorts (bugs, enhancements, tasks, etc.) and assign them to someone or to a group of people. The issues are associated with the source code repository, and there is also a Projects feature. You create a project and then list the tasks needed to accomplish that project. Then you can make every task an issue, assign it to people, and drag and drop the tasks from one stage to the next, like Specification > Development > Testing > Product.

GitFlow is a good and flexible workflow. But I don't use Git to deploy on Pre-Live or LIVE. I normally would have five environments:

  • Development - Your machine
  • Development - where you integrate the development branch from Git with the work from all developers. Downloading the code from GitHub can be done automatically (when a change is integrated back into the develop branch) or manually.
  • QA - This environment is where you download code from GitHub's master branch with a new release. Your users can test the new release here without being bothered. 
  • Pre-Production/Pre-LIVE - This environment is periodically overwritten with a copy of LIVE. It is where you try and test applying your new release.
  • Production

GitFlow's hotfix may be used depending on your environment. Depending on the change and on the urgency, it can be a pain to actually test the fix on your development machine. Your local globals may not match the storage definition of what is in production, because you may have been working on a new version of your classes with different global structures. You may need large amounts of data or specific data to reproduce the problem on your developer machine, etc. You can do it, but every hotfix will be a different workflow. Depending on the urgency, you may simply not have the time to prepare your development environment with the data and conditions to reproduce the problem, fix it and produce the hotfix. But it can be done. On the other hand, as pre-production is a copy of LIVE, you can safely fix the problem there manually (forget GitHub), apply the change to LIVE and then incorporate these changes into your next release. I think this is cleaner. Every time you have a problem in LIVE, you can investigate it on PRE-LIVE. If PRE-LIVE is outdated, you can ask Operations for an emergency unscheduled refresh of PRE-LIVE to work on it.

About patching

I recommend always creating your namespaces with two databases: One for CODE and another for DATA.

That allows you to implement patching with a simple copy of the CODE database: you stop the instance, copy the database and start the instance. Simple as that. Every release may have an associated class with code to be run to rebuild indices, fix some global structures that may have changed, do some other kind of maintenance, etc.
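
For illustration, such a release class could be as simple as the untested sketch below (the class name and the index-rebuild target are made up):

Class MyApp.Release.V2024R1 Extends %RegisteredObject
{

/// One-off maintenance to run right after this release's CODE database has been copied in
ClassMethod Run() As %Status
{
    Set tSC = $System.Status.OK()
    Try
    {
        // Rebuild indices that changed in this release
        Set tSC = ##class(MyApp.Data.Invoice).%BuildIndices()
        Quit:$System.Status.IsError(tSC)
        // ... any other maintenance for this release (global fix-ups, purges, etc.) goes here
    }
    Catch (oException)
    {
        Set tSC = oException.AsStatus()
    }
    Quit tSC
}

}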

If you are using Ensemble and don't want to stop your instance, you can generate your patch as a normal XML package plus a Word document explaining how to apply it. Test this on your Pre-LIVE environment. Fix the document and/or the XML package if necessary, and try again until patching works. Then run it on LIVE.

Before applying new releases to PRE-LIVE or LIVE, take a full snapshot of your servers' virtual machines. If the patching procedure fails for some reason, you may need to roll back to that point in time. This is especially useful on PRE-LIVE, where you are still testing the patching procedure and will most likely break things until you get it right. Being able to quickly go back in time and try again and again will give you the freedom you need to produce a high-quality patching procedure.

If you can afford downtime, use it. Don't try to push a zero-downtime policy if you don't really need it. It will only make things unnecessarily complex and risky. You can patch Ensemble integrations without downtime with the right procedure, though. A microservices architecture may also help you eliminate downtime, but it is complex and requires a lot of engineering.

Using External Service Registry

I recommend using the External Service Registry so that when you generate the XML package with the new production definition, no references to endpoints, folders, etc. are in it. Even if you don't send your entire production class, this will help you with the periodic refreshing of the PRE-LIVE databases from LIVE. The External Service Registry stores the endpoint configurations outside your databases, and they will be different on LIVE, PRE-LIVE, QA, DEV and on the developer's machine (which may be using local mock services, new versions of services elsewhere, etc.).

Hi!

I am not sure if I understood your questions. But here is an explanation that may help you...

If you want to run a SQL query filtering by a date

Let's take the Sample.Person class in the SAMPLES namespace as an example. There is a DOB (date of birth) field of type %Date. This stores dates in the $Horolog format of Caché (an integer that counts the number of days since December 31, 1840).

If your date is in the format DD/MM/YYYY (for instance), you can use the TO_DATE() function to run your query and convert this date string to the $Horolog number:

select * from Sample.Person where DOB=TO_DATE('27/11/1950','DD/MM/YYYY')

That will work independently of the runtime mode you are on (Display, ODBC or Logical).

On the other hand, if you are running your query with Runtime Select Mode ODBC, you can reformat your date string to the ODBC format (YYYY-MM-DD) and skip TO_DATE():

select * from Sample.Person where DOB='1950-11-27'

That still converts the string '1950-11-27' to the internal $Horolog number, which is:

USER>w $ZDateH("1950-11-27",3)

40142

If you already have the date in the internal $Horolog format, you can run your query using Runtime Select Mode Logical:

select * from Sample.Person where DOB=40142

You can try these queries on the Management Portal. Just remember to change the Runtime Select Mode accordingly.

If you are using dynamic queries with %Library.ResultSet or %SQL.Statement, set the Runtime Mode (%SelectMode property on %SQL.Statement) before running your query.
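
For example, a minimal dynamic-SQL sketch (the query reuses the example above; for %SelectMode, 0 = Logical, 1 = ODBC, 2 = Display):

Set tStatement = ##class(%SQL.Statement).%New()
Set tStatement.%SelectMode = 1  // ODBC mode: date parameters and output use the YYYY-MM-DD format
Set tSC = tStatement.%Prepare("select Name, DOB from Sample.Person where DOB = ?")
If $System.Status.IsError(tSC) Do $System.Status.DisplayError(tSC)
Set tResult = tStatement.%Execute("1950-11-27")
While tResult.%Next() { Write tResult.%Get("Name")," - ",tResult.%Get("DOB"),! }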

If you want to find records from a moving window of 30 days

The previous query brought back, on my system, the person "Jafari,Zeke K.", who was born on 1950-11-27. The following query will bring back all people born between 30 days before '1950-11-27' and '1950-11-27'. I will use the DATEADD function to calculate this window, and I have selected the ODBC Runtime Select Mode to run the query:

select Name, DOB from Sample.Person where DOB between DATEADD(dd,-30,'1950-11-27') and '1950-11-27'

Two people appear on my system: Jafari and Quixote. Quixote was born on '1950-11-04', which is inside the window.

Moving window with current_date

You can use current_date to write queries such as "who was born between 365 days ago and today?":

select Name, DOB from Sample.Person where DOB between DATEADD(dd,-365,current_date) and current_date

Using greater than or less than

You can also use >, >=, < or <= with dates like this:

select Name, DOB from Sample.Person where DOB >= DATEADD(dd,-365,current_date) 

Just be careful with the Runtime Select Mode. The following works with ODBC Runtime Select Mode, but won't work with Display or Logical Mode:

select Name, DOB from Sample.Person where DOB >= DATEADD(dd,-30,'1950-11-27') and DOB<='1950-11-27'

To make this work with Logical Mode, you would have to apply TO_DATE to the dates first:

select Name, DOB from Sample.Person where DOB >= DATEADD(dd,-30,TO_DATE('1950-11-27','YYYY-MM-DD')) and DOB<=TO_DATE('1950-11-27','YYYY-MM-DD')

To make it work with Display mode, format the date according to your NLS configuration. Mine would be 'DD/MM/YYYY' because I am using a Spanish locale.

This is quick and dirty code I just wrote that can convert simple JSON strings to XML. Sometimes the JSON will be simple enough for simple code like this... I am not a JSON expert, but maybe this can be a good starting point for something better.

This will work only on Caché 2015.2+.

Call the Test() method of the following class:

Class Util.JSONToXML Extends %RegisteredObject
{

ClassMethod Test()
{
    Set tSC = $System.Status.OK()
    Try
    {
        Set oJSON={"Prop1":"Value1","Prop2":2}
        Set tSC = ..JSONToXML(oJSON.%ToJSON(), "Test1", .tXML1)
        Quit:$System.Status.IsError(tSC)
        Write tXML1
        
        Write !!
        Set oJSON2={"Prop1":"Value1","Prop2":2,"List":["Item1","Item2","Item3"]}
        Set tSC = ..JSONToXML(oJSON2.%ToJSON(), "Test2", .tXML2)
        Quit:$System.Status.IsError(tSC)
        Write tXML2
        
        Write !!
        Set oJSON3={
                "name":"John",
                "age":30,
                "cars": [
                    { "name":"Ford", "models":[ "Fiesta", "Focus", "Mustang" ] },
                    { "name":"BMW", "models":[ "320", "X3", "X5" ] },
                    { "name":"Fiat", "models":[ "500", "Panda" ] }
                ]
             }
        Set tSC = ..JSONToXML(oJSON3.%ToJSON(), "Test3", .tXML3)
        Quit:$System.Status.IsError(tSC)
        Write tXML3

    }
    Catch (oException)
    {
        Set tSC =oException.AsStatus()
    }
    
    Do $System.Status.DisplayError(tSC)
}

/// Converts a JSON string into a simple XML document wrapped in a root element named pRootElementName
ClassMethod JSONToXML(pJSONString As %String, pRootElementName As %String, Output pXMLString As %String) As %Status
{
        Set tSC = $System.Status.OK()
        Try
        {
            Set oJSON = ##class(%Library.DynamicObject).%FromJSON(pJSONString)
            
            Set pXMLString="<?xml version=""1.0"" encoding=""utf-8""?>"_$C(13,10)
            Set pXMLString=pXMLString_"<"_pRootElementName_">"_$C(13,10)
            
            Set tSC = ..ConvertFromJSONObjectToXMLString(oJSON, .pXMLString)
            Quit:$System.Status.IsError(tSC)
            
            Set pXMLString=pXMLString_"</"_pRootElementName_">"_$C(13,10)
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
}

/// Recursively walks a %DynamicObject/%DynamicArray and appends the equivalent XML elements to pXMLString
ClassMethod ConvertFromJSONObjectToXMLString(pJSONObject As %Library.DynamicAbstractObject, Output pXMLString As %String) As %Status
{
        Set tSC = $System.Status.OK()
        Try
        {
            Set iterator = pJSONObject.%GetIterator()
            
            While iterator.%GetNext(.key, .value)
            {
                Set tXMLKey=$TR(key," ")
                Set pXMLString=pXMLString_"<"_tXMLKey_">"
                
                If value'=""
                {
                    If '$IsObject(value)
                    {
                        Set pXMLString=pXMLString_value
                    }
                    Else
                    {
                        Set pXMLString=pXMLString_$C(13,10)
                        If value.%ClassName()="%DynamicObject"
                        {
                            Set tSC = ..ConvertFromJSONObjectToXMLString(value, .pXMLString)
                            Quit:$System.Status.IsError(tSC)                            
                        }
                        ElseIf value.%ClassName()="%DynamicArray"
                        {
                            Set arrayIterator = value.%GetIterator()
                                        
                            While arrayIterator.%GetNext(.arrayKey, .arrayValue)
                            {
                                Set pXMLString=pXMLString_"<"_tXMLKey_"Item key="""_arrayKey_""">"
                                If '$IsObject(arrayValue)
                                {
                                    Set pXMLString=pXMLString_arrayValue
                                }
                                Else
                                {                                    
                                    Set tSC = ..ConvertFromJSONObjectToXMLString(arrayValue, .pXMLString)
                                    Quit:$System.Status.IsError(tSC)                            
                                }
                                Set pXMLString=pXMLString_"</"_tXMLKey_"Item>"_$C(13,10)
                            }
                            Quit:$System.Status.IsError(tSC)
                        }
                    }
                }
                
                Set pXMLString=pXMLString_"</"_tXMLKey_">"_$C(13,10)
            } //While
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
}

}

Here is the output:

Do ##class(Util.JSONToXML).Test()
<?xml version="1.0" encoding="utf-8"?>
<Test1>
<Prop1>Value1</Prop1>
<Prop2>2</Prop2>
</Test1>
 
 
<?xml version="1.0" encoding="utf-8"?>
<Test2>
<Prop1>Value1</Prop1>
<Prop2>2</Prop2>
<List>
<ListItem key="0">Item1</ListItem>
<ListItem key="1">Item2</ListItem>
<ListItem key="2">Item3</ListItem>
</List>
</Test2>
 
 
<?xml version="1.0" encoding="utf-8"?>
<Test3>
<name>John</name>
<age>30</age>
<cars>
<carsItem key="0"><name>Ford</name>
<models>
<modelsItem key="0">Fiesta</modelsItem>
<modelsItem key="1">Focus</modelsItem>
<modelsItem key="2">Mustang</modelsItem>
</models>
</carsItem>
<carsItem key="1"><name>BMW</name>
<models>
<modelsItem key="0">320</modelsItem>
<modelsItem key="1">X3</modelsItem>
<modelsItem key="2">X5</modelsItem>
</models>
</carsItem>
<carsItem key="2"><name>Fiat</name>
<models>
<modelsItem key="0">500</modelsItem>
<modelsItem key="1">Panda</modelsItem>
</models>
</carsItem>
</cars>
</Test3>
I hope that helps!
Kind regards,
AS

Hi!

If you are not using OS single sign-on, this shell script should do it:

#!/bin/bash

csession AUPOLDEVENS <<EOFF
SuperUser
superuserpassword
ZN "%SYS"
Do ^SECURITY
1
3




halt
EOFF

Where:

  • SuperUser - Is your username
  • superuserpassword - Is your SuperUser password

I chose SECURITY menu option 1, then option 3. Then I hit ENTER until I exited the ^SECURITY routine, and I terminated the session with the halt command.

If you are using OS single sign-on, remove the first two lines (username and password), since Caché won't ask for them.

The blank lines after the number 3 are the ENTER keystrokes you send to go back up the menu hierarchy until you exit.

The halt is necessary to avoid an error such as the following:

ERROR: <ENDOFFILE>SYSTEMIMPORTALL+212^SECURITY
%SYS>
<ENDOFFILE>
<ERRTRAP>

You can do more complex things with this technique, such as validating errors and returning Unix exit codes to your shell so that you know whether the operation was successful:

#!/bin/bash

csession INSTANCENAME <<EOFF
ZN "MYNAMESPACE"

Set tSC = ##class(SomeClass).SomeMethod()
If $System.Status.IsError(tSC) Do $System.Status.DisplayError(tSC) Do $zu(4,$j,1) ;Failure!

Do $zu(4,$j,0) ;OK!
EOFF

The $zu(4,$j,rc) call halts the session and returns the exit code rc to your shell script. As you can see, the halt command is not necessary when using this $zu function.

I hope that helps!

Kind regards,

AS

Hi!

Assuming you meant "BPL" (Business Process Language) instead of "DTL" (Data Transformation Language):

If you simply want your Business Operation to try forever until it gets it done:

  • On the BPL, make a synchronous call, or make an asynchronous call with a <sync> activity for it.
  • On the BO, set FailureTimeout=-1. Also, try to understand the "Reply Code Actions" setting of your Business Operation. You don't want to retry for all kinds of errors; you probably want to retry for some errors and fail for others. If you set FailureTimeout=-1 and your Reply Code Actions decide to retry for that kind of error, it will retry forever until it gets it done. If your Reply Code Actions decide to fail for other types of errors, it will return an error to your Business Process.
  • If you know that, for some errors, the BO will return a failure, protect the call you make in your BPL with a <scope> so you can capture this and take additional actions.

More about "Reply Code Actions" here.

Kind regards,

Amir Samary

Hi Eduard!

Here is a simple way of finding it out:

select top 1 TimeLogged from ens_util.log
where ConfigName='ABC_HL7FileService'
  and SourceMethod='Start'
  and Type='4' -- Info
order by %ID desc

You put the logical name of your component in ConfigName. There are bitmap indices on both Type and ConfigName, so this should be blazing fast too! Although, for some reason, the query plan is not using Type:
 
Relative cost = 329.11
    Read bitmap index Ens_Util.Log.ConfigName, using the given %SQLUPPER(ConfigName), and looping on ID.

    For each row:
    - Read master map Ens_Util.Log.IDKEY, using the given idkey value.
    - Output the row.
     
Kind regards,

AS

OK... I think I have found out how to do it.

The problem was that I use a main dispatcher %CSP.REST class that routes the REST calls to other %CSP.REST classes, which I will call the delegates.

I had the CHARSET parameter on the delegates but not on the main router class! I just added it to the main router class and it worked!

So, in summary, to avoid doing $ZConvert everywhere in REST applications, make sure you have both parameters CONVERTINPUTSTREAM=1 and CHARSET="utf-8" (see the sketch after the snippet below). It won't hurt to have the CHARSET declarations on your CSP and HTML pages as well, like:

<!DOCTYPE html>
<html>
<head>
    <CSP:PARAMETER Name="CHARSET" Value="utf-8">
    <title>My tasks</title>
    <meta charset="utf-8" />
</head>
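
For reference, a minimal sketch of what the main router class could look like with both parameters in place (the class names and the route are illustrative):

Class MyApp.REST.Main Extends %CSP.REST
{

Parameter CONVERTINPUTSTREAM = 1;

Parameter CHARSET = "utf-8";

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Map Prefix="/tasks" Forward="MyApp.REST.Tasks"/>
</Routes>
}

}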

Kind regards,

Amir Samary

Hi!

You don't actually need to configure a certificate on your Apache or even to encrypt the communication between Apache and the SuperServer with SSL/TLS.

You can create a CSP application that is Unauthenticated and give it the privileges to do whatever your web services need to do (Application Roles - more info here). I would also configure "Permitted Classes" with a pattern that only allows your specific web services to be called. I would also block CSP/ZEN and DeepSee on this CSP application.

More info on configuring CSP applications here.

Then, for each web service you want to publish on this application (that is mentioned in the Permitted Classes), you will create a Web Service Security Policy using an existing Caché Studio wizard for that (more info here).

The wizard will allow you to choose from a set of options, with several variations for each option, for securing your web service. You may choose "Mutual X.509 Certificates Security" from the combo box. Here is the description for this option:

This policy requires all peers to sign the message body and timestamp, as well as WS-Addressing headers, if included. It also optionally encrypts the message body with the public key of the peer's certificate.

You can configure Caché PKI (Public Key Infrastructure) to have your own CA (Certificate Authority) and generate the certificates that your server and clients will use.

This guarantees that only a client that has the certificate given by you will be able to authenticate and call this web service. The body of the call will be encrypted.

If you restrict the entry points of this "Unauthenticated" CSP application using "Permitted Classes", and if these permitted classes are web services protected by these policies, you are good to go. Remember to give this application the privileges (Application Roles) that your service needs to run properly (privileges on the database resource, SQL tables, etc.).

This doesn't require a username token. If you still want to use a username/password token, you can require that using the same wizard. Here is an additional description that the wizard provides:

Include Encrypted UsernameToken: This policy may optionally require the client to send a Username Token (with username and password). The Username Token must be specified at runtime. To specify the Username Token, set the Username and Password properties or add an instance of %SOAP.Security.UsernameToken to the Security header with the default $$$SOAPWSPasswordText type.

If you decide to do that, make sure your CSP application is configured for "Password" authentication and that "Unauthenticated" is not checked.

Also, don't forget to use a real Apache web server. My point is that you don't need to configure your Apache or its connection to the SuperServer with an SSL certificate for all of this to work. Caché will do the work, not Apache. Apache will receive a SOAP call that won't be totally encrypted, but if you look into it, you will notice that the body is encrypted, the header includes a signed timestamp, the username/password token is encrypted, etc. So, although this is not HTTPS, the certificates are being used to do all sorts of things in the header and the body of the call that give you a lot more protection than plain HTTPS.

But please don't get me wrong: you do need HTTPS if you are building an HTML web application or if you are using other kinds of web services, such as REST, that don't have all the alternative enterprise security provided by SOAP. SOAP can stand alone, secure, without HTTPS. Your web application can't.