<scope xpos='200' ypos='250' xend='200' yend='800' >
<code name='Transform' xpos='200' ypos='350' >
<![CDATA[ 
    
    /*
     If I use a <transform> activity with indirection (@$parameter(request,"%DTL")) and
     assign the returned object to the context, and an error is produced when the context is
     saved (because the returned object has, say, missing properties), IRIS only detects the
     error too late, and the <scope> we defined will not be able to handle the fault.
     
     So, to avoid this problem, we call the transformation from a <code> activity instead,
     and try to save the returned object before assigning it to the context and returning.
     If we can't save it, we assign the error to the status variable and leave the context
     alone. This way the <scope> captures the problem and takes us to the <catchall> activity.
    */ 
    
    Set tTransformClass=$parameter(request,"%DTL")
    Set status = $classmethod(tTransformClass, "Transform", request, .normalizedRequest)
    If $$$ISERR(status) Quit
    
    Set status = normalizedRequest.%Save()
    If $$$ISERR(status) Quit
    
    Set context.NormalizedData=normalizedRequest
]]>
</code>
<assign name="Done" property="context.Action" value="&quot;Done&quot;" action="set" xpos='200' ypos='550' />
<faulthandlers>
   <catchall name='Normalization error' xpos='200' ypos='650' xend='200' yend='1150' >
      <trace name='Normalization problem' value='"Normalization problem"' xpos='200' ypos='250' />
   </catchall>
</faulthandlers>
</scope>

"What files/directories should I keep track of from the durable %SYS directory? (e.g: I want to bind-mount those files/directories and import the code and see the Namespace and all the instance configuration )."

Answer: None.

Your configuration needs to be code. Right now, the best approach to creating/configuring your namespaces, databases, CSP applications, security, etc. is to use %Installer manifests during the build process of the Dockerfile. I personally don't use Durable %SYS on my development machine. I prefer to use %Installer to configure everything, and if I need to pre-load tables with data, I load it from CSV files that are on GitHub along with the source code.

That allows you to source control your Code Table contents as well. Test records can be inserted with the same procedure.
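
As an illustration only, here is a minimal sketch of such a manifest. The class name, namespace name, database directory and source path (/opt/myapp/src) are all made up for the example; adjust them to your project. The setup() method is the standard %Installer entry point and can be invoked during the Dockerfile build to create and configure the namespace:

Class MyApp.Installer
{

/// Minimal %Installer manifest: creates the MYAPP namespace/database and imports the source code.
XData MyAppManifest [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <Namespace Name="MYAPP" Create="yes" Code="MYAPP" Data="MYAPP">
    <Configuration>
      <Database Name="MYAPP" Dir="/usr/irissys/mgr/MYAPP" Create="yes"/>
    </Configuration>
    <Import File="/opt/myapp/src" Flags="ck" Recurse="1"/>
  </Namespace>
</Manifest>
}

/// Standard %Installer entry point: generates the code that runs the manifest above.
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
    Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "MyAppManifest")
}

}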

For an example of this, look at this demo (based on IRIS):

https://github.com/intersystems-community/irisdemo-demo-fraudprevention

Look at the normalized_datalake image. It loads CSV files into the tables as part of the Dockerfile build process. You will notice that this image is based on a base image that has some standard reusable code. The source for this base image is here:

https://github.com/intersystems-community/irisdemo-base-irisdb-community

I was using Atelier when I built this image, but the principle is the same; I now use VS Code to do the same thing.

This is another demo based on IRIS for Health:

https://github.com/intersystems-community/irisdemo-demo-readmission

Look at the riskengine image. It loads data from JSON files into the data model as part of the build process. The JSON files are created by Synthea, an open-source tool for generating synthetic patient data.

If you use this method, any developer will be able to jump between versions of your software very quickly. If you need to fix a problem on an old version, you can just check out that version tag, build the image (which will load the tables), and make the changes you want while looking at that exact version with the exact data needed for it to work.

When you are done, you can go back to your previous branch, rebuild the image again using the current source code (and data in CSV/JSON files) and keep going with your new features.

Just to be clear: I don't mean you shouldn't use Durable %SYS. You must use Durable %SYS on Production!

But I have strong reservations about using it on your PC (developer environment). That's all. Even the central development environment (where Unit Tests could be run) wouldn't need it.

But your UAT/QA and Production should definitely use Durable %SYS and you should come up with your own DevOps approach to deploying your software on these environments so you can test the upgrade procedure.

Kind regards,

AS

/// <p>This Reader is to be used with %ResultSet. It will allow you to scan a CSV file, record by record.
/// There are three queries available for reading:</p>
/// <ol>
/// <li>CSV - comma separated
/// <li>CSV2 - semicolon separated
/// <li>TSV - tab separated
/// </ol>
/// <p>Use the query according to your file type! The reader expects to find a header on the first line with the names of the fields.
/// This will be used to name the columns when you use the method Get().</p>
/// <p>The reader can deal with fields quoted and unquoted automatically.</p>  
///
///    <p>This assumes that your CSV file has a header line.
/// This custom query supports Comma Separated Values (CSV), Semicolon Separated Values (CSV2)
/// and Tab Separated Values (TSV).</p>
///
/// <EXAMPLE>
///
///            ; Comma Separated Values
///            ;Set oCSVFileRS=##class(%ResultSet).%New("IRISDemo.Util.FileReader:CSV")
///            ;
///            ; Semicolon Separated Values
///            ;Set oCSVFileRS=##class(%ResultSet).%New("IRISDemo.Util.FileReader:CSV2")
///            ;
///            ; Tab Separated Values
///            Set oCSVFileRS=##class(%ResultSet).%New("IRISDemo.Util.FileReader:TSV")
///            
///            Set tSC = oCSVFileRS.Execute(pFileName)
///            Quit:$System.Status.IsError(tSC)
///            
///            While oCSVFileRS.Next()
///            {
///                Set tDataField1 = oCSVFileRS.Get("Column Name 1")
///                Set tDataField2 = oCSVFileRS.Get("Column Name 2")
///                Set tDataField3 = oCSVFileRS.GetData(3)
///                
///                // Your code goes here            
///            }
///    </EXAMPLE>                    
///
Class IRISDemo.Util.FileReader Extends %RegisteredObject
{

    /// Use this Query with %ResultSet class to read comma (,) separated files.
    Query CSV(pFileName As %String, pFileEncoding As %String = "UTF8") As %Query
    {
    }
    
    /// Use this Query with %ResultSet class to read semicolon (;) separated files.
    Query CSV2(pFileName As %String, pFileEncoding As %String = "UTF8") As %Query
    {
    }
    
    /// Use this Query with %ResultSet class to read tab separated files.
    Query TSV(pFileName As %String, pFileEncoding As %String = "UTF8") As %Query
    {
    }
    
    /// Index a header from a CSV file so we can later get every row of the file
    /// and read every column by its name. This method supports single quoted, double
    /// quoted or unquoted fields, or a mix of the three. The only thing required for this
    /// to work is that the correct field separator is specified (comma, semicolon or tab).
    ClassMethod IndexHeader(pHeader As %String, pSeparator As %String = ",", Output pHeaderIndex)
    {
        //How many columns?
        Set pHeaderIndex=$Length(pHeader,pSeparator)
        
        Set tRegexSeparator=pSeparator
        
        If tRegexSeparator=$Char(9)
        {
            Set tRegexSeparator="\x09"
        }
        
        //Let's build a regular expression to read all the data without the quotes...
        Set pHeaderIndex("Regex")=""
        For i=1:1:pHeaderIndex
        {
            Set $Piece(pHeaderIndex("Regex"),tRegexSeparator,i)="\""?'?(.*?)\""?'?"
        }
        Set pHeaderIndex("Regex")="^"_pHeaderIndex("Regex")
        
        //Let's use this regular expression to index the column names...
        Set oMatcher = ##class(%Regex.Matcher).%New(pHeaderIndex("Regex"))
        Do oMatcher.Match(pHeader)
        
        //Now let's index the column names
        For i=1:1:oMatcher.GroupCount
        {
            Set pHeaderIndex("Columns",i)=$ZStrip(oMatcher.Group(i),"<>W")
        }
    }

    ClassMethod IndexRow(pRow As %String, ByRef pHeaderIndex As %String, Output pIndexedRow As %String)
    {
        Set oMatcher = ##class(%Regex.Matcher).%New(pHeaderIndex("Regex"))
        Do oMatcher.Match(pRow)
        
        //Now let's index the row values into a $List, in the same order as the header columns
        For i=1:1:oMatcher.GroupCount
        {
            Set $List(pIndexedRow,i)=oMatcher.Group(i)
        }
    }

    ClassMethod CSVGetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
            Quit ..FileGetInfo(.colinfo, .parminfo, .idinfo, .qHandle, .extoption, .extinfo)
    }
    
    ClassMethod CSV2GetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
            Quit ..FileGetInfo(.colinfo, .parminfo, .idinfo, .qHandle, .extoption, .extinfo)
    }
    
    ClassMethod TSVGetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
            Quit ..FileGetInfo(.colinfo, .parminfo, .idinfo, .qHandle, .extoption, .extinfo)
    }
    
    ClassMethod FileGetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
        Merge tHeaderIndex = qHandle("HeaderIndex")
        Set colinfo = ""
        For i=1:1:tHeaderIndex
        {
            Set tColName = tHeaderIndex("Columns",i)
            Set colinfo=colinfo_$LB($LB(tColName))    
        }
        
        Set parminfo=$ListBuild("pFileName","pFileEncoding")
        Set extinfo=""
        Set idinfo=""
        Quit $$$OK
    }

    ClassMethod CSVExecute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set qHandle("Separator")=","
        Quit ..FileExecute(.qHandle, pFileName, pFileEncoding)
    }

    ClassMethod CSV2Execute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set qHandle("Separator")=";"
        Quit ..FileExecute(.qHandle, pFileName, pFileEncoding)
    }

    ClassMethod TSVExecute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set qHandle("Separator")=$Char(9)
        Quit ..FileExecute(.qHandle, pFileName, pFileEncoding)
    }

    ClassMethod FileExecute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set tSC = $System.Status.OK()
        Try
        {
            Set oFile = ##class(%Stream.FileCharacter).%New()
            If pFileEncoding'="" Set oFile.TranslateTable=pFileEncoding
            Set oFile.Filename=pFileName
            
            Set tHeader=oFile.ReadLine()
            Do ..IndexHeader(tHeader, qHandle("Separator"), .tHeaderIndex)
            
            Merge qHandle("HeaderIndex")=tHeaderIndex
            Set qHandle("File")=oFile
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
    }

    ClassMethod CSVClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileClose(.qHandle)
    }

    ClassMethod CSV2Close(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileClose(.qHandle)
    }

    ClassMethod TSVClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileClose(.qHandle)
    }

    ClassMethod FileClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        #Dim oFile As %Stream.FileCharacter
        Set tSC = $System.Status.OK()
        Try
        {
            Kill qHandle("File")
            Kill qHandle("HeaderIndex")
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
    }

    ClassMethod CSVFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileFetch(.qHandle, .Row, .AtEnd)
    }

    ClassMethod CSV2Fetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileFetch(.qHandle, .Row, .AtEnd)
    }

    ClassMethod TSVFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileFetch(.qHandle, .Row, .AtEnd)
    }

    ClassMethod FileFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Set tSC = $System.Status.OK()
        Try
        {
            Set Row = ""
            
            Set oFile = qHandle("File")
            If oFile.AtEnd
            {
                Set AtEnd=1
                Quit
            }
    
            Merge tHeaderIndex=qHandle("HeaderIndex")
            
            //Skip blank lines until we find the next record or hit the end of the file
            Set tRow=""
            While 'oFile.AtEnd
            {
                Set tRow=oFile.ReadLine()
                Continue:tRow=""
                Quit
            }
            
            //Nothing but blank lines left: signal the end of the result set
            If tRow=""
            {
                Set AtEnd=1
                Quit
            }
            
            Do ..IndexRow(tRow, .tHeaderIndex, .Row)
            
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
    }
}

Hi!

It is not hard to build a task that searches for message headers from a specific service, process or operation in a date range and deletes the headers and their message bodies. You can start by querying Ens.MessageHeader. You will notice that there are indices that will allow you to search by date range and source config item pretty quickly.
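
A rough sketch of the idea, assuming a class method you would call from a scheduled task (the class name is made up, and deleting the message bodies along with the headers is an example choice to adapt to your retention rules):

Class MyApp.Util.MessagePurge Extends %RegisteredObject
{

/// Deletes headers (and their persistent message bodies) for one config item in a date range.
ClassMethod PurgeByConfigItem(pConfigItem As %String, pFrom As %TimeStamp, pTo As %TimeStamp) As %Status
{
    Set tSC = $System.Status.OK()
    Try
    {
        // SourceConfigName and TimeCreated are indexed on Ens.MessageHeader
        Set tStatement = ##class(%SQL.Statement).%New()
        Set tSC = tStatement.%Prepare("SELECT ID, MessageBodyClassName, MessageBodyId FROM Ens.MessageHeader WHERE SourceConfigName = ? AND TimeCreated BETWEEN ? AND ?")
        Quit:$System.Status.IsError(tSC)
        
        Set tResult = tStatement.%Execute(pConfigItem, pFrom, pTo)
        While tResult.%Next()
        {
            Set tBodyClass = tResult.%Get("MessageBodyClassName")
            Set tBodyId = tResult.%Get("MessageBodyId")
            
            // Delete the message body first (when the body is a persistent object), then the header
            If (tBodyClass'="") && (tBodyId'="")
            {
                Do $classmethod(tBodyClass, "%DeleteId", tBodyId)
            }
            Do ##class(Ens.MessageHeader).%DeleteId(tResult.%Get("ID"))
        }
    }
    Catch (oException)
    {
        Set tSC = oException.AsStatus()
    }
    Quit tSC
}

}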

BUT: 

I always recommend to my customers to create a production for each different business need. That allows you to have different backup strategies, purging strategies and even maintenance strategies (sometimes you want to touch one production and be sure that the others will not be affected). You are also able to take these productions to other servers when they need to scale independently (oops! now this new service is receiving a big volume of messages and is disturbing the others!). On different servers, you may be able to patch one production (one server) while the others keep running. This is especially feasible now with InterSystems IRIS. You can easily migrate from Ensemble to IRIS and have one production running on a VM with 2 cores, another on a VM with 4 cores, and so on. You don't have to start with 8 cores and scale by 4 like Ensemble. So you can pay the same you are paying today, but with the possibility of having more governance.

PS: Mirroring on IRIS is paid for differently than mirroring with Ensemble. With IRIS you do have to pay for the mirror members as well, while with Ensemble you don't have to.

So, sincerely, I believe you will be happier having another production. It's a good start toward more governance.

Kind regards,

Amir Samary

Senior Sales Engineer - InterSystems

Hi again!

I was checking the documentation of ExternalFreeze() and there is an option for not switching the journal file. The parameter defaults to 1 (switch the journal file), but you can change it to 0. Maybe that would allow you to do the Freeze/Thaw on an Async mirror member without consequences. Maybe ExternalFreeze() will skip switching the journal file regardless of what you pass to this parameter, just because it is being called on an Async mirror member. The documentation is not clear, though...
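
For reference, a minimal sketch of what I mean, assuming the journal-switch flag is the second argument of Backup.General.ExternalFreeze() as the documentation referenced above suggests (check the class reference for your version before relying on the argument positions):

    // Freeze writes; the second argument is the journal-switch flag described above
    // (the first argument is an optional log file, left empty in this sketch)
    Set tSC = ##class(Backup.General).ExternalFreeze("", 0)
    
    If $System.Status.IsOK(tSC)
    {
        // ... take the filesystem/VM snapshot here ...
        
        // Resume normal database writes
        Set tSC = ##class(Backup.General).ExternalThaw()
    }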

Maybe someone with more knowledge about the internals could clarify? I believe each CACHE.DAT file knows what the last journal entry applied to it was and, during a restore procedure, it could simply start in the middle of a journal file and proceed to the newer journal files created during/after the backup.

I would like to understand why we switch the journal file if, during a Freeze, all new transactions that can't be written to the database (because of the freeze) will be on the current journal file. A new journal file is created after the ExternalThaw(), but all the transactions executed during the Freeze will be there on the previous journal file. It seems to me that switching the journal file serves no purpose, since we always have to pick up the previous journal file anyway during a restore.

Kind regards,

AS

Hi!

IMHO, I don't think this is application dependent at all. When we do the Freeze on one of the failover members, we don't care about what is running on the instance. Please notice that after you call Freeze, you snapshot everything: not only the filesystems where the database files are, but also the filesystems where the journal files, the WIJ file and the application files are. So, when you restore the backup, you are restoring a point in time where the database files (CACHE.DAT) may or may not be consistent, but the journal and WIJ files that will make them consistent are there as well.

Also, it is important to notice that Freeze/Thaw will switch the journal file for you, and this is where transaction consistency is kept. I mean, Caché/Ensemble/IRIS will split the journal at a point where the entire transaction (and probably what is on the WIJ file) is there and consistent.

After restoring this backup, you must execute a journal restore to apply all journal files generated after the Freeze/Thaw to make your instance up to date.

Unfortunately I can't answer about doing the backup on the Async node. At first glance, I believe there is no problem with it. You just need to be careful not to forget that the Async node exists and fail to apply the patches and configurations you have done on the failover members, so you have a complete backup of your system (application code and configurations included). But I don't know what happens when you execute the Freeze/Thaw procedure on the Async. Supposedly, freezing writes to the databases and creating a new journal file would be performed on all cluster members, but the documentation is not clear about what "all cluster members" means. It is not clear whether "all cluster members" includes Async mirror members.

My opinion is that backup on an Async member is not supported and may be problematic. For it to work, it would still have to freeze both failover members to have consistent journal files on all nodes, so there would be no gain in doing it on the Async node. But that is only my opinion. Let's see if someone else can confirm my suspicion.

Kind regards,

AS

IMHO, the Minimal Security option should be completely eliminated from the product.

I saw this behavior of having the /api/atelier application created with only Unauthenticated access on Ensemble installations with Locked Down security. But that was about a year ago and I thought it was because it was still beta. Is this happening on current Ensemble and IRIS installations as well? What security level did you install them with?

Hi!

If you don't have problems with losing the order of messages, just increase your pool size. But you should take a look at your transformation: why is it so heavy? A transformation should not slow down your system like this.

On the other hand, if you do care about the order of messages, you could use a message router to split the channel across more processes based on some criteria that won't affect the order of messages. For instance, at a client I was receiving an HL7 feed from a single system that was used at many facilities. On this feed I had messages to create a patient, update it, admit it, etc. If one message arrived at its destination before the previous related ones, I would lose updates. The process that was transforming them was a bottleneck (and improving it to eliminate the bottleneck would take time). So we ended up creating a routing rule to split the HL7 feed by facility and created an instance of that business process for each facility. That allowed us to parallelize the processing while still keeping the order of messages (because a patient couldn't be at more than one facility at the same time).

Of course, we later took the time to improve that business process that was slow and retrofitted the production back to fast simplicity. ;)

Kind regards,

AS

Hi!

If it's unidirectional and the network is protected and under control, I would definitely avoid using %SYNC and would use Asynchronous Mirroring instead.

On the other hand, there is a limit on the number of asynchronous mirror members you can have. I think it's currently 16. If this is not enough for you, then you will have to think of another solution, such as %SYNC, or simply a process that periodically exports the globals and sends them everywhere through a secure channel (a SOAP web service, for example).

I have implemented a %SYNC-over-SOAP toolkit that makes things easier to set up and monitor. I am still finishing some aspects of it. %SYNC is a very good toolkit, but it lacks a good communication and management infrastructure. I have the communication (through a protected SOAP channel) sorted out. I am now working on operational infrastructure such as purging the ^OBJ.SYNC global, protecting the journal files from being purged if a node is lagging behind, some monitoring, etc.

If Asynchronous Mirroring doesn't work for you (it should be your first choice), and if you can build a simple task that periodically exports your globals and sends them through SOAP to your other nodes, I can share my code around %SYNC with you.
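
Just to illustrate the simpler "export and ship" alternative (not the %SYNC toolkit itself), here is a minimal sketch of a method a scheduled task could call to export a set of globals to a file, which you would then send through your secure channel. The global names and the output path are examples only:

ClassMethod ExportGlobals(pTargetFile As %String = "/tmp/globals-export.xml") As %Status
{
    // Example globals to replicate; replace with the globals your application needs
    Set tItems("MyAppData.GBL") = ""
    Set tItems("MyAppConfig.GBL") = ""
    
    // $system.OBJ.Export() accepts .GBL items and writes them to a single export file,
    // which can then be sent to the other nodes and loaded with $system.OBJ.Load()
    Quit $system.OBJ.Export(.tItems, pTargetFile)
}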

Kind regards,

Amir Samary

Hi!

Yes, there is definitely a rollback if you return an error, so that won't ever work. If you want to prevent people from deleting records in this table, simply don't give them permission to do it.

If you configure your security right, you can have a role for the user that is being used to access the application (by JDBC, ODBC or dynamic result sets such as %SQL.Statement or %Library.ResultSet) and give that role the right permissions. That would be permissions on the database (say, %DB_MYDATABASE) and specific GRANTs for SELECT, INSERT, UPDATE and DELETE on specific tables.

Then you can leave this specific table without the GRANT for DELETE and oblige everyone to UPDATE the Deleted property instead.
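
As a sketch of the kind of statements involved (the role, schema, table and column names are made up; you could run these from the SQL shell or the Management Portal instead of ObjectScript):

    // Grant everything except DELETE on the table to the application role
    Set tResult = ##class(%SQL.Statement).%ExecDirect(,
        "GRANT SELECT, INSERT, UPDATE ON MyApp.MyTable TO MyAppRole")
    If tResult.%SQLCODE<0 Write "GRANT failed: ", tResult.%Message,!
    
    // No GRANT DELETE is issued, so DELETE through ODBC/JDBC/%SQL.Statement is denied;
    // rows are "deleted" by updating the Deleted flag instead
    Set tResult = ##class(%SQL.Statement).%ExecDirect(,
        "UPDATE MyApp.MyTable SET Deleted = 1 WHERE ID = ?", 42)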

You can even use ROW Level Security to hide that record from "normal" users while letting it appear to "report" or "analytics" users.

Beware, though, that the enforcement of SQL privileges will only work through JDBC, ODBC or dynamic result set objects such as %SQL.Statement. If you issue a %DeleteId() on an object, it will be deleted. That is by design: if you are running ObjectScript code and have RW privileges on the database, you can do almost anything, so it makes no sense to check SQL privileges on every method call. The system would spend most of its time checking privileges instead of working. So, what is normally done is that you set up your security roles and users correctly and protect the entry points of your application. For instance, you can use custom resources to label and protect:

  • CSP Pages - using the SECURITYRESOURCE parameter
  • General Methods - by checking only once whether a user has a specific resource with the $System.Security.Check() method
  • SOAP and REST services - by protecting the CSP application that exposes them, by using the SECURITYRESOURCE parameter on your %CSP.REST class, or even by using $System.Security.Check() on each individual method.

The idea is to protect entire areas of the application instead of checking privileges on every single database access.
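
For the entry-point checks, a small sketch (the resource name MyApp_RecordAdmin is an example; create it in the Management Portal and assign it to the appropriate roles):

ClassMethod DeactivateRecord(pId As %String) As %Status
{
    // Protect this entry point with a custom resource instead of checking SQL privileges
    If '$System.Security.Check("MyApp_RecordAdmin", "USE")
    {
        Quit $System.Status.Error(5001, "Access denied: MyApp_RecordAdmin:USE required")
    }
    
    // ... mark the record as deleted (UPDATE the Deleted property), etc. ...
    Quit $System.Status.OK()
}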

HTH!

Kind regards,

AS

Hi!

I like to use the EnsLib.EDI.XML.Document class to move XML around because it shows nicely in the Visual Message Trace, and I can use message routing and DTL to route and transform it, since it is a virtual document like HL7, ASTM or EDIFACT. Take a look at the documentation here:

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EXML_tools_classes

You will notice there are out-of-the-box business services to grab a file on an FTP server and take its XML content as an EnsLib.EDI.XML.Document. If you can't use one directly, you will at least have an example to start with and can make one of your own.

That is the Business Service. 

For your Business Operation, you have two options:

  1. Use the SOAP Wizard to create a business operation for you based on your WSDL and use it without changes
  2. Use the SOAP Wizard to only create the proxy to the web service and create your own Business Operation around it yourself

Option 2 is nice if you want to hide the intricacies and details of using that web service. Sometimes a web service will ask for a username and password in the body of the message, for instance, and you don't want to expose that. You would create a credential property on the business operation and use it instead, hiding these details from the user of that business operation. Another reason for implementing the business operation yourself is to change the data types and names of properties. That is the case here: your web service probably expects to receive an XML string, while you are dealing with an EnsLib.EDI.XML.Document. Obviously, that won't work out of the box.

So, I suggest you create a business operation that receives the EnsLib.EDI.XML.Document and calls the web service proxy underneath, extracting the XML string from the EnsLib.EDI.XML.Document request. You can use the SOAP Wizard to create the first version of your business operation and use it as a starting point for creating your own. Make the appropriate changes to it to receive an EnsLib.EDI.XML.Document as a request.
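
A very rough sketch of the shape of such a business operation. The proxy class (MyApp.SoapProxy), its Send() method, and the use of OutputToString() to serialize the virtual document are assumptions for illustration; check the proxy generated by the SOAP Wizard and the EnsLib.EDI.XML.Document class reference for the exact calls:

Class MyApp.XMLWebServiceOperation Extends Ens.BusinessOperation
{

Parameter INVOCATION = "Queue";

Method SendDocument(pRequest As EnsLib.EDI.XML.Document, Output pResponse As Ens.Response) As %Status
{
    Set tSC = $System.Status.OK()
    Try
    {
        // Serialize the virtual document back into an XML string
        // (OutputToString() is assumed here; verify it in the class reference)
        Set tXML = pRequest.OutputToString(, .tSC)
        Quit:$System.Status.IsError(tSC)
        
        // MyApp.SoapProxy / Send() stand in for the proxy class and method
        // generated by the SOAP Wizard for your WSDL
        Set tProxy = ##class(MyApp.SoapProxy).%New()
        Do tProxy.Send(tXML)
        
        Set pResponse = ##class(Ens.Response).%New()
    }
    Catch (oException)
    {
        Set tSC = oException.AsStatus()
    }
    Quit tSC
}

XData MessageMap
{
<MapItems>
  <MapItem MessageType="EnsLib.EDI.XML.Document">
    <Method>SendDocument</Method>
  </MapItem>
</MapItems>
}

}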

Kind regards,

AS

Hi!

Are you calling the workflow using an asynchronous <call> activity and using a <sync> activity to wait on it? For how long does your <sync> activity wait? Beware that a workflow task will only stay in the workflow inbox while that <sync> activity is not done. So, if you don't have a <sync> activity, or if the timeout on the <sync> activity is too short, the task will not appear in the workflow inbox, or will appear only momentarily.

The idea is that the BPL can put a timeout as an SLA on that task. If no one finishes the task before that timeout expires, you will receive whatever (incomplete) response is there and can, for example, escalate to other roles, send alerts, etc.

Kind regards,

AS

Hi Eduard,

I fail to follow your reasoning. If you are making other classes inherit from a class of yours, you are changing the class by definition: you are adding a new method to it. Whether this method is a simple method or has its code computed during class compilation is transparent to whoever calls that method.

And the resulting method could return the information in any format you choose, including $LB.

Kind regards,

AS

Hi Eduard!

Without going too deep into your code to enhance it, I suggest that you:

  1. Put this method in a utility class
  2. Recode it so that it is a method generator. The method would use a variation of your current code to produce a $Case() that simply returns the position of a given property, instead of querying the class definition at runtime.
  3. Make the class you want to use it with (e.g., Sample.Address) inherit from it.

Of course, if this is a one-time thing, you could simply implement number 2. Or you could forget about my suggestion entirely if performance isn't an issue for this scenario. By using a method generator, you eliminate from the runtime any inefficiency of the code used to (pre)compute the positions.
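
A minimal sketch of what number 2 could look like, assuming the "position" you want is the ordinal of the property in the compiled class definition (the class and method names are made up; adjust the generated expression if you need a different notion of position):

Class Util.PropertyPositionHelper [ Abstract ]
{

/// Returns the position of a property by name, or 0 if it is not found.
/// The $Case() below is generated at compile time for each inheriting class,
/// so there is no dictionary lookup at runtime.
ClassMethod GetPropertyPosition(pPropertyName As %String) As %Integer [ CodeMode = objectgenerator ]
{
    Set tCases = ""
    For i=1:1:%compiledclass.Properties.Count()
    {
        Set tProperty = %compiledclass.Properties.GetAt(i)
        Set tCases = tCases _ $Select(tCases="":"", 1:",") _ """" _ tProperty.Name _ """:" _ i
    }
    If tCases = ""
    {
        Do %code.WriteLine(" Quit 0")
    }
    Else
    {
        Do %code.WriteLine(" Quit $Case(pPropertyName," _ tCases _ ",:0)")
    }
    Quit $$$OK
}

}

Then, making Sample.Address inherit from Util.PropertyPositionHelper gives you ##class(Sample.Address).GetPropertyPosition("City") with the whole lookup resolved at compile time.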

Kind regards,

AS

Hi Edouard!

Robert's solution works perfectly if you map the other database, through ECP, to the other host. ECP is very simple to configure. 

If this global belongs to a table, you could configure an ODBC/JDBC connection to the other system and create a linked table on system "TO" that is linked through ODBC/JDBC to the real table on system "FROM", and then run code similar to Robert's, but instead of an $Order loop, you would use %SQL.Statement and SELECT the records.
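
Something along these lines, assuming a linked table called Linked.SourceTable was already created on system "TO" through the SQL Gateway wizard (the table, column and global names are placeholders):

    Set tStatement = ##class(%SQL.Statement).%New()
    Set tSC = tStatement.%Prepare("SELECT ID, SomeField FROM Linked.SourceTable")
    If $System.Status.IsError(tSC) Do $System.Status.DisplayError(tSC)
    
    Set tResult = tStatement.%Execute()
    While tResult.%Next()
    {
        // Copy each remote row into the local global, much like Robert's $Order loop
        Set ^MyLocalGlobal(tResult.%Get("ID")) = tResult.%Get("SomeField")
    }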

ECP requires a multi-server license. That is why I am suggesting this alternative with the SQL Gateway.

Kind regards,

AS