<scope xpos='200' ypos='250' xend='200' yend='800' >
<code name='Transform' xpos='200' ypos='350' >
<![CDATA[ 
    
    /*
     If we use a <transform> activity with indirection (@$parameter(request,"%DTL")) and
     assign the returned object to the context, a problem arises when the returned object
     is invalid (say, missing required properties): the error only surfaces when the
     context is saved, which is too late for the <scope> we defined to route the fault
     to our handler.

     To work around this limitation, we call the transformation from a <code> activity
     instead, and try to save the returned object before assigning it to the context.
     If the save fails, we assign the error to the status variable and leave the context
     alone. That way the <scope> captures the problem and takes us to the <catchall>
     activity.
    */ 
    
    Set tTransformClass=$parameter(request,"%DTL")
    Set status = $classmethod(tTransformClass, "Transform", request, .normalizedRequest)
    If $$$ISERR(status) Quit
    
    Set status = normalizedRequest.%Save()
    If $$$ISERR(status) Quit
    
    Set context.NormalizedData=normalizedRequest
]]>
</code>
<assign name="Done" property="context.Action" value="&quot;Done&quot;" action="set" xpos='200' ypos='550' />
<faulthandlers>
   <catchall name='Normalization error' xpos='200' ypos='650' xend='200' yend='1150' >
      <trace name='Normalization problem' value='"Normalization problem"' xpos='200' ypos='250' />
   </catchall>
</faulthandlers>
</scope>

"What files/directories should I keep track of from the durable %SYS directory? (e.g.: I want to bind-mount those files/directories, import the code, and see the namespace and all the instance configuration.)"

Answer: None.

Your configuration needs to be code. Right now, the best approach to have your namespaces/databases created and configured, along with CSP applications, security, etc., is to use %Installer manifests during the build process of the Dockerfile. I personally don't use Durable %SYS on my development machine. I prefer to use %Installer to configure everything, and if I need to pre-load tables with data, I load it from CSV files that live on GitHub alongside the source code.

That allows you to source-control your code table contents as well. Test records can be inserted with the same procedure.
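To make the approach concrete, here is a minimal sketch of an %Installer manifest class. Everything in it (class name, namespace/database names, paths) is made up for illustration, and the manifest is trimmed to the essentials; see the %Installer documentation for the full set of tags:

```objectscript
Include %occInclude

/// Hypothetical example: creates a MYAPP namespace and imports code at build time.
Class MyApp.Installer
{

XData Install [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <Namespace Name="MYAPP" Create="yes" Code="MYAPP-CODE" Data="MYAPP-DATA">
    <Configuration>
      <Database Name="MYAPP-CODE" Dir="/usr/irissys/mgr/myapp-code" Create="yes"/>
      <Database Name="MYAPP-DATA" Dir="/usr/irissys/mgr/myapp-data" Create="yes"/>
    </Configuration>
    <!-- Import and compile the application source during the docker build -->
    <Import File="/opt/myapp/src" Flags="ck" Recurse="1"/>
  </Namespace>
</Manifest>
}

/// Entry point, typically invoked from the Dockerfile build with something like:
/// iris session IRIS -U %SYS '##class(MyApp.Installer).setup()'
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer) As %Status [ CodeMode = objectgenerator, Internal ]
{
  Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "Install")
}

}
```

The setup() method is generated from the XData block at compile time, so the whole configuration lives in source control with the rest of the code.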

For an example of this, look at this demo (based on IRIS):

https://github.com/intersystems-community/irisdemo-demo-fraudprevention

Look at the normalized_datalake image. It loads CSV files into the tables as part of the Dockerfile build process. You will notice that this image is based on a base image that holds some standard reusable code. The source for this base image is here:

https://github.com/intersystems-community/irisdemo-base-irisdb-community

I was using Atelier when I built this image, but the principle is the same. I am now doing the same with VS Code.

This is another demo based on IRIS for Health:

https://github.com/intersystems-community/irisdemo-demo-readmission

Look at the riskengine image. It loads data from JSON files into the data model as part of the build process. The JSON files are created by Synthea, an open-source tool for generating synthetic patient data.

If you use this method, any developer will be able to jump between versions of your software very quickly. If you need to fix a problem on an old version, you can just checkout that version tag, build the image (which will load the tables) and make the changes you want looking at that exact version with the exact data needed for it to work.

When you are done, you can go back to your previous branch, rebuild the image again using the current source code (and data in CSV/JSON files) and keep going with your new features.

Just to be clear: I don't mean you shouldn't use Durable %SYS. You must use Durable %SYS on Production!

But I have strong reservations about using it on your PC (developer environment). That's all. Even the central development environment (where Unit Tests could be run) wouldn't need it.

But your UAT/QA and Production should definitely use Durable %SYS and you should come up with your own DevOps approach to deploying your software on these environments so you can test the upgrade procedure.

Kind regards,

AS

Thank you! That helps a lot. The problem I was having was that I was implementing XXXGetInfo() but not XXXGetODBCInfo(). 

Hi! 

I thought of that. But I really wanted to write custom ObjectScript code instead of relying on a %SQL.Statement or %ResultSet, because the data I want to aggregate and return is not easily retrievable with a single statement.

But I think I am going to be using %Dictionary.* to generate the code dynamically.
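For what it's worth, the kind of metadata lookup I have in mind can be done with plain SQL against the %Dictionary tables. This is just an untested sketch, and the class name is a placeholder:

```objectscript
// Hypothetical sketch: list the properties of a class and their types
// by querying %Dictionary.CompiledProperty ("Sample.Person" is an example).
Set tRS = ##class(%SQL.Statement).%ExecDirect(,
    "SELECT Name, Type FROM %Dictionary.CompiledProperty WHERE parent = ?",
    "Sample.Person")
While tRS.%Next()
{
    Write tRS.%Get("Name"),": ",tRS.%Get("Type"),!
}
```

From that metadata, a code generator can write the aggregation methods mechanically instead of by hand.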

Kind regards,

AS

/// <p>This Reader is to be used with %ResultSet. It will allow you to scan a CSV file, record by record.
/// There are three queries available for reading:</p>
/// <ol>
/// <li>CSV - comma separated
/// <li>CSV2 - semicolon separated
/// <li>TSV - tab separated
/// </ol>
/// <p>Use the query according to your file type! The reader expects to find a header on the first line with the names of the fields.
/// These will be used to name the columns when you use the method Get().</p>
/// <p>The reader can deal with fields quoted and unquoted automatically.</p>  
///
///    <p>This assumes that your CSV file has a header line.
/// This custom query supports Comma Separated Values (CSV), Semicolon Separated Values (CSV2)
/// and Tab Separated Values (TSV).</p>    
///
/// <EXAMPLE>
///
///            ; Comma Separated Values
///            ;Set oCSVFileRS=##class(%ResultSet).%New("IRISDemo.Util.FileReader:CSV")
///            ;
///            ; Semicolon Separated Values
///            ;Set oCSVFileRS=##class(%ResultSet).%New("IRISDemo.Util.FileReader:CSV2")
///            ;
///            ; Tab Separated Values
///            Set oCSVFileRS=##class(%ResultSet).%New("IRISDemo.Util.FileReader:TSV")
///            
///            Set tSC = oCSVFileRS.Execute(pFileName)
///            Quit:$System.Status.IsError(tSC)
///            
///            While oCSVFileRS.Next()
///            {
///                Set tDataField1 = oCSVFileRS.Get("Column Name 1")
///                Set tDataField2 = oCSVFileRS.Get("Column Name 2")
///                Set tDataField4 = oCSVFileRS.GetData(3)
///                
///                // Your code goes here            
///            }
///    </EXAMPLE>                    
///
Class IRISDemo.Util.FileReader Extends %RegisteredObject
{

    /// Use this Query with %ResultSet class to read comma (,) separated files.
    Query CSV(pFileName As %String, pFileEncoding As %String = "UTF8") As %Query
    {
    }
    
    /// Use this Query with %ResultSet class to read semicolon (;) separated files.
    Query CSV2(pFileName As %String, pFileEncoding As %String = "UTF8") As %Query
    {
    }
    
    /// Use this Query with %ResultSet class to read tab separated files.
    Query TSV(pFileName As %String, pFileEncoding As %String = "UTF8") As %Query
    {
    }
    
    /// Index the header of a CSV file so we can later read every row of the file
    /// and get every column by its name. This method supports single-quoted, double-quoted
    /// and unquoted fields, or a mix of the three. The only thing required for this to work
    /// is that the correct field separator is specified (comma, semicolon or tab).
    ClassMethod IndexHeader(pHeader As %String, pSeparator As %String = ",", Output pHeaderIndex)
    {
        //How many columns?
        Set pHeaderIndex=$Length(pHeader,pSeparator)
        
        Set tRegexSeparator=pSeparator
        
        If tRegexSeparator=$Char(9)
        {
            Set tRegexSeparator="\x09"
        }
        
        //Let's build a regular expression to read all the data without the quotes...
        Set pHeaderIndex("Regex")=""
        For i=1:1:pHeaderIndex
        {
            Set $Piece(pHeaderIndex("Regex"),tRegexSeparator,i)="\""?'?(.*?)\""?'?"
        }
        Set pHeaderIndex("Regex")="^"_pHeaderIndex("Regex")
        
        //Let's use this regular expression to index the column names...
        Set oMatcher = ##class(%Regex.Matcher).%New(pHeaderIndex("Regex"))
        Do oMatcher.Match(pHeader)
        
        //Now let's index the column names
        For i=1:1:oMatcher.GroupCount
        {
            Set pHeaderIndex("Columns",i)=$ZStrip(oMatcher.Group(i),"<>W")
        }
    }

    ClassMethod IndexRow(pRow As %String, ByRef pHeaderIndex As %String, Output pIndexedRow As %String)
    {
        //Make sure the output $List starts empty
        Set pIndexedRow=""
        
        Set oMatcher = ##class(%Regex.Matcher).%New(pHeaderIndex("Regex"))
        Do oMatcher.Match(pRow)
        
        //Now let's build the row, one $List element per column
        For i=1:1:oMatcher.GroupCount
        {
            Set $List(pIndexedRow,i)=oMatcher.Group(i)
        }
    }

    ClassMethod CSVGetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
            Quit ..FileGetInfo(.colinfo, .parminfo, .idinfo, .qHandle, .extoption, .extinfo)
    }
    
    ClassMethod CSV2GetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
            Quit ..FileGetInfo(.colinfo, .parminfo, .idinfo, .qHandle, .extoption, .extinfo)
    }
    
    ClassMethod TSVGetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
            Quit ..FileGetInfo(.colinfo, .parminfo, .idinfo, .qHandle, .extoption, .extinfo)
    }
    
    ClassMethod FileGetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
        Merge tHeaderIndex = qHandle("HeaderIndex")
        Set colinfo = ""
        For i=1:1:tHeaderIndex
        {
            Set tColName = tHeaderIndex("Columns",i)
            Set colinfo=colinfo_$LB($LB(tColName))    
        }
        
        Set parminfo=$ListBuild("pFileName","pFileEncoding")
        Set extinfo=""
        Set idinfo=""
        Quit $$$OK
    }

    ClassMethod CSVExecute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set qHandle("Separator")=","
        Quit ..FileExecute(.qHandle, pFileName, pFileEncoding)
    }

    ClassMethod CSV2Execute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set qHandle("Separator")=";"
        Quit ..FileExecute(.qHandle, pFileName, pFileEncoding)
    }

    ClassMethod TSVExecute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set qHandle("Separator")=$Char(9)
        Quit ..FileExecute(.qHandle, pFileName, pFileEncoding)
    }

    ClassMethod FileExecute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set tSC = $System.Status.OK()
        Try
        {
            Set oFile = ##class(%Stream.FileCharacter).%New()
            If pFileEncoding'="" Set oFile.TranslateTable=pFileEncoding
            Set oFile.Filename=pFileName
            
            Set tHeader=oFile.ReadLine()
            Do ..IndexHeader(tHeader, qHandle("Separator"), .tHeaderIndex)
            
            Merge qHandle("HeaderIndex")=tHeaderIndex
            Set qHandle("File")=oFile
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
    }

    ClassMethod CSVClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileClose(.qHandle)
    }

    ClassMethod CSV2Close(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileClose(.qHandle)
    }

    ClassMethod TSVClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileClose(.qHandle)
    }

    ClassMethod FileClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Set tSC = $System.Status.OK()
        Try
        {
            Kill qHandle("File")
            Kill qHandle("HeaderIndex")
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
    }

    ClassMethod CSVFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileFetch(.qHandle, .Row, .AtEnd)
    }

    ClassMethod CSV2Fetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileFetch(.qHandle, .Row, .AtEnd)
    }

    ClassMethod TSVFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileFetch(.qHandle, .Row, .AtEnd)
    }

    ClassMethod FileFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Set tSC = $System.Status.OK()
        Try
        {
            Set Row = ""
            
            Set oFile = qHandle("File")
            If oFile.AtEnd
            {
                Set AtEnd=1
                Quit
            }
    
            Merge tHeaderIndex=qHandle("HeaderIndex")
            
            //Skip blank lines; stop at the first line that has content
            While 'oFile.AtEnd
            {
                Set tRow=oFile.ReadLine()
                Continue:tRow=""
                Quit
            }
            
            //Only blank lines were left: signal the end of the result set
            If $Get(tRow)=""
            {
                Set AtEnd=1
                Quit
            }
            
            Do ..IndexRow(tRow, .tHeaderIndex, .Row)
            
            
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
    }
}

Hi!

It is not hard to build a task that searches for your message headers from a specific service, process or operation in a date range and deletes both the headers and the messages. You can start by looking at Ens.MessageHeader. You will notice that it has indices that allow you to search by date range and source config item pretty quickly.
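As an untested sketch of the idea (the config item name and dates are placeholders, and you should check the Ens.MessageHeader class reference in your version for the exact property names before using anything like this in production):

```objectscript
// Delete message headers (and their bodies) from one config item in a
// date range. "My.Service" and the dates below are placeholders.
Set tStatement = ##class(%SQL.Statement).%New()
Set tSC = tStatement.%Prepare(
    "SELECT ID FROM Ens.MessageHeader "_
    "WHERE SourceConfigName = ? AND TimeCreated BETWEEN ? AND ?")
If $$$ISERR(tSC) Quit tSC
Set tRS = tStatement.%Execute("My.Service","2019-01-01 00:00:00","2019-01-31 23:59:59")
While tRS.%Next()
{
    Set tHeader = ##class(Ens.MessageHeader).%OpenId(tRS.%Get("ID"))
    Continue:'$IsObject(tHeader)
    // Delete the body first, using the body class name stored on the header
    If tHeader.MessageBodyClassName'=""
    {
        Do $classmethod(tHeader.MessageBodyClassName,"%DeleteId",tHeader.MessageBodyId)
    }
    Set tId = tHeader.%Id(), tHeader=""
    Do ##class(Ens.MessageHeader).%DeleteId(tId)
}
```

Wrap something like this in a %SYS.Task.Definition subclass and you have a schedulable purge task.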

BUT: 

I always recommend that my customers create a separate production for each distinct business need. That allows you to have different backup strategies, purging strategies and even maintenance strategies (sometimes you want to touch one production and be sure the others will not be affected). You are also able to move these productions to other servers when they need to scale independently (oops! this new service is now receiving a big volume of messages and disturbing the others!). With different servers, you can patch one production (one server) while the others keep running. This is especially feasible now with InterSystems IRIS. You can easily migrate from Ensemble to IRIS and have one production running on a VM with 2 cores, another on a VM with 4 cores, and so on. You don't have to start with 8 cores and scale by 4 as with Ensemble. So you can pay the same as you are paying today, but with the possibility of more governance.

PS: Mirroring on IRIS is licensed differently from mirroring with Ensemble: with IRIS you do have to pay for the mirror members, while with Ensemble you don't.

So, sincerely, I believe you will be happier having another production. It's a good start toward more governance.

Kind regards,

Amir Samary

Senior Sales Engineer - InterSystems

Hi again!

I was checking the documentation of ExternalFreeze() here, and there is an option for not switching the journal file. The parameter defaults to 1 (switch the journal file), but you can change it to 0. Maybe that would allow you to do the Freeze/Thaw on an async mirror member without consequences. Or maybe ExternalFreeze() skips the journal switch on an async mirror member regardless of what you pass to this parameter, just because it is an async member. The documentation is not clear, though.

Maybe someone with more knowledge about the internals of this could clarify? I believe each CACHE.DAT file knows the last journal entry applied to it and, during a restore procedure, could simply start in the middle of a journal file and proceed to the newer journal files created during/after the backup.

I would like to understand why we switch the journal file at all if, during a freeze, all new transactions that can't be written to the database (because of the freeze) will be in the current journal file. A new journal file is created after ExternalThaw(), but all the transactions executed during the freeze will be there in the previous journal file. It seems to me that switching the journal file serves no purpose, since we always have to pick up the previous journal file anyway during a restore.
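For reference, the call I am talking about looks roughly like this. I am going from memory on the argument order, so double-check the Backup.General class reference for your version before using it:

```objectscript
// Run in %SYS. The journal-switch flag defaults to 1; passing 0 is
// supposed to suppress the journal file switch (argument position assumed).
Set tSC = ##class(Backup.General).ExternalFreeze(,0)
If $System.Status.IsError(tSC) { Do $System.Status.DisplayError(tSC) }

// ... take the filesystem snapshot here ...

Set tSC = ##class(Backup.General).ExternalThaw()
```

Comparing the journal directory before and after a run with 0 versus 1 would show whether the flag is honored on an async member.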

Kind regards,

AS

Hi!

IMHO, I don't think this is application-dependent at all. When we do the Freeze on one of the failover members, we don't care about what is running on the instance. Notice that after you call Freeze, you snapshot everything: not only the filesystems where the database files are, but also the filesystems holding the journal files, the WIJ file and the application files. So, when you restore the backup, you are restoring a point in time where the database file (CACHE.DAT) may or may not be consistent, but you also restore the journal and WIJ files that will make it consistent. 

Also, it is important to notice that Freeze/Thaw switches the journal file for you, and this is how transaction consistency is kept. I mean, Caché/Ensemble/IRIS will split the journal at a point where entire transactions (and probably what is in the WIJ file) are there and consistent.

After restoring this backup, you must execute a journal restore, applying all journal files generated after the Freeze/Thaw, to bring your instance up to date.

Unfortunately, I can't answer about doing the backup on the async node. At first sight, I believe there is no problem with it. You just need to be careful not to forget that the async node exists: apply there the patches and configuration changes you have made on the failover members, so you have a complete backup of your system (application code and configuration included). But I don't know what happens when you execute the Freeze/Thaw procedure on the async. Supposedly, freezing database writes and creating a new journal file would be performed on all cluster members, but the documentation is not clear about what "all cluster members" means, and in particular whether it includes async mirror members.

My opinion is that backup on an async member is not supported and may be problematic. For it to work, it would still have to freeze both failover members to have consistent journal files on all nodes, so there would be no gain in doing it on the async node. But that is only my opinion. Let's see if someone else can confirm my suspicion.

Kind regards,

AS

Hi Antonio!

The examples I have given show how to use the returned data. I show how to do it by:

  • Directly using |CPIPE|, and
  • Using a handy method of the class %Net.Remote.Utility. 

Have you seen them above?

AS

IMHO, the Minimal security option should be completely eliminated from the product.

I saw this behavior of the /api/atelier application being created with only Unauthenticated access on Ensemble installations with Locked Down security. But that was about a year ago, and I thought it was because it was still beta. Is this happening on current Ensemble and IRIS installations as well? What security level did you install them with?
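If you want to check what your own instance did, something like this untested sketch (run in the %SYS namespace) should show which authentication methods the web application allows:

```objectscript
// In %SYS: inspect the /api/atelier web application definition.
// AutheEnabled is a bitmask of the enabled authentication methods
// (e.g. unauthenticated vs. password access).
Set tSC = ##class(Security.Applications).Get("/api/atelier", .tProps)
If $System.Status.IsOK(tSC)
{
    Write "AutheEnabled: ", tProps("AutheEnabled"), !
}
```

Posting that value along with the install's security level would make it easier to compare installations.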