Amir Samary · Jul 11, 2020
<scope xpos='200' ypos='250' xend='200' yend='800' >
<code name='Transform' xpos='200' ypos='350' >
<![CDATA[ 
    
    /*
     If I use a <transform> activity with indirection (@$parameter(request,"%DTL")) and
     assign the returned object to the context, and the context then fails to save because
     the returned object has, say, missing properties, IRIS only detects the error when the
     context is persisted, which is too late for the <scope> we defined to handle this fault.
     
     So, to avoid this problem, we call the transformation from a <code> activity instead
     and try to save the returned object before assigning it to the context and returning.
     If we can't save it, we assign the error to the status variable and leave the context
     alone. That way the <scope> captures the problem and takes us to the <catchall> activity.
    */ 
    
    Set tTransformClass=$parameter(request,"%DTL")
    Set status = $classmethod(tTransformClass, "Transform", request, .normalizedRequest)
    If $$$ISERR(status) Quit
    
    Set status = normalizedRequest.%Save()
    If $$$ISERR(status) Quit
    
    Set context.NormalizedData=normalizedRequest
]]>
</code>
<assign name="Done" property="context.Action" value="&quot;Done&quot;" action="set" xpos='200' ypos='550' />
<faulthandlers>
   <catchall name='Normalization error' xpos='200' ypos='650' xend='200' yend='1150' >
      <trace name='Normalization problem' value='"Normalization problem"' xpos='200' ypos='250' />
   </catchall>
</faulthandlers>
</scope>
Amir Samary · Feb 25, 2020

"What files/directories should I keep track of from the durable %SYS directory? (e.g: I want to bind-mount those files/directories and import the code and see the Namespace and all the instance configuration )."

Answer: None.

Your configuration needs to be code. Right now, the best approach to having your namespaces/databases, CSP applications, security, etc. created and configured is to use %Installer manifests during the Dockerfile build. I personally don't use Durable %SYS on my development machine. I prefer to use %Installer to configure everything, and if I need to pre-load tables with data, I load it from CSV files that are on GitHub along with the source code.

That allows you to source control your Code Table contents too, and test records can be inserted with the same procedure.
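
Just to make the idea concrete, here is a bare-bones sketch of such an %Installer class (the class name, namespace, database paths and CSP application are all made up; adjust them to your project):

Class MyApp.Installer
{

XData Install [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <Namespace Name="MYAPP" Create="yes" Code="MYAPP-CODE" Data="MYAPP-DATA" Ensemble="0">
    <Configuration>
      <Database Name="MYAPP-CODE" Dir="/opt/myapp/db/code" Create="yes"/>
      <Database Name="MYAPP-DATA" Dir="/opt/myapp/db/data" Create="yes"/>
    </Configuration>
    <!-- Load and compile the source that was copied into the image during the Docker build -->
    <Import File="/opt/myapp/src" Flags="ck" Recurse="1"/>
    <CSPApplication Url="/csp/myapp" Directory="/opt/myapp/csp"/>
  </Namespace>
</Manifest>
}

ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
    Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "Install")
}

}

The Dockerfile build step then just loads this class and runs "Do ##class(MyApp.Installer).setup()". Loading the CSV/JSON data can be one more step in that same setup call.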

For an example of this, look at this demo (based on IRIS):

https://github.com/intersystems-community/irisdemo-demo-fraudprevention

Look at the normalized_datalake image. It loads CSV files into the tables as part of the Dockerfile build process. You will notice that this image is based on a base image that has some standard reusable code. The source for this base image is here:

https://github.com/intersystems-community/irisdemo-base-irisdb-community

I was using Atelier when I built this image, but the principle is the same. I now use VS Code and do the same thing.

This is another demo based on IRIS for Health:

https://github.com/intersystems-community/irisdemo-demo-readmission

Look at the riskengine image. It loads data from JSON files into the data model as part of the build process. The JSON files are created by Synthea, an open-source tool for generating synthetic patient data.

If you use this method, any developer will be able to jump between versions of your software very quickly. If you need to fix a problem on an old version, you can just check out that version tag, build the image (which will load the tables), and make the changes you want while looking at that exact version with the exact data it needs to work.

When you are done, you can go back to your previous branch, rebuild the image again using the current source code (and data in CSV/JSON files) and keep going with your new features.

Just to be clear: I don't mean you shouldn't use Durable %SYS. You must use Durable %SYS on Production!

But I have strong reservations about using it on your PC (developer environment). That's all. Even the central development environment (where Unit Tests could be run) wouldn't need it.

But your UAT/QA and Production should definitely use Durable %SYS and you should come up with your own DevOps approach to deploying your software on these environments so you can test the upgrade procedure.

Kind regards,

AS

Amir Samary · May 8, 2019

Thank you! That helps a lot. The problem I was having is that I was implementing XXXGetInfo but not XXXGetODBCInfo(). 

Amir Samary · May 7, 2019

Hi! 

I thought of that. But I really wanted to write custom ObjectScript code instead of relying on a %SQL.Statement or %ResultSet, because the data I want to aggregate and return is not easily retrievable with a single statement.

But I think I am going to be using %Dictionary.* to generate the code dynamically.

Kind regards,

AS

Amir Samary · Nov 2, 2018
/// <p>This Reader is to be used with %ResultSet. It will allow you to scan a CSV file, record by record.
/// There are three queries available for reading:</p>
/// <ol>
/// <li>CSV - comma separated
/// <li>CSV2 - semicolon separated
/// <li>TSV - tab separated
/// </ol>
/// <p>Use the query according to your file type! The reader expects to find a header on the first line with the names of the fields.
/// This will be used to name the columns when you use the method Get().</p>
/// <p>The reader can deal with fields quoted and unquoted automatically.</p>  
///
///    <p>This assumes that your CSV file has a header line.
/// This custom query supports Comma Separated Values (CSV), Semicolon Separated Values (CSV2)
/// and Tab Separated Values (TSV).</p>    
///
/// <EXAMPLE>
///
///            ; Comma Separated Values
///            ;Set oCSVFileRS=##class(%ResultSet).%New("IRISDemo.Util.FileReader:CSV")
///            ;
///            ; Semicolon Separated Values
///            ;Set oCSVFileRS=##class(%ResultSet).%New("IRISDemo.Util.FileReader:CSV2")
///            ;
///            ; Tab Separated Values
///            Set oCSVFileRS=##class(%ResultSet).%New("IRISDemo.Util.FileReader:TSV")
///            
///            Set tSC = oCSVFileRS.Execute(pFileName)
///            Quit:$System.Status.IsError(tSC)
///            
///            While oCSVFileRS.Next()
///            {
///                Set tDataField1 = oCSVFileRS.Get("Column Name 1")
///                Set tDataField2 = oCSVFileRS.Get("Column Name 2")
///                Set tDataField3 = oCSVFileRS.GetData(3)
///                
///                // Your code goes here            
///            }
///    </EXAMPLE>                    
///
Class IRISDemo.Util.FileReader Extends %RegisteredObject
{

    /// Use this Query with %ResultSet class to read comma (,) separated files.
    Query CSV(pFileName As %String, pFileEncoding As %String = "UTF8") As %Query
    {
    }
    
    /// Use this Query with %ResultSet class to read semicolon (;) separated files.
    Query CSV2(pFileName As %String, pFileEncoding As %String = "UTF8") As %Query
    {
    }
    
    /// Use this Query with %ResultSet class to read tab separated files.
    Query TSV(pFileName As %String, pFileEncoding As %String = "UTF8") As %Query
    {
    }
    
    /// Index the header of a CSV file so we can later read every row of the file
    /// and get every column by its name. This method supports fields that are quoted,
    /// double quoted or unquoted, or a mix of the three. The only thing required for this
    /// to work is that the correct field separator is specified (comma, semicolon or tab).
    ClassMethod IndexHeader(pHeader As %String, pSeparator As %String = ",", Output pHeaderIndex)
    {
        //How many columns?
        Set pHeaderIndex=$Length(pHeader,pSeparator)
        
        Set tRegexSeparator=pSeparator
        
        If tRegexSeparator=$Char(9)
        {
            Set tRegexSeparator="\x09"
        }
        
        //Let's build a regular expression to read all the data without the quotes...
        Set pHeaderIndex("Regex")=""
        For i=1:1:pHeaderIndex
        {
            Set $Piece(pHeaderIndex("Regex"),tRegexSeparator,i)="\""?'?(.*?)\""?'?"
        }
        Set pHeaderIndex("Regex")="^"_pHeaderIndex("Regex")
        
        //Let's use this regular expression to index the column names...
        Set oMatcher = ##class(%Regex.Matcher).%New(pHeaderIndex("Regex"))
        Do oMatcher.Match(pHeader)
        
        //Now let's index the column names
        For i=1:1:oMatcher.GroupCount
        {
            Set pHeaderIndex("Columns",i)=$ZStrip(oMatcher.Group(i),"<>W")
        }
    }

    ClassMethod IndexRow(pRow As %String, ByRef pHeaderIndex As %String, Output pIndexedRow As %String)
    {
        Set oMatcher = ##class(%Regex.Matcher).%New(pHeaderIndex("Regex"))
        Do oMatcher.Match(pRow)
        
        //Now let's place the row values into a $List using the same regular expression
        For i=1:1:oMatcher.GroupCount
        {
            Set $List(pIndexedRow,i)=oMatcher.Group(i)
        }
    }

    ClassMethod CSVGetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
            Quit ..FileGetInfo(.colinfo, .parminfo, .idinfo, .qHandle, .extoption, .extinfo)
    }
    
    ClassMethod CSV2GetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
            Quit ..FileGetInfo(.colinfo, .parminfo, .idinfo, .qHandle, .extoption, .extinfo)
    }
    
    ClassMethod TSVGetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
            Quit ..FileGetInfo(.colinfo, .parminfo, .idinfo, .qHandle, .extoption, .extinfo)
    }
    
    ClassMethod FileGetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
        Merge tHeaderIndex = qHandle("HeaderIndex")
        Set colinfo = ""
        For i=1:1:tHeaderIndex
        {
            Set tColName = tHeaderIndex("Columns",i)
            Set colinfo=colinfo_$LB($LB(tColName))    
        }
        
        Set parminfo=$ListBuild("pFileName","pFileEncoding")
        Set extinfo=""
        Set idinfo=""
        Quit $$$OK
    }

    ClassMethod CSVExecute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set qHandle("Separator")=","
        Quit ..FileExecute(.qHandle, pFileName, pFileEncoding)
    }

    ClassMethod CSV2Execute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set qHandle("Separator")=";"
        Quit ..FileExecute(.qHandle, pFileName, pFileEncoding)
    }

    ClassMethod TSVExecute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set qHandle("Separator")=$Char(9)
        Quit ..FileExecute(.qHandle, pFileName, pFileEncoding)
    }

    ClassMethod FileExecute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set tSC = $System.Status.OK()
        Try
        {
            Set oFile = ##class(%Stream.FileCharacter).%New()
            If pFileEncoding'="" Set oFile.TranslateTable=pFileEncoding
            Set oFile.Filename=pFileName
            
            Set tHeader=oFile.ReadLine()
            Do ..IndexHeader(tHeader, qHandle("Separator"), .tHeaderIndex)
            
            Merge qHandle("HeaderIndex")=tHeaderIndex
            Set qHandle("File")=oFile
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
    }

    ClassMethod CSVClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileClose(.qHandle)
    }

    ClassMethod CSV2Close(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileClose(.qHandle)
    }

    ClassMethod TSVClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileClose(.qHandle)
    }

    ClassMethod FileClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        #Dim oFile As %Library.FileCharacterStream
        Set tSC = $System.Status.OK()
        Try
        {
            Kill qHandle("File")
            Kill qHandle("HeaderIndex")
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
    }

    ClassMethod CSVFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileFetch(.qHandle, .Row, .AtEnd)
    }

    ClassMethod CSV2Fetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileFetch(.qHandle, .Row, .AtEnd)
    }

    ClassMethod TSVFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileFetch(.qHandle, .Row, .AtEnd)
    }

    ClassMethod FileFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Set tSC = $System.Status.OK()
        Try
        {
            Set Row = ""
            
            Set oFile = qHandle("File")
            If oFile.AtEnd
            {
                Set AtEnd=1
                Quit
            }
    
            Merge tHeaderIndex=qHandle("HeaderIndex")
            
            // Skip blank lines until the next data row
            Set tRow=""
            While 'oFile.AtEnd
            {
                Set tRow=oFile.ReadLine()
                Continue:tRow=""
                Quit
            }
            
            // Only blank lines were left: signal end of data
            If tRow=""
            {
                Set AtEnd=1
                Quit
            }
            
            Do ..IndexRow(tRow, .tHeaderIndex, .Row)
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
    }
}
Amir Samary · Oct 26, 2018

Hi!

It is not hard to build a task that will search for your message headers from a specific service, process or operation in a date range and delete the headers and the messages. You can start by searching on Ens.MessageHeader. You will notice that there are indices that will allow you to search by date range and source config item pretty quickly.
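
Something along these lines inside your task's OnTask() would do it (the config item name and date range here are just placeholders; note that this simple version ignores search table entries, and the standard purge task Ens.Util.Tasks.Purge remains the right tool for the general case):

    Set tQuery = "SELECT ID, MessageBodyClassName, MessageBodyId FROM Ens.MessageHeader"
    Set tQuery = tQuery_" WHERE SourceConfigName = ? AND TimeCreated BETWEEN ? AND ?"
    Set tStatement = ##class(%SQL.Statement).%New()
    Set tSC = tStatement.%Prepare(tQuery)
    Quit:$$$ISERR(tSC) tSC
    Set tResult = tStatement.%Execute("My.Old.Service", "2018-01-01 00:00:00", "2018-06-30 23:59:59")
    While tResult.%Next()
    {
        // Delete the message body first (when the body is a persistent class)...
        If tResult.MessageBodyClassName'="" {
            Do $classmethod(tResult.MessageBodyClassName, "%DeleteId", tResult.MessageBodyId)
        }
        // ...and then the header itself
        Do ##class(Ens.MessageHeader).%DeleteId(tResult.ID)
    }
    Quit $$$OK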

BUT: 

I always recommend to my customers that they create a production for each distinct business need. That allows you to have different backup strategies, purging strategies and even maintenance strategies (sometimes you want to touch one production and be sure that the others will not be touched). You are also able to move these productions to other servers when you need them to scale independently (oops! now this new service is receiving a big volume of messages and is disturbing the others!). On different servers, you may be able to patch one production (one server) while the others keep running. This is especially feasible now with InterSystems IRIS. You can easily migrate from Ensemble to IRIS and have one production running on a VM with 2 cores, another on a VM with 4 cores, and so on. You don't have to start with 8 cores and scale by 4 as with Ensemble. So you can pay the same you are paying today, but with the possibility of more governance.

PS: Mirroring on IRIS is licensed differently from mirroring with Ensemble. With IRIS you do have to pay for the mirror members as well, while with Ensemble you don't.

So, I sincerely believe you will be happier having another production. It's a good start toward more governance.

Kind regards,

Amir Samary

Senior Sales Engineer - InterSystems

Amir Samary · Mar 6, 2018

Hi again!

I was checking the documentation of ExternalFreeze() here and there is an option for not switching the journal file. The parameter defaults to 1 (switch the journal file) but you can change it to 0. Maybe that would allow you to do the Freeze/Thaw on an async mirror member without consequences. Maybe ExternalFreeze() will do it without switching the journal file regardless of what you pass to this parameter, just because it's being called on an async mirror member. The documentation is not clear though...

Maybe someone with more knowledge about the internals could clarify? I believe each CACHE.DAT file knows which journal entry was last applied to it and, during a restore procedure, it could simply start in the middle of a journal file and proceed to the newer journal files created during/after the backup.

I would like to understand why we switch the journal file if, during a Freeze, all new transactions that can't be written to the database (because of the freeze) will be on the current journal file. A new journal file is created after the ExternalThaw() but all those transactions executed during the Freeze will be there on the previous journal file. It seems to me that switching the journal file serves no purpose since we always have to pick the previous journal file anyway during a restore.

Kind regards,

AS

Amir Samary · Mar 4, 2018

Hi!

IMHO, I don't think this is application dependent at all. When we do the Freeze on one of the failover members, we don't care about what is running on the instance. Please notice that after you call Freeze, you snapshot everything: not only the filesystems where the database files are, but also the filesystems where the journal files, WIJ files and application files are. So, when you restore the backup, you are restoring a point in time where a database file (CACHE.DAT) may or may not be consistent, but you also restore the journal and WIJ files that will make it consistent.

Also, it is important to notice that Freeze/Thaw will switch the journal file for you, and this is how transaction consistency is kept. I mean, Caché/Ensemble/IRIS will switch the journal at a point where the entire transaction, and probably what is in the WIJ file, is there and consistent.

After restoring this backup, you must execute a journal restore of all journal files generated after the Freeze/Thaw to make your instance up to date.

Unfortunately I can't answer about doing the backup on the async node. At first glance, I believe there is no problem with it. You just need to be careful not to forget that the async node exists and neglect to apply the patches and configurations you have applied to the failover members, so that you have a complete backup of your system (application code and configurations included). But I don't know what happens when you execute the Freeze/Thaw procedure on the async. Supposedly, freezing writes to the databases and creating a new journal file would be performed on all cluster members, but the documentation is not clear about what "all cluster members" means. It is not clear whether "all cluster members" includes async mirror members.

My opinion is that backup on an async member is not supported and may be problematic. For it to work, it would still have to freeze both failover members to have consistent journal files on all nodes, so there would be no gain in doing it on the async node. But that is only my opinion. Let's see if someone else can confirm my suspicion.

Kind regards,

AS

Amir Samary · Feb 20, 2018

Hi Antonio!

The examples I have given show how to use the returned data. I show how to do it:

  • Directly, using |CPIPE|, and
  • By using a handy method of the class %Net.Remote.Utility.

Have you seen them above?

AS

Amir Samary · Feb 12, 2018

IMHO, the Minimal security option should be completely eliminated from the product.

I saw this behavior of the /api/atelier application being created with only Unauthenticated access on Ensemble installations with Locked Down security. But that was about a year ago and I thought it was because it was still beta. Is this happening on current Ensemble and IRIS installations as well? With what security level did you install them?

Amir Samary · Dec 29, 2017

Hi!

If losing the order of messages is not a problem for you, just increase your pool size. But you should take a look at your transformation: why is it so heavy? A transformation should not slow down your system like this.

On the other hand, if you do care about the order of messages, you could use a message router to split the channel across more processes based on some criterion that won't affect the order of messages. For instance, at a client I was receiving an HL7 feed from a single system that was used by many facilities. On this feed I had messages to create a patient, update it, admit it, etc. If one message arrived at its destination before the previous related ones, I would lose updates. The process that was transforming them was a bottleneck (and improving it to eliminate the bottleneck would take time). So we ended up creating a routing rule to split the HL7 feed by facility and created an instance of that business process for each facility. That allowed us to parallelize the processing while still keeping the order of messages (because a patient couldn't be in more than one facility at the same time).

Of course, we later took the time to improve that slow business process and retrofitted the production back to its simpler, faster form. ;)

Kind regards,

AS

Amir Samary · Dec 7, 2017

HI!

If it's unidirectional and the network is protected and under control, I would definitely avoid using %SYNC and would use Asynchronous Mirroring instead.

On the other hand, there is a limit on the number of asynchronous mirror members you can have. I think it's currently 16. If this is not enough for you, then you will have to think of another solution such as %SYNC, or simply a process that periodically exports the globals and sends them everywhere through a secure channel (a SOAP web service).

I have implemented a %SYNC-over-SOAP toolkit that makes things easier to set up and monitor. I am still finishing some aspects of it. %SYNC is a very good toolkit, but it lacks a good communication and management infrastructure. I have the communication (through a protected SOAP channel) sorted out. I am now working on operational infrastructure such as purging the ^OBJ.SYNC global, protecting journal files from being purged if a node is lagging behind, some monitoring, etc.

If asynchronous mirroring doesn't work for you (it should be your first choice) and you can build a simple task that periodically exports your globals and sends them through SOAP to your other nodes, I can share with you my code around %SYNC.

Kind regards,

Amir Samary

Amir Samary · Dec 5, 2017

Hi!

I can see Auditing is enabled on your system. Have you checked under "Configure System Events" that the event "%Ensemble/%Production/StartStop" is enabled?

Kind regards,

Amir Samary

Amir Samary · Nov 1, 2017

Hi!

Yes. There is definitely a rollback if you return an error. That won't ever work. If you want to prevent people from deleting records on this table, simply don't give them permission to do it.

If you configure your security right, you can have a role for the user that is being used to access the application (by JDBC, ODBC or dynamic result sets such as %SQL.Statement or %Library.ResultSet) and give that role the right permissions. That would be permissions on the database (say %DB_MYDATABASE), and specific GRANTs for SELECT, INSERT, UPDATE and DELETE on specific tables.

Then you can leave this specific table without the GRANT for DELETE and require everyone to UPDATE the Deleted property instead.

You can even use ROW Level Security to hide that record from "normal" users while letting it appear to "report" or "analytics" users.

Beware, though, that the enforcement of SQL privileges only works through JDBC, ODBC or dynamic result set objects such as %SQL.Statement. If you issue a %DeleteId() on an object, it will be deleted. That is by design: if you are running ObjectScript code and have RW privileges on the database, you can do almost anything. So it makes no sense to check for SQL privileges on every method call; the system would spend most of its time checking privileges instead of working. So, what is normally done is that you set your security roles and users right and try to protect the entry points of your application. For instance, you can use custom resources to label and protect:

  • CSP Pages using the SECURITYRESOURCE parameter
  • General Methods - by checking only once if a user has a specific resource with $System.Security.Check() method
  • SOAP and REST services - by protecting the CSP application that exposes them or by using the SECURITYRESOURCE parameter on your %CSP.REST class or even by using $System.Security.Check() on each individual method.

The idea is to protect entire areas of the application instead of checking privileges on every single database access.
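
For the method case, the check is a one-liner ("MyApp_ReferenceData" is a hypothetical custom resource, and the class is made up):

ClassMethod DeleteReferenceRecord(pId As %String) As %Status
{
    // Only users holding USE on the custom resource get past this point
    If '$System.Security.Check("MyApp_ReferenceData","USE") {
        Quit $$$ERROR($$$GeneralError,"You are not allowed to change reference data")
    }
    Quit ##class(MyApp.ReferenceData).%DeleteId(pId)
}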

HTH!

Kind regards,

AS

Amir Samary · Oct 30, 2017

Hi!

I like to use the EnsLib.EDI.XML.Document class to move XML around because it shows up nicely in the Visual Message Trace, and I can use message routing and DTLs to route and transform it, since it is a virtual document like HL7, ASTM or EDIFACT. Try looking into the documentation here:

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EXML_tools_classes

You will notice there are out-of-the-box business services to grab a file from an FTP server and take its XML content as an EnsLib.EDI.XML.Document. If you can't use them directly, you will at least have an example to start with and make one of your own.

That is the Business Service. 

For your Business Operation, you have two options:

  1. Use the SOAP Wizard to create a business operation for you based on your WSDL and use it without changes
  2. Use the SOAP Wizard to only create the proxy to the web service and create your own Business Operation around it yourself

Option 2 is nice if you want to hide the intricacies and details of using that web service. Sometimes a web service will ask for a username and password in the body of the message, for instance, and you don't want to expose that. You would create a credentials property on the business operation and use it instead, hiding these details from the user of that business operation. Another reason for implementing the business operation yourself is to change the data types and names of properties. That is our case: your web service probably expects to receive an XML string, while we are dealing with an EnsLib.EDI.XML.Document. Obviously, that won't work out of the box.

So, I suggest you create a business operation that receives the EnsLib.EDI.XML.Document and calls the Web Service proxy underneath, by extracting the XML string from the EnsLib.EDI.XML.Document request. You can use the SOAP Wizard to create the first version of your Business Operation and use it as a starting point for creating your own. Make the appropriate changes on it to receive an EnsLib.EDI.XML.Document as a Request.

Kind regards,

AS

Amir Samary · Oct 27, 2017

Hi!

Are you calling the workflow using an asynchronous <call> activity and using a <sync> activity to wait on it? For how long is your <sync> activity waiting? Beware that a workflow task will only stay in the workflow inbox while that <sync> activity has not completed. So, if you don't have a <sync> activity, or if the timeout on the <sync> activity is too short, the task will not appear in the workflow inbox, or will appear only momentarily.

The idea is that the BPL can put a timeout as an SLA on that task. If no one finishes the task before that timeout expires, you will receive whatever (incomplete) response is there and can, perhaps, escalate to other roles, send alerts, etc.
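
Roughly, the shape I have in mind is this (activity names, the workflow role target and the timeout are placeholders; check the exact attributes against the BPL reference for your version):

<call name="AskForReview" target="MyWorkflowRole" async="1">
   <request type="EnsLib.Workflow.TaskRequest">
      <assign property="callrequest.%Subject" value='"Please review this order"' />
   </request>
   <response type="EnsLib.Workflow.TaskResponse">
      <assign property="context.ReviewResult" value="callresponse" />
   </response>
</call>
<!-- The task only stays in the Workflow Inbox while this sync is still waiting -->
<sync name="WaitForReview" calls="AskForReview" type="all" timeout="86400" />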

Kind regards,

AS

Amir Samary · Oct 27, 2017

Hi Eduard,

I fail to follow your reasoning. If you are making other classes inherit from a class of yours, you are changing the class by definition: you are adding a new method to the class. Whether this method is a simple method or has its code computed during class compilation is transparent to whoever is calling that method.

And the resulting method could return the information in any format you choose, including $LB.

Kind regards,

AS

Amir Samary · Oct 26, 2017

Hi Eduard!

Without going too deep into your code and trying to enhance it, I can suggest that you:

  1. Put this method in a utility class.
  2. Recode it as a method generator. The method would use a variation of your current code to produce a $Case() that simply returns the position of a given property, instead of querying the class definition at runtime.
  3. Make the class you want to use it with (e.g. Sample.Address) inherit from it.

Of course, if this is a one-time thing, you could simply implement number 2. Or you could forget my suggestion altogether if performance isn't an issue for this scenario. By using a method generator, you eliminate from the runtime any inefficiency in the code used to (pre)compute the positions.
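
To make item 2 concrete, this is the shape of the idea (the class name is made up, and the generated $Case() here simply maps each property name to its position in the compiled class definition):

Class Util.PropertyPosition [ Abstract ]
{

/// Generated at compile time for every class that inherits from this one.
ClassMethod GetPropertyPosition(pName As %String) As %Integer [ CodeMode = objectgenerator ]
{
    // Build a $Case() mapping each property name to its position
    Set tCase = ""
    For i=1:1:%compiledclass.Properties.Count() {
        Set tProp = %compiledclass.Properties.GetAt(i)
        Set tCase = tCase_$Select(tCase="":"",1:",")_""""_tProp.Name_""":"_i
    }
    If tCase="" {
        Do %code.WriteLine(" Quit 0")
    } Else {
        Do %code.WriteLine(" Quit $Case(pName,"_tCase_",:0)")
    }
    Quit $$$OK
}

}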

Kind regards,

AS

Amir Samary · Oct 25, 2017

Hi Edouard!

Robert's solution works perfectly if you map the other database, through ECP, to the other host. ECP is very simple to configure. 

If this global belongs to a table, you could configure an ODBC/JDBC connection to the other system and create a linked table on system "TO" that is linked through ODBC/JDBC to the real table on system "FROM", and then run code similar to Robert's. But instead of an $Order loop, you would use %SQL.Statement and SELECT the records.

ECP requires a Multi-Server license. That is why I am suggesting this alternative with the SQL Gateway.
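
A rough sketch of that second approach, assuming a linked table named LinkedFrom.RealTable was already created with the Link Table wizard (table, fields and the target global are placeholders):

    Set tStatement = ##class(%SQL.Statement).%New()
    Set tSC = tStatement.%Prepare("SELECT Field1, Field2 FROM LinkedFrom.RealTable")
    If $$$ISERR(tSC) Quit
    Set tResult = tStatement.%Execute()
    While tResult.%Next()
    {
        // Store the remote rows locally on system "TO"
        Set ^MyLocalGlobal($Increment(^MyLocalGlobal)) = $ListBuild(tResult.Field1, tResult.Field2)
    }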

Kind regards,

AS

Amir Samary · Oct 18, 2017

Hi!

The method %FileIndices(id) may help you. The problem is that you won't know which ID you must use to call it. Have new objects been added? Which have been changed?
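
For reference, this is the kind of call I mean (MyApp.LegacyPerson stands in for your mapped class):

    // Recompute all the indices defined on the mapped class for one object
    Set tSC = ##class(MyApp.LegacyPerson).%FileIndices(id)

    // Or, as a last resort, purge and rebuild every index for the whole extent
    Set tSC = ##class(MyApp.LegacyPerson).%PurgeIndices()
    Set:$$$ISOK(tSC) tSC = ##class(MyApp.LegacyPerson).%BuildIndices()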

I assume you have taken an old style global, mapped it to classes and created an index for it. That's ok. Normally old style applications will have their own indices in the same global or in another one. They probably already have an index on Name that they use, and you should try to map that existing subscript as your Name index on your mapped class instead of creating a new index.

But if you really want to go on and create a new index, you have only a few choices:

  1. Change the old style code to use objects/SQL and stop setting the globals directly for that specific entity.
  2. Change the old style code to set the new index global together with the data global (this is a very common practice in old style applications).
  3. Ask them (the programmers of the old style application) to call %FileIndices(id) themselves after adding/changing a record, so that all the indices defined on your mapped class are (re)computed (including the indices that they already set manually and that you have mapped on your class). You could offer to call this method to populate the index globals for them, instead of them setting the index globals themselves, to eliminate the redundant work. But you would need your mappings to be very well defined, with all their indices, to replace their code.

As you can see, all options imply changing the old style code. I would push for option number 1, as this can be the start of some really good improvements to your old style application.

There is no middle ground here, unless you want to rebuild all your indices every day.

Depending on how many places this global is referenced in your application, how the application is compartmentalized, and whether or not you use indirection throughout your application, these changes can be very simple or very complex to make and test.

Another way of going about this is to use the journal APIs to read the journal files, detect changes to this specific global and update the index... That would prevent you from touching the old style application. I have never done this before, but I believe it is pretty feasible.

Kind regards,

AS

Amir Samary · Oct 11, 2017

Hi!

I wouldn't recommend putting globals or routines in your %SYS namespace, for a few reasons:

  • This will be shared with all namespaces. This may be what you want today, but may not be what you want in the future.
  • One nice thing about organizing your code, globals and classes into different databases is that you can simplify your patching procedures. Instead of shipping classes, globals and routines from DEV to TEST or LIVE, you can ship the databases themselves. So, if you have a database for all your routines/classes/compiled pages and you have a new release of your application, you can copy the entire CODE database from DEV to TEST or LIVE and "upgrade" that environment. Of course, upgrading an environment may involve other things like rebuilding some indices, running some patching code, etc. But again, you can back up your databases, move the new databases in to replace the old ones, run your custom code, rebuild your indices (if necessary) and test it. If something fails, you can always revert to the previous set of databases. You can't do that if you keep custom code and globals in %SYS, since copying the CACHESYS database from one system to another takes more from that system than you would like.
  • With the new InterSystems Container Manager (ICM) that comes with InterSystems IRIS, there is the concept of "durable %SYS", where the %SYS namespace is moved from inside the container onto an external volume. That is good because you can replace your container without losing all the system configuration you have stored in %SYS. But IRIS is not meant to be compatible with old practices such as keeping your custom globals, routines and classes in %SYS. InterSystems may decide to change this behavior and make %SYS less custom-code friendly.

 

So you can perfectly well create more databases to organize your libraries and globals, and map them into your namespace without problems: one database for code tables, one database for libraries you share between all your namespaces, one database for CODE, another for DATA, etc. That is what namespaces are for: to rule them all.

 

Kind regards,

AS

Amir Samary · Oct 2, 2017

Hi!

HL7 has an XML schema that you can import as Ensemble classes. You can use these classes to implement your business service and business operation web services. Here is the XML schema link.

Another way of using this XML schema is not to import it as classes. Instead, use the XML schema import tool in the Ensemble Management Portal and use EnsLib.EDI.XML.Document with it. You could receive plain XML text and turn it into an EnsLib.EDI.XML.Document. It will be faster to process than transforming it into objects. If you are interested, I can share some code.
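
The core of it is just a few lines (the DocType here is a placeholder, and I am quoting ImportFromString from memory, so treat the exact signature as something to verify on your version):

    // Turn the plain XML text into a virtual document
    Set tDoc = ##class(EnsLib.EDI.XML.Document).ImportFromString(xmlString, .tSC)
    If $$$ISERR(tSC) Quit

    // Point it at the schema category you imported through the Management Portal
    Set tDoc.DocType = "MyImportedSchema:MyRootElement"

    // From here you can route it with rules, transform it with DTLs, and read values
    // with the usual virtual document path syntax
    Set tValue = tDoc.GetValueAt("/MyRootElement/SomeField")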

All that being said, I strongly suggest NOT DOING ANY OF THIS.

HL7v2 XML encoding isn't really used anywhere, and the rationale usually given for using it is just ridiculous. If you are asking someone to support HL7v2 and they are willing to spend their time implementing the standard, then explain to them that it is best for them to simply implement it the way 99% of the world uses it: with the normal |^~& separators. It's a text string that they can simply embed into their web service call.

There are these two XML encoding versions:

  • EnsLib.HL7.Util.FormatSimpleXMLv2 - provided by Ensemble (the example you already have on this post)
  • The XML encoding from HL7 itself (the link I gave you above)

Both encodings are horrible, since they won't give you pretty field names that you could simply transform into an object and understand the meaning of the fields just by reading their names. Here is what a class generated from the standard HL7v2 XML schema looks like (the PID segment):

Class HL7251Schema.PID.CONTENT Extends (%Persistent, %XML.Adaptor) [ ProcedureBlock ]
{

Parameter XMLNAME = "PID.CONTENT";

Parameter XMLSEQUENCE = 1;

Parameter XMLTYPE = "PID.CONTENT";

Parameter XMLIGNORENULL = 1;

Property PID1 As HL7251Schema.PID.X1.CONTENT(XMLNAME = "PID.1", XMLREF = 1);

Property PID2 As HL7251Schema.PID.X2.CONTENT(XMLNAME = "PID.2", XMLREF = 1);

Property PID3 As list Of HL7251Schema.PID.X3.CONTENT(XMLNAME = "PID.3", XMLPROJECTION = "ELEMENT", XMLREF = 1) [ Required ];

Property PID4 As list Of HL7251Schema.PID.X4.CONTENT(XMLNAME = "PID.4", XMLPROJECTION = "ELEMENT", XMLREF = 1);

You see? No semantics. Just structure. So, why not simply send the HL7v2 standard text through your Web Service or HTTP service? Here are the advantages of using the standard HL7v2 text encoding instead of the XML encoding:

  • You will all be working with the HL7 standard as 99% of the world that uses HL7v2 does, instead of trying to use something that never caught on (the XML encoding)
  • Ensemble brings Business Services and Business Operations out of the box to receive/send HL7v2 text messages through TCP, FTP, File, HTTP and SOAP. So you wouldn't have to code much in Ensemble to process them.
  • It takes less space (XML takes a lot of space)
  • It takes less processing (XML is heavier to parse)
  • Using the XML schema won't really help you with anything since both XML encoding systems provide just a dumb structure for your data, without semantics.

Kind regards,

AS

Amir Samary · Sep 26, 2017

Hi!

I would add that they exist for compatibility with systems migrated from other databases. In other SQL databases, it's common practice to use a meaningful attribute of the tuple, such as SSN, Code or Part Number, as the primary key. I think this is a very bad practice even on normal SQL databases, since a primary key/IdKey, once created, can't be easily changed. If you want one of the meaningful fields of your tuple as your primary key, use IdKey. I strongly advise against it, though.

So, instead of having one of the attributes of the tuple as the primary key (using IdKey), it's much better to have a sequential number. This is provided by different databases in different ways: Oracle has its SEQUENCEs, SQL Server has its IDENTITY columns. Continuing with the Oracle and SQL Server examples, many fields/columns in a table can be populated with values from a sequence (Oracle) or be auto-incremented (SQL Server), but only one of the fields can be the primary key. The primary key is used by other rows in the table, or in other tables, to reference this row. It can't be changed precisely because there may be other rows relying on it, and that is fine.

In Caché, we have the ID field, which is an auto-increment kind of primary key. Other rows/objects from this or other tables will reference this object through its ID. You can create your own unique fields, as many as you want: Code, PartNum, SSN, etc. You can define Unique indices for them all. Don't use IdKey to do that; that is not its purpose. IdKey will make that field THE primary key of the class. It will make the ID be the field you picked. That is very bad (IMHO).
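
In class terms, the difference is simply this (hypothetical class):

Class MyApp.Person Extends %Persistent
{

Property SSN As %String;

/// Enforces uniqueness while leaving the class with its normal sequential ID
Index SSNUnique On SSN [ Unique ];

/// Don't do this unless you really mean it: it makes SSN the ID of the class
// Index SSNIdKey On SSN [ IdKey ];

}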

On the other hand, there are cases where performance can be increased by using IdKey. It will save you a trip to the index global to get the real ID for that field before going to the data global to get the other fields you need. If we are talking about millions of accesses per second and you are pretty sure you won't need to change the value of that field once it has been created, use IdKey. It will give you better performance. But if you do, beware that by choosing your own IdKey you may not be able to use special kinds of indices like bitmap indices or even iFind. It will depend on the type of IdKey you pick. If it's a string, for instance, iFind and bitmap indices won't work with your class anymore: they rely on having a numeric sequential ID.

Kind regards,

AS

Amir Samary · Aug 29, 2017

If you have a flag on your database that you set after reading the records, and this flag has an Index, you should be fine. Why bother? I mean: You could set this to be run every hour and process the records that have the right flag...

But if you really need to run your service on a specific schedule, I would suggest changing your architecture. Try creating a Business Service without a real adapter (ADAPTER = "Ens.InboundAdapter") so you can call it manually through Ens.Director.CreateBusinessService(). On the Business Service, call a Business Process that will call a Business Operation and grab the data for you. Then your Business Process will receive the data and do whatever needs to be done with it, maybe calling other Business Operations.

If that works, create a task (a class that inherits from %SYS.Task.Definition) that will create an instance of your business service using Ens.Director and call it. Configure the task to run on whatever schedule you need, such as "Once a day at 12PM", "Every hour" or "On Monday, at 12AM", etc.
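
A minimal sketch of that arrangement (all names are placeholders):

Class MyApp.PollService Extends Ens.BusinessService
{

/// No real inbound adapter: the service only runs when we call it ourselves
Parameter ADAPTER = "Ens.InboundAdapter";

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    // Hand the work to a Business Process that calls the Business Operations
    Quit ..SendRequestSync("MyApp.DataProcess", pInput, .pOutput)
}

}

Class MyApp.Task.PollData Extends %SYS.Task.Definition
{

Parameter TaskName = "Poll external data";

Method OnTask() As %Status
{
    Set tSC = ##class(Ens.Director).CreateBusinessService("MyApp.PollService", .tService)
    Quit:$$$ISERR(tSC) tSC
    Quit tService.ProcessInput(##class(Ens.Request).%New())
}

}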

Kind regards,

AS

Amir Samary · Aug 10, 2017

Hi Robert,

You are right. Now I see there are methods on the datatype for LogicalToODBC and ODBCToLogical. 

But I insist that the LogicalTo*, ODBCTo* and DisplayTo* methods should not be called directly. Although they will work, the correct way of dealing with data type conversions is to use normal functions such as $ZDateH() with the proper format code.

If one wants to store names in a format where the surname should be separated from the name by a comma, I would instead simply use %String for that or subclass it to create a new datatype that will make sure there is a comma in there (although I think this is too much).

Kind regards,

AS

Amir Samary · Aug 10, 2017

Hi Mike,

%List is not supposed to be used that way. It doesn't have a projection to SQL, so you can't create a property on a persistent class with it and expect it to work the way a %String or %Integer property will. You could have it there as a private property for some other purpose of your class, but it can't be used like any other property, as you expect.

I think %List was created to be used in the definitions of method parameters and return types, to represent a $List value. Then one knows that such a datatype must be used, and this is also used when projecting those methods to Java, .NET, etc. for the same reason (there is a $List-like datatype in every language binding we support).

If you need to represent names in a format such as "Surname,Name", try using %String, or create another datatype that inherits from %String and validates that the string is in that format by implementing the IsValid() method of your datatype.
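
A tiny sketch of such a datatype (the class name is made up; the error text is whatever you want your users to see):

Class MyApp.DataType.SurnameCommaName Extends %String
{

/// Accept only values that look like "Surname,Name"
ClassMethod IsValid(%val As %RawString) As %Status
{
    Quit $Select($Length(%val,",")=2:$$$OK,1:$$$ERROR($$$GeneralError,"Value must be in the form Surname,Name: "_%val))
}

}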

Also, don't try to use the LogicalTo*, ODBCTo* or DisplayTo* methods directly. Those are called by the SQL and object engines as needed. For instance, ODBCToLogical is called when storing data coming from ODBC/JDBC connections. You shouldn't have to call these methods yourself.

Kind regards,

AS

Amir Samary · Aug 8, 2017

I use GitHub Issues, which lets you create issues of all sorts (bugs, enhancements, tasks, etc.) and assign them to someone or to a group of people. The issues are associated with the source code repository, and there is also a Projects feature. You create a project and then list the tasks needed to accomplish it. Then you can make every task an issue and assign it to people, and drag and drop the tasks from one stage to the next, like Specification > Development > Testing > Production.

GitFlow is a good and flexible workflow. But I don't use Git to deploy on Pre-Live or LIVE. I would normally have these environments:

  • Development - Your machine
  • Development - where you integrate the development branch from Git with the work from all developers. Downloading the code from GitHub can be done automatically (when a change is integrated back into the develop branch) or manually.
  • QA - This environment is where you download code from GitHub's master branch with a new release. Your users can test the new release here without being bothered. 
  • Pre-Production/Pre-LIVE - This environment is periodically overwritten with a copy of LIVE. It is where you try and test applying your new release.
  • Production

GitFlow's hotfix may be used depending on your environment. Depending on the change and on the urgency, it can be a pain to actually test the fix on your development machine. Your local globals may not match the storage definition of what is in production, because you may have been working on a new version of your classes with different global structures. You may need large amounts of data, or specific data, to reproduce the problem on your developer machine, etc. You can do it, but every hotfix will be a different workflow. Depending on the urgency, you may simply not have the time to prepare your development environment with the data and conditions to reproduce the problem, fix it and produce the hotfix. But it can be done. On the other hand, as pre-production is a copy of LIVE, you can safely fix the problem there manually (forget GitHub), apply the change to LIVE and then incorporate these changes into your next release. I think this is cleaner. Every time you have a problem in LIVE, you can investigate it on PRE-LIVE. If PRE-LIVE is outdated, you can ask Operations for an emergency unscheduled refresh of PRE-LIVE to work on it.

About patching

I recommend always creating your namespaces with two databases: One for CODE and another for DATA.

That allows you to implement patching with a simple copy of the CODE database. You stop the instance, copy the database and start the instance. Simple as that. Every release may have an associated class with code to be run to rebuild indices, fix global structures that may have changed, do some other kind of maintenance, etc.
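
Such a class can be as simple as this (names and steps are hypothetical):

Class MyApp.Patch.Release20 Extends %RegisteredObject
{

/// Run once after copying the new CODE database into the environment
ClassMethod Run() As %Status
{
    // Example: this release added an index to MyApp.Order, so rebuild it
    Set tSC = ##class(MyApp.Order).%BuildIndices()
    Quit:$$$ISERR(tSC) tSC

    // Fix any global structures changed by this release here

    Quit $$$OK
}

}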

If you are using Ensemble and don't want to stop your instance, you can generate your patch as a normal XML package plus a Word document explaining how to apply it. Test this on your Pre-LIVE environment. Fix the document and/or the XML package if necessary, and try again until patching works. Then run it on LIVE.

Before applying new releases to PRE-LIVE or LIVE, take a full snapshot of your servers' virtual machines. If the patching procedure fails for some reason, you may need to roll back to that point in time. This is especially useful on PRE-LIVE, where you are still testing the patching procedure and will most likely break things until you get it right. Being able to quickly go back in time and try again and again gives you the freedom you need to produce a high-quality patching procedure.

If you can afford downtime, use it. Don't try to push a zero-downtime policy if you don't really need it; it will only make things unnecessarily complex and risky. You can patch Ensemble integrations without downtime with the right procedure, though. A microservices architecture may also help you eliminate downtime, but it is complex and requires a lot of engineering.

Using External Service Registry

I recommend using the External Service Registry so that when you generate your XML package with the new production definition, no references to endpoints, folders, etc. are in it. Even if you don't ship your entire production class, this will help with the periodic refresh of the PRE-LIVE databases from LIVE. The External Service Registry stores the endpoint configurations outside your databases, and they would be different on LIVE, PRE-LIVE, QA, DEV and on the developer's machine (which may be using local mock services, new versions of services elsewhere, etc.).

Amir Samary · Aug 8, 2017

If you need to monitor your productions, try checking out the Ensemble Activity Monitor:

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY…

 

It will give you the information you need with minimal performance impact, since the data is stored in a cube and your queries won't affect your runtime system. You can see the counts for the last hour, week, month and year with trend graphs. You will also get the same for queuing and wait times. It's great stuff!