<scope xpos='200' ypos='250' xend='200' yend='800' >
<code name='Transform' xpos='200' ypos='350' >
<![CDATA[ 
    
    /*
     If I use a <transform> activity with indirection (@$parameter(request,"%DTL")) and
     assign the returned object to the context, and an error occurs when the context is
     saved (because the returned object has, say, missing properties), IRIS only detects
     the error too late, and the <scope> we defined cannot help us handle the fault.

     So, to avoid this problem, we call the transformation from a <code> activity instead,
     and we try to save the returned object before assigning it to the context and returning.
     If we can't save it, we assign the error to the status variable and leave the context
     alone. This way the <scope> captures the problem and takes us to the <catchall> activity.
    */
    
    Set tTransformClass=$parameter(request,"%DTL")
    Set status = $classmethod(tTransformClass, "Transform", request, .normalizedRequest)
    If $$$ISERR(status) Quit
    
    Set status = normalizedRequest.%Save()
    If $$$ISERR(status) Quit
    
    Set context.NormalizedData=normalizedRequest
]]>
</code>
<assign name="Done" property="context.Action" value="&quot;Done&quot;" action="set" xpos='200' ypos='550' />
<faulthandlers>
   <catchall name='Normalization error' xpos='200' ypos='650' xend='200' yend='1150' >
      <trace name='Normalization problem' value='"Normalization problem"' xpos='200' ypos='250' />
   </catchall>
</faulthandlers>
</scope>

"What files/directories should I keep track of from the durable %SYS directory? (e.g: I want to bind-mount those files/directories and import the code and see the Namespace and all the instance configuration )."

Answer: None.

Your configuration needs to be code. Right now, the best approach to getting your namespaces/databases created and configured, along with CSP applications, security, etc., is to use %Installer manifests during the Dockerfile build. I personally don't use Durable %SYS on my development machine. I prefer to use %Installer to configure everything, and if I need to pre-load tables with data, I load it from CSV files that are kept on GitHub along with the source code.

That allows you to source control your Code Table contents as well. Test records can be inserted as well with the same procedure.
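As a rough illustration, here is a minimal %Installer class sketch. The names (MyApp.Installer, the MYAPP namespace, the /opt/myapp paths) are assumptions, so adapt it to your own databases, CSP applications and security:

Class MyApp.Installer
{

/// Hypothetical manifest: creates a namespace with separate CODE and DATA
/// databases, imports the source code, and defines a CSP application.
XData Install [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <Namespace Name="MYAPP" Code="MYAPP-CODE" Data="MYAPP-DATA" Create="yes" Ensemble="1">
    <Configuration>
      <Database Name="MYAPP-CODE" Dir="/opt/myapp/db/code" Create="yes"/>
      <Database Name="MYAPP-DATA" Dir="/opt/myapp/db/data" Create="yes"/>
    </Configuration>
    <Import File="/opt/myapp/src" Flags="ck" Recurse="1"/>
    <CSPApplication Url="/csp/myapp" Directory="/opt/myapp/csp" AuthenticationMethods="32"/>
  </Namespace>
</Manifest>
}

/// Standard %Installer entry point; the manifest above is turned into code at compile time.
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
  Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "Install")
}

}

During the Dockerfile build you would typically load and compile this class and then run Do ##class(MyApp.Installer).setup() in a session step.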

For an example of this, look at this demo (based on IRIS):

https://github.com/intersystems-community/irisdemo-demo-fraudprevention

Look at the normalized_datalake image. It loads CSV files into the tables as part of the Dockerfile build process. You will notice that this image is based on a base image that has some standard reusable code. The source for this base image is here:

https://github.com/intersystems-community/irisdemo-base-irisdb-community

I was using Atelier when I built this image, but the principle is the same; I am now using VS Code to do the same thing.

This is another demo based on IRIS for Health:

https://github.com/intersystems-community/irisdemo-demo-readmission

Look at the riskengine image. It loads data from JSON files into the data model as part of the build process. The JSON files are created by Synthea, an open source tool for generating synthetic patient data.

If you use this method, any developer will be able to jump between versions of your software very quickly. If you need to fix a problem on an old version, you can just check out that version tag, build the image (which will load the tables), and make the changes you want while looking at that exact version, with the exact data needed for it to work.

When you are done, you can go back to your previous branch, rebuild the image again using the current source code (and data in CSV/JSON files) and keep going with your new features.

Just to be clear: I don't mean you shouldn't use Durable %SYS. You must use Durable %SYS on Production!

But I have strong reservations about using it on your PC (developer environment). That's all. Even the central development environment (where Unit Tests could be run) wouldn't need it.

But your UAT/QA and Production should definitely use Durable %SYS and you should come up with your own DevOps approach to deploying your software on these environments so you can test the upgrade procedure.

Kind regards,

AS

Hi!

It is not hard to build a task that will search for the message headers from a specific service, process or operation within a date range and delete both the headers and the message bodies. You can start by searching on Ens.MessageHeader. You will notice that there are indices that allow you to search by date range and source config item pretty quickly.
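A rough sketch of such a task; the class name, the settings, and the purge criteria are assumptions, but SourceConfigName and TimeCreated are the indexed fields on Ens.MessageHeader that make the query cheap:

Class Demo.Task.PurgeByConfigItem Extends %SYS.Task.Definition
{

Parameter TaskName = "Purge messages for one config item";

/// Config item (service, process or operation) whose messages should be purged
Property ConfigItemName As %String;

/// Keep this many days of messages
Property DaysToKeep As %Integer [ InitialExpression = 30 ];

Method OnTask() As %Status
{
    // TimeCreated is stored in UTC, so build the cutoff from $ZTIMESTAMP
    Set tBefore = $ZDateTime(+$ZTimeStamp - ..DaysToKeep, 3)
    Set tQuery = "SELECT ID, MessageBodyClassName, MessageBodyId FROM Ens.MessageHeader"
    Set tQuery = tQuery_" WHERE SourceConfigName = ? AND TimeCreated < ?"
    Set tStatement = ##class(%SQL.Statement).%New()
    Set tSC = tStatement.%Prepare(tQuery)
    Quit:$$$ISERR(tSC) tSC
    Set tRS = tStatement.%Execute(..ConfigItemName, tBefore)
    While tRS.%Next() {
        // Delete the message body first (assuming a persistent body class), then the header
        If (tRS.MessageBodyClassName'="") && (tRS.MessageBodyId'="") {
            Do $classmethod(tRS.MessageBodyClassName, "%DeleteId", tRS.MessageBodyId)
        }
        Set tSC = ##class(Ens.MessageHeader).%DeleteId(tRS.ID)
        Quit:$$$ISERR(tSC)
    }
    Quit tSC
}

}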

BUT: 

I always recommend that my customers create a separate production for each distinct business need. That allows you to have different backup strategies, purging strategies, and even maintenance strategies (sometimes you want to touch one production and be sure the others won't be affected). You are also able to move these productions to other servers when they need to scale independently (oops! this new service is now receiving a big volume of messages and is disturbing the others!). On different servers, you can patch one production (one server) while the others keep running. This is especially feasible now with InterSystems IRIS. You can easily migrate from Ensemble to IRIS and have one production running on a VM with 2 cores, another on a VM with 4 cores, and so on. You don't have to start with 8 cores and scale in increments of 4 as with Ensemble. So you can pay the same as you are paying today, but with the possibility of more governance.

PS: Mirroring on IRIS is licensed differently from mirroring with Ensemble. With IRIS, you do have to pay for the mirror members as well, while with Ensemble you don't.

So, sincerely, I believe you will be happier having another production. It's a good start toward more governance.

Kind regards,

Amir Samary

Senior Sales Engineer - InterSystems

Hi!

Are you calling the workflow using an asynchronous <call> activity and using a <sync> activity to wait on it? For how long is your <sync> activity waiting? Beware that a workflow task will only stay in the workflow inbox while that <sync> activity has not completed. So, if you don't have a <sync> activity, or if the timeout on the <sync> activity is too short, the task will not appear in the workflow inbox, or will appear only momentarily.

The idea is that the BPL can put a timeout as an SLA on that task. If no one finishes the task before that timeout expires, you will receive whatever (incomplete) response is there and can, for example, escalate to other roles, send alerts, etc.
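Roughly, the shape inside the BPL is something like this; the target config item name is an assumption and the timeout is in seconds (here, 24 hours):

<call name='CreateTask' target='MyWorkflowRole' async='1'>
  <request type='EnsLib.Workflow.TaskRequest' />
  <response type='EnsLib.Workflow.TaskResponse' />
</call>

<!-- The task stays in the workflow inbox only while this sync is still waiting -->
<sync name='WaitForTask' calls='CreateTask' type='all' timeout='86400' />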

Kind regards,

AS

Hi Eduard!

Without going too deep into your code and trying to enhance it, I can suggest that you:

  1. Put this method on a utility class
  2. Recode it so that it is a method generator. The method would use a variation of your current code to produce a $Case() that simply returns the position of a given property, instead of querying the class definition at runtime.
  3. Make the classes you want to use it with (e.g. Sample.Address) inherit from it.

Of course, if this is a one-time thing, you could simply implement number 2. Or you could forget my suggestion entirely if performance isn't an issue for this scenario. By using a method generator, you eliminate from the runtime any inefficiency of the code used to (pre)compute the positions.
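A minimal sketch of that idea, assuming a hypothetical utility class name and taking "position" to mean the property's ordinal position in the compiled class definition:

Class Util.PropertyPosition [ Abstract ]
{

/// Method generator: at compile time, walk the (sub)class's compiled property
/// list and emit a single $Case(), so nothing queries the class definition at runtime.
ClassMethod PropertyPosition(pPropertyName As %String) As %Integer [ CodeMode = objectgenerator ]
{
    Set tCase = ""
    For i=1:1:%compiledclass.Properties.Count() {
        Set tProp = %compiledclass.Properties.GetAt(i)
        // Skip system (%) properties
        Continue:($Extract(tProp.Name)="%")
        Set tCase = tCase_$Select(tCase="":"",1:",")_""""_tProp.Name_""":"_i
    }
    If tCase="" {
        Do %code.WriteLine(" Quit 0")
    } Else {
        Do %code.WriteLine(" Quit $Case(pPropertyName,"_tCase_",:0)")
    }
    Quit $$$OK
}

}

Sample.Address (or any class you make inherit from it, as in suggestion 3) then gets a PropertyPosition() method whose runtime cost is a single $Case().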

Kind regards,

AS

Hi!

HL7 publishes an XML Schema that you can import as Ensemble classes. You can use these classes to implement the web services for your business services and operations. Here is the XML schema link.

Another way of using this XML Schema is not to import it as classes. Instead, use the XML schema importing tool in the Ensemble Management Portal and use EnsLib.EDI.XML.Document with it. You could receive plain XML text and transform it into an EnsLib.EDI.XML.Document. That will be faster to process than transforming it into objects. If you are interested, I can share some code.

All that being said, I strongly suggest NOT TO DO ANY OF THIS.

HL7v2 XML encoding isn't used anywhere, and the reasoning usually given for using it is just ridiculous. If you are asking someone to support HL7v2 and they are willing to spend their time implementing the standard, then explain to them that it is best for them to simply implement it the way 99% of the world uses it: with the normal |^~\& separators. It's a text string that they can simply embed in their web service call.

There are these two XML encoding versions:

  • EnsLib.HL7.Util.FormatSimpleXMLv2 - provided by Ensemble (you already have an example of this in this post)
  • The XML encoding from HL7 itself (the link I gave you above)

Both encodings are horrible, since they won't give you pretty field names that you could simply transform into an object and understand the meaning of the fields just by reading their names. Here is what a class generated from the standard HL7v2 XML Schema looks like (the PID segment):

Class HL7251Schema.PID.CONTENT Extends (%Persistent, %XML.Adaptor) [ ProcedureBlock ]
{

Parameter XMLNAME = "PID.CONTENT";

Parameter XMLSEQUENCE = 1;

Parameter XMLTYPE = "PID.CONTENT";

Parameter XMLIGNORENULL = 1;

Property PID1 As HL7251Schema.PID.X1.CONTENT(XMLNAME = "PID.1", XMLREF = 1);

Property PID2 As HL7251Schema.PID.X2.CONTENT(XMLNAME = "PID.2", XMLREF = 1);

Property PID3 As list Of HL7251Schema.PID.X3.CONTENT(XMLNAME = "PID.3", XMLPROJECTION = "ELEMENT", XMLREF = 1) [ Required ];

Property PID4 As list Of HL7251Schema.PID.X4.CONTENT(XMLNAME = "PID.4", XMLPROJECTION = "ELEMENT", XMLREF = 1);

You see? No semantics. Just structure. So, why not simply send the HL7v2 standard text through your Web Service or HTTP service? Here are the advantages of using the standard HL7v2 text encoding instead of the XML encoding:

  • You will all be working with the HL7 standard the way 99% of the world that uses HL7v2 does, instead of trying to use something that never caught on (XML encoding)
  • Ensemble brings Business Services and Business Operations out of the box to receive/send HL7v2 text messages through TCP, FTP, File, HTTP and SOAP. So you wouldn't have to code much in Ensemble to process them.
  • It takes less space (XML takes a lot of space)
  • It takes less processing (XML is heavier to parse)
  • Using the XML schema won't really help you with anything, since both XML encodings provide just a dumb structure for your data, without semantics.

Kind regards,

AS

Hi!

I would add that they exist for compatibility with systems migrated from other databases. On other SQL databases, it's a common practice to use a meaningful attribute of the tuple, such as SSN, Code, Part Number, etc., as the primary key. I think this is a very bad practice even on those databases, since a primary key/IdKey, once created, can't be easily changed. If you want one of the meaningful fields of your tuple as your primary key, use IdKey; I strongly advise against it, though.

So, instead of having one of the attributes of the tuple as the primary key (using IdKey), it's much better to have a sequential number. This is provided by different databases in different ways: Oracle has its SEQUENCEs, SQL Server has its IDENTITY columns. Continuing with the Oracle and SQL Server examples, fields/columns on a table can be populated with values from a sequence (Oracle) or be auto-incremented (SQL Server), but only one of them can be the primary key. The primary key is used by other rows on the same table, or on other tables, to reference this row. It can't be changed, precisely because other rows may be relying on it, and that is fine.

In Caché, we have our ID field, which is an auto-increment kind of primary key. Other rows/objects from this or other tables will reference this object through its ID. You can still create your own unique fields, as many as you want: Code, PartNum, SSN, etc. You can define unique indices for them all. Don't use IdKey to do that; that is not its purpose. IdKey will make that field THE primary key of the class: it will make the ID be the field you picked. That is very bad (IMHO).

On the other hand, there are cases where performance can be improved by using IdKey. It saves you a trip to the index global to get the real ID for that field before going to the data global to get the other fields you need. If we are talking about millions of accesses per second and you are sure you won't need to change the value of that field once it is created, use IdKey; it will give you better performance. But if you do, beware that by choosing your own IdKey you may not be able to use special kinds of indices such as bitmap indices or even iFind. It will depend on the type of IdKey you pick: if it's a string, for instance, iFind and bitmap indices won't work with your class anymore, because they rely on having a numeric, sequential ID.
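To make that concrete, a minimal sketch (class and property names are hypothetical): keep the default numeric ID and put a unique index on the meaningful field; the IdKey variant is shown only as a commented-out alternative.

Class Demo.Part Extends %Persistent
{

/// Meaningful business identifier; unique, but NOT the IdKey
Property PartNum As %String [ Required ];

Property Description As %String;

/// Enforces uniqueness while keeping the default numeric, sequential ID,
/// so bitmap indices and iFind remain available
Index PartNumIdx On PartNum [ Unique ];

// Not recommended in general: making PartNum the actual ID of the class
// Index PartNumIdx On PartNum [ IdKey ];

}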

Kind regards,

AS

If you have a flag on your database that you set after reading the records, and this flag has an index, you should be fine. Why bother with a schedule? I mean: you could set this service to run every hour and process the records that have the right flag...

But if you really need to run your service on a specific schedule, I would suggest changing your architecture. Try creating a Business Service without a real adapter (ADAPTER = Ens.InboundAdapter) so you can call it manually through Ens.Director.CreateBusinessService(). In the Business Service, call a Business Process that will call a Business Operation to grab the data for you. Your Business Process will then receive the data and do whatever needs to be done with it, possibly calling other Business Operations.

If that works, create a task (a class that inherits from %SYS.Task.Definition) that creates an instance of your Business Service using Ens.Director and calls it. Configure the task to run on whatever schedule you need, such as "once a day at 12 PM", "every hour", "on Monday at 12 AM", etc.
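A rough sketch of such a task, assuming a hypothetical class name and config item name, and an empty input argument (adapt it to your OnProcessInput() signature):

Class Demo.Task.RunMyService Extends %SYS.Task.Definition
{

Parameter TaskName = "Run My Scheduled Service";

/// Name of the Business Service config item in the production
Property ConfigItemName As %String [ InitialExpression = "My.Scheduled.Service" ];

Method OnTask() As %Status
{
    // Instantiate the adapterless Business Service by its config item name
    Set tSC = ##class(Ens.Director).CreateBusinessService(..ConfigItemName, .tService)
    Quit:$$$ISERR(tSC) tSC
    // Drive it once; pInput is unused here, OnProcessInput decides what to do
    Quit tService.ProcessInput("", .tOutput)
}

}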

Kind regards,

AS

I think you are looking for the %MATCHES SQL extension of ours:

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...

But note that this is a simplified pattern syntax, not full regular expressions. It may still be powerful enough to accomplish what you need. Another option is %PATTERN, which lets you use Caché's native pattern matching.
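For illustration, a small dynamic SQL sketch; the Sample.Person table and the literal patterns are just assumptions:

    // %MATCHES: simplified wildcard syntax (* = any run of characters, ? = one character)
    Set stmt = ##class(%SQL.Statement).%New()
    Set sc = stmt.%Prepare("SELECT Name FROM Sample.Person WHERE Name %MATCHES 'S*n'")
    If sc {
        Set rs = stmt.%Execute()
        While rs.%Next() { Write rs.Name,! }
    }

    // %PATTERN: Caché pattern-match syntax
    // (here: uppercase letter, lowercase run, a comma, uppercase letter, lowercase run)
    Set sc = stmt.%Prepare("SELECT Name FROM Sample.Person WHERE Name %PATTERN '1U.L1"",""1U.L'")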

I use GitHub's issues that allows you to create issues of all sorts (bugs, enhancements, tasks, etc.) and assign them to someone or to a group of people. The issues are associated with the source code repository and there is also a functionality for projects. You create a project and then list the tasks needed to accomplish that project. Then you can make every task an issue and assign it to people. Then you can drag and drop the tasks from one stage to the next like Specification > Development > Testing > Product.

GitFlow is a good and flexible workflow, but I don't use Git to deploy on Pre-Live or LIVE. I normally work with these environments:

  • Development - Your machine
  • Development (integration) - where you integrate the develop branch from Git with the work of all developers. Downloading the code from GitHub can be done automatically (when a change is merged back into the develop branch) or manually.
  • QA - where you download code from GitHub's master branch when there is a new release. Your users can test the new release here without being disturbed.
  • Pre-Production/Pre-LIVE - This environment is periodically overwritten with a copy of LIVE. It is where you try and test applying your new release.
  • Production

GitFlow's hotfix flow may or may not work for you, depending on your environment. Depending on the change and on the urgency, it can be a pain to actually test the fix on your development machine: your local globals may not match the storage definition of what is in production, because you may have been working on a new version of your classes with different global structures; you may need large amounts of data, or very specific data, to reproduce the problem locally; and so on. You can do it, but every hotfix becomes a different workflow, and depending on the urgency you may simply not have the time to prepare your development environment with the data and conditions to reproduce the problem, fix it, and produce the hotfix.

On the other hand, since Pre-Production is a copy of LIVE, you can safely fix the problem there manually (forget GitHub for a moment), apply the change to LIVE, and then incorporate it into your next release. I think this is cleaner. Every time you have a problem in LIVE, you can investigate it on PRE-LIVE. If PRE-LIVE is outdated, you can ask Operations for an emergency, unscheduled refresh of PRE-LIVE to work on it.

About patching

I recommend always creating your namespaces with two databases: One for CODE and another for DATA.

That allows you to implement patching with a simple copy of the CODE database: you stop the instance, copy the database, and start the instance. Simple as that. Each release may also have an associated class with code to be run to rebuild indices, fix global structures that may have changed, do some other kind of maintenance, etc.
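A sketch of such a release class (all names are hypothetical):

Class MyApp.Release.V2024R1 [ Abstract ]
{

/// Run once after swapping in the new CODE database for this release.
ClassMethod Run() As %Status
{
    Set tSC = $$$OK
    // Rebuild indices for classes whose index definitions changed in this release
    Set tSC = ##class(MyApp.Data.Customer).%BuildIndices()
    Quit:$$$ISERR(tSC) tSC
    // Fix up any global structures that changed, run other maintenance, etc.
    Quit tSC
}

}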

If you are using Ensemble and don't want to stop your instance, you can package your patch as a normal XML export plus a document explaining how to apply it. Test this on your Pre-LIVE environment; fix the document and/or the XML package if necessary, and try again until the patching works. Then run it on LIVE.

Before applying new releases to PRE-LIVE or LIVE, take a full snapshot of your servers' virtual machines. If the patching procedure fails for some reason, you may need to roll back to that point in time. This is especially useful on PRE-LIVE, where you are still testing the patching procedure and will most likely break things until you get it right. Being able to quickly go back in time and try again and again will give you the freedom you need to produce a high-quality patching procedure.

If you can afford downtime, use it. Don't try to push a zero-downtime policy if you don't really need it; it will only make things unnecessarily complex and risky. You can patch Ensemble integrations without downtime with the right procedure, though. A microservices architecture may also help you eliminate downtime, but it is complex and requires a lot of engineering.

Using External Service Registry

I recommend using the External Service Registry so that, when you export your XML project with the new production definition, no references to endpoints, folders, etc. are included. Even if you don't send your entire production class, this will help with the periodic refresh of the PRE-LIVE databases from LIVE. The External Service Registry stores the endpoint configurations outside your databases, and they can be different on LIVE, PRE-LIVE, QA, DEV, and on the developer's machine (which may be using local mock services, new versions of services elsewhere, etc.).