Hi!

The method %FileIndices(id) may help you. The problem is that you won't know which ID you must use to call it. Have new objects been added? Which have been changed?

I assume you have taken an old-style global, mapped it to classes, and created an index for it. That's OK. Normally old-style applications will have their own indices on the same global or on another one. They probably already have an index on Name that they use, and you should try to map that existing subscript as the name index on your mapped class instead of creating a new index.

But if you really want to go ahead and create a new index, you have three choices:

  1. Change the old-style code to use objects/SQL and stop setting the globals directly for that specific entity.
  2. Change the old-style code to set the new index global together with the data global (a very common practice in old-style applications).
  3. Ask them (the programmers of the old-style application) to call %FileIndices(id) themselves after adding/changing a record, so that all the indices defined on your mapped class are (re)computed (including the indices they already set manually and that you have mapped on your class). You could offer to let this method populate the index globals for them, instead of them setting the index globals by hand, to eliminate the redundant work. But your mappings would have to be very well defined, covering all their indices, to replace their code. A minimal sketch follows this list.
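
For illustration only: assuming the mapped class were called, say, MyApp.Customer (a made-up name), the call the legacy code would need to add after filing a record is a one-liner:

// After the old-style code has set/killed the data global for a given record,
// recompute every index defined on the mapped class for that ID:
Set tSC = ##class(MyApp.Customer).%FileIndices(id)
If $System.Status.IsError(tSC) Write $System.Status.GetErrorText(tSC), !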

As you can see, all options imply changing the old-style code. I would push for option number 1, as this can be the start of some really good improvements to your old-style application.

There is no middle ground here, unless you want to rebuild all your indices every day.
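
(If you did go the daily-rebuild route, the rebuild itself is trivial; a hedged sketch, again assuming a made-up class name MyApp.Customer, where %BuildIndices() with no arguments rebuilds all the indices of the class:)

// Rebuild every index of the mapped class (e.g. from a nightly scheduled task)
Set tSC = ##class(MyApp.Customer).%BuildIndices()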

Depending on how many places this global is referenced in your application, how the application is compartmentalized, and whether or not you are using indirection throughout your application, these changes can be very simple or very complex to make and test.

Another way of going about this is to use the journal APIs to read the journal files, detect changes to this specific global and update the index accordingly... That would spare you from touching the old-style application at all. I have never done this before, but I believe it is quite feasible.

Kind regards,

AS

Hi!

I wouldn't recommend putting globals or routines in your %SYS namespace, for a few reasons:

  • This will be shared with all namespaces. This may be what you want today, but may not be what you want in the future.
  • One nice thing about organizing your code, globals and classes into different databases is that you can simplify your patching procedures. Instead of shipping classes, globals and routines from DEV to TEST or LIVE, you can ship the necessary databases. So, if you have a database for all your routines/classes/compiled pages and you have a new release of your application, you can copy the entire CODE database from DEV to TEST or LIVE and "upgrade" that environment. Of course, upgrading an environment may involve other things, like rebuilding some indices, running some patching code, etc. But again, you can back up your databases, move the new databases in to replace the old ones, run your custom code, rebuild your indices (if necessary) and test it. If something fails, you can always revert to the previous set of databases. You can't do that if you have custom code and globals in %SYS, since copying the CACHESYS database from one system to another carries over much more than you would want to take from one system to the other.
  • With the new InterSystems Cloud Manager (ICM) that comes with InterSystems IRIS, there is the concept of "durable %SYS", where the %SYS namespace is moved out of the container and onto an external volume. That is good because you can replace your container without losing all the system configuration you have stored in %SYS. But IRIS is not meant to stay compatible with old practices such as keeping your custom globals, routines and classes in %SYS, and InterSystems may decide to change this behavior and make %SYS less friendly to custom code.


So, you can perfectly well create more databases to organize your libraries and globals, and map them into your namespaces without problems: one database for code tables, one for libraries you share between all your namespaces, one for CODE, another for DATA, etc. That is what namespaces are for: to rule them all.
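
If you prefer to script the mappings instead of clicking through the Management Portal, something along these lines should work. This is only a sketch, run from the %SYS namespace; the namespace, global and database names are made up:

// Map ^MyCodeTables in namespace MYAPP to the MYCODETABLES database
Set Properties("Database") = "MYCODETABLES"
Set tSC = ##class(Config.MapGlobals).Create("MYAPP", "MyCodeTables", .Properties)
If $System.Status.IsError(tSC) Write $System.Status.GetErrorText(tSC), !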


Kind regards,

AS

Hi!

HL7 publishes an XML Schema that you can import as Ensemble classes. You can use these classes to implement web services for your business services and operations. Here is the XML schema link.

Another way of using this XML Schema is not to import it as classes at all. Instead, use the XML Schema import tool in the Ensemble Management Portal and work with EnsLib.EDI.XML.Document. You could receive plain XML text and transform it into an EnsLib.EDI.XML.Document; that will be faster to process than transforming it into objects. If you are interested, I can share some code.

All that being said, I strongly suggest NOT TO DO ANY OF THIS.

HL7v2 XML encoding is hardly used anywhere, and the usual reasoning for adopting it doesn't hold up. If you are asking someone to support HL7v2 and they are willing to spend their time implementing the standard, explain to them that it is best to simply implement it the way 99% of the world uses it: with the normal |^~\& separators. It's a text string that they can simply embed in their web service call.

There are two XML encodings to consider:

  • EnsLib.HL7.Util.FormatSimpleXMLv2 - provided by Ensemble (an example of which is already in this post)
  • The XML encoding from HL7 itself (the link I gave you above)

Both encodings are horrible, since they won't give you meaningful field names that you could simply transform into an object and understand just by reading the names. Here is what a class generated from the standard HL7v2 XML Schema looks like (the PID segment):

Class HL7251Schema.PID.CONTENT Extends (%Persistent, %XML.Adaptor) [ ProcedureBlock ]
{

Parameter XMLNAME = "PID.CONTENT";

Parameter XMLSEQUENCE = 1;

Parameter XMLTYPE = "PID.CONTENT";

Parameter XMLIGNORENULL = 1;

Property PID1 As HL7251Schema.PID.X1.CONTENT(XMLNAME = "PID.1", XMLREF = 1);

Property PID2 As HL7251Schema.PID.X2.CONTENT(XMLNAME = "PID.2", XMLREF = 1);

Property PID3 As list Of HL7251Schema.PID.X3.CONTENT(XMLNAME = "PID.3", XMLPROJECTION = "ELEMENT", XMLREF = 1) [ Required ];

Property PID4 As list Of HL7251Schema.PID.X4.CONTENT(XMLNAME = "PID.4", XMLPROJECTION = "ELEMENT", XMLREF = 1);
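
// ... remaining PID.x properties omitted ...

}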

You see? No semantics. Just structure. So, why not simply send the HL7v2 standard text through your Web Service or HTTP service? Here are the advantages of using the standard HL7v2 text encoding instead of the XML encoding:

  • You will all be working with the HL7 standard the way 99% of the world that uses HL7v2 does, instead of trying to use something that never caught on (the XML encoding)
  • Ensemble brings Business Services and Business Operations out of the box to receive/send HL7v2 text messages through TCP, FTP, File, HTTP and SOAP, so you wouldn't have to code much in Ensemble to process them (see the sketch after this list)
  • It takes less space (XML takes a lot of space)
  • It takes less processing (XML is heavier to parse)
  • Using the XML schema won't really help you with anything since both XML encoding systems provide just a dumb structure for your data, without semantics.
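
As a rough illustration of how little code the plain-text route needs, here is a hedged sketch of a SOAP method that accepts raw HL7v2 text and parses it with EnsLib.HL7.Message. The class and service names are made up, and in a real production you would hand the parsed message to a business service rather than just acknowledging it:

/// Hypothetical SOAP endpoint that accepts raw HL7v2 text ("MSH|^~\&|...")
Class Demo.HL7TextService Extends %SOAP.WebService
{

Parameter SERVICENAME = "HL7Feed";

Parameter NAMESPACE = "http://tempuri.org";

Method SendHL7(pRawHL7 As %String) As %String [ WebMethod ]
{
    // Parse the pipe-delimited text into an EnsLib.HL7.Message object
    Set tMsg = ##class(EnsLib.HL7.Message).ImportFromString(pRawHL7, .tSC)
    If $System.Status.IsError(tSC) Quit "ERROR: "_$System.Status.GetErrorText(tSC)
    // From here the message would be passed into the production
    Quit "OK"
}

}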

Kind regards,

AS

Hi!

I would add that they exist for compatibility with systems migrated from other databases. In other SQL databases it is common practice to use a meaningful attribute of the tuple, such as SSN, Code, Part Number, etc., as the primary key. I think this is a very bad practice even in ordinary SQL databases, since a primary key/IdKey, once created, can't easily be changed. If you want one of the meaningful fields of your tuple as your primary key, use IdKey. I strongly advise against it, though.

So, instead of having one of the attributes of the tuple as the primary key (using IdKey), it's much better to have a sequential number. This is provided by different databases in different ways: Oracle has its SEQUENCEs, SQL Server has its IDENTITY columns. Continuing with the Oracle and SQL Server examples, many fields/columns on a table can be populated with values from a sequence (Oracle) or be auto-incremented (SQL Server), but only one of the fields can be the primary key. The primary key is used by other rows in the same table, or by other tables, to reference this row. It can't be changed, precisely because there may be other rows relying on it, and that is fine.

In Caché, we have the ID field, which is an auto-increment kind of primary key. Other rows/objects from this or other tables will reference this object through its ID. You can create your own unique fields, as many as you want: Code, PartNum, SSN, etc. You can define Unique indices for them all. Don't use IdKey to do that; that is not its purpose. IdKey will make that field THE primary key of the class: the ID becomes the field you picked. That is very bad (IMHO).
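
To make the distinction concrete, here is a sketch of a hypothetical class (the names are made up) showing a plain unique index on PartNum versus the IdKey variant being advised against:

Class Demo.Part Extends %Persistent
{

Property PartNum As %String [ Required ];

/// Recommended: PartNum stays unique, but the class keeps its numeric sequential ID
Index PartNumIdx On PartNum [ Unique ];

// What this answer advises against: PartNum itself would become the ID
// Index PartNumIdx On PartNum [ IdKey ];

}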

On the other hand, there are cases where performance can be increased by using IdKey. It saves you a trip to the index global to get the real ID for that field before going to the data global to get the other fields you need. If we are talking about millions of accesses per second and you are pretty sure you won't need to change the value of that field once it is created, use IdKey; it will give you better performance. But if you do, beware that by choosing your own IdKey you may not be able to use special kinds of indices such as bitmap indices or even iFind. It depends on the type of IdKey you pick: if it's a string, for instance, iFind and bitmap indices won't work with your class anymore, since they rely on a numeric sequential ID.

Kind regards,

AS

If you have a flag on your database that you set after reading the records, and this flag has an index, you should be fine. Why bother? I mean: you could set this to run every hour and process the records that have the right flag...

But if you really need to run your service on a specific schedule, I would suggest changing your architecture. Try creating a Business Service without a real inbound adapter (ADAPTER = "Ens.InboundAdapter") so you can call it manually through Ens.Director.CreateBusinessService(). From the Business Service, call a Business Process that calls a Business Operation to grab the data for you. Your Business Process then receives the data and does whatever needs to be done with it, maybe calling other Business Operations.

If that works, create a task (a class that inherits from %SYS.Task.Definition) that creates an instance of your business service using Ens.Director and calls it. Configure the task to run on whatever schedule you need, such as "once a day at 12 PM", "every hour", "on Mondays at 12 AM", etc.
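
A sketch of such a task class follows. The task and service names are made up; "Demo Feed Service" stands for the config name of the adapterless business service described above:

/// Hypothetical scheduled task that triggers the adapterless business service
Class Demo.ScheduledFeedTask Extends %SYS.Task.Definition
{

Parameter TaskName = "Trigger Demo Feed Service";

Method OnTask() As %Status
{
    Set tSC = ##class(Ens.Director).CreateBusinessService("Demo Feed Service", .tService)
    Quit:$System.Status.IsError(tSC) tSC
    // No real input object is needed; the service does the work in OnProcessInput()
    Quit tService.ProcessInput("")
}

}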

Kind regards,

AS

Hi Robert,

You are right. Now I see there are methods on the datatype for LogicalToODBC and ODBCToLogical. 

But I insist that the LogicalTo*, ODBCTo* and DisplayTo* methods should not be called directly. Although they will work, the correct way of dealing with data type conversions is to use the normal functions, such as $ZDATEH() with the proper format code.
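
For example, converting an ODBC-format date to a display format with the standard functions rather than the datatype methods (a small sketch):

Set odbc = "2018-03-01"
Set h = $ZDATEH(odbc, 3)    // format 3 = ODBC (YYYY-MM-DD) -> internal $HOROLOG value
Write $ZDATE(h, 4), !       // format 4 = European numeric (DD/MM/YYYY)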

If one wants to store names in a format where the surname is separated from the given name by a comma, I would instead simply use %String for that, or subclass it to create a new datatype that makes sure there is a comma in there (although I think even that is too much).

Kind regards,

AS

Hi Mike,

%List is not supposed to be used that way. It doesn't have a projection to SQL, so you can't create a property of that type on a persistent class and expect it to work the way a %String or %Integer property does. You could keep it as a private property for some other purpose of your class, but not use it like any other property, as you are expecting to.

I think %List was created to be used in the definitions of method parameters and return types, to represent a $List value. That way one knows which datatype must be passed, and it is also used when projecting those methods to Java, .NET, etc. for the same reason (there is a $List-like datatype for every language binding we support).

If you need to represent names in a format such as "Surname,Name", try using %String, or create another datatype that inherits from %String and validates that the string is in that format by implementing the IsValid() method of your datatype.
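
A sketch of what such a datatype might look like (the class name is made up):

/// Hypothetical datatype that only accepts values like "Surname,Name"
Class Demo.DT.SurnameCommaName Extends %String
{

ClassMethod IsValid(%val As %CacheString) As %Status
{
    // Exactly one comma, with non-empty text on both sides
    If ($Length(%val, ",") = 2) && ($Piece(%val, ",", 1) '= "") && ($Piece(%val, ",", 2) '= "") {
        Quit $$$OK
    }
    Quit $$$ERROR($$$GeneralError, "Value must be in the format Surname,Name")
}

}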

Also, don't try to use the LogicalTo*, ODBCTo* or DisplayTo* methods directly. Those are called by the SQL and object engines as needed; for instance, ODBCToLogical is called when storing data coming in over ODBC/JDBC connections. You shouldn't have to call these methods yourself.

Kind regards,

AS

I think you are looking for our %MATCHES SQL extension:

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...

But it looks like this is a simplified wildcard syntax, not real regular expressions. Still, it may be powerful enough to accomplish what you need. Another option is %PATTERN, which lets you use Caché's native pattern matching.
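
For illustration, here is roughly how both predicates could be used from Dynamic SQL. This is only a sketch and assumes the Sample.Person class from the SAMPLES namespace:

Set stmt = ##class(%SQL.Statement).%New()

// %MATCHES: simplified wildcards (* = any run of characters, ? = a single character)
Set tSC = stmt.%Prepare("SELECT Name FROM Sample.Person WHERE Name %MATCHES 'A*'")
Set rs = stmt.%Execute()
While rs.%Next() { Write rs.Name, ! }

// %PATTERN: native Caché pattern matching, e.g. SSNs shaped like 123-45-6789
Set tSC = stmt.%Prepare("SELECT Name, SSN FROM Sample.Person WHERE SSN %PATTERN '3N1""-""2N1""-""4N'")
Set rs = stmt.%Execute()
While rs.%Next() { Write rs.Name, ": ", rs.SSN, ! }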

I use GitHub issues, which let you create issues of all sorts (bugs, enhancements, tasks, etc.) and assign them to a person or a group of people. The issues are associated with the source code repository, and there is also a projects feature: you create a project and list the tasks needed to accomplish it, then you can make every task an issue and assign it to people. You can then drag and drop the tasks from one stage to the next, like Specification > Development > Testing > Production.

GitFlow is a good and flexible workflow. But I don't use Git to deploy to Pre-Live or LIVE. I would normally have these environments:

  • Local development - your own machine
  • Development (integration) - where you integrate the develop branch from Git with the work of all developers. Downloading the code from GitHub can be done automatically (when a change is merged back into the develop branch) or manually.
  • QA - the environment where you download code from GitHub's master branch when there is a new release. Your users can test the new release here undisturbed.
  • Pre-Production/Pre-LIVE - this environment is periodically overwritten with a copy of LIVE. It is where you rehearse and test applying your new release.
  • Production

GitFlow's hotfix branch may or may not be useful, depending on your environment. Depending on the change and on the urgency, it can be a pain to actually test the fix on your development machine. Your local globals may not match the storage definition of what is in production, because you may have been working on a new version of your classes with different global structures. You may need large amounts of data, or very specific data, to reproduce the problem on your developer machine, etc. You can do it, but every hotfix will be a different workflow, and depending on the urgency you may simply not have the time to prepare your development environment with the data and conditions needed to reproduce the problem, fix it and produce the hotfix. It can be done. On the other hand, since pre-production is a copy of LIVE, you can safely fix the problem there manually (forget GitHub), apply the change to LIVE, and then incorporate these changes into your next release. I think this is cleaner. Every time you have a problem in LIVE, you can investigate it on PRE-LIVE. If PRE-LIVE is outdated, you can ask Operations for an emergency, unscheduled refresh of PRE-LIVE to work on it.

About patching

I recommend always creating your namespaces with two databases: One for CODE and another for DATA.

That allows you to implement patching with a simple copy of the CODE database. You stop the instance, copy the database and start the instance. Simple as that. Every release may have an associated class with code to be run to rebuild indices, fix global structures that may have changed, do some other kind of maintenance, etc.

If you are using Ensemble and don't want to stop your instance, you can generate your patch as a normal XML package plus a Word document explaining how to apply it. Test this on your Pre-LIVE environment. Fix the document and/or the XML package if necessary, and try again until patching works. Then run it on LIVE.

Before applying new releases to PRE-LIVE or LIVE, take a full snapshot of your servers' virtual machines. If the patching procedure fails for some reason, you may need to roll back to that point in time. This is especially useful on PRE-LIVE, where you are still testing the patching procedure and will most likely break things until you get it right. Being able to quickly go back in time and try again and again gives you the freedom you need to produce a high-quality patching procedure.

If you can afford downtime, use it. Don't try to push a zero-downtime policy if you don't really need it; it will only make things unnecessarily complex and risky. You can patch Ensemble integrations without downtime with the right procedure, though. A microservices architecture may also help you eliminate downtime, but it is complex and requires a lot of engineering.

Using External Service Registry

I recommend using the External Service Registry so that when you export the XML package with the new production definition, no references to endpoints, folders, etc. are included. Even if you don't send your entire production class, this will help with the periodic refreshing of the Pre-LIVE databases from LIVE. The External Service Registry stores the endpoint configurations outside your databases, and they will be different on LIVE, PRE-LIVE, QA, DEV and on each developer's machine (which may be using local mock services, new versions of services elsewhere, etc.).

If you need to monitor your productions, try checking out the Ensemble Activity Monitor:

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...


It will give you the information you need with minimal performance impact, since the data is stored in a cube and your queries won't affect your runtime system. You can see message counts for the last hour, week, month and year with trend graphs, and you get the same for queuing and wait times. It's great stuff!

Agreed. I believe this information should have come inside the main exception; many developers probably have a hard time debugging errors without the real root cause. But then, the documentation explains how to get to the root cause and even gives code snippets showing how you should code so that you always have the root cause (which is in %objlasterror).

I believe the recommendation you linked in our documentation is outdated and wrong. One must use %objlasterror in several situations. Examples:

Java Gateway: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...

.NET Gateway: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...

SOAP Error Handling: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...

Caché ActiveX Gateway: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...

%New() constructor: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...

Using %Dictionary Classes: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...

There are many situations where %objlasterror is the ONLY place where you can find out what REALLY happened. Not using this information in production is, IMHO, unwise.

Kind regards,

Amir Samary

Hi Dan!

I don't really like macros. :) But I love exceptions. It would be awesome if %SQL.Statement simply threw an exception when an error occurs, instead of returning an SQLCODE that must be checked and transformed into either an exception or a %Status... That way, we could keep the number of ways we deal with errors down to two, instead of three.

Your explanation is indeed very compelling, and I will start using %SQL.Statement from now on. I was thinking about building a macro named $$$THROWONSQLERROR(result) that receives the result set returned by %Execute(), checks its %SQLCODE and, if there is an error, throws it using result.%SQLCODE and result.%Message, just as CreateFromSQLCODE does. This would allow me to hide SQLCODE.
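
A minimal sketch of what I have in mind (the include file name is made up):

// e.g. in MyMacros.inc
#define THROWONSQLERROR(%result) If %result.%SQLCODE < 0 { Throw ##class(%Exception.SQL).CreateFromSQLCODE(%result.%SQLCODE, %result.%Message) }

// Usage, after %Execute():
Set result = statement.%Execute()
$$$THROWONSQLERROR(result)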

Kind regards,

AS

When you receive a <ZSOAP> or <ZGTW> error, throw it away and take whatever is in %objlasterror as your final %Status code. For <ZSOAP>, %objlasterror will have the real error description, such as timeouts, XML parsing errors, authentication errors, etc. For <ZGTW> errors, %objlasterror will have the complete Java stack trace of the error.
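
In code, that pattern looks roughly like this (a sketch only; the SOAP proxy call is made up):

Try {
    // Hypothetical SOAP client proxy call that may trap with <ZSOAP>
    Set result = proxy.SomeWebMethod()
    Set tSC = $System.Status.OK()
}
Catch oException {
    If (oException.Name [ "ZSOAP") || (oException.Name [ "ZGTW") {
        // The real cause (timeout, XML parsing error, Java stack trace, ...) is in %objlasterror
        Set tSC = $Get(%objlasterror, oException.AsStatus())
    } Else {
        Set tSC = oException.AsStatus()
    }
}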

Kind regards,

AS

I started using $System.Status.* methods about 10 years ago when I wanted to demo how we could take code from Visual Basic 6, VBA or ASP and copy most of its logic into a Caché class method and use Language = basic.

If you need to call one of our API methods from this VBScript code, you will probably receive a %Status. As VBScript doesn't use the $$$ macros, the only way to parse the error is with the $System.Status methods. I believe supporting other languages such as VBScript was one of the reasons we put these methods in there... But I may be wrong.

So, for consistency, I started using only the $System.Status methods everywhere. I could write some code in COS that parses an error with $System.Status.IsError(), and I could rewrite the same method in VBScript using the same $System.Status methods, without having to explain to people why, in the same product, we make you deal with errors in different ways. We couldn't avoid the "On Error" vs. "Try/Catch" difference, though.
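
For instance, the COS version of that error handling would look something like this (a small sketch; Some.Class.SomeMethod() is a made-up API that returns a %Status, and the same $System.Status calls are available from the Basic/VBScript side):

Set tSC = ##class(Some.Class).SomeMethod()
If $System.Status.IsError(tSC) {
    Write $System.Status.GetErrorText(tSC), !
}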

This also helps people notice $System and the %SYSTEM package of classes and see what else is in there. Very useful.

I understand that using the macros results in faster code. I also believe our compiler could optimize $System.Status.IsError() and $System.Status.OK() method calls to produce the same byte code as the macros. We probably don't do this, but as a Sales Engineer who is trying to show people how simple and powerful our technology can be, I prefer consistency and clarity over speed. I would also prefer consistency and clarity over some additional speed in any professional code that must be maintained by someone else in the future...

I have strong feelings about &SQL() too. I would avoid it at all costs, even though I know it is the fastest way to run a query in Caché. I prefer using %SQL.Statement or %ResultSet because I hate making my code uglier just to accommodate SQLCODE error handling. Besides this, &SQL can't be used from other supported languages such as VBScript (not that this matters much anymore), and it forces you to recompile your classes if you decide to add a new index or make more dramatic changes such as changing your class storage definition. With %SQL.Statement or %ResultSet you can change your storage definition, add indices, etc. without recompiling your classes, because the cached routines are automatically purged for you... That is what most people would expect. I like it when things look clear, simple and natural... So I also avoid using &SQL.

Finally, people tend not to check for errors at all. If you make things complex, most people will produce bad code and blame you for having a complex programming language. Consistency makes people safer.

Kind regards,

AS

Hi Dan!

I have been using %ResultSet forever and my coding style is as follows:

/// Always return a %Status
ClassMethod SomeMethod() As %Status
{
     Set tSC = $System.Status.OK()
     Try
     {
          Set oRS = ##class(%ResultSet).%New()
          Set tSC = oRS.Prepare("Select ......")
          Quit:$System.Status.IsError(tSC)        

          Set tSC = oRS.Execute()
          Quit:$System.Status.IsError(tSC)

          While oRS.Next()
          {
              //Do something...
          }     
     }
     Catch (oException)
     {
          Set tSC = oException.AsStatus()
     }
     Quit tSC
}

As you can see, it is painful enough to have to deal with both the Try/Catch and the %Status ways of handling errors. I use Try/Catch the same way I used to use $ZT back in the day: we must protect the code from unpredictable errors such as <FILEFULL>, <STORE>, etc. On the other hand, most of our APIs return a %Status. So there is no choice but to use a structure like this to handle both ways of reporting errors.

With the new %SQL.Statement interface I am required to check yet another way of reporting errors (SQLCODE) and translate those errors into either a %Status or an exception. That makes my code look ugly and not as object-oriented as I would like. You see, when I am doing demos and coding in front of people, I tend to code the same way I code when I am building something for real, and vice versa. Caché/Ensemble is really a formidable technology, and one can build things with it that would take months on other technologies. But first impressions are key, and when I am doing demos I want to show beautiful code that is easy to read and understand. That is why I keep using %ResultSet. It's true that %Prepare() returns a %Status, but %Execute() doesn't, and I would have to inspect the %SQL.StatementResult for its %SQLCODE and transform it into a %Status/exception.
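
For reference, the translation I am describing would look more or less like this inside the Try block above (a sketch only; the query is a placeholder):

Set tStatement = ##class(%SQL.Statement).%New()
Set tSC = tStatement.%Prepare("Select ......")
Quit:$System.Status.IsError(tSC)

Set tResult = tStatement.%Execute()
If tResult.%SQLCODE < 0 {
    // Translate SQLCODE/%Message into the %Status the rest of the method expects
    Set tSC = ##class(%Exception.SQL).CreateFromSQLCODE(tResult.%SQLCODE, tResult.%Message).AsStatus()
    Quit
}
While tResult.%Next() {
    // Do something...
}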

I opened a prodlog for this some time ago (118943), requesting an enhancement for this class to support a %Status property as well as a SQLCODE. 

Kind regards,

AS

I understand the power of %SQL.Statement, but as most of my queries are simple, I continue to use %ResultSet, since error handling with %Status is more consistent.

It is bad enough that we have to deal with a mix of %Status and exception handling. I don't like having to check whether %SQLCODE is negative after %Execute() and, if it is, transform it into a %Status to keep error handling consistent.

%ResultSet's Execute() method returns a %Status, while the %SQL.Statement interface makes me deal with yet another type of error (%SQLCODE), making the error handling code even uglier...

I like consistency, so I continue using %ResultSet. But when I need more functionality or more speed, I use %SQL.Statement instead.

Respectfully,

AS

Hi Daniel!

I tend to look at a REST service as a SOA web service and, as such, it must have a "contract". Binding this contract to the internal implementation can be problematic. Normally, you would try to work with a JSON object that is more "natural" to your web application while handling the CRUD operations related to it on the server. That would allow you to decouple the client from the server through the contract, and change your server implementation while keeping your client untouched.

So, beware that this excessive coupling can indeed increase productivity right now but may become a nightmare in the future...

Kind regards,

AS