Stefan,

Thanks for your thoughtful response.  A few comments and questions on your clarifications:

  • Flexibility
    • I am certain there are more extreme situations where migration is necessary, but the majority of my experience has been with data evolution in Caché Persistent Objects, where it is trivial to achieve (even renaming a field is straightforward and requires no data movement).  If someone has to do a massive data migration, moving data from one set of objects to a refactored set of objects, then there is certainly some work involved.
  • Sparseness
    • Your point about $lb performance is interesting.  I did some quick tests: looping 10M times over a $lb structure and reading two elements, access time increased roughly 40% when the elements sat in positions 1 and 20 (versus positions 1 and 2), and roughly 270% when they sat in positions 1 and 60.  So there is something to be said about the hit for extremely sparse persistent objects that are referenced many, many, many times in succession (although even my position-60 test only took 1.9 sec to access the two elements 10M times; a rough sketch of the timing loop is after this list).  In reality I am not sure what type of data model would actually run into this type of consideration (or would need to worry about the tiny waste of space for null placeholders in a sparse $listbuild), but it is an interesting thing to think about.
  • Hierarchical
    • Your statement about objects only being able to store an embedded object is not entirely correct.  One of the advantages of the Parent/Child relationship is that the children are colocated in the storage of the parent.  This has major performance implications (just like serial objects), and it is a very powerful construct as a result (see the class sketch after this list).  The only places where you are not colocating the data are one-to-many relationships or linked class instances.  With a complex data model you would certainly have fewer reads with a document store.
  • Dynamic Types
    • Your example isn't quite accurate - since Caché stores everything as a string, you can give yourself as much flexibility as you want by not using a constrained type when designing your object.  You could declare Name as a %String in a persistent Caché object and then store either the value "Stefan Wittmann" or the value "{'first':'Stefan','last':'Wittmann','middle':null}" :)  You have full flexibility if you choose to have it - or you can use typing to leverage the built-in validation (see the sketch after this list).
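
On the sparseness point, this is roughly the kind of timing loop I used (variable names and the exact loop shape are just mine, and the numbers will vary by machine):

    // Compare reading positions 1 and 2 of a dense $lb against
    // positions 1 and 60 of a sparse one padded with null placeholders
    Set dense=$ListBuild("a","b")
    Set sparse=$ListBuild("a")
    For i=2:1:59 { Set sparse=sparse_$ListBuild("") }
    Set sparse=sparse_$ListBuild("b")          // value lands in position 60
    Set start=$ZHorolog
    For i=1:1:10000000 { Set x=$List(dense,1),y=$List(dense,2) }
    Write "dense:  ",$ZHorolog-start," sec",!
    Set start=$ZHorolog
    For i=1:1:10000000 { Set x=$List(sparse,1),y=$List(sparse,60) }
    Write "sparse: ",$ZHorolog-start," sec",!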
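
And a minimal sketch of the Parent/Child colocation I am referring to (class names are made up): with the default storage, the child rows live subordinate to their parent in the parent's data global, so reading a parent and its children does not require touching a second global.

    Class Demo.Invoice Extends %Persistent
    {
    Relationship Lines As Demo.InvoiceLine [ Cardinality = children, Inverse = Invoice ];
    Property Customer As %String;
    }

    Class Demo.InvoiceLine Extends %Persistent
    {
    Relationship Invoice As Demo.Invoice [ Cardinality = parent, Inverse = Lines ];
    Property Item As %String;
    Property Qty As %Integer;
    }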
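
For the Dynamic Types point, something like this (a hypothetical class of my own) is the flexibility I mean - the unconstrained %String happily holds either a plain value or a serialized JSON document, and you can add typing/validation only where you want it:

    Class Demo.Person Extends %Persistent
    {
    /// Deliberately unconstrained - can hold a plain value or a serialized JSON document
    Property Name As %String(MAXLEN = "");
    }

    // usage
    Set p1=##class(Demo.Person).%New()
    Set p1.Name="Stefan Wittmann"
    Do p1.%Save()

    Set p2=##class(Demo.Person).%New()
    Set p2.Name="{""first"":""Stefan"",""last"":""Wittmann"",""middle"":null}"
    Do p2.%Save()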

From my perspective, while there is certainly increased flexibility to be gained with documents, it comes at the price of having to write more validation and processing code.  In addition, Caché Persistent Objects make it very easy for your schema and structure to be self-documenting (the class definition can be the 'source of truth').  With Document, which in many respects is a move back in the direction of 'roll your own global structures', the developer would be on the hook for creating external documentation on the structure and field uses of the documents stored in the container.  Picking good property names is certainly a step in the right direction, but that doesn't get you as far as class and property comments in a persistent class definition.  How do you envision people documenting their document schemas?


Stefan,

This is a great article and an excellent resource to help people come up to speed quickly on this new feature - thank you.

I do have one question / comment, however.  You listed 4 benefits of the Document approach, and I certainly see all of these as benefits over relational DBs, but it appears to me that Caché Objects have exactly the same benefits for points 2, 3 and 4, and it is so trivial to update Caché object schemas that I don't really see point 1 as being very convincing to a Caché Object developer.

I am assuming these benefits are more targeted at people looking to switch from relational DBs?  Or do you see 'killer app' type possibilities for Caché-based shops that already make extensive use of Caché Objects (and therefore already have easy schema updates, sparse storage, etc.)?

Thanks!

Ben

Thanks for pointing that out, John!  It hadn't crossed my radar yet, and I am especially glad to see it as well!

Take a look at the $system.OBJ.Load() and $system.OBJ.LoadDir() methods.  These are the most common way to take source files from your source control system and load them into your namespace.
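
For example (the paths here are just illustrative; "ck" compiles and keeps the source):

    // Load and compile a single exported class definition
    Do $system.OBJ.Load("/src/myapp/Demo.Person.xml","ck")

    // Load and compile everything under a source directory, recursively
    Do $system.OBJ.LoadDir("/src/myapp/","ck",.errors,1)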

Yes - if you New $namespace, then once that context pops off the stack, $namespace will automatically revert to the value it had before you newed and set it, putting you back in the original namespace.
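
In other words (the namespace name is just an example):

    New $namespace             // stack the current value of $namespace
    Set $namespace="SAMPLES"   // switch to another namespace
    // ... do work in SAMPLES ...
    Quit                       // when this stack level exits, $namespace reverts automatically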

I want to bump this post up because it is really useful (thanks Evgeny) and people definitely need to be aware of this great resource for getting the example source!

Timur - THANK YOU!  It hadn't gotten annoying enough to google a solution yet, and you just saved me the effort :)

Tim - I saw the same thing, and then I realized it was an Outlook thing.


It says right at the top that Outlook removed extra line breaks.  If you click that message you can reverse it, and the line breaks will then be restored.


HTH,

Ben