I think the problem is that the RemoveItem() method does not perform a %Save() on the production, so the item gets deleted, but remains referenced in the production XML.

If you look at DeleteConfigItem() in the EnsPortal.ProductionConfig class you will see it calls the RemoveItem() method, but then does a %Save() afterwards to commit the change.

I think I must have been doing something similar. I also seem to remember programmatically stopping the production first and then restarting it afterwards to avoid any in-memory problems, although I think this was only an issue on an early version of Ensemble.
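For reference, a minimal sketch of that pattern (the wrapper method itself is hypothetical, I'm assuming FindItemByConfigName() to locate the item, and error handling is trimmed down)...

ClassMethod RemoveProductionItem(productionName As %String, itemName As %String) As %Status
{
    // open the production definition by name
    set production=##class(Ens.Config.Production).%OpenId(productionName,,.sc)
    if $$$ISERR(sc) quit sc
    // locate the config item and detach it from the production
    set item=production.FindItemByConfigName(itemName,.sc)
    if $$$ISERR(sc) quit sc
    do production.RemoveItem(item)
    // the crucial part: commit the change with %Save()
    quit production.%Save()
}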

I can't presume to know all decoders will unescape the solidus correctly.

Escaping the solidus is not a strict rule in the specification and there may be implementations that do not unescape it.

I think it is safer to say "most" than "all", and not place false confidence in the next developer who reads this post. I would rather they do their due diligence and be 100% sure that the third-party libraries they are using will handle the 2016.1 output.

I wouldn't recommend this solution; it will cause problems.

It gets a bit confusing talking about strings, so let's refer to a serialised JSON object as a stringy, and a property of a JSON object of type string as a string.

The $ZCONVERT function should be used to decode JSON strings (properties of a JSON object). It should not be used to decode a stringy (a serialised JSON object).

It works in your use case because it sees the stringy as a string, and since there are only unwanted reverse solidus characters to remove, it looks like it's working the way you want it to.

Let's say at a later date someone adds a description property, and one of those descriptions has the value 

A "special" file

In the JSON it would be escaped like so... 

{"FileStatus":"P","Path":"\/somepath\/test\/test123\/filename.txt","InterchangeOID":"100458","Desc":"A \"special\" file"}

This gets passed into $ZCONVERT and you end up with...

{"FileStatus":"P","Path":"/somepath/test/test123/filename.txt","InterchangeOID":"100458","Desc":"A "special" file"}

The path looks good, but now the description is invalid.
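You can try this in the terminal with something like the following (a contrived sketch; the doubled quotes are just ObjectScript string escaping)...

set json="{""Path"":""\/somepath\/test\/filename.txt"",""Desc"":""A \""special\"" file""}"
// decodes every JSON escape in the whole stringy
write $zconvert(json,"I","JSON")
// the Path unescapes as intended, but the \" inside Desc also
// unescapes, leaving a stringy that no longer parses as JSON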

Is there a technical reason for wanting to remove the reverse solidus from the path?

Hi Chip.

In general, there is a danger of the tail (OOP) wagging the dog (ORM).

Polymorphism itself is a funny old thing. When it comes to truly abstract things, it's easier to understand where its benefits come from; I always like to use IDataReader as a strong example when explaining polymorphism. Where it becomes a bit murky is with the daft examples that try to teach polymorphism, such as Cat and Horse implement Animal. They never make much real-world sense, and as soon as you mix in database storage it can go from murky to bizarre.

If we simplify it down to just shared data attributes then we can see how Person might make sense, but this type of design will have a trade-off somewhere else in the architecture. You could have a Person table and many concrete role-like implementations of Person, but then other areas such as SQL reporting can become really complex.

From the perspective of the example provided there are well-established design patterns: we would have a Patient class and separate Staff and Role classes. Staff might share similarities with Patient, but we would try to solve this through composition over inheritance (and polymorphism), so for instance Address could be a shared thing. In this sense there is no problem that a Doctor can also be a Patient. There is duplication of storage here, but it's a small trade-off against other aspects of the architecture. There is also the aspect that something like Address has a different context anyway: a Patient's Address would be home, whilst a Doctor's Address would be work, and the two data pools don't mix that well, hence why they tend to be two separate data entities.
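As a rough sketch of that composition approach (the class and property names here are just illustrative)...

Class Demo.Address Extends %SerialObject
{
Property Street As %String;
Property City As %String;
}

Class Demo.Patient Extends %Persistent
{
Property Name As %String;
// home address, embedded by composition rather than inherited
Property HomeAddress As Demo.Address;
}

Class Demo.Staff Extends %Persistent
{
Property Name As %String;
Property Role As %String;
// work address, a separate context from a patient's home address
Property WorkAddress As Demo.Address;
}

A Doctor who is also a Patient simply gets a row in both tables, and SQL reporting over either stays flat and simple.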

That all said, it's still an interesting question, and perhaps it just needs a better real-world use case to explore it further...

As a code optimisation exercise, there is a lot going on here: the two methods create two extra stack levels, and there are lots of unnecessary variables and extraneous operations such as $l and $p.

If you created a macro such as this...

#define isISCPS(%arg1) (%arg1["Caché")||(%arg1["Ensemble")||(%arg1["InterSystems IRIS")||(%arg1["DeepSee")||(%arg1["iKnow")||(%arg1["Atelier")||(%arg1["Online Learning")||(%arg1["Documentation")||(%arg1["WRC")

then you will get around 25x more operations per second for negative tests, and upwards of 100x for short-circuited positive tests, assuming the highest-frequency tags are placed on the left.
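Usage is then a single expression (illustrative only)...

if $$$isISCPS(tag) {
    // tag relates to an InterSystems product or service
    set hits=$get(hits)+1
}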

There are a few ways to test the existence of a sub-string in a string...

$find(string,tag)

string[tag

$match(string,tagPatternMatcher)

$lf(list,tag)

Given...

set string="a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z"

and testing for "x"

within a tight loop with a million tests

$find  = .004609

[      = .004813

$match = .008951

$lf    = .023201

$find is marginally quicker than the contains operator, completing a million operations in .004609 seconds, whilst the other two methods are relatively slower. I say relatively because they are still quick in their own right.
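For completeness, the timings came from a tight loop along these lines (a simplified sketch using $zhorolog; the original harness may have differed)...

set string="a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z"
set tag="x"
set start=$zhorolog
for i=1:1:1000000 {
    // the test under measurement; swap in [ or $match to compare
    if $find(string,tag)
}
write "elapsed: ",$zhorolog-start,!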

A wider question might not be which is the fastest, but how it's implemented with multiple tag tests. For instance a single variable creation is almost as expensive as the string test itself, and if you wrapped it all in a re-usable method, the method call would be 10x as expensive.

If you are dealing with two strings in the first instance, then a tight loop on $find is going to be your best option. The other functions are probably just as performant in the right context, but if you have to dissect the strings to use them, then the cost is not in the test itself but in the other operations you have to do first.

Congratulations Fabian and Bert for completing all of the challenges.

It's 5am when the challenges open up here so it's hard to compete, but I think I would still be far behind some of the times that you have been achieving.

Plus I get a little side-tracked with the challenges. I've been learning Python this year, so it's been interesting looking at Python solutions after I've done each day in ObjectScript. The Python code is always much cleaner and more compact. It got me thinking on day 6 about building helper libraries specific to the competition. I ended up with a solution that looks like this...

ClassMethod Day6()
{
    set file=$$$file("C:\aoc\day6.txt",.sc) $$$QuitOnError(sc)
    set matrix=file.ToMatrix(", ")
    // establish the bounding box around all of the coordinates
    do matrix.Min(.minx,.miny)
    do matrix.Max(.maxx,.maxy)
    for x1=minx:1:maxx {
        for y1=miny:1:maxy {
            set min=""
            // find the nearest coordinate by Manhattan distance, marking ties with "."
            while matrix.ForEach(.key,.x2,.y2) {
                set dist=$zabs(x1-x2)+$zabs(y1-y2)
                if dist=min set nearest="."
                if (min="")||(dist<min) set min=dist,nearest=key
            }
            set count(nearest)=$get(count(nearest))+1
            // any area touching the bounding box edge extends to infinity
            if (x1=minx)||(x1=maxx)||(y1=miny)||(y1=maxy) set infinite(nearest)=""
        }
    }
    // pick the largest area that is not infinite
    set most=0
    set key=$order(count(""))
    while key'="" {
        if '$data(infinite(key)),count(key)>most set most=count(key),hasMost=key
        set key=$order(count(key))
    }
    return count(hasMost)
}

I then got very side-tracked on day 7 and started looking at transcompiling Python to ObjectScript. I managed to use Python's own AST library to parse Python and push it into a global Mumps tree so I could tinker around with converting Python to ObjectScript. It turns out the two languages are very compatible, unlike JavaScript, which I gave up on a few years back. The interesting thing about Python is its incredible growth in popularity over the past few years; there is a really interesting article on it here...

https://stackoverflow.blog/2017/09/06/incredible-growth-python/

But then I would just settle on a "for x in y" in ObjectScript...
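The nearest existing idiom is a $order loop over a local array, e.g. (a small sketch, with data as a stand-in array)...

// Python: for key in data
set key=""
for {
    set key=$order(data(key),1,value)
    quit:key=""
    write key," = ",value,!
}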

The IDE (or enhanced code editor if you like) is actually dog food for my own standards-based UI library.

In terms of building web-based editors, I've been building them from scratch for years, so I am fairly comfortable with the level of difficulty. There is a screenshot of one below.

And yes, TTD will be one of the last features, but I do want to prove the concept up front and inject any design decisions into the foundations so that I don't end up with lots of code refactoring.

I've not used Monaco, mainly ACE. Perhaps there is some synergy between the two projects in using Monaco.

Any help with the original question would be much appreciated.