As you might recognize, JSON support is still work in progress, and each new version is better than the prior one. In this case 2016.3 will be better than 2016.2 because it adds support for $toJSON() on result sets (though I ran this in "latest", so results might look different than in 2016.3FT).

DEVLATEST:12:56:44:SAMPLES>write $system.SQL.Execute("select top 10 Age, DOB from Sample.Person").$compose("%Array").$toJSON()
[{"Age":79,"DOB":35051},{"Age":29,"DOB":53212},{"Age":85,"DOB":32802},{"Age":92,"DOB":30297},{"Age":23,"DOB":55628},{"Age":8,"DOB":60811},{"Age":27,"DOB":53881},{"Age":35,"DOB":51048},
{"Age":0,"DOB":63815},{"Age":77,"DOB":35651}]
DEVLATEST:13:30:28:SAMPLES>write $system.SQL.Execute("select top 10 Age, DOB from Sample.Person").$toJSON()
{"content":[{"Age":79,"DOB":35051},{"Age":29,"DOB":53212},{"Age":85,"DOB":32802},{"Age":92,"DOB":30297},
{"Age":23,"DOB":55628},{"Age":8,"DOB":60811},{"Age":27,"DOB":53881},{"Age":35,"DOB":51048},
{"Age":0,"DOB":63815},{"Age":77,"DOB":35651}],
"metadata":{"columnCount":2,"columnIndex":null,
"columns":[{"ODBCType":4,"clientType":5,"colName":"Age","isAliased":0,"isAutoIncrement":0,"isCaseSensitive":0,
"isCurrency":0,"isExpression":0,"isHidden":0,"isIdentity":0,"isKeyColumn":0,"isNullable":1,
"isReadOnly":0,"isRowId":0,"isRowVersion":0,"isUnique":0,"label":"Age","precision":10,
"property":{"$CLASSNAME":"%Dictionary.CompiledProperty","$REFERENCE":"Sample.Person||Age"},
"qualifier":0,"scale":0,"schemaName":"Sample","tableName":"Person",
"typeClass":{"$CLASSNAME":"%Dictionary.CompiledClass","$REFERENCE":"%Library.Integer"}},
{"ODBCType":9,"clientType":2,"colName":"DOB","isAliased":0,"isAutoIncrement":0,
"isCaseSensitive":0,"isCurrency":0,"isExpression":0,"isHidden":0,"isIdentity":0,
"isKeyColumn":0,"isNullable":1,"isReadOnly":0,"isRowId":0,"isRowVersion":0,"isUnique":0,
"label":"DOB","precision":10,"property":"$CLASSNAME":"%Dictionary.CompiledProperty",
"$REFERENCE":"Sample.Person||DOB"},"qualifier":0,"scale":0,"schemaName":"Sample",
"tableName":"Person","typeClass":{"$CLASSNAME":"%Dictionary.CompiledClass",
"$REFERENCE":"%Library.Date"}}],"formalParameters":],"interface":"","objects":],
"parameterCount":0,"parameters":],"statementType":1},"selectmode":0,
"sqlcode":100,"message":"","rowcount":10}

Disclaimer: I've never worked with MultiValue Basic till today.

The problem with the table you were referring to (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.FrameSet.cls?KEY=GVRF_basicfeatures) is that it says quite the opposite: the Roman numeral conversion code ('NR') is not supported in the Caché OCONV implementation.

Though if the code were supported, it would be callable from ObjectScript via syntax like:

DEVLATEST:00:24:37:USER>w $mviconvs("12:12:12pm","MT")
43932
DEVLATEST:00:24:48:USER>w $mvoconv("44444","MTSH")
12:20:44PM

Ok, Eduard, let me elaborate using a fictional, ideal picture.

  • Let's assume you are part of a large team working on a pretty huge system, say several hundred or even a few thousand classes in a single namespace. And your whole team are good citizens and have covered all classes with unit tests pretty extensively.
  • Let's assume you are working on changes to classes which are part of a pretty large "system" package (i.e. one consisting of a few hundred unit tests).
  • And you have changed only 2 classes in the package, which are covered by a dozen unit tests here and there.

And as a good citizen you always run all relevant unit tests for the changed classes before committing to VCS. So what would you choose before committing: running the unit tests for the whole package (which might take hours), or only those for your project with the relevant classes (which will take a few minutes)?

Although I do foresee the need to run all unit tests in the current namespace or for a selected package, from a developer's perspective I need to run only those unit tests which are connected to the current project I'm working on. And for that I use a Studio project (or would use separate Atelier workspaces if and when I migrate to Atelier).

So IMVHO the proper approach is to start from project items and then add wider collections (package(s) or namespace-wide).
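
For illustration, here is a rough sketch of the two choices, assuming the standard %UnitTest.Manager infrastructure and a hypothetical ^UnitTestRoot layout where the "system" package's tests live in a "system" suite directory (the test class names are made up):

 // run the whole suite for the package - a few hundred test classes, possibly hours
 do ##class(%UnitTest.Manager).RunTest("system")

 // run only the tests covering the two changed classes - a few minutes
 do ##class(%UnitTest.Manager).RunTest("system:MyApp.Tests.ChangedClassOneTest")
 do ##class(%UnitTest.Manager).RunTest("system:MyApp.Tests.ChangedClassTwoTest")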

Ok, ok, good point about the PPG: it will remove the reference to the object. And a simple local variable will work better, because it will not serialize the OREF to a string.

And the only case when it was working for me is when the parent call is still present on the stack and still holds a reference to the originally created object. Which is exactly the case where a %-named singleton would work (i.e. you open the object at the top of the stack, and it's still visible in the nested calls).

So yes, we had better get rid of the PPG and use %-named variables in our CPM code (pull requests welcome).

Here are the assumptions:

- you cannot implement a singleton that works across jobs, only within the same process;

- to share an instance of the object you can employ a %-named variable or a process-private global.

Here is the sample using a PPG, which is easy to convert to a %-variable by redefining the CpmVar macro:

https://github.com/cpmteam/CPM/blob/master/CPM/Main.cls.xml#L33
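
And here is a minimal sketch of what the %-variable variant could look like (not the actual CPM code; the class and variable names are hypothetical):

Class Demo.Singleton Extends %RegisteredObject
{

/// Return the per-process singleton instance.
ClassMethod Instance() As Demo.Singleton
{
    // %-named variables are public, so the OREF stored here stays visible
    // in nested calls without being serialized to a string (which is what
    // happens when you put an OREF into a process-private global)
    if '$isobject($get(%DemoSingleton)) set %DemoSingleton = ..%New()
    quit %DemoSingleton
}

}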

Glad you've mentioned it.

We have plenty of open-source projects on several of our GitHub hubs (the central intersystems one, the Russian one, and the Spanish one). But, unfortunately, nobody has yet created a msgpack implementation for Caché or Ensemble.

If you want to implement such a component, then we will republish it via those hubs with pleasure. :)

P.S.

Looks like it would be easy to implement something similar to the Python library, especially given the native JSON support in 2016.x.
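
Just to give an idea of the scale of the task, here is a tiny sketch of what the very first bytes of such an encoder could look like, covering only two msgpack type codes (positive fixint and fixstr); the class and method names are made up:

Class Demo.MsgPack [ Abstract ]
{

/// A non-negative integer up to 127 becomes a msgpack "positive fixint":
/// the value itself is the single output byte.
ClassMethod PackFixInt(value As %Integer) As %Binary
{
    quit:((value<0)||(value>127)) ""
    quit $char(value)
}

/// A string of up to 31 bytes becomes a msgpack "fixstr":
/// one header byte (0xA0 + length) followed by the raw bytes.
ClassMethod PackFixStr(value As %String) As %Binary
{
    quit:($length(value)>31) ""
    quit $char(160+$length(value))_value
}

}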

Good question. There, on docs.intersystems.com, is a set of documentation accompanying each version of Caché released so far. And for each version you can find "Release Notes" which contain the list of changes introduced to the language, among other changes (see the Release Notes for 2015.2 as an example).

I would need to go and collect all changes for all releases... if the documentation team had not already done it for you :)

The most recent release notes have all the changes since Caché 5.1. The only thing you need to do now is go through the newest list (from 2016.2FT) and find the changes relevant to the ObjectScript compiler or deprecation warnings.

I'm a bit confused by the set of tools proposed for "code coverage", so let me explain why...

Yes, ZBREAK is quite powerful, and ages ago, when there was no source-level MAC/CLS debugging in Studio yet, people implemented all the nasty things using smart ZBREAK expressions and some post-massaging of the gathered data. I myself, in a prior life, even implemented a "call tree" utility which gathered information about the routines touched while executing an arbitrary expression. And ZBREAK could help with many things if there were no kernel support...

But there is kernel support for profiling, which gathers all the information necessary for code coverage, and which you can see used in ^%SYS.MONLBL. You are not actually interested in the timings of every line, but you do want to know that this particular line was called and that one was not.
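
To give an idea, here is a rough sketch of collecting exactly that with the line-by-line monitor API behind ^%SYS.MONLBL, i.e. the %Monitor.System.LineByLine class (the routine name is hypothetical, and the exact argument order of Start() should be double-checked against the class reference):

 // monitor one routine in the current process, counting line executions only
 set routines = $listbuild("MyRoutine")
 set metrics = $listbuild("RtnLine")
 set procs = $listbuild($job)
 set sc = ##class(%Monitor.System.LineByLine).Start(routines,metrics,procs)

 // run the tests / the code whose coverage we want to measure
 do ^MyRoutine

 // read the per-line counters (via the class's Result query, if I remember
 // the API right), then stop: any line whose count is still 0 was not covered
 set sc = ##class(%Monitor.System.LineByLine).Stop()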

Also, I'm confused that you mentioned the UnitTest infrastructure as key to the solution; actually, it's not necessary. Code coverage utilities usually target not only unit-test code but the whole code base of your system. The assumption is that you have some set of regression or unit tests, not necessarily using the mentioned UnitTest.Manager infrastructure directly (possibly any other test framework), and the goal of the run is to collect information about the covered lines in some core of your system. This is quite independent of the test infrastructure, IMVHO.

So for me "code coverage" tool should:

  • use profiling mode with the least intrusive counters activated;
  • be able to remap the data gathered by the profiler at the INT level back to the original MAC/CLS level;
  • and be able to nicely visualize the gathered information.