Hello Eduard,

I was not aware of version-agnostic URLs. I will use them, thanks for the link! And yes indeed, I know the documentation is lacking in some areas; in fact, I have a branch dedicated to updating the issue documentation... Time to update and merge, I guess :)

As to postconditionals, well, I put myself in the place of someone who is new to the language: their background will more likely than not be in languages with all the "classic" branch keywords (if, else, while, do...); making the code more readable is always a win! And no, shorter isn't better :)
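
To illustrate (a minimal, made-up snippet; `DoWork` and `sc` are hypothetical names), compare a postconditional with the equivalent block form:

```
 // postconditional form: terse, but the branch is easy to miss when skimming
 set sc = ..DoWork()
 quit:$$$ISERR(sc) sc

 // block form: both the condition and the early exit are explicit
 set sc = ..DoWork()
 if $$$ISERR(sc) {
     quit sc
 }
```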

But yes, you can configure what are called quality profiles, in which you select the rules you want to activate, and you can also alter their severity to fit your needs. Not only that, but you can also expand the help text and change the parameters of rules that have them (classic example: complexity).

Oh, and by the way, there _is_ a difference between postconditionals and if (I learned that yesterday): a postconditional won't set $TEST...
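
A minimal sketch of the difference (as far as I know, it is the legacy, argumented form of if that sets $TEST -- the block form with braces doesn't touch it either -- and $TEST happens to be settable, which makes this easy to observe):

```
 set x = 5
 set $TEST = 0         // start from a known value; $TEST is settable
 write:x>3 "big",!     // postconditional: the condition is true...
 write $TEST,!         // ...but this still prints 0
 if x>3 write "big",!  // legacy, argumented if
 write $TEST,!         // now prints 1: set by the if above
```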

If you're at the summit, remember that I'll be giving a short talk at 12:30pm today :) Among other things, I'll demonstrate how you can set up your own quality profiles.

Hope this helps!

Hello,

That is not what I meant, sorry. What I meant to ask is whether there is a way to store "primitive" JSON values, not only objects and arrays. For instance, can I:

```
set xxx = ##class(JsonValue).number(2.3)
set xxx = ##class(JsonValue).null()
// or .true(), .false(), .string("some string constant"), etc
w xxx.getType() // would return one of object, array, number, boolean or null
```

Just one thing...

I was at a conference back in June in Barcelona where I had the chance to meet with, among other InterSystems people, Jamie Newton...

And I mentioned to him that this JSON was valid:

{ "": false }

(i.e., the key of the object member is in fact an empty JSON String)

This was some months ago; does ObjectScript now support such keys?

Hello,

I don't understand what you mean.

As you may have seen on the main page, there is an email address to which you can send your code so that it gets analyzed on this very site. Note that this is a compromise: you get the analysis for free, but the code is visible to everyone, which is why open source projects written in COS benefit from it the most.

Somewhat related: in our experience, most such projects still use Studio's export facility, which exports to XML; fortunately, we have a tool to turn those exports into plain source files (https://github.com/litesolutions/cache-import). Once Atelier gains wider adoption, things may change for the better: no more need to massage XML!

Hello,

This is precisely why I'm asking for feedback about default severities :) Do I take it that you think the severity is too high? If so, what do you think it should be?

Note that if you wish, I can set up an account so that you can modify which rules are active and which are not, along with the default severities (just send me a mail). But since you mention fine-tuning: I'd rather the defaults were good enough that the amount of fine-tuning needed is minimized :)

I am a seasoned Java developer myself and I also disagree with many of the rules of SonarQube's Java plugin. But admittedly, I revere type safety and it somewhat shows in the rules :)

----

As to macros, well, you can write a macro as a statement, sure, but the problem I see is that it is an opaque statement. What does this macro do? Does it exit the method? If a variable is passed as an argument, does it modify that variable? Does it increase the complexity? If so, by how much? Basically, this is the reason for the rule. Nothing Java-related :)
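
To make this concrete, here is a made-up example ($$$CHECKSC and the surrounding method are hypothetical, purely for illustration):

```
#define CHECKSC(%sc) if $$$ISERR(%sc) { quit %sc }

ClassMethod DoWork() As %Status
{
    set sc = ..Step1()
    // at the call site, this reads like one plain statement...
    $$$CHECKSC(sc)
    // ...yet it hides both a branch and a possible early return;
    // neither is visible unless the analyzer expands the macro
    set sc = ..Step2()
    quit sc
}
```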

In fact, the plugin may expand macros in the future; right now, however, this is not supported...

The problem with all such analyses is always the same: they fail to account for the single most important factor in real-world applications, and that is memory bandwidth.

Code written for benchmarks has the flaw that it is precisely "written for benchmarks". It is no replacement for real-life code, which accesses memory in ways that such custom code cannot even begin to emulate.

Consider virtualization for a moment. Let's take the x86 world: Intel's VT-x, or AMD's AMD-V. If you look at the Wikipedia entries for these technologies, their core goal is _not_ to make machine code run faster (that is the "easy" part), but to reduce the time it takes for programs in virtualized environments to access memory.

Running code is "easy". Accessing memory is hard... And accessing memory is the key.

----

For completeness, I'll just add this: I am mostly a Java developer, and I know that the most glaring flaw of Java is its poor locality of reference -- accessing any `X` in a `List<X>` is very likely to trigger a page fault/load/TLB update cycle before that particular `X` is available. As a result, when performance comes into play, I look less at the CPU frequency than at the L{1,2,3} cache sizes and the memory bus width and speed. In the long run, the latter three are the deciding factors as far as I'm concerned.

Hello,

As to the release date: I cannot really say more than "end of Q1", sorry. I'm the tech guy, not the project manager :)

As to an option to write your own checks, all I can say is this: there will be a presentation of this plugin at the InterSystems summit in Phoenix, and one goal is to collect user feedback. Including this feature is already under consideration; however, its cost is non-negligible... But man, it'd be nice. In short, I sincerely believe this will be a feature, but maybe not in the very first version!

Again, sorry, I cannot say more and I have already said waaaay too much.

In the meantime, I know the rules are not perfect in any way, so if you have feedback, please send it to me by mail :p My address should be in my profile!

Those are indicators; more precisely, metric evolutions.

Metrics are numeric values: an up arrow means an increase and a down arrow means a decrease; so far, rather obvious.

As to green and red, those represent the perceived direction, which is defined per metric. As you could guess, green means better and red means worse, but for some metrics a _higher_ value is in fact better.

In the image you posted here, the metric on the right-hand side is the technical debt, for which a higher value is considered worse; the other column is the number of lines of code, and it is shown as increasing because I fixed a parser error for a file in that project.

Note that this view displays the trend over 30 days; the frequency at which you run analyses is yours to choose. And I need to update the plugin... There are quite a few false positives, and the technical debt of a few of these projects may drop as a result!

I am theoretically not allowed to show the site, but here goes:

https://demo.cachequality.com

As to "what I do", it is just writing a language plugin, and that implies filling basic metrics (number of files, classes, methods, etc) and writing code checks...

Feel free to have a look around, but please note that I am not a seasoned COS developer; therefore some checks may be grossly {over,under}estimated as to their severity and time to fix... I'm happy to receive any feedback!

Note that I already know that there _are_ false positives on the reports; I'm working to fix them.