Timur Safin · Dec 21, 2022

While I agree that embedding yet another web server into the installation of enterprise software is a really very bad idea, widening the attack surface, and that in a hardened enterprise install it should be avoided at all costs, OTOH I understand the position of developers using the full IRIS kit (not the community edition), for whom this immediate Apache removal might lead to a significant user-experience regression. It would be preferable to "deprecate" it somewhat more smoothly (e.g. change the default to not install it, but still keep it as part of the kit for developer installations).

But all this sounds like you actually need to have 3 kinds of kits available:

- community, with the PWS enabled by default;

- a developer version, with all its theoretical insecurities but with convenient goodies integrated (i.e. Apache available, maybe with VS Code + extensions, and so on);

- and a hardened, production-ready kit, with Apache removed.

Timur Safin · Nov 27, 2020

Interesting article, Alexey!
But we are still eagerly waiting for the 2nd part; when should we expect it?

Timur Safin · Apr 2, 2019

Sorry for any confusion created - that was only a half-baked April 1st joke, as correctly pointed out by Evgeny. I do not yet have such a marvelous compiler (with closures, only modern syntax allowed, etc.)!
Sorry! :)

Timur Safin · Apr 1, 2019

Well, like all lambdas, they are not a silver bullet for those looking for better performance, but neither do they bring any performance overhead. They are simple syntactic sugar for keeping some context close to an operation. Nothing magical, but nothing less.

Timur Safin · Apr 1, 2019

The 1st version of this post used C++ style for keeping captured variables, but our small but agile "design committee" decided to change the approach and use a syntax closer to JavaScript here (because it's more natural and more widely known to ObjectScript programmers), so see the updated post soon.

Timur Safin · Aug 23, 2018

The shorter, not entirely serious answer: #dim is recommended by the COS Guidelines we used in our local community - COS Guidelines

[But whom am I kidding here - these rules were written by myself, and it was my personal push to use this practice here]

Here is the longer answer: I hate dynamic typing systems; they tend to break sooner, and they frequently complicate understanding for your teammates. The #dim construct was a small step forward in making COS a less dynamically and more statically typed language. Originally it was created as a hint for Studio auto-complete, but we always understood that it could be reused by some sort of static analyzer (if one were ever created), which would help find some COS errors sooner, at compile time (or around compilation time). But that has never materialized (at least it had not as of the last time I checked, a couple of years ago).

So, if we could not check something automatically via analyzers, we could enforce it manually via a policy written into the guidelines, and obligatory peer review before commits...
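To make the practice concrete, here is a minimal ObjectScript sketch of the kind of #dim hints those guidelines push for (the method, class, and variable names are hypothetical, made up for illustration):

```objectscript
ClassMethod BuildReport() As %Status
{
    ; #dim records the intended type: Studio uses it for auto-complete,
    ; and a hypothetical static analyzer could use it to flag type
    ; mismatches at (or around) compile time
    #dim sc As %Status = $$$OK
    #dim count As %Integer = 0
    #dim req As %Net.HttpRequest = ##class(%Net.HttpRequest).%New()

    ; without the #dim hints above, 'sc', 'count', and 'req' would be
    ; just untyped locals as far as the tooling is concerned
    Set count = count + 1
    Quit sc
}
```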

Timur Safin · Mar 28, 2017

From what I see in the postinstall script, the only thing you need to do with cache.reg is to create the directory and give it the proper permissions (755 mask), i.e.:

mkdir -p /usr/local/etc/cachesys/cache.reg
chmod 755 /usr/local/etc/cachesys/cache.reg
ccontrol create ...
Timur Safin · Mar 28, 2017

Let me be crystal clear and honest - this is horrible.

[I thought so more than 2 years ago, when Max originally published this approach in Russian, and I still think so today]

When you write some code, you write it not for yourself, not for being modern and trendy; you write it "for the next guy" who will visit it tomorrow. You need to write it as simply as possible, using the most obvious approach.

If you can write something using the same or a similar amount of code but without tricky macros, then you have to write it the simpler way: without macros (as here) or without tricky iterators (as in Max's case). This complexity is just not worth the time your team will lose debugging such code.

Please do not get me wrong - I love JavaScript/TypeScript and all modern things. And I would love to apply constructs as expressive as closures in JavaScript or lambdas in C++ (hmm). But they are not here (yet) in ObjectScript. Many of us tried to lobby for the addition of closures for ages, but the gods of COS had no interest in them.

Though, in my personal opinion, the implementation of handy closure support would not be much harder than the dotted DO statement (and might be based on the same VM token implementation). But I might be wrong in my estimation of the complexity.

Timur Safin · Mar 28, 2017

Thanks for the reminder, Evgeny!
[I hope people who managed to miss the return statement will not miss this series :) ]

Timur Safin · Feb 9, 2017

Threading is a godsend in heavily populated forums. You can get used to flat discussions when there are not many participants, but once more people get involved and more topics are added to the mix, the harder it becomes to navigate a flat thread.

The threading might not be that deep (i.e. a couple of "em"s to the right), but in either form it's better to keep threading.

[The lack of threading is driving me crazy in all modern messengers with groups enabled, like Slack, Skype, and Telegram. Have you ever tried not to get lost in a very popular Telegram or Slack chat room once more than 500 members have joined?]

Timur Safin · Feb 6, 2017

1. When authorization is implemented, you will be able to go and do it yourself;

2. but at the moment (curated mode), one can ask me to be added to the list via a GitHub issue (or via email, if you know my address, and you do :) ).

Hopefully 1 will become possible relatively soon, so 2 will not stand for too long.

Timur Safin · Feb 4, 2017

Indeed, but I'm reluctant to remove it, at least for now; I'd rather use it as a good corner case of system version dependency. We should declare it as Caché 2014-2015 only (or rather, declaratively, <=2015.*).

Certainly, as the original author, you have full rights to unpublish it if you want to. But please wait till we resolve all the authorization issues (or make it use GitHub SSO).

P.S.

Have I mentioned yet that the lowest Caché version on which I want to have CPM working is 2014.1 (where CSP.REST support was introduced)? So there are 2 major versions (2014 and 2015) where this component makes a lot of sense.

Timur Safin · Feb 3, 2017

And those 2 silly projects named 'tsafin~*' were put on the list for this particular reason: I need to have a few components in the registry over which I have direct control, and where I can put such versioning information in the format expected by CPM.

P.S.

Back when I was one of the admins of both the "intersystems" and "intersystems-ru" GitHub repositories, I could just go and commit the correct version support. :) But right now I cannot be so rude anymore; I need to play by the rules, be a gentleman, and ask for a favor.

[Ok, that was a joke - I never intended to be that rude]

Timur Safin · Feb 3, 2017

Well, you remember this is still a "curated" list? So any bit of information I put into the repository has been verified (or created) by me. Originally, the version information comes from the GitHub metadata of a given project. If the project has releases (like webterminal), then the version is there and I use it; if there are no releases, then I put in the default version number 0.0.1 (npm rejects packages without a version triplet).

Once the community starts to use the "package.json inside a class XData" trick, we will have version information as precise as the author wanted (and the XData information will override the release metadata values, so it will be the author's responsibility to keep them updated).

We could try to use automatically generated XData package definitions for each imported GitHub repo, but they will work only if they get committed back to the original source repository. So it still needs some cooperation from the author.
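As an illustration of the "package.json inside a class XData" trick, such a descriptor might look roughly like this sketch (the class name and all metadata values are hypothetical, not taken from any real CPM component):

```objectscript
/// Hypothetical component class carrying its own package metadata
Class Community.Demo.PackageInfo
{

/// npm-style package descriptor embedded as XData; a package manager
/// like CPM could parse this block instead of relying on GitHub
/// release metadata
XData package [ MimeType = application/json ]
{
{
  "name": "demo-component",
  "version": "0.0.1",
  "description": "illustrative package descriptor"
}
}

}
```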

P.S.

It would be much, much easier if eventually Caché (somehow) got native package.json (or generic *.json, or *.yml) support in the class compiler...

Timur Safin · Feb 3, 2017

Well, this is an interesting aspect we did discuss internally. But from a practical point of view (intending to create a working system in the simplest way possible), it was easier for us to start from a system where modern ObjectScript and the rich class library were available. It's just a thousand times easier than without them.

Let's see how far we can get with the ObjectScript part of the story; we should first get some developer attention, fill the repository, grow an audience...

P.S.

And there would be no need to rename it CMPM if some MUMPS support appeared, because the first "C" means "Community" today :)

[Did you see that nice animation I inserted on the CPM cover page? That was done for a reason]

Timur Safin · Feb 3, 2017

Yes, proper dependency tracking is tough. Some modern package managers allow you to "lock" the versions you have downloaded and stick with them forever. I do not see a way multiple versions of the same component could be used in the same namespace, but, on the other hand, for local installations it's pretty easy to have multiple versions installed in different namespaces. A developer could then lock particular versions for selected namespaces/applications.
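For comparison, here is roughly what such a version "lock" looks like in npm-style package managers (all names, versions, and URLs below are made up for illustration, in a simplified lock-file form):

```json
{
  "name": "my-cache-solution",
  "version": "1.0.0",
  "dependencies": {
    "webterminal": {
      "version": "0.0.1",
      "resolved": "https://registry.example.org/webterminal/-/webterminal-0.0.1.tgz"
    }
  }
}
```

Once written, the lock file pins every resolved dependency, so repeated installs reproduce exactly the same set of components.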

In any case, the corner cases here are numerous; that's why we decided to postpone the implementation of dependency tracking till later releases.

Timur Safin · Feb 3, 2017

And you are very brave if you want to install webterminal from inside a webterminal session :)

The 1st side effect I foresee: the web socket will be disconnected, and if anything goes wrong, you might lose it forever. (Such recursive usage of a component is an interesting scenario.)

BTW, when I was preparing my curated set of initial components in the repository, and loaded, and reloaded, and reloaded webterminal into my testing namespace, I noticed an interesting side effect: due to the way compiler projections work in webterminal, it installs successfully only every second time. (I assume there is some ordering issue between RemoveProjection on uncompile and CreateProjection on the immediate compile of classes.)

P.S.

BTW, why not be a "role model" and show others how to work with GitHub issues :) [I mean, please do not hesitate to open an issue in the CPM project repository. It helps.]

Timur Safin · Feb 3, 2017

Keeping each new project in its own special namespace is one of the problems I want to address with CPM. It's ok to mix many tools into a single namespace to combine them into a final solution; CPM will make such a project manageable. (Because you know all the installed assets, with their exact versions, and could wipe them out at any time.)

Timur Safin · Feb 3, 2017

1. No, the IDE is not a show-stopper - JavaScript/NodeJS got developers' attention regardless of the weak JS support in most popular editors (e.g. only with the introduction of TypeScript, many years later, did editors get convenient refactoring support for JS/TS). Yes, an IDE could win some hearts, but there should be something more in the ecosystem, beyond the editor, to trigger wide adoption;

[Though, yes, Studio is a rudimentary editor]

2. Instability of modules is not a blocker either. Given enough eyes in a vibrant, fast-moving community, the instability of some components can be resolved relatively easily. You need a friendly committers policy (i.e. pull requests should be welcomed, without any NIH syndrome), and the toolset should be mature (i.e. role-model components should use modern tools and methodologies like unit tests, CI, peer review, and all that stuff).

P.S.

Have you submitted issues about the problems you've discovered in the components you mentioned?

Timur Safin · Jan 17, 2017

That's exactly the point I want to discuss in the next part (i.e. what should be in the package metadata? what format is used for metadata serialization? how do we mark a dependency, if any? how do we describe anything to be run before/after installation of a package? and so on). This is very similar to what we have in multiple different implementations elsewhere, and it is worth discussing once again if we want to have an easy and usable package manager in the Caché environment.

Timur Safin · Nov 26, 2016

This looks nice, and it might get even more convenient if you added a "shell" for interactive work, and not only an API (you do have READ, WRITE, KILL entry points, but why not wrap them in an interactive shell?).

But here is the bigger concern: I've scanned through your documentation and haven't found any mention of security. You just open ports 5000, 5001, and 5002 on each respective system, accept all incoming requests, don't check any logins or passwords, or challenge phrases, or security tokens, and hope that there are no evil people in the world?

Timur Safin · Nov 24, 2016

Docker is a cool way to deploy and use a versioned application, but, IMVHO, it's only applicable to the case where you have a single executable or a single script, which sets up the environment and invokes some particular functionality. Like running a particular version of a compiler for a build scenario. Or running a continuous-integration scenario. Or running a web front-end environment.

1 function - 1 container with 1 interactive executable is easy to convert to Docker. But not Caché, which is inherently multi-process. Luca has done a great thing in his https://hub.docker.com/r/zrml/intersystems-cachedb/ Docker container, where he has wrapped the whole environment (including the control daemon, write daemon, journal daemon, etc.) in 1 handy Docker container with a single entry point implemented in Go as ccontainermain, but this is, hmm... not a very efficient way to use Docker.

Containers are all about density of CPU/disk resources, and all the beauty of Docker is based on the simplicity of running multiple users on a single host. Given this way of packing a Caché configuration into a single container (each user runs the whole set of Caché control processes), you will get the worst scalability.

It would be much, much better if there were 2 kinds of Caché Docker containers (run via Swarm, for example): a single control container, and multiple user containers (each connecting to its own separate port and separate namespace). But today, with the current security implementation, there would be a big, big problem - each user would see the whole configuration, which is kind of unexpected in the case of Docker container hosting.

Once these security issues are resolved, there will be an efficient way to host Caché under Docker, but not before.