go to post Timur Safin · Apr 12, 2016 On the other hand: if we dealt with SAMPLES and ENSDEMO similarly, and %ALL mapping worked in SAMPLES the same way as for the others, then leaving DOCBOOK alone would not make much sense :) I'd prefer %ALL to work everywhere, without exceptions; that would significantly simplify global mapping creation in a certain package manager you may be aware of.
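Just to make the point concrete, here is a minimal sketch of how such a mapping could be created programmatically; it assumes the Config.MapGlobals API in the %SYS namespace, assumes a %ALL namespace definition already exists, and the global name MyAppData and target database USER are made-up examples:

 ZN "%SYS"                       ; configuration APIs live in %SYS
 Set props("Database") = "USER"  ; database that physically stores the global
 Set sc = ##class(Config.MapGlobals).Create("%ALL", "MyAppData", .props)
 If 'sc { Do $system.Status.DisplayError(sc) }

With a mapping like this in %ALL, ^MyAppData becomes visible from every namespace, which is exactly what a package manager would want to automate.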
go to post Timur Safin · Apr 12, 2016 Or better yet, disable this mode (removal of extra line breaks) globally in Outlook: [The screenshot is in Russian, sorry, but the location of this setting is the same in the English version, so you'll find it easily.]
go to post Timur Safin · Apr 11, 2016 Feel free, Wolf, to request these features via the GitHub repo issues, and, if you are brave enough, pull requests are very welcome! This is the open-source model; things can be changed more easily (if you really want them to be changed).
go to post Timur Safin · Apr 9, 2016 Nobody would complain if you simply provided a direct link to the book on Amazon.
go to post Timur Safin · Apr 6, 2016 My guess is that this is the result of some source control hook activity (Deltanji?)...
go to post Timur Safin · Apr 4, 2016 Very nice tools, but I disagree with the majority of the reported fatal problems (especially in my projects :)). And who said that I could not write $$$MACRO as a single statement? Where did you get that one? So... it would require a lot of fine-tuning to work properly according to the tastes of an experienced Caché ObjectScript programmer. Java idioms ported as-is to ObjectScript just don't work.
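To illustrate the macro point, here is a trivial, made-up example of a macro that is itself a complete, single ObjectScript statement:

#define IncrementCounter(%gbl) Set %gbl = $Get(%gbl) + 1

 ; later in the code the macro is used exactly like any other single statement
 $$$IncrementCounter(^MyAppCounter)
 Write !, ^MyAppCounter

Flagging such a line as a problem, just because a Java-style analyzer does not expect a statement to consist of a macro expansion alone, is exactly the kind of false positive I mean.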
go to post Timur Safin · Apr 4, 2016 Though, returning to the original question, I tend to agree with the recommendation to use ^GLOBUFF, because of the dynamic nature of the data in an Ensemble configuration. It gives no hard numbers, but the statistics are consistent.
go to post Timur Safin · Apr 4, 2016 GLOBUFF is a nice tool for showing the current distribution of cached data (given the current memory limitations, past activity and the current algorithm), but what is more important is to estimate the maximum working set of your application's own globals, which might be quite big if you have a transaction-heavy application. [Worth mentioning that ^GLOBUFF also gives you a rather unique piece of information: the percentage of memory used by process-private variables, which is essential for SQL-heavy applications.]

But for me it's always interesting to know: what amount of memory would be enough to keep all your globals in memory? Use ^%GSIZE:

Global Size Display of /xtreme/db/sber/      10:57 AM  Jul 10 2015

sber.Err                          150
sber.data.AccountD            2833662
sber.data.AccountI            1234223
sber.log.LoadAccount               88
sber.log.LoadAccount1Proc          88
sber.log.Transaction               11
sber.log.generateAccount            3
sber.log.generateAccount0           1
sber.log.generateTransaction        4
sber.log.loadBatch                  7
sber.log.loadBatch0                 1
sber.tmp.Account               999784

TOTAL:                        5068022

So we see that about 5 million 8KB blocks are needed to hold the whole application dataset (5,068,022 × 8KB ≈ 40GB), which may serve as an upper estimate of the memory your application needs. [I intentionally omit the routine buffer here, which is usually negligible compared to the database buffers; it's very hard to find application code which, in its OBJ form, would occupy multiple gigabytes, or even a single GB.]

All in all, for the case shown above we started with 40GB of global buffers for this application, because we had plenty of RAM.
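For completeness, both utilities are interactive terminal routines; a minimal way to run them (the namespace names below are just examples):

 ZN "%SYS"       ; ^GLOBUFF lives in the %SYS namespace
 Do ^GLOBUFF     ; shows which globals currently occupy the global buffer pool

 ZN "USER"       ; ^%GSIZE, being a percent routine, is available from any namespace
 Do ^%GSIZE      ; prompts for a directory and globals, then reports sizes in blocks

Multiply the reported total number of blocks by the block size (8KB by default) to get the figure to compare against your global buffer setting.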
go to post Timur Safin · Mar 25, 2016 Ok, here is the simpler route:
1. If you have performance problems, then run pButtons and call WRC; they will get the "performance team" involved and explain to you what you see :);
2. If you have a budget for new HW, then see item #1 above and ask WRC for advice.
go to post Timur Safin · Mar 25, 2016 I'd expect it to be supported via this set of share tools, but apparently it just sends the link to the given address. Ok, then you need a page scraper, and the easiest tool for this that I know is uKeeper by Umputun (a very famous podcaster in Russian IT, one of the voices of the "Radio-T" podcast).

P.S. It's quite normal for all "share buttons" to just send a link with a title and not the full content; they are all intended to be used in social networks, where text length limits are the major factor. If you need article content delivered to your email inbox, use RSS subscriptions and an RSS-enabled mailer. Like Microsoft Outlook :)
go to post Timur Safin · Mar 15, 2016 As has already been reported elsewhere, there is an RSS feed for the whole community portal: https://community.intersystems.com/rss
Also, Eduard has created a wider feed which includes both StackOverflow [InterSystems-related tags] and Community Portal content: http://www.rssmix.com/u/8177955/rss.xml (though it has a noticeable delay when exporting the Community Portal).
go to post Timur Safin · Mar 15, 2016 This new Q&A interface is not very intuitive and not what we are used to elsewhere (e.g. on SO). If we want to make it resemble StackOverflow more closely, then it should allow commenting on the original question without providing an answer yet.
go to post Timur Safin · Mar 7, 2016 Though, returning to the original topic, about the "Highest Rating" filter: it would be highly unexpected to see it keeping the Newest order; the expectation is that it would return results in ranking order (within which, for equal rankings, it might sort by modification time).
go to post Timur Safin · Mar 7, 2016 No, IIRC, the consensus was about a slightly different formula: sorting by Max(PostDate, LastCommentDate) DESC, which is quite different from "ordered by the latest posts, then posts with the latest comments, DESC". Even a very old post should go to the top of the list in the Newest filtering mode after it gets a new comment. If it's hard to implement such a Max aggregation in Drupal, then I'd try to directly update the post date/time on each new comment...
go to post Timur Safin · Mar 7, 2016 IMVHO, threading is useful, very useful, even in such a micro-context as the discussion after a proposed answer on StackOverflow [as you introduce in this newer change]. Once you kill threading, you easily introduce unnecessary confusion among participants, for whom it becomes harder to comment on the relevant comment. Hope threading will not be killed in the "Article" mode; hope threading will be added later to the Q&A mode, because it will simplify discussion.
go to post Timur Safin · Mar 3, 2016 Yes, the %Projection mechanism is quite a convenient tool for setting things up automatically upon a simple recompilation (a minimal sketch is at the end of this comment). Nikita Savchenko (author of the mentioned UMLExplorer) is now working on an article for the HabraHabr site [in Russian] where he explains usage scenarios of the %Projection facilities in more detail. My assumption is that soon after that it will be translated into English. Here are a few use cases I was involved in lately:
- In the package manager I want an easy-to-use facility which allows creating a package definition even for sources loaded directly from GitHub. Something similar to package.json, but handled by $system.OBJ.Load(). So the solution was to create a projection class which loads the package metadata definition at the end of package compilation: https://github.com/cpmteam/CPM/blob/master/CPM/Sample/PackageDefinition.cls.xml You inherit your class from CPM.Utils.PackageDefinition, insert the package.json-like definition as an XData block, and at the end of the compilation stage you not only have your classes compiled, but also have them registered in the package manager as part of the package [this will be necessary, for example, for package uninstall].
- I've discovered lately that, although WebSockets can be made to work without any installation step (i.e. you simply call the appropriate CSP URL), for a REST handler you have to create a web application (https://community.intersystems.com/comment/3571#comment-3571). The easiest solution was to create a special projection which creates a nested web application upon recompilation: https://github.com/intersystems-ru/iknowSocial/blob/master/TWReader/Setup.cls.xml Regardless of the namespace it was installed to, and of the URL of the default CSP application assigned to that namespace, the nested REST handler remains conveniently available.
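The promised sketch: a minimal projection class under an assumed name Demo.SetupProjection (not taken from the projects above); the exact callback signatures should be checked against %Projection.AbstractProjection in your version:

Class Demo.SetupProjection Extends %Projection.AbstractProjection
{

/// Called at the end of compilation of any class declaring:
///   Projection Setup As Demo.SetupProjection;
ClassMethod CreateProjection(classname As %String, ByRef parameters) As %Status
{
    // one-time setup goes here: register a package, create a web application, etc.
    Write !, "Set up after compiling: ", classname
    Quit $$$OK
}

/// Called when the class is deleted or recompiled, so the setup can be undone
ClassMethod RemoveProjection(classname As %String, ByRef parameters, recompile As %Boolean) As %Status
{
    Write !, "Removing setup for: ", classname
    Quit $$$OK
}

}

Any class that needs this behavior then just declares Projection Setup As Demo.SetupProjection; and the setup/teardown runs automatically on compile and delete.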
go to post Timur Safin · Mar 3, 2016 But certainly, I should admit, any TextMate-grammar-based syntax highlighting system (like those used in Atom or MS Code) will never be as precise as the highlighting in Studio or Atelier.
go to post Timur Safin · Mar 3, 2016 My previous favorite "lightweight" editor was Atom [1], and we even played with integrating it with Caché some time ago. But now, for multiple reasons, that project is stale: the community people who were doing the development are gone, and Atom broke their API multiple times (and still hasn't fixed some fundamental issues, like the lack of a debugger API and various editor limitations). So after all these years I believe that Microsoft Code is a better, faster, and smarter incarnation of Atom nowadays (their editor is much faster, and they already have refactoring and a debugger API implemented). Eventually, after the Atelier API is released, there will be better grounds for a resurrection of similar projects, but with the server side already taken care of via the REST API...

[1] Ok, ok, there is the Sublime Text editor, which was always much, much faster than Atom. But extensions there have to be written in Python (which I unfortunately dislike), so JavaScript-based systems (like Atom, Code or Brackets) are much more preferable for me. At least from a "hacking" point of view.