Alexey Maslov · Mar 22, 2017 Luca, it was clear from the very beginning of the project that we could use VMs, and maybe we'll take this approach in the end. I've just looked at the Docker side, hoping to find a lighter/more reliable alternative. Thank you again.
Alexey Maslov · Mar 22, 2017 Thanks, Dmitry and Luca. Meanwhile I have read some docs and interviewed colleagues who have more Docker experience than I do (though without Caché inside). What I've taken away is that Docker doesn't fit this particular case of mine well, as it mostly concerns deployment of an already developed app rather than new development. The reasons are:
- Only one containerized app on the client's server (our case) doesn't bring as many benefits as several would;
- The Windows issues that Luca already mentioned;
- I completely agree that "data should reside on a host volume..." with only one remark: most likely all this stuff will be maintained at the client's site by not very highly skilled IT personnel. It seems that configuring the Docker/host-volume/etc. setup will be more complex than rolling out a Caché for Windows installation with all possible settings prepared by a %Installer-based class.
Alexey Maslov · Mar 15, 2017 AFAIK, Docker encourages implementing microservices. What about classic Caché-based app deployment: can Docker be useful in this case? By classic I mean a system accessible via several services (ActiveX, TCP sockets, SOAP and REST), multi-tasking, etc., with an initial 1-2 GB database inside. At the moment I have to choose a solution to roll out several dozen rather small systems of this kind. Should I look at Docker technology in this particular case? I am bound to version 2015.1.4. The host OS will most likely be Windows 2008/2012, while I'd prefer to deploy our system on Linux; that's why I am looking for a lightweight way to virtualize or containerize it. Thank you!
Alexey Maslov · Mar 6, 2017 Several years ago our company (SP.ARM) developed the Krasnoyarsk Kray Regional HIS, which was finally deployed using haproxy to balance CacheActiveX traffic load among 7 ECP application servers. This kind of traffic is based on the same protocol as ODBC, so the behavior should be much the same (long sessions via 1972/tcp). To achieve high availability of the proxy server, it was deployed as two identical copies connected with ucarp. Our initial solution for load balancing and high availability of the proxy was based on LVS+keepalived, but the engineers of the maintaining company (Rostelecom) preferred haproxy+ucarp, as they were more comfortable with those tools; we didn't mind.
Alexey Maslov · Mar 5, 2017 Hi Michael, using Studio's keyboard shortcuts heavily, I can add to your list:
Shift-F2 - jump to previous bookmark
F6, Shift-F6 - switch to next/previous open routine/class window
Ctrl-F7 - compile current routine/class
F7 - [re]compile the whole project
Ctrl-F9, F9, Shift-F9 - deal with breakpoints (same logic as with bookmarks)
Ctrl-E, Ctrl-Shift-E - switch to full command/function names and back to shortcuts
Ctrl-Shift-O - open project
Ctrl-O - open routine/class/etc.
P.S. Have not tried Atelier yet.
Alexey Maslov · Feb 22, 2017 Mark, may I ask you for some clarification? You wrote: "As a result THP are not used for shared memory, and are only used for each individual process." What's the problem here? Shared memory can use "normal" huge pages while individual processes use THP. The memory layout on our developers' server shows that it's possible:

# uname -a
Linux ubtst 4.4.0-59-generic #80-Ubuntu SMP Fri Jan 6 17:47:47 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
# tail -11 /proc/meminfo
AnonHugePages:    131072 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:    1890
HugePages_Free:     1546
HugePages_Rsvd:      898
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      243808 kB
DirectMap2M:    19580928 kB
DirectMap1G:    49283072 kB
# ccontrol list
Configuration 'CACHE1'
    directory: /opt/cache1
    versionid: 2015.1.4.803.0.16768
    ...
# cat /opt/cache1/mgr/cconsole.log | grep Allocated
...
01/27/17-16:41:57:276 (1425) 0 Allocated 1242MB shared memory using Huge Pages: 1024MB global buffers, 64MB routine buffers
# grep -e AnonHugePages /proc/*/smaps | awk '{ if($2>4) print $0}' | awk -F "/" '{print $0; system("ps -fp " $3)}'
...
/proc/165553/smaps:AnonHugePages:      2048 kB
UID        PID   PPID  C STIME TTY      TIME CMD
cacheusr 165553   1524 0 Feb07 ?    00:00:00 cache -s/opt/cache1/mgr -cj -p18 SuperServer^%SYS.SERVER
...
Alexey Maslov · Feb 14, 2017 Just to complete the exercise:

--- inst-guid.sh ---
#!/bin/bash
csession CACHE <<EOF | grep -E '[0-9A-F-]{36}'
write ##class(%SYS.System).InstanceGUID()
halt
EOF
--------------------

$ ./inst-guid.sh
8BCD407E-EE5E-11E4-B9C2-2EAEB3D6024F
$
Alexey Maslov · Feb 14, 2017 You can get output from a Caché routine provided it writes something to its principal device (= stdout), e.g. (approach #1):

$ csession CACHE "##class(%SYSTEM.License).CKEY()"
Cache Key display:
Based on the active key file '/vol/cachesys/mgr/cache.key'
    LicenseCapacity = Cache 2017.1 Enterpriser - Concurrent Users:1000000, Anything You Need
    CustomerName = ZAO XX.XXX
    OrderNumber = 9876543210
    ExpirationDate = 10/15/2114
    AuthorizationKey = 3141592653592718281828662607004081
    MachineID =
currently available = 9997
minimum available = 97
maximum available = 1000000
$

as there is no way to pass arbitrary text from a COS routine to the OS directly. To work around this, just incorporate into your shell script a calling sequence like this (approach #2):

#!/bin/bash
csession CACHE <<EOF >output.txt
write ##class(%SYS.System).InstanceGUID()
halt
EOF

After calling it, you will get an output.txt like this:

Node: tdb01.xxxxx.com, Instance: CACHE

USER>
8BCD407E-EE5E-11E4-B9C2-2EAEB3D6024F
USER>

(A missing 'halt' command causes an error.) All you have to do is parse the output. To avoid parsing, you may want to write a small wrapper in COS that provides the output exactly as you need it; then you'll be able to follow approach #1. HTH.
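Such a COS wrapper could be sketched roughly as follows (a minimal illustration only: the routine name %ZGUID and the exact csession invocation form are my assumptions, not taken from the post):

```objectscript
%ZGUID ; hypothetical wrapper routine: print only the instance GUID
    ; writes to the principal device, i.e. stdout when invoked via csession,
    ; so the caller needs no further parsing
    write ##class(%SYS.System).InstanceGUID(),!
    quit
```

With such a routine in place, something like `csession CACHE "^%ZGUID"` should emit just the GUID line, following approach #1.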
Alexey Maslov · Feb 7, 2017 Isn't it O(k) for PATRICIA tries, where k is the average key length? E.g., see https://en.wikipedia.org/wiki/Radix_tree#Comparison_to_other_data_struct...
Alexey Maslov · Feb 3, 2017 Thank you, Timur and other CPM contributors, you are doing a great job. "Freedom and independence are the 2 greatest values which you cannot buy for money. And you should keep them as long as possible." This great slogan should be set in stone and cast in steel! Meanwhile, your (potential) target audience could be wider if you built a FOSS product, keeping the usage of proprietary COS/Caché features as minimized and localized as possible. While we at SP.ARM would hardly drop our home-brewed PM, as it is pretty well integrated with our flagship product (qMS), I personally appreciate the emergence of CPM and wonder why it is not called CMPM?
Alexey Maslov · Jan 31, 2017 If you are able to do it on the Primary, it should be possible to write a line of code which sets the appropriate SQL table privileges, and insert it into the ^ZMIRROR routine under the NotifyBecomePrimary entry point (see http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...).
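A minimal sketch of what such an entry point could look like (the SQL statement, table, and user names are placeholders of my own; error handling is omitted):

```objectscript
ZMIRROR ; user-defined mirror event hooks
    quit
NotifyBecomePrimary() PUBLIC {
    // hypothetical example: re-grant SQL table privileges
    // whenever this mirror member becomes the primary
    do $SYSTEM.SQL.Execute("GRANT SELECT,UPDATE ON My_Schema.MyTable TO SomeUser")
    quit
}
```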
Alexey Maslov · Jan 27, 2017 You may also want to try Monitoring for Disconnect Mode (D mode) (http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...). This mode prevents a Caché process from silently halting on disconnect, which occurs by default when the socket device is the process's $principal. D mode guarantees that the process's error-trap code will handle <DISCONNECT> errors in any case.
Alexey Maslov · Jan 26, 2017 "Decorating" class method calls with macros seems to be possible using #def1arg macros. But how do we automate the substitution of all flavors of instance method calls with macros in a real-world project? The method calls may use different <oref> variables in different parts of a project, or even use ".." or $this:

set res=oref.Method1(x,y,.z)
do Ref.Method1(x,y)
if ..Method1(x,,.z) { ... }
set a=$this.Method1(x,y)

So we need to define a macro with at least two arguments: 1) the oref, 2) all the other method arguments. Having no variable-length macro argument lists, how can we achieve this? Note: we discussed direct calls only; taking into account $method() and indirection (@, Xecute), the situation would be even worse.
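For reference, the #def1arg decoration that does work for class method calls could look like this (class and method names are illustrative only, not from any real project):

```objectscript
#def1arg M1(%args) do ##class(Demo.Cls).OnBeforeM1() do ##class(Demo.Cls).M1(%args)
 ; usage, in command position -- the whole argument list
 ; passes through as a single macro argument:
 $$$M1(x,y,.z)
```

This works only because the class name is fixed inside the macro; the difficulty described above is that instance calls would need the oref as a second, separate macro argument.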
Alexey Maslov · Jan 25, 2017 "Dynamic dispatch is a hit on speed anyway." Sounds like yet another reason not to use it in production code. Macros can help in many cases, but not where $method() and other flavors of indirect method calls are used.
Alexey Maslov · Jan 25, 2017 "Always call "do a.AmethodName()"" I don't think it's a good idea, as each extra nested call inevitably introduces extra performance overhead. Anyway, imagine an existing large app with hundreds of classes and thousands of method calls, some of which may be made indirectly. In the most common case, the ability to make such a transformation (all a.m() to a.Am()) sounds unrealistic, while your approach may fit small projects with simple calling conventions, especially when they are started from scratch.
Alexey Maslov · Jan 25, 2017 1) The following:

USER>do a.Awtf()

do $method(, "OnBefore" _ Method, Args...)
^
<METHOD DOES NOT EXIST>%DispatchMethod+2^Utils.Annotations1.1 *OnBeforenBeforenBeforenBeforenBeforenBeforenBeforenBeforenBeforenBeforenBeforenBefo,Utils.Annotations1
USER 58d1>

is not a great problem, as it can easily be corrected by adding a check whether a method named $e(method,2,*) really exists in the class.

2) The whole approach seems too complicated, as you omitted an important step: all calls of all the app's methods should be switchable from "do a.methodName" to "do a.AmethodName" and back. How would one implement this?
Alexey Maslov · Jan 23, 2017 My answer would be simple: monitor your database growth and do the same at the global level. It can help to estimate future growth and, with some luck, even to point out inefficient solutions that waste space. E.g., we have a task that calculates the sizes of all globals (usually once a night) and stores this data in a global that can be used as a source for visualization and alerting by means of some external toolset. At the moment, integration with Zabbix is implemented.
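The core of such a nightly task could be sketched roughly as follows (a hedged illustration only: the ^zGlobalSize name is a placeholder of mine, the loop relies on the $$AllocatedSize^%GSIZE entry point mentioned in the neighboring post, and a real task would also need error handling and attention to how ^$GLOBAL returns names):

```objectscript
CollectSizes ; store allocated size (MB) of every global in the current namespace
    new g set g=""
    for {
        set g=$order(^$GLOBAL(g)) quit:g=""
        // key by date so that growth over time can be charted/alerted on
        set ^zGlobalSize(+$horolog,g)=$$AllocatedSize^%GSIZE(g)/1048576
    }
    quit
```

An external monitor (Zabbix in our case) can then read the per-date nodes to chart growth per global.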
Alexey Maslov · Jan 23, 2017

set mbSize=$$AllocatedSize^%GSIZE($Name(@Gref))/1048576 // size of global @Gref in megabytes

Pros:
- uses a proven calculation algorithm, the same one the interactive %GSIZE utility uses (we met errors in the %GlobalEdit::GetGlobalSizeBySubscript() calculation, though that was in the rather old v2015.1.2; not sure about 2017.1);
- supports the whole global (Gref="^A") as well as its parts (Gref=$name(^A("subs1")) or Gref=$name(^A("subs1","subs2")) or ...) as arguments.
Con(s):
- performs only a quick global size estimation, though in most cases that's quite enough;
- a precise (slow) version can also be derived from the %GSIZE source, but that's a trickier exercise...
Alexey Maslov · Jan 20, 2017 It seems to be a pretty peculiar case, at least to me. It sounds as if Caché were supplied with an English Unicode locale that does not support all Unicode characters. What for?