The Caché version is 2015.1.4 for Windows x64, running on a Core i5 based laptop. When I have a spare moment, I'll re-run this test on a more powerful server, though I don't expect a noticeable difference.

Let's try to estimate the time for both operations:
1: set max(args(i))=i ~ time to find args(i)'s position in the max array + time to allocate memory for the max(args(i)) node ~ O(ln(length(args(i)))) + O(length(args(i)) + length(i)) ~ O(3*ln(length(args(i))))
2: max<args(i) ~ time to compare max and args(i) ~ O(ln(length(args(i))))

So it seems that 2 should be ~3 times quicker than 1, but we don't know the real coefficients behind those O() estimates. I should confess that the local array node allocation penalty turned out to be higher than I expected.

This speed difference should be even greater if the args(i) values were strings rather than numbers.
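
For context, here is a sketch of variant 1 as I understand it from the thread (the method name is mine): it builds a local array subscripted by value, so the max is simply the last subscript, picked with a reverse $order:

ClassMethod maxSorted(args...) {
  // subscripts of a local array are kept sorted automatically
  for i=1:1:args set max($g(args(i)))=i
  // the highest value is the last subscript; its node stores the index
  set top=$order(max(""),-1)
  quit $lb(top,max(top))
}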

Bob,

I have a couple of questions on DR Async.

1) There is an option of DR Promotion and Manual Failover with Journal Data from Journal Files
( http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=... )
where one is advised to get journal files from one of the failover members.
If both failover members are unavailable, is it possible to get the journal files from another (DR or reporting) Async member?
If so, what preliminary configuration steps should be performed on that member to enable this option in the case of disaster?


2) Another question concerns journal file collection as well. You wrote:

Asyncs receive journal data from the primary asynchronously, and as a result may sometimes be a few journal records behind

Is it true that Asyncs pull journal data from the Primary? If so, the Primary is not aware of whether data has been pulled by an Async or not. Therefore, if an Async has not been getting journal data from the Primary for several days (e.g. due to communication problems), the next unread journal file may already have been purged from the Primary's local storage.

Is it possible to recover from this situation without rebuilding the Async? E.g., if the purged journals are available as part of a file-system-level backup, or if those files are kept on another Async server, can that help?

==
Thanks...

I agree, it's an elegant but not memory-efficient design: the sorted array is built only to pick off its top node. Of course, argument lists are rarely long, and yet another small array doesn't cost much, but in the more general case of searching for the max element in an array of significant size, the following code should be about twice as memory-efficient and maybe a bit faster:

ClassMethod max(args...) {
  // args holds the argument count; args(i) holds the values
  set max=$g(args(1)),im=1
  for i=2:1:args set:max<$g(args(i)) max=args(i),im=i
  quit $lb(max,im)    // return the max value and its position
}

P.S. I was curious whether my variant was really faster and ran some tests. Here are the results, collected using an array filled with random integers; the array was refilled on each test run.

In the table below:
 min - the lower limit of the random values in the array
 max - the upper limit of the random values in the array
 n - the number of elements in the array
 var - variant number (1 - original, 2 - mine)
 dt - average run time (in seconds)
 dtfill - average time to fill the array (for reference only)

min       max       n         var  dt        dtfill
-10000000 10000000  10        1    0.000005  0.000002
-10000000 10000000  10        2    0.000001  0.000002
-10000000 10000000  100       1    0.000047  0.000012
-10000000 10000000  100       2    0.000004  0.000012
-10000000 10000000  1000      1    0.000425  0.000115
-10000000 10000000  1000      2    0.000031  0.000115
-10000000 10000000  10000     1    0.005828  0.002949
-10000000 10000000  10000     2    0.000554  0.002949
-10000000 10000000  100000    1    0.074641  0.031128
-10000000 10000000  100000    2    0.006824  0.031128
-10000000 10000000  1000000   1    1.194625  0.313878
-10000000 10000000  1000000   2    0.069191  0.313878
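
For the curious, here is a sketch of the kind of timing harness I mean (hypothetical name; it assumes the max() method above lives in the same class):

ClassMethod TimeMax(n As %Integer) As %String {
  // fill a local array with n random integers in [-10000000, 10000000]
  for i=1:1:n set a(i)=$random(20000001)-10000000
  set a=n                 // top node holds the count, as args... expects
  set t=$zh               // $ZHOROLOG: elapsed seconds, microsecond resolution
  set res=..max(a...)     // pass the array as the variadic argument list
  quit "dt="_($zh-t)_"s, max="_$list(res,1)_" at i="_$list(res,2)
}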

Using a repository is a good idea for sure, but what about a solution that can help even if an 'intruder' has bypassed it and changed a class, e.g., on a production server? Here is one that answers who changed SomeClassName.CLS; this query can be executed in the %SYS namespace using the System Management Portal / SQL:

SELECT DISTINCT TOP 1 s.UTCTimeStamp, s.OSUsername, s.Username, s.Description
FROM %SYS.Audit AS s
WHERE s.Event='RoutineChange'
AND s.Description LIKE '%SomeClassName.cls%'
ORDER BY s.UTCTimeStamp DESC

It's easy to adapt it to search for the same info about MAC, INT and INC routines.
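
If you prefer to run it from ObjectScript, here is a sketch using %SQL.Statement (execute in the %SYS namespace; the LIKE pattern is just an example):

 zn "%SYS"
 set sql="SELECT DISTINCT TOP 1 UTCTimeStamp, OSUsername, Username, Description"
 set sql=sql_" FROM %SYS.Audit WHERE Event='RoutineChange'"
 set sql=sql_" AND Description LIKE ? ORDER BY UTCTimeStamp DESC"
 set stmt=##class(%SQL.Statement).%New()
 if $system.Status.IsOK(stmt.%Prepare(sql)) {
   set rs=stmt.%Execute("%SomeClassName.cls%")
   while rs.%Next() { write rs.%Get("UTCTimeStamp"),"  ",rs.%Get("Username"),! }
 }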
Enjoy!

Sometimes such strange results are caused by ignoring the fact that usually there are several levels of caching, from high to low:

- Caché global cache

- filesystem cache (on Linux/UNIX only, as the Windows version uses direct I/O)

- HDD controller cache.

So even restarting Caché may not be enough to drop the caches for clean "cold" testing. The tester should be aware of the data volume involved; at a minimum, it should be much larger than the HDD controller cache. mgstat can help figure this out; besides, it can show when you start reading data mostly from the global cache rather than from the filesystem/HDD.
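
For example, a typical way to watch this live (run from the %SYS namespace; ^mgstat prompts for the sample interval and count when called without arguments, and the parameterized form varies by version, so check the docs before scripting it):

 zn "%SYS"
 do ^mgstat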

Hi Murray, thank you for continuing to write very useful articles.

ECP is rather complex stuff, and it seems to be worth additional writing.

Just a quick comment on your point:

For sustained throughput average write response time for journal sync must be:
<=0.5 ms with maximum of <=1 ms.

How can one distinguish journal syncs from other journal records when looking at the iostat log only? It seems that the 0.5-1 ms limit should apply to every journal write, not only to sync records.

And a couple of small questions. You wrote that
1) "...each SET or KILL the current journal buffer is written (or rewritten) to disk. " 
and
2) "On very busy systems journal syncs can be bundled or deferred into multiple sync requests in a single sync operation."
Having mgstat logs for a (non-ECP) system, is it possible to predict the future journal sync rate after scaling horizontally to an ECP cluster? E.g., given average and peak mgstat Gloupds values, can we predict the future journal sync rate? And at what journal sync rate does the bundling/deferring begin?

However, the Newbie can ignore it all, by using Caché SQL

If so, how do you answer the curious Newbie's question: why should I use Caché at all, when a few SQL implementations are available for free nowadays?

Usually such questions are answered like this: Caché provides a Unified Data Architecture that allows several access methods to the same data (bla-bla-bla), and the quickest of them is Direct Global Access. If we answer this way, we should teach how to traverse globals, so you are doing the very right and useful thing!
Just one remark, IMHO: semantics can be more difficult to grasp than syntax. Whether one writes `while (1) { ... }` or `for { ... }` is basically all the same, while choosing $order vs. $query changes the traversal algorithm a lot, and it seems this stuff should be discussed in more detail.
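
To illustrate with a minimal sketch over a hypothetical global (^Demo("a")=1, ^Demo("a","b")=2, ^Demo("c")=3): $order walks sibling subscripts at one level, while $query visits every node holding data, depth-first:

 // $order: iterates first-level subscripts only ("a", then "c")
 set k=""
 for { set k=$order(^Demo(k)) quit:k=""  write "subscript: ",k,! }
 // $query: visits every node with data, at any depth
 set ref="^Demo"
 for { set ref=$query(@ref) quit:ref=""  write ref," = ",@ref,! }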

Checking .LCK files is useless in most cases, as the Caché service auto-starts with the OS. Of course, switching auto-start off is not a problem for a development/testing environment.

Frank touched on another interesting question: how long should WaitToKillServiceTimeout be? If we set it to ShutdownTimeout + Typical_Real_Shutdown_Time, and Caché hangs during OS shutdown, I bet a typical Windows admin won't wait 5 minutes and will finish the job with a hardware reset... Choosing between bad and worse, I'd set
WaitToKillServiceTimeout = Typical_Real_Shutdown_Time
letting the OS force Caché down in the rare cases when it hangs.
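
For reference, the value under discussion lives here (a sketch only; WaitToKillServiceTimeout is a string in milliseconds, and 180000 just stands in for your Typical_Real_Shutdown_Time):

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control]
"WaitToKillServiceTimeout"="180000"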

According to the documentation for Caché v. 2015.1, the ShutdownTimeout parameter ranges from 120 to a maximum of 100,000 seconds, with a default of 300 seconds. In my case its value is 120 seconds, but even in the worst cases I've managed to find in my log, shutdown completed faster, in approx. 40 seconds, e.g.:

05/18/16-18:51:45:817 (3728) 1 Operating System shutdown!  Cache performing fast shutdown.
05/18/16-18:52:27:302 (3728) 1 Forced 11 user processes.  They may not have completed.
05/18/16-18:52:27:302 (3728) 0 Fast shutdown complete
05/18/16-18:52:27:474 (3728) 0 CONTROL exited due to force
05/18/16-18:52:27:630 (3656) 0 JRNDMN exited due to force
05/18/16-18:52:27:614 (3560) 0 GARCOL exited due to force
05/18/16-18:52:27:802 (1064) 0 EXPDMN exited due to force
05/18/16-18:52:27:786 (3760) 0 No blocks pending in WIJ file
05/18/16-18:52:27:880 (3760) 0 WRTDMN exited due to force

although one can see the word "force" in the log... It seems that OS shutdown is a special case of forcing Caché down, without waiting ShutdownTimeout seconds. I plan to adjust the registry value as suggested in this article and check what happens on the next OS shutdown (whenever I decide to do it).

Steve, 

> if I were to see this I would then check that everything closed nicely

How does one perform this check? Every Caché startup after a fast shutdown is accompanied by the following message in the console log:

09/16/16-13:48:38:305 (2132) 2 Previous system shutdown was abnormal, system forced down or crashed

while there are no logged signs of a "normal" forced shutdown (system table dumps, etc.). Maybe this rule has exceptions, but I've never seen them.

==
Thank you,
Alex

Mike,
I fully agree with you that newbies should be aware of dotted syntax and some other old idioms that can be encountered in legacy code, but it seems they also need some guidance from the "oldies": which coding constructions are more or less acceptable, and which should be avoided at all costs (like the $ZOrder and $ZNext functions).

P.S. (off-topic) Could we have met each other in Bratislava in 1992?