Using a repository is certainly a good idea, but what about a solution that helps even if an 'intruder' has bypassed it and changed a class, e.g., on a production server? Here is one that answers who changed SomeClassName.cls; this query can be executed in the %SYS namespace using the System Management Portal/SQL:

SELECT TOP 1 s.UTCTimeStamp, s.OSUsername, s.Username, s.Description
FROM %SYS.Audit AS s
WHERE s.Event='RoutineChange'
AND s.Description LIKE '%SomeClassName.cls%'
ORDER BY s.UTCTimeStamp DESC

It's easy to adapt it to search for the same info about MAC, INT, and INC routines: just change the LIKE pattern accordingly.
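If you'd rather run it programmatically, here is a minimal sketch using dynamic SQL (%SQL.Statement), assuming you are in the %SYS namespace; SomeRoutine.MAC is a placeholder name:

 new sql,stmt,rs
 set sql="SELECT TOP 1 UTCTimeStamp, OSUsername, Username, Description"
 set sql=sql_" FROM %SYS.Audit WHERE Event='RoutineChange'"
 set sql=sql_" AND Description LIKE ? ORDER BY UTCTimeStamp DESC"
 set stmt=##class(%SQL.Statement).%New()
 if stmt.%Prepare(sql)'=1 write "prepare failed",! quit
 set rs=stmt.%Execute("%SomeRoutine.MAC%")  ; placeholder search pattern
 while rs.%Next() {
     write rs.%Get("UTCTimeStamp"),"  ",rs.%Get("Username"),"  ",rs.%Get("Description"),!
 }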
Enjoy!

Sometimes such strange results are caused by overlooking the fact that there are usually several levels of caching, from highest to lowest:

- Caché global cache

- filesystem cache (on Linux/UNIX only, as the Windows version uses direct I/O)

- HDD controller cache.

So even restarting Caché may not be enough to drop the caches for a clean "cold" test. The tester should be aware of the data volume involved: it should be much larger than the HDD controller cache, at the very least. mgstat can help to figure this out; besides, it can show when you start reading data mostly from the global cache rather than from the filesystem/HDD.
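For the record, a minimal sketch of starting mgstat from a Caché terminal. The argument meanings here (sampling interval in seconds, number of samples, output file) are my recollection of the utility and may differ between versions, so check the documentation for yours:

 zn "%SYS"
 ; assumed arguments: 1-second interval, 600 samples (10 minutes), output file
 do ^mgstat(1,600,"c:\temp\mgstat.log")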

Hi Murray, thank you for continuing to write very useful articles.

ECP is rather complex stuff, and it seems well worth additional writing.

Just a quick comment on your point:

> For sustained throughput average write response time for journal sync must be <= 0.5 ms, with a maximum of <= 1 ms.

How can one distinguish journal syncs from other journal records when looking at the iostat log only? It seems that the 0.5-1 ms limit should apply to every journal write, not only to sync records.

And a couple of small questions. You wrote that
1) "...each SET or KILL the current journal buffer is written (or rewritten) to disk."
and
2) "On very busy systems journal syncs can be bundled or deferred into multiple sync requests in a single sync operation."
Having mgstat logs for a (non-ECP) system, is it possible to predict the future journal sync rate after scaling horizontally to an ECP cluster? E.g., if we have average and peak mgstat Gloupds values, can we predict the future journal sync rate? At what rate of journal syncs does this bundling/deferring begin?

> However, the Newbie can ignore it all, by using Caché SQL

If so, how do you answer the curious Newbie's question: why should I use Caché at all, when several SQL implementations are available for free nowadays?

Usually such questions are answered like this: Caché provides a Unified Data Architecture that allows several access methods to the same data (bla-bla-bla), and the quickest of them is Direct Global Access. If we answer this way, we should teach how to traverse globals, so you are doing the very right and useful thing!
There is only one IMHO: semantics can be harder to catch than syntax. Whether one writes `while (1) { ... }` or `for { ... }`, it's basically all the same, while choosing $order or $query changes the traversal algorithm a lot, and it seems this stuff should be discussed in more detail.
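To illustrate the difference, a minimal sketch with a hypothetical global ^Demo:

 kill ^Demo
 set ^Demo(1)="a", ^Demo(1,2)="b", ^Demo(3)="c"

 ; $ORDER iterates sibling subscripts at ONE level only: 1, then 3
 set key=""
 for {
     set key=$order(^Demo(key))
     quit:key=""
     write "first-level subscript: ",key,!
 }

 ; $QUERY walks EVERY node that holds data, depth-first:
 ; ^Demo(1), ^Demo(1,2), ^Demo(3)
 set node=$query(^Demo(""))
 while node'="" {
     write node," = ",@node,!
     set node=$query(@node)
 }

$order is the tool when you know the structure and want one subscript level; $query is for visiting every node regardless of depth.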

Checking .LCK files is useless in most cases, as the Caché service auto-starts at OS startup. Of course, switching auto-start off is not a problem for a development/testing environment.

Frank touched on another interesting question: how long should WaitToKillServiceTimeout be? If we set it to ShutdownTimeout + Typical_Real_Shutdown_Time and Caché hangs during OS shutdown, I bet the typical Windows admin won't wait 5 minutes and will finish the job with a hardware reset... Choosing between bad and worse, I'd set
WaitToKillServiceTimeout = Typical_Real_Shutdown_Time
letting the OS force Caché down in the rare cases when it hangs.
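For reference, this value lives under HKLM\SYSTEM\CurrentControlSet\Control and is expressed in milliseconds; a sketch of setting it, with 60000 (60 seconds) as my assumption for a typical real shutdown time (adjust to what you actually measure):

reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v WaitToKillServiceTimeout /t REG_SZ /d 60000 /f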

According to the documentation for Caché v2015.1, the ShutdownTimeout parameter ranges from 120 to a maximum of 100,000 seconds, with a default of 300 seconds. In my case its value is 120 seconds, but even in the worst cases I've managed to find in my log, shutdown was performed faster, in approx. 40 seconds, e.g.:

05/18/16-18:51:45:817 (3728) 1 Operating System shutdown!  Cache performing fast shutdown.
05/18/16-18:52:27:302 (3728) 1 Forced 11 user processes.  They may not have completed.
05/18/16-18:52:27:302 (3728) 0 Fast shutdown complete
05/18/16-18:52:27:474 (3728) 0 CONTROL exited due to force
05/18/16-18:52:27:630 (3656) 0 JRNDMN exited due to force
05/18/16-18:52:27:614 (3560) 0 GARCOL exited due to force
05/18/16-18:52:27:802 (1064) 0 EXPDMN exited due to force
05/18/16-18:52:27:786 (3760) 0 No blocks pending in WIJ file
05/18/16-18:52:27:880 (3760) 0 WRTDMN exited due to force

while one can see the word "force" in the log... It seems that OS shutdown is a special case of forcing Caché down, without waiting ShutdownTimeout seconds. I plan to adjust the registry value as suggested in this article and check what happens on the next OS shutdown (whenever I decide to do one).

Steve, 

> if I were to see this I would then check that everything closed nicely

How can one do this check? Every Caché startup after a fast shutdown is accompanied by the following message in the console log:

09/16/16-13:48:38:305 (2132) 2 Previous system shutdown was abnormal, system forced down or crashed

while there are no logged signs of a "normal" forcing down (system tables dump, etc.). Maybe this rule has exceptions, but I've never seen them.

==
Thank you,
Alex

Mike,
I fully agree with you that newbies should be aware of dotted syntax and some other old idioms they may meet in legacy code, but it seems they also need some judgment from the "oldies": which coding constructs are more or less acceptable, and which should be avoided at all costs (like the $ZOrder and $ZNext functions).
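For instance, a newbie will meet both of these forms in legacy code; they do the same thing (Sub here is a hypothetical subroutine), but only the second belongs in new code:

 ; legacy dot syntax
 if x=1 do
 . write "x is one",!
 . do Sub

 ; the same logic in modern block syntax
 if x=1 {
     write "x is one",!
     do Sub
 }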

P.S. (off-topic) Could we have met each other in Bratislava in 1992?

Kevin,

If your licenses for both platforms (which differ in the InterSystems license model) allow running Mirroring, the answer would be 'yes'. AFAIK, you need at least Entrée Multi-Server.

In most cases Caché programs don't depend on the bitness of the underlying platform, so you can run all of them in 64-bit. Bitness only comes into play when you interact with external applications, e.g., communicating through the SQL Gateway with an external DSN that uses a 32-bit ODBC driver because a 64-bit version of the driver is not available.

I'd ask another question: Kevin, have you ever tried to configure and run Mirroring on Windows 10? Did it work?

I tried it a couple of months ago (Caché 2015.1.4 on Windows 10) and completely failed: mirror members could not communicate. Having enough experience deploying Mirroring on Red Hat EL and MS Windows Server, I was sure it was a platform issue rather than my own fault. As it was not of great importance to me, I decided not to disturb the WRC and dropped it.