Is that supposed to be hard? I immediately determined the result visually: 20

Vitaliy, you are apparently not a beginner.

the t3 method code is equivalent to the following code

No, because my version can return a non-empty value when called as a function, while yours can't.

absolutely nothing will change fundamentally if you replace "return" with "quit"

Absolutely agree, and that was my initial point, already made in a recent discussion:

  • If "QUIT" was used correctly and one replaces it with "RETURN", no miracle will happen to code readability.
  • In contrast, forgetting about (or being unaware of) possible side effects can make the code harder to understand.
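To illustrate the side effect in question (a minimal sketch; the method names are made up): inside a block, an argumentless QUIT leaves only the block, while RETURN leaves the whole method, so mechanically swapping one for the other can change behavior:

```objectscript
ClassMethod QuitDemo() As %Integer
{
  set x=1
  do {
    quit          ; leaves only the DO...WHILE block
  } while 0
  set x=2
  return x        ; QuitDemo() returns 2
}

ClassMethod ReturnDemo() As %Integer
{
  set x=1
  do {
    return x      ; leaves the whole method: ReturnDemo() returns 1
  } while 0
  set x=2
  return x        ; never reached
}
```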

The sample was inspired by those thoughts. Its initial version was a bit trickier, but InterSystems discourages even using the "old" syntax in discussions, while IMHO even a beginner should be aware of it and its caveats.

As to RETURN, it seems that InterSystems nowadays promotes this command as a more visually clear way to exit methods. Before its addition, the Caché docs mentioned an "Implicit QUIT"; now they tell us about an "Implicit RETURN", while the "Implicit QUIT" is still around.

Evgeny,

Thank you for teaching me COS :)

Just a small note to finish it:

  • The rewritten sample is not a functional equivalent of the original one and will return a different result (once the syntax error is corrected).
  • My sample was not about "old" syntax vs. "new"; I just wanted to emphasize that simply substituting the "return" command for "quit" may or may not improve the readability of an already existing class/routine; it all depends...

Evgeny, I agree with you, there is a difference, though I'm not sure what percentage of community members would give the right answer to the question below without looking into the docs or running the code. AFAIK, voting is impossible here.

What result will be returned by the method:

ClassMethod t1()
 {
  do t2(.a,.b)
  do
  . set a=4,b=5
  . return $increment(a)*$increment(b)
  return $increment(a,-1)*$increment(b,-1)
t2(pA,pB) ;
  set pA=2,pB=3
  return pA*pB
 }

No, because wherever those

 quit sc

ever worked, they were used according to the rule cited above:

outside of a block structure or from within an IF, ELSEIF, or ELSE code block.

This is very personal, but I would never use RETURN, because it introduces backward incompatibility with older Caché versions without any real reason other than using yet another "me too" "modern" feature. Besides, it tempts the developer to bypass the modular-approach rule that insists on one and only one entry point, as well as one exit point, for each functional module. Isn't that about real (rather than "me too") readability?

Again, that's only IMO, while I doubt that it's possible to issue any community-accepted rule set for COS programming style.

Continuing Robert's list:

5) CACHETEMP is always local on APP servers (= ECP clients). In our experience this is a very important feature, because it allows keeping the processing of temporary data local, without extra network traffic and ECP server disk I/O workload. One of the ECP surprises I've run into: while it's relatively easy to achieve high intra-ECP networking speed (as far as ~10Gbit/s hardware is available), the ECP server disk I/O subsystem can easily become a bottleneck unless you carefully spread data processing among the ECP clients.

6) Sergio, you wrote:

I mean, all the data would be stored on disk and will have to be synchronized through the net with the other APP servers.

If you really need to synchronize even temporary data, then simple horizontal scaling with ECP, without some optimization of data processing (see p.5), can be less cost-effective than a comparable vertical scaling solution.

This is the well-known classic approach to simulating asynchronous behavior in a "synchronous only" language. It potentially suffers from two problems caused by the need to check whether something has arrived on the TCP connection:

1) Read x:timeOut (where timeOut>0) causes delays of up to timeOut seconds, which are more or less acceptable for a background job but not for the foreground (e.g., some UI handling).

2) Read x:0 is too unreliable.
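For context, the polling pattern being discussed looks roughly like this (a sketch only; the device name, the 1-second timeout, and the handle/idleWork routines are made up):

```objectscript
poll ; background job: poll a TCP device without blocking forever
 set dev="|TCP|1972"
 use dev
 for {
  read line:1            ; the Read x:timeOut form; wait at most 1 second
  if $test {
   do handle(line)       ; $TEST=1: data arrived before the timeout
  } else {
   do idleWork()         ; $TEST=0: timeout expired; do housekeeping, poll again
  }
 }
 quit
```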

Agree with Dmitry: it helps that "not all those tasks have to be asynchronous".

Andrey, Evgeny:

while it was third-party research, there are some signs of Caché performance tuning: the global cache and routine cache were manually enlarged, and journaling was switched off at the process level. Were similar tuning steps performed for the other DBMSs used in the research?

  • AFAIK, Oracle would not allow stopping transaction logging (= journaling) at all.
  • Only a few people have appropriate knowledge of all the DBMS products listed in the article (maybe Andrey is one of them), and nobody would touch hidden or semi-hidden performance parameters without it.

That's why I doubt that an Oracle engineer (or even a DBA) would approve such "derby" results.

Shouldn't we agree with the author of this sentence?

...a single view of Oracle has lead to vastly different results, like the difference between a circle and a square. Always remember that there is no substitute for a real-world test that simulates the actual behavior of your production systems, using your own production SQL  (http://www.dba-oracle.com/t_benchmark_testing.htm)

As you surely know, a pattern can be assigned to a variable:

 set list=$lb("abc", "c", "aaa", "bbb")
 set result = $$patmatch(list, "1(1(1.""a"".E),1(.E1.""b""))")
 ;set result = ..match(list, "a*,*b")
 zw result // result=$lb("abc","aaa","bbb")
 q

patmatch(pList, pPat)
 new result,i set result=""
 for i=1:1:$ll(pList) if $lg(pList,i)?@pPat set result=result_$lb($lg(pList,i))
 quit result

A customer of ours runs a production system of two mirrored DB servers and 13 APP servers, operating 24x7 with a permissible downtime of <= 2 hours once a month.

The total production DB size is ~4TB. A full backup takes ~10 hours, and its restore ~16 hours. Therefore incremental backups are scheduled nightly and full backups weekly; to avoid a performance penalty, the Mirror Backup server is assigned to run all this activity.

As to the integrity check, it turned out to be useless once the DB size outgrew 2TB: a weekend was no longer enough to run it through, so it was disabled. This doesn't seem to be a great problem because:

  • Enterprise grade SANs are very reliable, the probability of errors is very low.
  • Let's imagine that an integrity error occurred. Which database blocks would most probably be involved? Those actively changed by users' processes, rather than those residing in the "stale" part of the database. So the errors would most likely show up at the application level sooner than they could be caught by an integrity check. What do you think the administrator would do in case of such errors? Stop production for several days to run the integrity check and fix the errors? No, they would just switch from the damaged Primary to the (healthy) Backup server. And what if the Backup server is damaged as well? As they reside on separate disk groups, this scenario means a real disaster, with the best solution being to switch to the Async Backup, which resides in another data center.


This approach was feasible because the only global collation we used in the 8-bit database was Cache Standard. Had we used something else (e.g., Cyrillic2), a more complex solution would have been needed, something like the one Pavel and Ivan briefly described.

I transfer large globals in several concurrent jobs; the average speed is ~8MB/s. As to journals, their transfer is several times slower, though I try to compensate for it using the "economical" approach (with filtration) mentioned above.

My toolkit is still under construction. At the moment it can recode a locally or remotely mounted 8-bit database to a Unicode one. All activity occurs on the Unicode system.

Ivan, 

I'm just curious: how long did it take to transfer 2TB?

IMHO, the main problem in such endeavors is not how to recode the database, as that's always possible somehow, but how to avoid a long interruption of the 24x7 workflow. In my case I'm developing a tool that scans journal files and remotely applies the changes to those globals that have already been (or are being) transferred to the Unicode database. Async Mirroring with dejournal filtration could be used instead, although I don't want to use it for several reasons.