Short answer: yes, you can certainly do this if you want to and the result is valid.  The main downside, in my opinion, is that the backup is then dependent on more technology, so there are more things that could go wrong.  More on that later.

If you're going to do this, though, you really don't want to end up with Online Backup as your backup solution.  The problem with Online Backup is not consumption of resources, but time to restore.  I thought you were going to say you wanted the DR system so that you could shut it down for a couple of hours while you take a cold external backup.  That would be a pretty good reason to do this.

Since mirrored databases record their journal location inside the database, they intrinsically know from which journal file they need to "catch up" (the mirror checkpoint info).  Like all the usual backup solutions, the result is not transactionally consistent in and of itself, but requires journal restore following backup restore to get to a transactionally consistent state.  Mirroring makes this easier via the aforementioned checkpoint and the automatic rollback that happens as part of becoming primary.  Of course, it's the mirror journal files, not the DR's own journal files, that will be used for this, but they live in the same directory, so if you just include that directory in the same backup, you'll have the right stuff if it ever comes to restoring this.

Now more about those downsides.  Backing up a replica means that you are subject to any problems with the replication.  For example, if a database on the DR had a problem and we had to stop dejournaling to it, that could mean your backup isn't good.  You'd worry a bit that you didn't notice, because nobody is running on the DR system.  Or if you add a database to the primary but forget to add the same to the DR, your backup won't have it.  This isn't meant to say it's a bad idea, but it is a consideration.  You want to think a bit about what you're trying to protect against.  You're talking about having a DR, so if you're restoring backup it means that something went wrong with both the primary and the DR.  Is the backup of the DR good in that situation?  If both are in the same physical location and you're backing up in case that location is destroyed, then you're protected.  Or if you're backing up to handle the case of errant or malicious deletion of data, then you're protected.

I don't know what your situation is with the main server, but I'd be curious how the system architect expects backups to take place and how long a backup of the disks is expected to take.  With large global buffers, ExternalFreeze() can be workable in some application environments even if the freeze will last many minutes.  If your operating environment is such that good backups are an absolute must, you might be better off investing in getting external backup working over there.
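For reference, the external-backup pattern looks roughly like this (run in the %SYS namespace; error handling and the snapshot step itself are elided, and details vary by version):

```objectscript
 ; Sketch of an external backup using ExternalFreeze/ExternalThaw.
 ; While frozen, writes are held in global buffers, so the CACHE.DAT
 ; files on disk are stable for the external snapshot tooling.
 Set sc = ##class(Backup.General).ExternalFreeze()
 If sc {
     ; ... take the disk or VM snapshot with your external tooling ...
     Set sc = ##class(Backup.General).ExternalThaw()
 }
```

The key sizing question is whether global buffers can absorb the application's write activity for however long the snapshot takes.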

Again, "1.0" is not a canonical number; "2.2" is.  Both are valid numbers, but only one is in canonical form.  So exactly what you quoted here is the reason for this behavior.

Since both are valid numbers, you don't have to use + for any function that evaluates them as numbers or as boolean.  You do have to use + any time you desire conversion to canonical form (like equality, array sorting, etc).
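A quick illustration of the difference (the unary + forces conversion to canonical form; equality is a string comparison, while arithmetic and boolean contexts convert for you):

```objectscript
 Write "1.0" = 1        ; 0 -- string comparison; "1.0" is not canonical
 Write +"1.0" = 1       ; 1 -- unary + converts "1.0" to canonical 1
 Write "1.0" + 1        ; 2 -- arithmetic already evaluates it as a number
 If "2.2" Write "true"  ; boolean evaluation also needs no +
```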

If you have a true moment-in-time snapshot image of all the pieces of Caché (databases, WIJ, Journals, installation/manager directory, etc), then restoring that image is, to Caché, just as though the machine had crashed at that moment in time.  When the instance of Caché within that restored image starts up, all Caché's usual automatic recovery mechanisms that give you full protection against system crashes equivalently give you full protection in this restore scenario.

Whether a given snapshot can be considered crash-consistent comes from the underlying snapshotting technology, but in general that's what "snapshot" means.  The main consideration is that all of the filesystems involved in Caché are part of the same moment-in-time (sometimes referred to as a "consistency group").  It's no good if you take an image of the CACHE.DAT files from one moment in time with an image of the WIJ or Journals from another.

Most production sites wouldn't plan their backups this way because it means that the only operation you can do on the backup image is restore the whole thing and start Caché.  You can't take one CACHE.DAT from there and get it to a consistent state.  But, in the case of snapshots of a VM guest, this does come up a fair bit, since it's simple to take an image of a guest and start it on other hardware.  

Let me know if you have questions.

Upon return from ExternalFreeze(), the CACHE.DAT files will contain all of the updates that occurred prior to when it was invoked.  Some of those updates may, in fact, be journaled in the file that was switched to (the .003 file in your example), though that doesn't really matter for your question.

BUT, you still need to do journal restore, in general, because the backup image may contain partially committed transactions and journal restore is what rolls them back, even if the image of journals that you have at restore time contains no newer records than the CACHE.DAT files do.  This is covered in the Restore section of documentation, which I recommend having a look at:

There is an exception to this, and that is if you have a crash-consistent snapshot of the entire system, including all CACHE.DAT files, the manager directory, journals, and the WIJ.  In that case, all the crash-consistency guarantees that the WIJ and journals confer mean that when you start that restored image, the usual startup recovery actions will take care of any required roll forward and roll back from journals automatically.  In that scenario with crash-consistent snapshots, ExternalFreeze() wasn't even needed to begin with, because a crash-consistent snapshot is by definition good enough.  However, ExternalFreeze() is typically used for planned external backups because it allows you to restore a subset of databases rather than requiring restore of the entire system.

The biggest thing you want to do is use three-argument $order to collapse from two global references to one:  $ORDER(^[Nspace]LAB(PIDX),1,Data)
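A sketch of the difference inside the loop (assuming ^[Nspace]LAB(PIDX) holds the data you were fetching separately):

```objectscript
 ; Before: two global references per iteration -- $Order, then a get
 Set PIDX = $Order(^[Nspace]LAB(PIDX))
 If PIDX '= "" Set Data = ^[Nspace]LAB(PIDX)

 ; After: one global reference -- three-argument $Order returns the
 ; node's value in Data as a by-product of the traversal
 Set PIDX = $Order(^[Nspace]LAB(PIDX), 1, Data)
```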

In regard to the question about setting BBData or other small variants like that, it may very much be data-dependent and depend on what happens later in the loop that you haven't shown us.  But generally speaking, if you're going to calculate the $piece more than once, you probably do want to store it in a (private) variable.

You can certainly combine multiple conditions with AND and OR operators (&& and ||) if that's what you're asking.  Also, constructs like $case and $select can help (in case you haven't encountered them before).
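For example (the names here are made up for illustration):

```objectscript
 ; $CASE matches one value against literals, with a default after the last colon
 Set Type = $Case(Code, "A":"Admit", "D":"Discharge", :"Other")
 ; $SELECT returns the value for the first true condition
 Set Band = $Select(Age<18:"minor", Age<65:"adult", 1:"senior")
 ; Conditions combine with && and ||
 If (Status="OK") && (Count>0) Write "ready"
```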

Isn't the algorithm you describe going to lead to data discrepancies?  In particular, you have something like a 1 in 2^32 chance of missing an update because it hashes to the same CRC value.  Maybe this was already obvious to you and it's okay for some reason, but I thought I should say something just in case...

Of course you could use a cryptographic hash function, like $system.Encryption.SHAHash(), but that takes substantial computation time, so you might not be any better off than you would be by actually opening the object and comparing the values directly.  It sounds like either way you're already resigned to traversing every object in the source database.  (If the source database is growing, then this entire approach won't work indefinitely, of course.)
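For comparison, both forms are one-liners (check the class reference for the exact signatures on your version):

```objectscript
 ; CRC-32 of a string -- fast, but collisions are possible
 Set crc = $ZCRC(data, 7)
 ; SHA-256 -- cryptographic, but substantially more computation
 Set sha = $SYSTEM.Encryption.SHAHash(256, data)
```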

Alex, I agree with you that I wouldn't recommend using this function for any of the use cases you mention. 

Laurel mentions one use case below, where you wish to preserve the state of a DR or backup before performing an application upgrade or data conversion so that it can be viable as a failback if something goes wrong.

Another case (which we mention in documentation) is if you are performing some maintenance activity on the primary host, particularly a virtual host, whereby you expect that it might interrupt connections to the backup and arbiter, and you'd rather not have disconnects or failovers occur as a result.  This use case raises some questions, like why not just fail over to the backup before that maintenance, but we'll leave that aside.

There's also the principle that it's good to have a way to shut things off temporarily without needing to dismantle the configuration or shut down the instance entirely.  That can be handy in troubleshooting.  

At a fundamental level the worry that you attribute to ObjectScript is not really particular to ObjectScript or any other language, but rather an issue of parallel vs serial processing.  The fundamental issue you're raising here is that when programming at the level of individual database accesses ($order or random gets or whatever) one process is in a loop doing a single database operation, performing some (perhaps minimal) computation, and then doing another database operation.  Some of those database operations may require a disk read, but, especially in the $order case, many will not because the block is already cached.  When it does need to do a disk read, the process is not doing computation because, well, this is all serial; the next computation depends on what will be read.  Imagine the CPU portion of the loop could be magically minimized to zero; even then this process could only keep a single disk busy at a time.  However, the disk array you're talking about achieves 50,000 IOPS not from a single disk, but from multiple disks under some theoretical workload that would utilize them all simultaneously. 

Integrity check and the write daemons are able to drive more IOPS because they use multiple processes and/or asynchronous I/O to put multiple I/Os in flight simultaneously.

Where language, programming skill, and ObjectScript come into play is in how readily a program that wishes to put multiple I/Os in flight can do so.  ObjectScript enables this, primarily, by giving you controls to start multiple jobs (with the JOB command) and good mechanisms to allow those multiple jobs to cooperate.  For a single process, ObjectScript provides $prefetchon to tell the Caché kernel to do disk prefetching asynchronously on behalf of a single process, but that is restricted to only help in sequential-access-type workloads.
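A minimal sketch of the fan-out idea (MyApp.Scanner is an assumed class; each job takes a slice number and slice count and scans its own subscript range, so several disk reads can be in flight at once):

```objectscript
 ; Hypothetical: start 4 background jobs, each scanning 1/4 of the data
 For i=1:1:4 {
     Job ##class(MyApp.Scanner).Scan(i, 4)
 }
 ; The parent would then wait for the jobs to signal completion,
 ; e.g. via a shared global or $System.Event.
```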

Programming constructs that work at a higher level of abstraction (higher than a single global access) may do some parallelization for you.  Caché has some of these in many different contexts, but %PARALLEL in SQL and the work queue manager come to mind.  (In SQL Server, you are already programming at this higher level of abstraction, and indeed it's not surprising that there's parallelization that can happen without the programmer needing to be aware of it.  Under the covers, though, this is undoubtedly implemented with the sorts of programming constructs I've described: multiple threads of execution and/or asynchronous I/O.)
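A hedged sketch of the work queue manager pattern via %SYSTEM.WorkMgr (MyApp.Task and its WorkUnit classmethod are assumptions; check the class reference for exact signatures):

```objectscript
 ; Queue 8 independent work units; worker processes run them in parallel,
 ; keeping multiple I/Os in flight without hand-rolled JOB management.
 Set queue = $SYSTEM.WorkMgr.Initialize()
 For i=1:1:8 {
     Set sc = queue.Queue("##class(MyApp.Task).WorkUnit", i)
 }
 Set sc = queue.WaitForComplete()
```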

Of course, how readily a task can be adapted to being parallelized is highly specific to what the task is doing, and that is application-specific.  Therefore there are tasks for which this disk array has far more capability than an application may use at a given user load.  However, even an application for which no single task would ever utilize anywhere near this much disk capability may, when scaled up to tens of thousands of users, indeed want a disk array like this and make use of it quite naturally.  Naturally, not by virtue of the program being written to parallelize an individual task, but by having thousands of individual tasks running in parallel.

There is no utility to do this.  You're right that to create such a mechanism is just a matter of manipulating the right bits and bytes just so, but it does mean that you'd lose the guarantee that these are identical copies, so we haven't created one.  The only context in which anything like this is available is the special case of converting shadow systems to mirror systems where we do have a migration utility that doesn't require completely resynchronizing the databases.

This is pretty clearly a mistake in the definition of the Search custom query.  We will look into the history a bit more and correct it.  Since the (custom query) Execute method defines the expected arguments, invocation through a resultset works.  Beyond the understandable confusion you had, Mike, it makes sense that this could cause other things not to work, as Dmitry illustrates.

You might want to take a look at the List query in %SYS.Journal.Record.  That's a much nicer interface for searching a journal in my opinion.  Also, I suspect you'll find it performs better for most use cases. 
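A hedged sketch of invoking that class query through %SQL.Statement (the query's parameters and column names vary; check the %SYS.Journal.Record class reference before relying on this):

```objectscript
 Set stmt = ##class(%SQL.Statement).%New()
 Set sc = stmt.%PrepareClassQuery("%SYS.Journal.Record", "List")
 Set rs = stmt.%Execute()
 While rs.%Next() {
     ; examine the record via rs.%Get("...") per the query's columns
 }
```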

Hopefully someone will chime in with real-life numbers, but I thought it would be helpful to take you through the principles at play to guide your thinking...

1. With any mirror configuration that is going over a WAN (for failover or just DR), you're going to need to ensure sufficient bandwidth to transfer journals over the network at the peak rate of journal creation.  This is application- and load-specific of course, so it is derived from measuring a reference system running that application.  It's important to base this on the peak journal creation rate, not the average, leaving plenty of room for spikes, additional growth, etc.

2016.1 introduces network compression for journal transfer, and that can substantially reduce bandwidth (70% or more for typical journal contents).  Although it can add computation latency on top of the network latency you'd consider in #2 below, if you're already going to use SSL encryption, compression may actually save some latency compared to SSL encryption alone.  See documentation on Journal data compression.

2. With failover members in different data centers, latency can be a factor for certain application events.  Specifically, it's a factor when an application uses synchronous commit mode transactions or the journal Sync() API to ensure that a particular update is durably committed.  That requires a synchronous round trip to the backup, which of course incurs any network latency.  This is discussed under Network latency considerations.

3. You'll need a strategy for IP redirection when failover occurs.  For an intro to the subject, read Mirroring Configurations For Dual Data Centers and Geographically Separated Disaster Recovery.  Then see Mark Bolinsky's excellent article here on the community.

4. You'll need a location for the arbiter that is in neither of the two data centers, as discussed in Locating the Arbiter to Optimize Mirror Availability.

Not anything else.  True values are any that evaluate numerically as non-zero.  Strings evaluate numerically by treating the beginning of the string as a number until a non-numeric character is encountered.  So "1 avocado" evaluates numerically as 1 and "-.04abc" evaluates numerically as -.04.  Both of these strings are true.  "basil" is just as false as the null string.
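You can see this directly with the double-NOT operator, which forces boolean evaluation:

```objectscript
 Write ''"1 avocado"   ; 1 -- leading "1" makes it non-zero, so true
 Write ''"-.04abc"     ; 1 -- leading -.04 is non-zero, so true
 Write ''"basil"       ; 0 -- no leading number, evaluates as 0
 Write ''""            ; 0 -- null string is false
```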

For more discussion see the docs here:


Well, applications can audit to the audit database using $system.Security.Audit().  That would provide the rollback protection described and also the other benefits that the CACHEAUDIT database affords.
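A hedged sketch of the call (the event identified by Source/Type/Name must already be defined and enabled, and the argument order should be verified against the %SYSTEM.Security class reference for your version; all names here are made up):

```objectscript
 ; Write a custom application event to the CACHEAUDIT database
 Set ok = $SYSTEM.Security.Audit("MyApp", "Orders", "OrderDeleted", orderId, "User deleted an order")
```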

I think it would be difficult to form a general-purpose rollback handler that would be simple enough to be the most usable solution.  Rollbacks aren't necessarily like exceptions.  Speaking very generally, there are two main cases, I think:

 a. The transaction is in a function you're writing. You already have complete control so you don't need a special handler.  

 b. The transaction is in a function you're calling, or in a function that function calls.  Remember that transactions can be nested and there could be many transactions inside the callee.  Your code doesn't know whether the callee's rollback was expected or not, and doesn't really have any context around what is being rolled back, so I'm not sure how you'd have a sensible handler; it would be very non-local.  In the end, your code is going to get some indication of success from any function it calls, and if callers of a particular function have some need to know whether a rollback was performed, then I suspect it's specific to that function and up to it to return the necessary info to its callers in a way that is appropriate within the interface it provides.