Short answer: yes, you can certainly do this, and the result is valid.  The main downside, in my opinion, is that the backup then depends on more technology, so there are more things that could go wrong.  More on that later.

If you're going to do this, though, you really don't want to end up with Online Backup as your backup solution.  The problem with Online Backup is not consumption of resources, but time to restore.  I thought you were going to say you wanted the DR system so that you could shut it down for a couple of hours while you take a cold external backup.  That would be a pretty good reason to do this.

Since mirrored databases record their journal location inside the database, they intrinsically know from which journal file they need to "catch up" (the mirror checkpoint info).  Like all the usual backup solutions, the result is not transactionally consistent in and of itself, but requires journal restore following backup restore to get to a transactionally consistent state. Mirroring makes this easier via the aforementioned checkpoint and the automatic rollback that happens as part of becoming primary.  Of course it's the mirror journal files, not the DR's own journal files, that will be used for this, but they live in the same directory, so if you just back that up in the same backup, you'll have the right stuff if it ever comes to restoring this.

Now more about those downsides.  Backing up a replica means that you are subject to any problems with the replication.   For example, if a database on the DR had a problem and we had to stop dejournaling to it, that could mean your backup isn't good.  You'd worry a bit that you didn't notice, because nobody is running on the DR system.  Or if you add a database to the primary but forget to add the same one to the DR, your backup won't have it.  This isn't meant to say it's a bad idea, but it is a consideration.   You want to think a bit about what you're trying to protect against.  You're talking about having a DR, so if you're restoring a backup it means that something went wrong with both the primary and the DR.  So is the backup of the DR good in that situation?   If both are in the same physical location and you're backing up in case that location is destroyed, then you're protected.  Or if you're backing up to handle the case of errant/malicious deletion of data, then you're protected.

I don't know what your situation is with the main server, but I'd be curious how the system architect expects backups to take place and how long a backup of the disks is expected to take.  With a large global buffer pool, ExternalFreeze() can be workable in some application environments even if the freeze lasts many minutes. If your operating environment is such that good backups are an absolute must, you might be better off investing in getting external backup working over there.
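For reference, the usual external-backup pattern with the documented Backup.General API looks roughly like this (a sketch only; run from the %SYS namespace, error handling simplified, and the snapshot step is a placeholder for whatever your storage or VM tooling provides):

```objectscript
 ; Sketch: freeze writes, take the external snapshot, thaw.
 ; ExternalFreeze()/ExternalThaw() return a %Status (1 on success).
 SET sc=##class(Backup.General).ExternalFreeze()
 IF 'sc { WRITE "freeze failed",! QUIT }
 ; ... trigger the storage/VM snapshot here (placeholder) ...
 SET sc=##class(Backup.General).ExternalThaw()
 IF 'sc { WRITE "thaw failed",! }
```

The point being that the freeze window only has to cover the snapshot trigger itself, not the full copy of the disks.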

Again, "1.0" is not a canonical number; "2.2" is.  Both are valid numbers, but only one is in canonical form.  So what you quoted here is exactly the reason for this behavior.

Since both are valid numbers, you don't have to use + for any operation that evaluates them as numbers or as booleans.  You do have to use + any time you want conversion to canonical form (for equality comparison, array subscript sorting, etc.).
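To illustrate in a terminal session (a sketch; `=` compares strings, while `+` and arithmetic force canonical numeric form):

```objectscript
USER>WRITE "1.0"=1        ; string comparison: "1.0" vs "1"
0
USER>WRITE +"1.0"=1       ; + converts to canonical form first
1
USER>WRITE "1.0"+1        ; arithmetic already treats "1.0" as a number
2
USER>IF "1.0" WRITE "true"  ; boolean context: numeric, no + needed
true
```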

If you have a true moment-in-time snapshot image of all the pieces of Caché (databases, WIJ, Journals, installation/manager directory, etc), then restoring that image is, to Caché, just as though the machine had crashed at that moment in time.  When the instance of Caché within that restored image starts up, all Caché's usual automatic recovery mechanisms that give you full protection against system crashes equivalently give you full protection in this restore scenario.

Whether a given snapshot can be considered crash-consistent comes from the underlying snapshotting technology, but in general that's what "snapshot" means.  The main consideration is that all of the filesystems involved in Caché are part of the same moment in time (sometimes referred to as a "consistency group").  It's no good to pair an image of the CACHE.DAT files from one moment in time with an image of the WIJ or journals from another.

Most production sites wouldn't plan their backups this way because it means that the only operation you can do on the backup image is restore the whole thing and start Caché.  You can't take one CACHE.DAT from there and get it to a consistent state.  But, in the case of snapshots of a VM guest, this does come up a fair bit, since it's simple to take an image of a guest and start it on other hardware.  

Let me know if you have questions.

Upon return from ExternalFreeze(), the CACHE.DAT files will contain all of the updates that occurred prior to when it was invoked.  Some of those updates may, in fact, be journaled in the file that was switched to (the .003 file in your example), though that doesn't really matter for your question.

BUT, you still need to do journal restore, in general, because the backup image may contain partially committed transactions, and journal restore is what rolls them back, even if the image of journals that you have at restore time contains no newer records than the CACHE.DAT files do.  This is covered in the Restore section of the documentation, which I recommend having a look at.

There is an exception to this, and that is if you have a crash-consistent snapshot of the entire system, including all CACHE.DAT files, the manager directory, journals, and the WIJ.  In that case, all the crash-consistency guarantees that the WIJ and journals confer mean that when you start that restored image, the usual startup recovery actions will take care of any required roll forward and roll back from journals automatically.   In that scenario with crash-consistent snapshots, ExternalFreeze() wasn't even needed to begin with, because a crash-consistent snapshot is by definition good enough.  However, ExternalFreeze() is typically used for planned external backups because it allows you to restore a subset of databases rather than requiring restore of the entire system.

The biggest thing you want to do is use three-argument $order to collapse from two global references to one:  $ORDER(^[Nspace]LAB(PIDX),1,Data)
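As a sketch of what that loop could look like (names like Nspace, PIDX, and BBData follow your example; the loop body is a placeholder):

```objectscript
 ; Three-argument $ORDER: the third argument receives the node's value,
 ; so each iteration does one global reference instead of two.
 SET PIDX=""
 FOR {
     SET PIDX=$ORDER(^[Nspace]LAB(PIDX),1,Data)
     QUIT:PIDX=""
     ; Data now holds ^[Nspace]LAB(PIDX) with no second global fetch
     SET BBData=$PIECE(Data,"^",2)  ; cache a $PIECE you'll reuse below
     ; ... rest of loop body ...
 }
```

Note the value argument is only set when the node found actually has data, which is fine here since you're iterating over nodes you expect to hold values.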

Regarding the question about setting BBData or other small variants like that, it may well be data-dependent and depend on what happens later in the loop that you haven't shown us.  But generally speaking, if you're going to calculate the $p more than once, you probably do want to store it in a (private) variable.

You can certainly combine multiple conditions with AND and OR operators (&& and ||) if that's what you're asking.  Also, constructs like $case and $select can help (in case you haven't encountered them before).
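A few quick illustrations (a sketch; x, y, score, and code are made-up variables):

```objectscript
 ; Combining conditions with && and ||
 IF (x>0)&&(y>0) { WRITE "both positive",! }

 ; $SELECT returns the value for the first true condition
 SET grade=$SELECT(score>=90:"A",score>=80:"B",1:"C")

 ; $CASE matches a single expression against values; ":" is the default
 SET word=$CASE(code,1:"one",2:"two",:"other")
```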

Isn't the algorithm you describe going to lead to data discrepancies?  In particular, you have something like a 1 in 2^32 chance of missing an update because the new value hashes to the same CRC as the old one.  Maybe this was already obvious to you and it's okay for some reason, but I thought I should say something just in case...

Of course you could use a cryptographic hash function, like $system.Encryption.SHAHash(), but that takes substantial computation time, so you might not be any better off than you would be by actually opening the object and comparing the values directly.  It sounds like either way you're already resigned to traversing every object in the source database.  (If the source database is growing, then this entire approach won't work indefinitely, of course.)
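If you did go that route, the comparison would look roughly like this (a sketch; sourceValue and targetValue are placeholders for the data you'd be comparing):

```objectscript
 ; Compare two values by SHA-256 digest instead of CRC.
 ; $SYSTEM.Encryption.SHAHash(bits,data) returns the binary hash.
 SET h1=$SYSTEM.Encryption.SHAHash(256,sourceValue)
 SET h2=$SYSTEM.Encryption.SHAHash(256,targetValue)
 IF h1'=h2 { WRITE "values differ",! }
```

Collisions become astronomically unlikely at 256 bits, but as noted above, hashing both sides costs about as much as just comparing the values directly.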

Alex, I agree with you that I wouldn't recommend using this function for any of the use cases you mention. 

Laurel mentions one use case below, where you wish to preserve the state of a DR or backup before performing an application upgrade or data conversion so that it can be viable as a failback if something goes wrong.

Another case (which we mention in documentation) is if you are performing some maintenance activity on the primary host, particularly a virtual host, where you expect that it might interrupt connections to the backup and arbiter, and you'd rather not have disconnects or failovers occur as a result.  This use case raises some questions, like why not just fail over to the backup before that maintenance, but we'll leave that aside.

There's also the principle that it's good to have a way to shut things off temporarily without needing to dismantle the configuration or shut down the instance entirely.  That can be handy in troubleshooting.