Personally, I would definitely recommend using variant 1 with *. It seems to me shorter, cleaner and clearer.

I would not worry about the speed of evaluating * (just a feeling - I certainly have no numbers or proofs about performance).

If I am correct and remember the ISC courses well, the "global directory/list of global vectors" is held in memory for each process while it is in a certain namespace, and therefore should be evaluated quickly.

Additionally, if you need to add a new global in the future (such as qAuditS for streams), then the current definition will cover it automatically, without updating the map table. However, a similar mapping for the different LIVE / TEST / EDU / CLONE / ... instances should always be done by scripts, automatically, during the upgrade of every instance.
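
Creating such a wildcard mapping by script looks roughly like this (just a sketch - the LIVE namespace, the qAudit* prefix and the AUDITDB database are illustrative names only, and it has to run in %SYS):

MapAuditGlobals() ; map all qAudit* globals in LIVE to the AUDITDB database
 N props,sc
 S props("Database")="AUDITDB"
 S sc=##class(Config.MapGlobals).Create("LIVE","qAudit*",.props)
 Q sc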

Just some hints. I wrote a few helper functions for similar purposes.

MaskToPattern(mask,sw) ; mask with * and ? -> M pattern
 ; $$MaskToPattern^Idea.System.ZIF("ABC?D*") -> 1"ABC"1E1"D".E
 ; sw .. used for contains test (.E on both ends) : $$MaskToPattern^Idea.System.ZIF("ABC?D",1) -> .E1"ABC"1E1"D".E

and

MATCH(input,mask) ; matches input string against search mask

-> $$MATCH^Idea.System.ZIF("CSVZ","CST*")=0, $$MATCH^Idea.System.ZIF("CSVZ","CSV*")=1
MATCHOR(input,masklist) ; matches input string against list of search masks with OR condition (at least one)

-> $$MATCHOR^Idea.System.ZIF("CSVZ","CSV*,PF*")=1
MATCHAND(input,masklist) ; matches input string against list of search masks with AND condition (all must fit)

-> $$MATCHAND^Idea.System.ZIF("CSVZ","CSV*,PF*")=0


My implementation:

MaskToPattern(mask,sw) N pattern,char,pos S pattern="",char="" F pos=1:1:$L(mask) D
 . I $E(mask,pos)="*" S pattern=pattern_$S(char="":"",1:"1"""_char_"""")_".E",char="" Q
 . I $E(mask,pos)="?" S pattern=pattern_$S(char="":"",1:"1"""_char_"""")_"1E",char="" Q
 . S char=char_$E(mask,pos)
 S pattern=pattern_$S(char="":"",1:"1"""_char_""""),char="" S:$G(sw) pattern=".E"_pattern_".E"
 Q pattern

MATCH(input,mask) Q input?@$$MaskToPattern^Idea.System.ZIF(mask)
MATCHOR(input,list) N ok,pie,mask S ok=0 F pie=1:1:$L(list,",") S mask=$P(list,",",pie) I mask'="",$$MATCH^Idea.System.ZIF(input,mask) S ok=1 Q
 Q ok
MATCHAND(input,list) N ok,pie,mask S ok=1 F pie=1:1:$L(list,",") S mask=$P(list,",",pie) I mask'="",'$$MATCH^Idea.System.ZIF(input,mask) S ok=0 Q
 Q ok

Thanks Alexey. Sure. It depends on whether you have Latin1/Latin2 characters in subscripts or not. If not, then everything is much easier and can be done by a simple $QUERY loop.

In case you have

^GLOBAL("Latin1/Latin2 subscript 1", large subnodes) 

^GLOBAL("Latin1/Latin2 subscript 2", large subnodes) 

then it is not trivial to "rename" the whole ^GLOBAL("Latin1/Latin2 subscript 1") node under ^GLOBAL("UTF subscript 1").

1) there is no easy/cheap way to "rename" a node (kill + merge + kill is not the solution)

2) even if some "rename" solution were implemented, the $QUERY loop would be confused

- the new ^GLOBAL("UTF subscript 1") may collate anywhere

I do remember my discussion with Andreas about that. Andreas claimed to me that a "RENAME" command was part of the Caché product :-) and found out later that he had been confused by some internal ISC discussions about this topic.

Dmitry is absolutely right. I had to deal with this task quite a long time ago. I gave a presentation about this topic at our M&Caché technology group Czech&Slovak meeting back in 2004.

Just a few points from my presentation:
- we had an 8-bit database with data in Latin2
- we had a minimum of $listbuild data in those days
- if it is possible due to the small amount of data -> use simple export/import
  
  1) export data to external file on 8bit instance
  2) import data from external file on Unicode instance 
  
- and let the NLS settings do the work (everything will be done by the appropriate NLS settings for files)

We had databases with plenty of data
- our approach was to do as many "in-place" conversions as possible
- one can dismount the CACHE.DAT on the original 8-bit instance and mount the same CACHE.DAT on the Unicode instance
- we categorized globals into 3 categories

  0 - completely without Czech characters
  1 - Czech characters only in data
  2 - Czech characters in data and subscripts
  
- group 0 : no action needed
- group 1 : wrote a $QUERY loop and converted data in place (a sketch follows after this list)
- group 2 : global export/import

- group 0 was pretty big : fine, no action = super extremely fast :-) 
- group 2 was pretty small : fine
- group 1 was the crucial part
  * we used multiple jobs (carefully coordinated)
  * one has to skip data that must not be converted (BLOBs, for example)
  * one has to be careful with repeatability in case of a "crash"
    - applying $ZCVT more than once may lose some characters <->  $ZCVT($ZCVT(input,"I","Latin2"),"I","Latin2") is wrong (or at least it was in 2004)
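
For group 1, the in-place loop was basically as simple as this (a minimal sketch with a hypothetical ^GLOBAL name, without the BLOB skipping and restart handling mentioned above):

ConvertData(gbl) ; group 1 sketch: convert Latin2 data to Unicode in place, subscripts untouched
 ; gbl is the global name, e.g. "^GLOBAL"; run it once only - repeating it
 ; would apply $ZCVT to already converted data (see the note above)
 N node
 S node=gbl
 F  S node=$Q(@node) Q:node=""  S @node=$ZCVT(@node,"I","Latin2")
 Q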

Hi Alexey,

- yes, exactly, we scan the queue during backup node startup

- the code is separated and ready to be delayed/called manually later (but we never had to do this, except while testing)

- we never had a timing issue and did not worry about whether it takes a few seconds or a few minutes

- in fact, the whole process has quite a few substeps

  a) scan all changed "roles" and update them (together with updating necessary resources)

  b) scan all changed "users" and update them (together with updating necessary resources)

  c) also update changed passwords (ssh, hush, a little bit tricky/dirty)

  d) send a notification e-mail to administrators/monitoring SW that "a D/R scenario happened and this XY instance on that XY machine is up"

Pavel

We also had to deal with this problem. As mentioned before, Caché itself does not support this automatically due to the nature of CACHESYS.

First of all, I think that any attempt to write reliable "scripts to export users, roles, etc. periodically from the master instance into files and import them into the other(s)" will fail. Whatever backup interval you choose (day/hour/10 min), it may fail at the worst moment. You would have to ensure the file is transferred correctly, that the file systems on both machines are up and ready, etc.

Of course, this may work (whether manually or automatically) on sites with a relatively small number of users/roles and a relatively small number of changes.

Assumptions in our installations:
1) we do not allow editing users/roles via SMP
- there are plenty of reasons for this

2) we use our own user/role management system
- we need more complex functionality than SMP/Caché security offers
- we have to deal with hundreds of changes in users/roles daily in the production system
- we add/change/disable/enable users/roles there on the fly based on different sources
- for example: based on tests completed by end users in the EDU environment, or by interfacing with the central system for processing role requests
- almost nothing is done manually by application administrators
- this is "pretty much alive"

3) we have our own data structures for rules/roles/rights
- we can simply let them be mirrored by appropriate mapping

4) we only write the most important information into users/roles, using the API in the Security.Users/Security.Roles classes

- every change is automatically applied to Caché security (users/roles/privileges/resources)


Principle of our solution:
1) we keep a "MirrorQueue" of all users/roles changed in the system
- we can do this because we have full control over changes in our own user/role management system and we can log them
- this "MirrorQueue" is simply mirrored

2) when the mirror/backup comes up, we use the ZMIRROR hooks (e.g. NotifyBecomePrimary())
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

- we simply scan the "MirrorQueue" and apply all user/role changes (using the same API in the Security.Users/Security.Roles classes) - a sketch follows below

3) when the mirror/backup is up (and has become primary), all our user/role data are synchronised
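
Roughly like this - a minimal sketch of the ^ZMIRROR routine only; the ^MirrorQueue layout, the APP namespace and the property names are hypothetical illustrations, and the real apply logic (error handling, queue cleanup, resources) is of course richer:

NotifyBecomePrimary() ; mirroring calls this entry point once the member has become primary
 N id,type,name
 ZN "APP"                          ; hypothetical application namespace holding ^MirrorQueue
 S id=""
 F  S id=$O(^MirrorQueue(id)) Q:id=""  D
 . S type=$LG(^MirrorQueue(id),1)  ; "USER" or "ROLE" (hypothetical layout)
 . S name=$LG(^MirrorQueue(id),2)  ; user or role name
 . I type="USER" D ApplyUser(name)
 . I type="ROLE" D ApplyRole(name)
 Q
ApplyUser(name) ; re-apply one user via the Security.Users API
 N $NAMESPACE,props,sc
 S $NAMESPACE="%SYS"               ; the Security.* classes live in %SYS
 S props("Enabled")=1              ; in reality filled from our own mirrored user data
 S sc=##class(Security.Users).Modify(name,.props)
 Q
ApplyRole(name) ; analogous, via Security.Roles
 N $NAMESPACE,props,sc
 S $NAMESPACE="%SYS"
 S props("Description")="..."      ; in reality filled from our own mirrored role data
 S sc=##class(Security.Roles).Modify(name,.props)
 Q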

Feel free to ask for further details.  
 

Excellent and inexhaustible topic, Vitaliy. I have been interested in this since I started with databases, especially with M and globals.

My diploma thesis "M-technology and temporal databases" was written at the Faculty of Mathematics and Physics, Charles University in Prague, back in 1998.

The topic is generally much more extensive. What is described here is the "valid time" aspect: when the data are valid.

But there is also the possibility of looking at the "transaction time": when the data were entered into the database. So you can ask "what was the rate for this security, valid at time V1, using the data available at time T1".
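
As a tiny illustration of such a bitemporal question in globals (the ^RATE structure, the numeric time values and the code are just an assumed sketch, nothing from the thesis):

RateAt(sec,v1,t1) ; hypothetical storage ^RATE(security,validFrom,txTime)=rate
 ; returns the rate valid at time v1 as it was known at transaction time t1
 ; (assumes numeric time values, e.g. +$H day counts)
 N vf,tx
 S vf=$S($D(^RATE(sec,v1)):v1,1:$O(^RATE(sec,v1),-1)),tx=""
 F  Q:vf=""  S tx=$S($D(^RATE(sec,vf,t1))#10:t1,1:$O(^RATE(sec,vf,t1),-1)) Q:tx'=""  S vf=$O(^RATE(sec,vf),-1)
 Q $S(vf=""!(tx=""):"",1:^RATE(sec,vf,tx))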

As mentioned - a great and interesting topic. And a nice sample :-)