How to Sync User Accounts, Resources, Roles, WebApps, Tasks across to Mirror Members

Hi,

We have mirroring established between Node 1 and Node 2, and we have enabled journalling for the "cachesys" database. However, we don't see the user accounts, roles, and resources created on Node 1 (the favoured primary) reflected on Node 2. Is creating them manually again the only option, or is there a way to sync them? Would adding %SYS to the mirror be a possible solution? It would be great to hear from anyone who has faced this, as we have an issue where the team is locked out during failovers.

Best Regards,

Arun Madhan


Answers

I don't believe it's possible for the CACHESYS database (the one that sits behind the %SYS namespace) to be added to the mirror, because each member of the mirror needs to store instance-specific data there.

I've seen sites write their own scripts to export users, roles, etc. periodically from the master instance into files and import them into the other(s), for example using the Export method of the Security.Users class.
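As a rough sketch of that approach (the class name and directory are invented here, and the exact Export/Import signatures should be checked against the Security.Users, Security.Roles and Security.Resources documentation for your version):

```objectscript
/// Hypothetical helper for periodic security export/import between members.
Class Util.SecuritySync Extends %RegisteredObject
{

/// Run on the primary: export users, roles and resources to a shared directory.
ClassMethod ExportAll(dir As %String = "/shared/secsync/") As %Status
{
    New $Namespace
    Set $Namespace = "%SYS"  // the Security.* classes live in %SYS
    Set sc = ##class(Security.Resources).Export(dir_"resources.xml", .nRes)
    If $System.Status.IsOK(sc) {
        Set sc = ##class(Security.Roles).Export(dir_"roles.xml", .nRoles)
    }
    If $System.Status.IsOK(sc) {
        Set sc = ##class(Security.Users).Export(dir_"users.xml", .nUsers, "*")
    }
    Quit sc
}

/// Run on the other member(s): import the same files.
/// Resources first, then roles, then users, so references resolve.
ClassMethod ImportAll(dir As %String = "/shared/secsync/") As %Status
{
    New $Namespace
    Set $Namespace = "%SYS"
    Set sc = ##class(Security.Resources).Import(dir_"resources.xml", .nRes)
    If $System.Status.IsOK(sc) {
        Set sc = ##class(Security.Roles).Import(dir_"roles.xml", .nRoles)
    }
    If $System.Status.IsOK(sc) {
        Set sc = ##class(Security.Users).Import(dir_"users.xml", .nUsers)
    }
    Quit sc
}

}
```

You would still need your own transport for the files and a schedule, which is exactly the fragility discussed further down in this thread.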

But it's long puzzled me that InterSystems doesn't seem to have done this job for us all. Or perhaps they have, and I haven't yet heard about it.

Eventually, EMS might do it.
I've just never seen it in action.

Just adding CACHESYS to the mirror could be a deadly exercise.

Who then is primary? Me or my mirror?

But to achieve your goal you could have an additional DB. Let's name it SYSMIRROR.
You would then use global / subscript-level mapping to place the common information there,
e.g. ^SYS("Security") or parts of it ("RolesD", "UsersD", ...), whatever you think you need.
I have never tried it, but I can't see a contradiction.
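Untried, as noted above, but if you experiment with it, the mapping could be created programmatically along these lines. The Config.MapGlobals call and the exact subscript names should be verified against your version's documentation, and subscript-level mapping of system globals can easily break an instance, so treat this purely as a sketch:

```objectscript
 // Sketch only: run in the %SYS namespace. Maps the "UsersD" and "RolesD"
 // subtrees of ^SYS("Security") in %SYS into a database named SYSMIRROR
 // (which you would add to the mirror). UNTESTED; verify against the
 // Config.MapGlobals class documentation before trying on a real system.
 New $Namespace Set $Namespace = "%SYS"
 For sub = "UsersD","RolesD" {
     Kill props
     Set props("Database") = "SYSMIRROR"
     Set sc = ##class(Config.MapGlobals).Create("%SYS", "SYS(""Security"","""_sub_""")", .props)
     If '$System.Status.IsOK(sc) Do $System.Status.DisplayError(sc) Quit
 }
```

Back up CACHESYS before experimenting with anything like this.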

For better synchronization and uniqueness, I'd personally prefer to have this SYSMIRROR accessed over ECP, while holding a local copy of SYSMIRROR for backup/failover.

We also had to deal with this problem. As mentioned before, Caché itself does not support this automatically due to the nature of CACHESYS.

First of all, I think that any attempt to write reliable "scripts to export users, roles etc. periodically from the master instance into files and import them into the other(s)" will eventually fail. Whatever you choose for the backup interval (day/hour/10 min), it may fail at the worst moment, and you have to ensure that the file is transferred correctly, that the file systems on both machines are up and ready, etc.

Of course, this may work (whether manually or automatically) on sites with a relatively small number of users/roles and a relatively small number of changes.

Assumptions in our installations:
1) we do not allow editing users/roles via the SMP
- there are plenty of reasons for this

2) we use our own user/roles management system
- we need more complex functionality than SMP/Caché security offers
- we have to deal with hundreds of changes in users/roles daily in the production system
- we add/change/disable/enable users/roles there on the fly based on different sources
- for example: on the basis of completed tests in the EDU environment by end users, or interfacing with the central system for processing role requests
- almost nothing is done manually by application administrators
- this is "pretty much alive"

3) we have our own datastructures for rules/roles/rights
- we can simply let them mirror by appropriate mapping

4) we only write the most important information into users/roles, using the API in the Security.Users/Security.Roles classes

- every change is automatically updated into Caché security (users/roles/privileges/resources)


Principle of our solution:
1) we keep a "MirrorQueue" of all changed users/roles in the system
- we can do this because we have full control over changes in our own user/roles management system and can log them
- this "MirrorQueue" is simply mirrored

2) when the mirror/backup comes up, we use ZMIRROR hooks (e.g. NotifyBecomePrimary())
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

- we simply scan the "MirrorQueue" and apply all user/role changes (using the same API in the Security.Users/Security.Roles classes)

3) when the mirror/backup is up (and has become primary), all our user/role data are synchronised
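The hook in step 2) could be sketched like this. NotifyBecomePrimary is a real ZMIRROR entry point, but the ^MirrorQueue global name, its entry layout, and the properties passed to Modify are all hypothetical here (and the global is assumed to be mapped to a mirrored database):

```objectscript
ZMIRROR ; user-defined mirror event hooks; the routine must be named ZMIRROR
NotifyBecomePrimary() PUBLIC {
    ; called once this member has finished becoming primary
    New $Namespace
    Set $Namespace = "%SYS"
    Set seq = ""
    For {
        ; hypothetical queue: ^MirrorQueue(seq) = $LB(type, name)
        Set seq = $Order(^MirrorQueue(seq), 1, entry)
        Quit:seq=""
        Set type = $List(entry, 1), name = $List(entry, 2)
        ; re-apply the change with the same security API used on the old
        ; primary, e.g.:
        ;   Do ##class(Security.Roles).Modify(name, .props)
        ;   Do ##class(Security.Users).Modify(name, .props)
    }
    Quit
}
```

As Pavel notes below, the same scan code should also be callable manually, so it can be deferred or re-run outside of the startup hook.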

Feel free to ask for further details.  
 

Hi Pavel,

your solution sounds interesting.

2) when mirror/backup goes up we use ZMIRROR hooks

Do you scan the queue only at backup node startup, as this is the time when ZMIRROR hooks are called? What if the node is not restarted for several months? Can the queue become so long that it delays startup completion?

Hi Alexey,

- yes, exactly, we scan the queue during backup node startup

- the code is separated and ready to be delayed/called manually later (but we never had to do this, except while testing)

- we never had timing issues and did not worry about this, whether it takes a few seconds or a few minutes

- in fact, this whole process has quite a few substeps

  a) scan all changed "roles" and update them (together with updating necessary resources)

  b) scan all changed "users" and update them (together with updating necessary resources)

  c) update the changed passwords as well (ssh, hush, a little bit tricky/dirty)

  d) send a notification e-mail to administrators/monitoring SW that "a D/R scenario happened and this XY instance on that XY machine is up"

Pavel

There are a few good suggestions here, and I just want to add another one. You could write custom code as a wrapper that calls the APIs for whatever you're changing (adding users, tasks, etc.). Then, on all mirror members, you could configure a REST service that accepts requests to do the same (i.e. calls your wrappers based on the request payload). Then, when you call a wrapper to, for instance, add a user, your code would call the REST service on the other mirror members, which would trigger adding that user on those members, and call the Security.Users Create method to add the user on the local instance.
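A minimal sketch of such a wrapper might look like this. The class name, REST endpoint, port, and payload shape are all invented; only Security.Users.Create and %Net.HttpRequest are real APIs, and you would add authentication and error handling in practice:

```objectscript
/// Hypothetical wrapper: create the user locally, then fan the request out
/// to the other mirror members via a REST service you expose on each one.
ClassMethod CreateUserEverywhere(username As %String, roles As %String, password As %String, otherMembers As %List) As %Status
{
    // 1) create the user on the local instance (Security.Users lives in %SYS)
    New $Namespace Set $Namespace = "%SYS"
    Set sc = ##class(Security.Users).Create(username, roles, password)
    Quit:'$System.Status.IsOK(sc) sc

    // 2) replicate via REST; the receiving service must call Create locally
    //    WITHOUT fanning out again, or the members would loop forever
    //    (password handling is deliberately omitted from this sketch)
    For i = 1:1:$ListLength(otherMembers) {
        Set req = ##class(%Net.HttpRequest).%New()
        Set req.Server = $List(otherMembers, i), req.Port = 57772
        Set req.ContentType = "application/json"
        Do req.EntityBody.Write({"user":(username), "roles":(roles)}.%ToJSON())
        Set sc = req.Post("/api/secsync/user")  // hypothetical endpoint
        Quit:'$System.Status.IsOK(sc)
    }
    Quit sc
}
```

One design question to settle with this approach is what happens when a member is down at call time; you would need a retry queue, which brings you back to something like Pavel's MirrorQueue.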

Hi Pete,
we have implemented a model that's very similar to yours, with slight differences:

  • we use it at home only, where we have several development and testing Caché instances;
  • most of them are not connected using Mirroring and/or ECP;
  • all Caché users are LDAP users, so we don't need to bother with creating/modifying users on a per-instance basis;
  • one instance is used as a repository of roles definition (so called Roles Repository); it "knows" about each role that should be defined on each instance; this repository is wrapped with a REST service;
  • each role has a "standard" name, e.g. roleDEV, roleUSER, roleADM; these names are used in LDAP users' definition and retrieved during LDAP authentication process;
  • the resource lists for each standard role are stored in the Roles Repository; those lists are associated with instance addresses (server+port), so each role can be defined differently for different Caché instances; therefore, a user with the role roleDEV can have different privileges on different servers;
  • each Caché instance queries the Roles Repository at startup; after getting the current definitions of its roles, it applies it to its Caché Security database.
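The startup query in the last bullet could be sketched roughly like this. The repository URL, JSON shape, and class name are invented; Security.Roles Exists/Create/Modify are real %SYS APIs but their signatures should be checked for your version:

```objectscript
/// Hypothetical startup sync: fetch this instance's role definitions from
/// the Roles Repository REST service and apply them to local Caché security.
ClassMethod SyncRolesFromRepository(server As %String, port As %Integer = 57772) As %Status
{
    Set req = ##class(%Net.HttpRequest).%New()
    Set req.Server = server, req.Port = port
    // hypothetical endpoint, keyed by this instance's address
    Set sc = req.Get("/roles-repo/definitions?host="_$System.INetInfo.LocalHostName())
    Quit:'$System.Status.IsOK(sc) sc

    // assumed payload: { "roleDEV": "%DB_APP:RW,MyRes:U", "roleUSER": "...", ... }
    Set defs = {}.%FromJSON(req.HttpResponse.Data)

    New $Namespace Set $Namespace = "%SYS"
    Set iter = defs.%GetIterator()
    While iter.%GetNext(.role, .resources) {
        If ##class(Security.Roles).Exists(role) {
            Kill props Set props("Resources") = resources
            Set sc = ##class(Security.Roles).Modify(role, .props)
        } Else {
            Set sc = ##class(Security.Roles).Create(role, "synced from Roles Repository", resources)
        }
        Quit:'$System.Status.IsOK(sc)
    }
    Quit sc
}
```

Exposing the same method for manual invocation would address the first drawback listed below, where changes made between restarts require a manual query.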

This solution has been used in our company's Dev & Testing environment for more than a year without major problems. It is rather flexible and doesn't depend on proprietary transport protocols. The only drawbacks found are:

  • the Roles Repository is automatically queried at Caché startup only, so if something in the role definition(s) should be changed on the fly, manual querying is needed;
  • sometimes InterSystems introduces new security caveats with the new versions of its products; one of them was a subject of an article here: https://community.intersystems.com/post/implicit-privileges-developer-ro....

Comments

Hi

I believe this is a work in progress in the product, but I know of no ETA, so at the moment everyone builds their own synchronisation techniques, as the other comments here explain.

* Disclaimer - this is not necessarily the 'Answer' you were looking for, but merely a comment you might find useful *

In the past I created a framework for doing this and more. I'll describe it here:

Using a pre-defined base class, one would create any number of subclasses, one for each type of data you wanted to synchronise (for example, a subclass for mirroring security settings), and in these subclasses implement only 2 methods:

- The first method, 'export', deals with collecting a set of data from wherever and saving it as properties of the class (for example, in this case the method would export all security settings and read the XML export back into a global character stream for persistence within the DB). These are persistent subclasses.

- The second method, 'import', is the opposite side, which unpacks the recently collected data and syncs it (for example, writing the global character stream of the instance data to a temporary 'security.xml' file and running it through the system APIs to import those settings).

The persistent data created by the classes during the 'export' method call is saved to a mirrored database, so by default it becomes available on the other nodes during 'import' invocation.

A frequently running scheduled task, executing on every mirror member (primary, secondary or async), would iterate through the known subclasses and, based on knowing that server's role, invoke either the 'export' or the 'import' method of each subclass (of course, the primary member calls only the 'export' method; the other roles call the 'import' method).

There are various checks and balances, for example to ensure that only the latest instance data is imported on the import side in case some exports were skipped for some reason, and that no import executes midway, i.e. it waits until an export has been flagged as complete.
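The skeleton of such a framework might look like this. Class, property and method names are invented for illustration (the original framework isn't published); only %Persistent, %GlobalCharacterStream and $System.Mirror.IsPrimary are real system pieces:

```objectscript
/// Hypothetical base class: one subclass per type of data to synchronise.
Class Sync.AbstractItem Extends %Persistent [ Abstract ]
{

/// Exported payload, e.g. an XML export read back into a stream.
/// Stored in a mirrored database, so it travels to the other members.
Property Data As %GlobalCharacterStream;

/// Flag checked on the import side so no import runs against a
/// half-written export.
Property ExportComplete As %Boolean [ InitialExpression = 0 ];

/// Collect the data on the primary (subclass responsibility).
Method Export() As %Status [ Abstract ]

/// Unpack and apply the data on a backup/async member (subclass responsibility).
Method Import() As %Status [ Abstract ]

/// Body of the frequently running scheduled task on every member:
/// the primary exports, all other roles import.
ClassMethod RunAll() As %Status
{
    Set primary = $System.Mirror.IsPrimary()
    // iterate the known subclasses (discovery mechanism omitted) and
    // call Export() when primary, Import() otherwise, skipping any
    // item whose ExportComplete flag is not yet set
    Quit $$$OK
}

}
```

The division of labour matches the description above: subclasses only know how to serialise and apply one kind of data, while the base class and task own scheduling, role detection, and the completeness checks.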

I wrote this as a framework because I felt there is other data, not just the security data in CACHESYS, that would need replicating between members.

I have done a fair amount of testing on the tools, and completed them around the time I heard InterSystems was working on a native solution, so I have not persisted further in documenting/completing them. I wrote this for someone else, who ended up building a more hardcoded, straightforward approach, so it is not actually in production anywhere.

Steve