Hey Peter, minor fix: the redirect URL should end in .cls of course.

Also, I am trying this out and found your and Eduard's how-tos very clear. However, I am seeing permissions issues despite having checked every option in my Dropbox account. The error I am getting is this (on file retrieval):

ERROR #8921: Error response: ERROR 401: {"error_summary": "missing_scope/..", "error": {".tag": "missing_scope", "required_scope": "files.metadata.read"}}.

I created a Dropbox folder called ToIRIS and have the MFT source folder set to /ToIRIS.

I think I am missing a step. Any ideas?

Thx!

Hey Fabio

Nice work on this! I had a long discussion with a customer about data auditing and he pointed me to this article.

We discussed the concepts of auditing specific fields versus auditing the whole record on update. I think this depends very much on the actual usage of the application. For example, if a user typically modifies only one field out of 50 in an object, then your approach works well; but if they typically modify ten fields, it may be better to store the whole record, as auditing ten field changes means writing 11 additional database records, versus just one if you store the whole record. But this in turn depends on the size of each record: if there is a lot of text, it could become burdensome as well. And remember, all those database writes get journaled...

We also discussed mapping the audit globals to a separate database. That gives you the option of (say, monthly) archiving the audit database to cheaper storage and replacing it with a fresh copy, then retrieving and mounting it on a test system as necessary for audit record retrieval. This way you can also just throw away old databases, versus having to purge them with all the overhead that entails.

Hope these thoughts help someone!

Interestingly, I am researching this for a customer. Another way to do this that I have seen is to seed the ID counter at the site to a really high number (e.g. set ^Vendor.CountryD=10000000) after the initial Country code table is deployed. Then new entries distributed by the vendor will never have conflicting IDs (assuming no more than 10,000,000 entries are ever shipped, of course). But this also means that you can't ship the whole vendor copy of the global, as you would overwrite the on-site ID counter node.
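To make the seeding step concrete, here is a minimal sketch (assuming the class uses the default $increment-based ID allocation on the ^Vendor.CountryD root node):

        ; After deploying the vendor's Country table, push the ID counter
        ; well past anything the vendor will ever ship:
        set ^Vendor.CountryD = 10000000

        ; Site-created rows now allocate IDs from 10000001 upward:
        set id = $increment(^Vendor.CountryD)

        ; Vendor-distributed rows keep their original low IDs (1, 2, 3...),
        ; so the two ranges can never collide.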

And in a multi-tenant/namespace deployment you can't just generate SQL or object insert scripts once: if you have indices on the table, they need to be built locally in each namespace the global is mapped to, so you would need to run the insert script in each namespace to maintain the indices. However, rebuilding the indices on code tables should be pretty quick, as they tend not to be huge, and hopefully won't balloon the journals too much...
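For the rebuild itself, each namespace only needs a call per code table (assuming a standard persistent class named, say, Vendor.Country; %PurgeIndices and %BuildIndices are the stock %Persistent methods):

        ; Run in each namespace the code-table global is mapped to.
        ; Purge first so stale index entries are cleared, then rebuild:
        do ##class(Vendor.Country).%PurgeIndices()
        do ##class(Vendor.Country).%BuildIndices()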

I like the idea of a read-only distributed database - oddly, I hadn't thought of that as an extra layer of protection beyond what the application code may provide.

The biggest issue with this approach, though, is likely to be the risk of a unique property collision: the customer adds MyNewCountry, and then at some point so does the vendor. An upgrade script would have to check for that, possibly renaming the customer object to, say, MyNewCountryX.

Generally it is not good practice to mix $order and $next. $next uses -1 for "end of set", which of course is a valid global subscript. $order uses the null string, which on almost all Caché instances out there is not a valid subscript.
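A quick illustration of the trap, using a scratch global (assuming null subscripts are disabled, as on a default install):

        set ^Demo(-1) = "negative one is a perfectly valid subscript"
        set ^Demo(1) = "one"

        ; $order uses "" as the before-first/after-last marker:
        write $order(^Demo(""))   ; -1, the real first subscript

        ; $next uses -1 as the end-of-set marker, so a $next loop
        ; seeded with -1 can never visit ^Demo(-1), and a loop that
        ; quits on -1 may stop before the data runs out:
        write $next(^Demo(-1))    ; 1, and the ^Demo(-1) node was skipped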

Also, this line:

        set clase = $Order(^Ens.MessageBodyD(pos,1))

It should probably be the following, although I doubt even that actually works as-is: the node value is a $list, so you would also need $listget(clase,1) or something similar:

        set clase = $Get(^Ens.MessageBodyD(pos,1))

And the test for the while loop exit should be:

       while (pos'="")

However, I would stay away from direct global access unless you really need it. Stick to SQL for querying and updating, or objects for updates.
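If you do need the direct-global version, putting the pieces together gives something like this (a sketch only; I'm assuming the class name sits in item 1 of the $list, and the exact node layout of ^Ens.MessageBodyD may vary by version):

        set pos = $order(^Ens.MessageBodyD(""))
        while (pos '= "") {
            ; the node value is a $list; item 1 is assumed to hold the class name
            set clase = $listget($get(^Ens.MessageBodyD(pos,1)), 1)
            ; ... do something with clase here ...
            set pos = $order(^Ens.MessageBodyD(pos))
        }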

HTH

andre

There is also the added benefit of minimal-downtime upgrades in a mirrored environment. Basically, you don't mirror the code database, and maintain it manually instead. This allows you to update the code on the failover member, fail over to it, and then upgrade the code on the old primary. Without this, you cannot update the "single" code/data database on the failover member, as it is read-only.

This is a little more complex on Ensemble and Health Connect, but the principle is the same.

Thx Sean.

On the date transform stuff, I don't care about length; I'm more interested in using clean, easy-on-the-eye, out-of-the-box Ensemble functions, which demo much better than $zdh et al, and better than a custom function that means a new customer has to write code, however simple. Sometimes deals are won or lost on "silly" stuff like this.
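For what it's worth, the same simple conversion side by side; the ConvertDateTime format codes below are from memory, so treat them as an assumption to verify against your version's docs:

        ; raw functions: correct, but demos poorly
        ; (8 = YYYYMMDD input format, 3 = ODBC output format)
        set odbc = $zdate($zdateh("20240131", 8), 3)   ; "2024-01-31"

        ; in a DTL, the built-in Ensemble function reads far better:
        ; set target.{PID:7} = ..ConvertDateTime(source.{PID:7}, "%Y%m%d", "%Y-%m-%d")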

Re: checking for a segment, your example is better than mine, but again less intuitive than something like request.SegmentExists("NTE(1)") would be - if it existed. LOL.

Thx for the info on the multiple PID segments, good to know.

Cheers

This is VERY important IMHO, purely from the perspective of maintainability by non-M-sters, like Java and .NET programmers. They ALL understand curly braces, but they are ALL scratching their heads and reading manuals over the dot syntax. And then swearing never to touch Caché again.

Personally, I only use dot syntax these days when iterating down through a deep global structure with multiple subscript levels, where I think it makes the code a little easier to read, and I am an old M guy so it is second nature. But that is a very isolated case, and not often needed these days in the object/SQL world. HTH
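For anyone who has never seen the contrast, here is the same nested-global walk both ways (a contrived sketch over a scratch global ^Data):

        ; dot syntax: nesting depth is the count of leading dots
        set s = ""
        for  set s = $order(^Data(s)) quit:s=""  do
        . write "node: ", s, !
        . set t = ""
        . for  set t = $order(^Data(s,t)) quit:t=""  do
        . . write "  child: ", t, !

        ; curly braces: same logic, instantly readable to Java/.NET folks
        set s = ""
        for {
            set s = $order(^Data(s))
            quit:s=""
            write "node: ", s, !
            set t = ""
            for {
                set t = $order(^Data(s,t))
                quit:t=""
                write "  child: ", t, !
            }
        }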