I don't understand the difference between these two kinds of voting :) Which solution is best depends on many factors: if we need turbo performance, we'd take your approach; if not, the %ResultSet-based one. BTW, I guess that scanning file directories is a small part of a bigger task: the files are processed after they have been found, and the processing takes much longer than the directory search.
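For readers comparing the two, here is a minimal sketch of the %ResultSet-based variant using the %File:FileSet class query (the directory path and file mask are hypothetical):

    // List files matching a mask via the %File:FileSet class query
    Set rs = ##class(%ResultSet).%New("%File:FileSet")
    If rs.Execute("/data/import", "*.csv") {
        While rs.Next() {
            Write rs.Get("Name"), !    // full path of each matching file
        }
    }
    Do rs.Close()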

Last 2c for the cross-platform approach: the main place where a COS developer faces problems is interfacing with 3rd-party software. As a German colleague once told me, "Caché is great for seamless integration".

E.g., I've recently found that Caché for Linux forcibly resetting LD_LIBRARY_PATH may cause problems for some utilities on some Linux versions. It's better to stop here; maybe I'll write about it separately.
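A hedged sketch of the kind of workaround I mean: since $zf(-1) runs its command through a shell, the caller can set LD_LIBRARY_PATH explicitly for the child utility (the library path and utility name below are hypothetical):

    // Override the library path just for this child process
    Set cmd = "LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu some_utility --version"
    Set rc = $zf(-1, cmd)    // 0 on success, non-zero otherwise
    Write "exit code: ", rc, !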

I'm voting for Rubens's solution as it is OS-independent. Caché is a great sandbox in many cases, so why not use its "middleware" capabilities? A developer's time costs much more than CPU cycles, and every piece of OS-dependent code has to be written and debugged separately for each OS that should be supported.

As to performance, in this very case I doubt that the recursion costs much compared to the system calls. Anyway, it's not a big problem to replace it with iteration.
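A sketch of that replacement, assuming the traversal is built on the %File:FileSet query: an explicit work list of pending directories takes the place of the call stack (the starting path is hypothetical):

    // Iterative directory walk: no recursion, just a work list
    Set stack = 1, stack(1) = "/data"
    While stack > 0 {
        Set dir = stack(stack), stack = stack - 1
        Set rs = ##class(%ResultSet).%New("%File:FileSet")
        Do rs.Execute(dir, "*", , 1)           // includedirs = 1
        While rs.Next() {
            If rs.Get("Type") = "D" {
                // push subdirectory for a later pass
                Set stack = stack + 1, stack(stack) = rs.Get("Name")
            } Else {
                Write rs.Get("Name"), !        // process the file here
            }
        }
        Do rs.Close()
    }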

And what will happen if one decides to revert a class definition to a previous version, with the previous storage definition state, having some data already populated using the new schema?

It seems that there is no "good" choice between the topic starter's two options, only "bad" vs "worse", unless business logic is carefully separated from data and kept in different classes. In that case the probability of having to revert the data class definition should be lower.
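Purely to illustrate the separation I mean (class, property, and method names are hypothetical): the data class carries only storage-mapped state, while the logic lives in a class with no storage of its own, so its definition can be reverted to any version without touching the storage definition.

    /// Data class: properties only, so its storage definition stays stable
    Class Demo.Data.Invoice Extends %Persistent
    {
    Property Amount As %Numeric;
    Property DueDate As %Date;
    }

    /// Logic class: no storage of its own, safe to revert freely
    Class Demo.Logic.InvoiceService [ Abstract ]
    {
    ClassMethod IsOverdue(inv As Demo.Data.Invoice) As %Boolean
    {
        Quit (inv.DueDate < +$Horolog)
    }
    }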

  • Вопросы есть? Вопросов нет! ("Any questions? No questions!")
  • И называйте меня просто — товарищ Сухов! ("And call me simply: Comrade Sukhov!")

Quotes from the cult film "White Sun of the Desert" / "Белое солнце пустыни", 1970.

This film is traditionally watched by Russian cosmonauts before every space flight.

PS. Sorry for off-topic, Jon provoked me :)

During Cache backup it appears that all the available memory on the server is being used.

It's usually not a problem: the more memory is used for buffering, the quicker file I/O operates. Only free memory is used for this purpose, so memory already allocated by user or system processes should not be swapped out. Please add more details: why did it turn out to be a problem in your case?

PS. Double-check in the console log whether Caché allocates its shared memory segment using large pages, as that guarantees the segment is entirely allocated at Caché startup and will neither be expanded nor swapped afterwards.

While it's easy to change the GUID following John's hint (if you understood the code, you already know which global node to kill, and it will be recreated on the next call), I wonder why you want to do it. It seems that your instance has migrated to another host, hence you may want to keep all of its attributes (including the GUID) untouched. This GUID is not used by Caché itself, but it may be used in some application data fields, so you may lose app-level integrity.

PS. If you didn't understand the code, or are cautious about changing system globals, you'd better uninstall Caché and re-install it into the same folder. The system databases would be completely rewritten, while your application databases as well as cache.cpf would be left as is.

Hi, Richard.

...to have cache run as a different user to access that macro...

Running Caché for Windows under a dedicated user account (a so-called service account) may have some other advantages:

  • Caché processes get the ability to use Windows network resources (shared folders); e.g., Caché Backup can be performed directly to a remote folder (see the sketch after this list);
  • Kerberos authentication can be used.
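A hedged sketch of the first point: verifying that the service account can actually reach the remote folder before a backup targets it (the UNC path is hypothetical):

    // Check that the share is visible to the Caché service account
    Set share = "\\backupsrv\cachebkp\"
    If ##class(%File).DirectoryExists(share) {
        Write "share is reachable, backup can target ", share, !
    } Else {
        Write "share is NOT reachable; check the service account rights", !
    }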

So long story short just going down the libre office route.

Why do you expect that the LibreOffice route would be shorter? Do you plan to use an approach other than $zf(-1,...)?
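Because if the LibreOffice route still goes through $zf(-1), it could look like this sketch, which uses LibreOffice's standard headless conversion flags (the file names and paths are hypothetical):

    // Convert a document to PDF with LibreOffice in headless mode
    Set cmd = "soffice --headless --convert-to pdf --outdir /tmp/out /tmp/in/report.odt"
    Set rc = $zf(-1, cmd)
    Write "soffice exit code: ", rc, !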

Thank you, John, for this reminder, though I prefer to deliberately enable auditing of all events, for a simple reason: a few extra disk IOPS, as well as 1 GB+ of disk space for the audit database (even on highly loaded Caché instances), do not seem a great price for the ability to trace unforeseen cases in production.

We even met a prospect's requirement that any audit database write failure should be treated as a critical error, with admins notified by all means, even by shutting Caché down!