Hi, Jaqueline!

You can also consider using the isc-dev import/export tool.

Install it in USER and map it into %ALL (to make it runnable from any namespace), then set up the tool like this:

d ##class(sc.code).workdir("/your/project/src")

Enter your project namespace and call:

d ##class(sc.code).export()

The tool will export (along with classes) all the DeepSee components you have in the namespace, including pivots, dashboards, term lists, pivot variables, and calculated members, into folders like:

cls\package\class

dfi\folder\pivot

gbl\DeepSee\TermList.GBL.xml

gbl\deepsee\Variables.GBL.xml

So you can use it with your version control tool and see commit history, diffs, etc.

To import everything from the source folder into a new namespace, use:

d ##class(sc.code).import()

or the standard $SYSTEM.OBJ.ImportDir() method.
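
If you prefer the standard route, a minimal sketch could look like this (the folder path and the "ck" qualifiers are my assumptions, adjust them to your project):

; import and compile every XML export under the source folder, recursively
d $SYSTEM.OBJ.ImportDir("/your/project/src", "*.xml", "ck", .errors, 1)
w:$g(errors) "Finished with "_errors_" error(s)",!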

HTH

2) It's an OK approach for relatively small external source tables, because in this case you have to build the cube over all the records and have no option to update/sync incrementally.

If you are OK with the cube-building times, you are very welcome to use this approach.

If the build time matters for the application, consider the approach @Eduard Lebedyuk has already advised: import only new records from the external database (a hash method, a sophisticated query, or some other bright idea) and then synchronize the cube, which performs faster than a full rebuild.
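
As a rough sketch of the difference (the cube name "MyCube" is just a placeholder, and synchronization requires DSTIME to be enabled in the cube's source class):

; full rebuild: reprocesses every fact from scratch
d ##class(%DeepSee.Utils).%BuildCube("MyCube")

; incremental sync: processes only records changed since the last build/sync
d ##class(%DeepSee.Utils).%SynchronizeCube("MyCube")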

Hi, Max!

I think you have two questions here.

1. How to import data into a Caché class from another DBMS.

2. How to update the cube which uses an imported table as a fact table.

Regarding the second question: I believe you can add a field to the imported table containing a hash code of the record's fields and import only new rows, so the DeepSee cube update will work automatically in this case. Inviting @Eduard Lebedyuk to describe the technique in detail.
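
A rough sketch of the hash idea (the field names and the helper method below are hypothetical, only to illustrate): compute a hash over the imported fields, store it in an extra column, and skip rows whose hash already exists on the Caché side.

ClassMethod RowHash(name As %String, amount As %Numeric, updatedAt As %TimeStamp) As %String
{
    ; concatenate the fields that define the record's content and hash them
    set raw = name_"|"_amount_"|"_updatedAt
    quit $SYSTEM.Encryption.Base64Encode($SYSTEM.Encryption.SHAHash(256, raw))
}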

Regarding the first question, I didn't quite get your problem, but you can test the linked table via the SQL Gateway UI in the Control Panel.
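
Or, from the terminal, you can run a quick query against the linked table with dynamic SQL (the table name here is just a placeholder):

set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT TOP 10 * FROM External.LinkedTable")
d rs.%Display()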

Hi, Peter!

What situation do you have in mind that could cause the compilation to be unsuccessful?

E.g. compilations involving projections, where the result of compilation can be totally unpredictable.

Also, compilation can be a time-consuming process compared to replacing the CACHE.DAT file, so it potentially means a longer pause in production operation.

I can't see how deploying/copying the CACHE.DAT avoids problems when you have multiple developers or multiple projects on the test server.

I'm not saying that the CACHE.DAT-copying strategy should be used for a test server. Indeed, we can compile the branch on a build/test server and then transfer the CACHE.DAT to production if testing goes well.
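
For completeness, the build-side step I have in mind is just a full compile in the target namespace before the database file is copied, something along these lines (the "ck" qualifiers are an assumption, use whatever flags your build relies on):

; on the build/test instance, in the namespace to be shipped
d $SYSTEM.OBJ.CompileAll("ck")
; then dismount the database, copy CACHE.DAT to production and mount it there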