Evgeny Shvarov · Apr 6, 2018 I'll answer the comment related to the import/export utility isc-dev. It exports everything in a namespace to a specified folder. You can set up the folder with: d ##class(sc.code).workdir("D:\yourfolder") Then export with: d ##class(sc.code).export() You can also specify a mask of classes, or set it up in the isc.json file.
Evgeny Shvarov · Apr 3, 2018 Hi, Jaqueline! You can also consider using the isc-dev import/export tool. Install it in USER and map it into %ALL (to make it runnable from any namespace), then set up the tool like this: d ##class(sc.code).workdir("/your/project/src") Enter your project namespace and call: d ##class(sc.code).export() The tool will export (along with classes) all the DeepSee components you have in the namespace, including pivots, dashboards, term lists, pivot variables and calculated members, in folders like: cls\package\class dfi\folder\pivot gbl\DeepSee\TermList.GBL.xml gbl\deepsee\Variables.GBL.xml. So you can use it with your VCS tool and see commit history, diffs, etc. To import everything from the source folder into a new namespace, use: d ##class(sc.code).import() or the standard $SYSTEM.OBJ.ImportDir() method. HTH
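The export/import round trip above can be sketched as a terminal session (the namespace names and the source path are hypothetical; the sc.code calls are the isc-dev commands described in the comment):

```
// In the project namespace: point isc-dev at the source folder and export
zn "MYAPP"
do ##class(sc.code).workdir("/your/project/src")
do ##class(sc.code).export()    // writes cls\, dfi\ and gbl\ subfolders

// In the target namespace: import everything back from the same folder
zn "MYAPPNEW"
do ##class(sc.code).workdir("/your/project/src")
do ##class(sc.code).import()    // or: do $SYSTEM.OBJ.ImportDir("/your/project/src",,"ck",,1)
```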
Evgeny Shvarov · Apr 3, 2018 Hi Sergey! Kanban is great! Love it) But these days I don't start any project without an issue tracker. If you get a bug report or a feature request, where will you put it?
Evgeny Shvarov · Apr 2, 2018 Hi, Peter, thanks! The question was that everything listed in the pivot settings dropdown is available to call with %LISTING in MDX, except the "custom listing", which can be called with the DRILLTHROUGH ... RETURN fields MDX instruction, as you wrote. Thanks!
Evgeny Shvarov · Mar 31, 2018 Thank you, Alessandro! That's clear now. I guess a Custom Listing is executed with DRILLTHROUGH plus a comma-separated list of field names: there is such an option in the DRILLTHROUGH documentation.
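As a sketch, a DRILLTHROUGH query with an explicit comma-separated RETURN field list could look like the following DeepSee MDX (the cube, member and field names are illustrative, loosely based on the HoleFoods sample, not taken from this thread):

```
DRILLTHROUGH
SELECT FROM [HoleFoods]
WHERE [Product].[P1].[Product Category].&[Candy]
RETURN Product, UnitsSold, AmountOfSale
```

The RETURN clause names fields of the cube's source class, which is how a "custom" listing can show columns that no listing declared in the cube definition exposes.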
Evgeny Shvarov · Mar 31, 2018 Have no idea, Alessandro. I just figured out that there are some extra listings (in addition to those we have in the cube) available for pivots in the Analyzer. %LISTING only works (for me) with those that are declared in the cube. You can see (and alter) the custom listing if you open the Detail Listings section in the Analyzer (version 2017.2).
Evgeny Shvarov · Mar 31, 2018 Hi, Robert! Thanks for the answer! Yes, %LISTING works perfectly with listings declared in the cube definition. But if you open the Analyzer you can see extra "Custom" listings, which are available in the pivot but not available via %LISTING.
Evgeny Shvarov · Mar 31, 2018 This has changed: ReactJS is now React, and AngularJS is Angular.
Evgeny Shvarov · Mar 29, 2018 Thank you, Artem! It's a great feature which lets you call any binary directly from ObjectScript without coding a proxy DLL. It was also used for the Caché Localization Manager.
Evgeny Shvarov · Mar 28, 2018 2) It's an OK approach for relatively small external source tables, because in this case you need to build the cube for all the records and have no option to update/sync. If you are OK with the cube build timings, you are very welcome to use this approach. If the build time matters for the application, consider the approach @Eduard Lebedyuk already advised: import only the new records from the external database (a hash method, a sophisticated query, or some other bright idea) and then sync the cube, which performs faster than rebuilding it.
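A minimal ObjectScript sketch of the hash-based incremental import plus cube sync, under stated assumptions: the external table External_Schema.Orders, the local persistent class App.Orders (with an ExternalId property carrying a unique index ExtIdx, a RowHash property, and DSTIME enabled so that cube synchronization can track changes), and the cube name OrdersCube are all hypothetical.

```
/// Sketch only: import changed/new rows from an external table, then
/// incrementally synchronize the cube instead of rebuilding it.
ClassMethod ImportNewRows() As %Status
{
    Set rs = ##class(%SQL.Statement).%ExecDirect(,
        "SELECT ID, CustomerName, Amount FROM External_Schema.Orders")
    While rs.%Next() {
        // Hash the row's fields so unchanged records can be skipped
        Set hash = $SYSTEM.Encryption.MD5Hash(rs.CustomerName_"|"_rs.Amount)
        Set row = ##class(App.Orders).ExtIdxOpen(rs.ID)
        If $IsObject(row), row.RowHash = hash Continue  // unchanged: skip
        If '$IsObject(row) {
            Set row = ##class(App.Orders).%New()
            Set row.ExternalId = rs.ID
        }
        Set row.CustomerName = rs.CustomerName
        Set row.Amount = rs.Amount
        Set row.RowHash = hash
        $$$ThrowOnError(row.%Save())
    }
    // Incremental sync is faster than %BuildCube for a large fact table
    Quit ##class(%DeepSee.Utils).%SynchronizeCube("OrdersCube")
}
```

Note that %SynchronizeCube only works when the cube's source class tracks changes (the DSTIME mechanism); otherwise a full %BuildCube is still required.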
Evgeny Shvarov · Mar 27, 2018 Hi, Chintan! Have you seen this article? Maybe it's relevant.
Evgeny Shvarov · Mar 22, 2018 Thanks, Eduard! This article by @Vicky Li is also relevant.
Evgeny Shvarov · Mar 22, 2018 Hi, Max! I think you have two questions here. 1. How to import data into a Caché class from another DBMS. 2. How to update the cube which uses an imported table as a fact table. About the second question: I believe you can introduce a field in the imported table with a hash code of the record's fields and import only new rows, so the DeepSee cube update will work automatically in this case. Inviting @Eduard Lebedyuk to describe the technique in detail. Regarding the first question, I didn't get your problem, but you can test the linked table via the SQL Gateway UI in the Control Panel.
Evgeny Shvarov · Mar 18, 2018 Hi Peter! Sure, that's why I raised the topic: to gather the best practices of "what works" in production, preferably "for years". Thanks for sharing your experience. BTW, do you want to share your source control library on DC someday?
Evgeny Shvarov · Mar 17, 2018 Hi, Peter! "What situation do you have in mind that could cause the compilation to be unsuccessful?" E.g. compilation using a projection, when the result of the compilation can be totally unpredictable. Also, compilation can be a time-consuming process compared to replacing the cache.dat file, so it is potentially a longer pause in production operation. "I can't see how deploying/copying the Cache.dat will avoid problems when you have multiple developers or multiple projects on the test server." I'm not saying the copying-cache.dat strategy should be used for a test server. Indeed, we can compile the branch on a build/test server and then transfer cache.dat to production if testing goes well.
Evgeny Shvarov · Mar 16, 2018 Hi, Peter! I never mentioned deploying cache.dat on a test server, only on the production site. Compilation on a test server is OK. Compilation on production can be unsuccessful: what do you do in that case?
Evgeny Shvarov · Mar 15, 2018 Jaqueline! I guess you have widgets with pivots that include dates from different cubes/dimensions, and you want them to be filtered by one control with a date dimension. Right? It's better if you provide the MDX queries of at least two of the different widgets (better in a new question) and we'll try to find an answer.
Evgeny Shvarov · Mar 15, 2018 Peter, thanks again for the good questions! Also, we have the Developer Community FAQ series of posts, which could be helpful as well.