In a simple configuration, one namespace uses one database.
But often, a namespace is a combination of a code database (for routines), a data database (for globals), and a set of mappings to other databases.

Looking at the names of your databases, it could well be that you need e.g. a namespace MYDATA1 that points to the database MYDATA1 for both code and data, and has global mappings for a list of specific globals to the database MYDATA1TEMP.

(You can check this list in the portal -> Explorer -> Globals: select 'Databases' in the first combo and 'MYDATA1TEMP' in the second. You should see a limited number of globals that probably hold temporary data, stored separately from your other live data.)


Ideally, the backup should also include the file cache.cpf, which lists the configuration of namespaces, databases, and mappings.
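As a sketch, a global mapping of the kind described above shows up in the cpf file roughly like this (the global names TempData and TempWork are purely illustrative, not taken from your system):

```
[Map.MYDATA1]
Global_TempData=MYDATA1TEMP
Global_TempWork=MYDATA1TEMP
```

Each line maps one global in the MYDATA1 namespace to the MYDATA1TEMP database instead of the namespace's default data database.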

I had a similar problem where API calls could take too much time to respond immediately.

So I immediately return as the response just a query id (an incremental number), which the user can use in a separate call to get the status, and in another call to get the results once the status is 'finished'.

Each call to the query API is stored in a table with the incremental query id, the request, and status = 'queued'.
I have a job that looks at the next 'queued' request in the table, sets its status to 'in process', processes the query, and changes the status to 'finished' (also storing the results in the table).

So each query is processed one by one, and users have to use the status call to check when their query has been processed. I also have a purge function that deletes processed queries after 14 days (so users can fetch the results within this period).

The API call that returns the next query id checks whether the background job that processes the queue is running; if it is not, it jobs it. I have also put this method in %ZSTART so it is started when Caché/IRIS starts.
The background job processes the queries in a loop, and hangs for a minute if there is nothing to do. (There are also ways in Caché/IRIS to let the job 'sleep' and 'wake up'.)
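A minimal ObjectScript sketch of this pattern, assuming a global ^QueryQueue and helper methods NextQueued and Process that are not part of the original description:

```
/// Hypothetical API entry point: store the request, return the new query id.
ClassMethod Submit(request As %String) As %Integer
{
    Set id = $Increment(^QueryQueue)           // incremental query id
    Set ^QueryQueue(id, "request") = request
    Set ^QueryQueue(id, "status") = "queued"
    // job the background worker if it is not already running
    If '$Data(^QueryQueue("worker")) Job ..Worker()
    Quit id
}

/// Background job: process queued requests one by one.
ClassMethod Worker()
{
    Set ^QueryQueue("worker") = $Job
    For  {
        Set id = ..NextQueued()                // hypothetical helper: next 'queued' id, or ""
        If id = "" { Hang 60  Continue }       // nothing to do: wait a minute
        Set ^QueryQueue(id, "status") = "in process"
        Set ^QueryQueue(id, "result") = ..Process(^QueryQueue(id, "request"))
        Set ^QueryQueue(id, "status") = "finished"
    }
}
```

The real implementation uses a table rather than a bare global, but the control flow (incremental id, status transitions, Job and Hang) is the same.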
 

If the .DAT file was copied while Ensemble was running, it could be corrupt (there could be blocks that were written in memory but not yet to disk). The only reliable way to use a .DAT file as a backup is to copy it while Ensemble is down, or while the database is dismounted (the .lck file should not be present in the same directory as the .DAT file).
The good thing about a backed-up .DAT file is that you can mount it as a new database and copy data from it while the rest keeps running as is.

Hi Mark,
It is stated in the docs that %All is not enough: you must have the exact role, or be the exact username, that matches the row-level security (in your case: the FACILITY property in each row).
Alternatively, you can disable it by setting the ROWLEVELSECURITY parameter to 0 (but then you need to recompile the classes that depend on it and rebuild the indices).
https://docs.intersystems.com/iris20232/csp/docbook/DocBook.UI.Page.cls?...

Hi Mark, if you can do inserts but not selects, the class might have row-level security enabled.
Is there a parameter ROWLEVELSECURITY = 1, and a method %SecurityPolicy present? In that method, rows can be excluded based on e.g. $USERNAME and $ROLES. Even if your user has the %All role, this security could still exclude you from seeing a row.
If you see that, you can either create the correct user/role, or set the parameter to 0 (and rebuild all indexes).
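For illustration, a class with row-level security looks roughly like this (the class App.Patient and its use of FACILITY are hypothetical; check the documentation for the exact %SecurityPolicy signature):

```
Class App.Patient Extends %Persistent
{

Parameter ROWLEVELSECURITY = 1;

Property FACILITY As %String;

/// Returns a comma-separated list of users/roles allowed to see this row.
/// Here a row is visible only to a user or role whose name equals FACILITY,
/// so even a user holding %All is excluded unless the name matches.
ClassMethod %SecurityPolicy(FACILITY As %String) As %String [ SqlProc ]
{
    Quit FACILITY
}

}
```

Setting ROWLEVELSECURITY back to 0 and recompiling (plus rebuilding the indexes) removes this check.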
Can you send the class definition of one of the tables, so I can try the class on my server ?

Can you do the following (assuming the namespace USER is empty):
- copy the class definition of one class to the USER namespace (take a class that does not refer to other of your classes)
- do a SQL Insert into that table in USER
- do an SQL Select of that table in USER
- look if the global ^...D exists in USER.
- now copy the ^...D global from your original namespace to USER (using the Merge command, or export/import in the management portal)
- Rebuild the indexes in USER of the class
- do the SQL Select again in USER
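The global copy and index rebuild in the steps above can be done from the terminal with a Merge and an extended global reference; ^MyApp.DataD and MyApp.Data below are placeholders for your actual data global and class:

```
// run in your original namespace: copy the data global into USER
Merge ^["USER"]MyApp.DataD = ^MyApp.DataD

// then switch to USER and rebuild the indexes of the class
ZNspace "USER"
Do ##class(MyApp.Data).%BuildIndices()
```

The extended reference ^["USER"]global writes directly into the USER namespace, so no export/import file is needed.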

Hi Mark,

What do you see in the management portal (Explorer -> SQL) when you click on a table, under Catalog Details -> Maps/Indices? It should list the globals that are being used.
What do you see when you click on Open Table?
How about the user that you use: does it have the %All role?

Have you tried opening an instance with Set obj = ##CLASS(DocM.DocumentImage).%OpenId( <some id> ) ?

The index global for e.g. your class DocM.DocumentImage will by default be ^DocM.DocumentImageI (unless the storage in the class definition was changed).


You can rebuild the indices from the data using the portal: click on the SQL table and choose Actions -> Rebuild Indices.
Or go to terminal and Do ##class(DocM.DocumentImage).%BuildIndices()

(When doing an SQL query in the portal, you can view the query plan to see whether an index was used.)

Hi Mark,

Can you also check that you received the correct CACHE.DAT files by looking at the label inside each file? (They might have been copied from the wrong directories.)
Do you still have the original .DAT files ? After renaming them from CACHE.DAT to IRIS.DAT, before you mount them in Iris, you can check the label stored inside the .DAT file.

When in the terminal, in the %SYS namespace:
%SYS> Write $$ROOT^LABEL("c:\directory-unmounted-irisdat-file\")
It should display the original path of the CACHE.DAT from the Unix directory.