Hello,

I assume you are looking for something like CDC (Change Data Capture).

The basic idea is to programmatically read the journal files record by record and analyze the SET/KILL entries (filtered against a dictionary you build that determines which globals or classes need the CDC capability).

I have done something similar using the ^JRNUTIL utility:

https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GC...
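For illustration, here is a minimal sketch of that record-by-record scan using the %SYS.Journal.File / %SYS.Journal.Record classes. The property names (FirstRecord, Next, TypeName, GlobalNode) are quoted from memory and ^MyCDCDict is a hypothetical dictionary global, so verify against the class reference of your version:

ClassMethod ScanJournal(file As %String)
{
    // Open one journal file and walk it record by record
    // (requires access to the %SYS.Journal classes / suitable privileges)
    set jrn = ##class(%SYS.Journal.File).%OpenId(file)
    quit:'$isobject(jrn)
    set rec = jrn.FirstRecord
    while $isobject(rec) {
        // Only SET/KILL records are interesting for CDC
        if (rec.TypeName="SET") || (rec.TypeName="KILL") {
            // ^MyCDCDict holds the globals you decided to capture (hypothetical)
            if $data(^MyCDCDict($qsubscript(rec.GlobalNode,0))) {
                // ... forward the change to your CDC target here ...
            }
        }
        set rec = rec.Next
    }
}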

There are a few considerations here:

1. If you are using a failover (sync) mirror, you will need that Ens.* data on the backup so a switchover can be done seamlessly, and the Ensemble production can start from the exact same point where the failure occurred.

2. Mapping globals into a non-journaled database will still put them in the journal when the process is "in transaction", and an Ensemble production writes everything in-transaction.

3. Mapping Ens.* to another DB is a good idea if you have several disks and want to improve I/O on your system (specifically within cloud VMs, which usually have somewhat slower disks than on-premise). Another option is to let the system purge Ensemble data regularly and maybe keep fewer days of history.

4. Mirror journals are purged regularly once all sync/async mirror members have received them.

In my opinion, it is much better & faster to store binary files outside the database. I have an application with hundreds of thousands of images. To get faster access on a Windows O/S, they are stored in YYMM folders (to prevent having too many files in one folder, which might slow access), while the file path & file name are of course stored inside the database for quick access (using indices). As those images are read many times, I did not want to "waste" the cache buffers on those reads, hence storing them outside the database was the perfect solution.
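As an illustration only (a hedged sketch, not my actual class), such a design could look like this: the binary stays on disk, while the path and name are indexed inside the database:

Class App.ImageRef Extends %Persistent
{
/// YYMM folder the file lives in, e.g. "2405", to keep folder sizes small
Property FolderYYMM As %String;

/// Full path of the image file outside the database
Property FilePath As %String(MAXLEN = 500);

Property FileName As %String(MAXLEN = 255);

/// Index so lookups by file name never need to scan the filesystem
Index FileNameIdx On FileName;
}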

Thank you Robert,

Excellent article.
In fact, we use this approach when we need to gain speed for queries that run on indexes with hundreds of millions of records and only need to check a few items, so we avoid going back to the "base" class data by using [ Data ... ] on the index.

In addition, where speed is so essential in huge queries, we use the (good, old) $ORDER on the index global itself. This is much faster than normal SQL.
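A hedged sketch of both ideas (the class and property names are made up, and the exact index global layout depends on your storage definition and collation, so check the class's storage map before relying on it):

Class App.Person Extends %Persistent
{
Property Name As %String;
Property City As %String;

/// The index also stores City, so a query on Name+City never reads the data map
Index NameIdx On Name [ Data = City ];
}

    // Walking the index global directly with $ORDER
    // (assumed default layout: ^App.PersonI("NameIdx",name,id) = $lb(City))
    set name=""
    for {
        set name=$order(^App.PersonI("NameIdx",name))  quit:name=""
        set id=""
        for {
            set id=$order(^App.PersonI("NameIdx",name,id),1,data)  quit:id=""
            write name," / ",$listget(data,1),!   // City read straight from the index node
        }
    }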

Hello,

I recommend either of these ways:

1. Copy the *.DAT file to the new server: for this you need to dismount the DB on the source before copying (so you have downtime), mount it on the target, and do a merge between the two DBs/namespaces (see the merge sketch after this list).

2. Do a (hot) backup of the DB that holds the table = NO downtime (!) - then copy the *.BCK file to the other server, restore it into a NEW DB and merge.
This is slower - but downtime is 0.
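For the merge step, a hedged sketch using extended global references (the namespace and global names are made up; the ^GBLOCKCOPY utility is the usual alternative when you have many globals to copy):

    // Copy the data and index globals of one table from the restored DB into the new DB
    // (NEWNS / RESTORENS are hypothetical namespaces mapped to the two databases)
    merge ^["NEWNS"]MyApp.DataD = ^["RESTORENS"]MyApp.DataD
    merge ^["NEWNS"]MyApp.DataI = ^["RESTORENS"]MyApp.DataI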

Hello,

I have done many migrations from DSM -> Cache in the past, so I can share some of the knowledge here.

Basically there are two phases involved in the process:
1. Migrating the DB itself (i.e. globals). For this you can use a utility called %DSMCVT, which reads the globals from a DSM DB into a Caché/IRIS DB. Sometimes a 7/8-bit -> Unicode conversion is also part of the process, so appropriate code for that should be written.
2. Migrating the code, which, as mentioned before, is the complex part of the process, as you need to rewrite/modify the parts of your code that deal with O/S-specific issues:
a. File system - all device-handling code (i.e. OPEN/USE/CLOSE commands) is different when working with files/directories (a small example of the Caché side follows after this list).
b. Calls to VMS special variables/functions need to be converted to "standard" code.
c. Some systems rely on VMS "batch queues" for scheduling jobs or for printing reports. This should be addressed as well.
d. Some systems rely on VMS FTP functionality. In this case you need to write the same functionality inside Caché/IRIS.
e. Tools such as "screen generator" tools that use non-standard CHUI ANSI codes for on-screen attributes might need to be "adapted" to standard ANSI codes.
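Regarding item (a), a hedged example of what plain Caché/IRIS sequential file I/O looks like (the path and mode flags are just for illustration):

    set file="/data/reports/out.txt"
    open file:("WNS"):10              // Write, New file, Stream mode, 10-second timeout
    if '$test write "Could not open ",file,! quit
    use file
    write "report line 1",!
    close file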

As mentioned before, the time/effort for such a migration depends highly on the nature of your application and the way it was designed. For example, if you have one central function to print a report (let's say it is in a global), then you need to modify only this function to have all your reports working on the new system. The same applies to "central" functions for reading/writing files, date calculations, etc.
 

Hello,

Your method returns %ArrayOfObjects, so you need to create and populate it within your code...

Your code should look like this (I have marked the relevant changes with comments):

    set booksRS = ##class(%ResultSet).%New("Library.Book:BooksLoaned")   // <-- class name and query are separated by a colon
    set rsStatus = booksRS.Execute()
    set books = ##class(%ArrayOfObjects).%New()                          // <-- "set" was missing; create the collection before the loop
    if $$$ISOK(rsStatus) {                                               // <-- check the %Status returned by Execute()
        while booksRS.Next() {
            set book = ##class(Library.Book).%New()
            set book.Title = booksRS.Get("Title")
            set book.Author = booksRS.Get("Author")
            set book.Genre = booksRS.Get("Genre")
            set dbFriend = ##class(Library.Person).%OpenId(booksRS.Get("Friend"))
            set book.Friend = dbFriend
            set sc = books.SetAt(book,$Increment(i))
        }
    } else {
        write !,"Error fetching books in GetLoanedBooks()"               // <-- "write" was missing
    }
    do booksRS.Close()
    return books
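For completeness, a hedged example of how a caller might consume the returned %ArrayOfObjects (the variable person and the method name GetLoanedBooks are only assumed from your error message):

    set books = person.GetLoanedBooks()      // hypothetical call to the method above
    for i=1:1:books.Count() {
        set book = books.GetAt(i)
        write book.Title," by ",book.Author,!
    }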

When you create a class whose IDs are maintained by the system (a standard class, without modifying the storage), there is a save mechanism that gives a new ID to every new object being saved to the database.

Of course, when multiple users are saving new data simultaneously, the ID a specific process got after %Save() might not be the last one in the table.
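A minimal sketch of that behaviour (Library.Book is just borrowed from the example above):

    set book = ##class(Library.Book).%New()
    set book.Title = "Some title"
    set sc = book.%Save()
    // The ID is assigned during %Save(); read it back with %Id() rather than
    // assuming it is the highest ID in the table (other processes may have saved too)
    if sc write "saved with ID ",book.%Id(),!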

Hello,

You may find documentation on how to work with streams here: 
https://docs.intersystems.com/iris20191/csp/docbook/DocBook.UI.Page.cls?KEY=GOBJ_propstream

In BPL you can add a "code" element where you create and populate your stream with data, or you can add a "call" element to invoke a classmethod where you implement the code.

For example, to create a stream and write some data into it:
(If you need this stream data available to other components of the BPL, it is best to use a %context property for it.)

set stream  = ##class(%Stream.GlobalCharacter).%New()
do stream.Write("some text")
do stream.Write(%request.anyproperty)
do stream.Write(%context.anyproperty)

In the request/response messages you pass to the BO, you will have to add a stream property:

Property MyProp As %Stream.GlobalCharacter;
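For example, a hedged sketch of a request message class carrying such a stream (the class name is made up):

Class MyApp.Msg.MyRequest Extends Ens.Request
{
/// Stream payload passed from the BPL to the Business Operation
Property MyProp As %Stream.GlobalCharacter;
}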

Hi.

Julius, the ConntQ function will give a wrong answer for ^Locations("Canada"), since it will also count the "USA" nodes.

Here is some code that will do the trick:

ClassMethod Count(node)
{
    S QLen=$QL(node) S:QLen Keys=$QS(node,QLen)
    F Count=0:1 S node=$Query(@node) Q:(node="")||(QLen&&($QS(node,QLen)'=Keys))
    Quit Count
}

W ##class(Yaron.test).Count($name(^Locations))
5

w ##class(Yaron.test).Count($name(^Locations("USA")))
3

w ##class(Yaron.test).Count($name(^Locations("Canada")))
2