Yaron Munz · Jul 14, 2021
I agree with Eduard. However, there is an option to use global mapping (by subscript) to different databases that can be located on multiple disks, to get better I/O: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GG...
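As an illustration, subscript-level mappings can also be created programmatically from the %SYS namespace with the Config.MapGlobals class. This is a minimal sketch only: the global ^MyData, the databases DB1/DB2, and the subscript ranges are hypothetical, and the exact range syntax should be verified against the documentation linked above (the Management Portal generates the same form):

    // run in %SYS: map two subscript ranges of ^MyData to different databases
    zn "%SYS"
    set props("Database") = "DB1"
    set sc = ##class(Config.MapGlobals).Create("USER", "MyData(""A""):(""M"")", .props)
    set props("Database") = "DB2"
    set sc = ##class(Config.MapGlobals).Create("USER", "MyData(""M""):(""Z"")", .props)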
Yaron Munz · Jul 1, 2021
Hello Subramaniyan, if you can have downtime for the DB, then you could write a script that dismounts the DB, FTPs it to another server, and mounts it again. This of course depends on the DB size and your network speed. If downtime is not possible, I would recommend doing a "hot backup", then copying (or FTPing) it to another server and restoring it. Another option is to use an "external backup", using a "freeze" & "thaw" to ensure data integrity. Further information: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...
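For the external-backup route, the freeze/thaw calls look roughly like this (a minimal sketch; the middle step stands in for whatever disk/VM snapshot or copy mechanism you use):

    // run in %SYS: suspend writes, snapshot the disk, then resume
    set sc = ##class(Backup.General).ExternalFreeze()
    if $$$ISOK(sc) {
        // ... take the disk/VM snapshot or copy the .DAT here ...
        set sc = ##class(Backup.General).ExternalThaw()
    }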
Yaron Munz · Jul 1, 2021
There is a standalone studio called "IRIS Studio" that can be installed independently (no need for an IRIS install). If you download the latest version, it will be backward compatible with all versions of Cache/IRIS.
Yaron Munz · Jun 2, 2021
Hello, I assume you are looking for something like CDC (change data capture) for Cache/IRIS. The basic idea is to programmatically read journal files record by record and analyze the SET/KILL ones (according to some dictionary you build to determine which globals or classes need the CDC capability). I have done something similar using ^JRNUTIL: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GC...
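Journal records can also be read from code with the %SYS.Journal classes. A minimal sketch, where the journal file path and the filtered global ^MyApp.Data are hypothetical:

    // iterate a journal file and inspect SET/KILL records
    set jrn = ##class(%SYS.Journal.File).%OpenId("/iris/mgr/journal/20210602.001")
    set rec = jrn.FirstRecord
    while $isobject(rec) {
        if (rec.TypeName = "SET") || (rec.TypeName = "KILL") {
            if rec.GlobalNode [ "MyApp.Data" {
                // hand the change to your CDC consumer here
            }
        }
        set rec = rec.Next
    }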
Yaron Munz · Jun 2, 2021
There are a few considerations here:
1. If you are using a failover (sync) mirror, you will need to have that Ens.* data on the backup so a switch can be done seamlessly and the Ensemble production can start from the exact point where the failure occurred.
2. Mapping any globals into a non-journaled database will still put them in the journal when the process is "in transaction", and an Ensemble production writes everything "in transaction".
3. Mapping Ens.* to another DB is a good idea if you have several disks and want to improve I/O on your system (specifically within cloud VMs, which usually have somewhat slower disks than on-premise). Another option is to let the system purge Ensemble data regularly and maybe keep fewer days of history (see the sketch after this list).
4. Mirror journals are regularly purged after all sync/async mirror members have received them.
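For the regular-purge option in point 3, a scheduled task can call the Ens.Purge API. A minimal sketch; the 7-day retention is an arbitrary example, and the exact parameter list should be checked against your version's class reference:

    // purge Ensemble messages older than 7 days, keeping message integrity
    set daysToKeep = 7
    set sc = ##class(Ens.Purge).PurgeMessagesByDate(daysToKeep, .purgedCount, 1)
    write "Purged: ", $get(purgedCount), !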
Yaron Munz · May 31, 2021
In my opinion, it is much better & faster to store binary files outside the database. I have an application with hundreds of thousands of images. To get faster access on a Windows O/S, they are stored in YYMM folders (to prevent having too many files in one folder, which might slow down access), while the file path & file name are of course stored inside the database for quick access (using indices). As those images are read many times, I did not want to "waste" the cache buffers on those reads, hence storing them outside the database was the perfect solution.
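A minimal sketch of the folder scheme, with hypothetical names (the base directory and fileName are placeholders):

    // derive a YYMM subfolder from today's date; $zdate(...,8) yields YYYYMMDD
    set yymm = $extract($zdate($horolog, 8), 3, 6)
    set dir = "C:\Images\" _ yymm _ "\"
    if '##class(%File).DirectoryExists(dir) {
        do ##class(%File).CreateDirectoryChain(dir)
    }
    set fullPath = dir _ fileName   // persist fullPath in an indexed property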
Yaron Munz · Nov 30, 2020
Using $Extract will be 17% faster (!):

    Set date = 20201121090000
    Set newDate = $E(date,1,4)_"-"_$E(date,5,6)_"-"_$E(date,7,8)_" "_$E(date,9,10)_":"_$E(date,11,12)_":"_$E(date,13,14)
Yaron Munz · Aug 28, 2020
Thank you Robert, excellent article. In fact, we use this approach when we need to gain speed for queries that run on indexes of hundreds of millions of records and need to check only a few items, so we avoid reading the "base" class by using [ DATA ... ]. In addition, to gain even more speed, which is so essential in huge queries, we use the (good, old) $order on the index global itself. This is much faster than normal SQL.
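A minimal sketch of walking an index global directly with $order; the global ^MyApp.PersonI and the "NameIdx" index are hypothetical, and the actual subscript layout should be taken from your class's storage definition:

    // iterate a standard index global: ^MyApp.PersonI("NameIdx", <value>, <id>)
    set name = ""
    for {
        set name = $order(^MyApp.PersonI("NameIdx", name))
        quit:name=""
        set id = ""
        for {
            set id = $order(^MyApp.PersonI("NameIdx", name, id), 1, node)
            quit:id=""
            // with [ DATA ... ] the extra fields live in 'node',
            // so no read of the data global is needed
        }
    }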
Yaron Munz · Aug 14, 2020
Hello, I recommend one of two ways:
1. Copy the *.DAT file to the new server: for this you need to dismount the DB on the source before copying (so you have downtime), mount it on the target, and do a merge between the two DBs/namespaces.
2. Do a (hot) backup of the DB that has the table = NO downtime (!), then copy the *.BCK file to the other server, restore it to a NEW DB, and merge. This is slower, but downtime is 0.
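The merge step itself can use an extended global reference. A minimal sketch with hypothetical names (^MyApp.TableD is the table's data global, and "NEWNS" is a namespace pointing at the restored DB):

    // copy the table's data global from the restored namespace into the current one
    merge ^MyApp.TableD = ^|"NEWNS"|MyApp.TableD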
Yaron Munz · Aug 14, 2020
Analytics (aka DeepSee) has it built in. Also... a great advantage for InterSystems itself (to earn more from license upgrades).
Yaron Munz · Aug 14, 2020
Agreed. It would be nice to have a connector for ASB (Azure Service Bus).
Yaron Munz · Oct 23, 2019
The last version of Cache/Ensemble that supports OpenVMS is 2017.1. IRIS does not support it.
Yaron Munz · Oct 21, 2019
Hello, I have done many migrations from DSM -> Cache in the past, so I can share some of the knowledge here. Basically there are two phases involved in the process:
1. Migrating the DB itself (i.e. globals). For this you can use a utility called %DSMCVT, which will read the globals from a DSM DB into a Cache/IRIS DB. Sometimes a 7/8-bit -> Unicode conversion is also part of the process, so appropriate code for that should be written.
2. Migrating the code, which, as mentioned before, is the complex part of the process, as you need to rewrite/modify the parts of your code that deal with O/S-specific issues:
a. File system - all device code is different (i.e. open/use/close commands) when working with files/directories.
b. Calls to VMS special variables/functions need to be converted to "standard" code.
c. Some systems rely on VMS "batch queues" for scheduling jobs or for printing reports. This should be addressed as well.
d. Some systems rely on VMS FTP functionality. In this case you need to implement the same functionality inside Cache/IRIS.
e. Tools, like "screen generator" tools that use non-standard CHUI ANSI codes for on-screen attributes, might need to be "adapted" to standard ANSI codes.
As mentioned before, the time/effort for such a migration highly depends on the nature of your application and the way it was designed. For example, if you have one central function to print a report (let's say it is in a global), then you need to modify only this function to have all your reports working on the new system. This also applies to "central" functions for reading/writing files, date calculations, etc.
Yaron Munz · Oct 14, 2019
Hello, your method returns %ArrayOfObjects, so you need to create and populate it within your code. Your code should look like this (the relevant changes are creating the %ArrayOfObjects, the SetAt call, and the class:query separator, which should be a colon, not a comma):

    set booksRS = ##class(%ResultSet).%New("Library.Book:BooksLoaned")
    set rsStatus = booksRS.Execute()
    set books = ##class(%ArrayOfObjects).%New()
    if $$$ISOK(rsStatus) {
        while booksRS.Next() {
            set book = ##class(Library.Book).%New()
            set book.Title = booksRS.Get("Title")
            set book.Author = booksRS.Get("Author")
            set book.Genre = booksRS.Get("Genre")
            set dbFriend = ##class(Library.Person).%OpenId(booksRS.Get("Friend"))
            set book.Friend = dbFriend
            set sc = books.SetAt(book, $increment(i))
        }
    } else {
        write !, "Error fetching books in GetLoanedBooks()"
    }
    do booksRS.Close()
    return books
Yaron Munz · Sep 25, 2019
Hi. There is a standalone Studio version 2019.2 on WRC that can connect to any version, including old ones. This is a Studio-only kit.
Yaron Munz · Sep 20, 2019
Hello, usually, when you have a BS calling a BP and then a BO, if all of them have a "pool size" of 1, messages will go in order. Another option, if you do not want to work with "queues", is to use the "InProc" invocation style: https://cedocs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EMONITOR_concepts_msg_invoc_style
Yaron Munz · Aug 30, 2019
Another improvement: instead of using ^oddCOM, use the class %Dictionary.CompiledClass and loop on its Properties. This will keep you compatible with future releases.
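A minimal sketch of that loop, using a hypothetical class MyApp.MyClass:

    // iterate the compiled properties of a class via %Dictionary.CompiledClass
    set cls = ##class(%Dictionary.CompiledClass).%OpenId("MyApp.MyClass")
    if $isobject(cls) {
        for i=1:1:cls.Properties.Count() {
            set prop = cls.Properties.GetAt(i)
            write prop.Name, " : ", prop.Type, !
        }
    }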
Yaron Munz · Aug 28, 2019
When you create a class whose IDs are maintained by the system (a standard class without modifying the storage), there is a save mechanism that gives a new ID to every new object being saved to the database. Of course, when multiple users are saving new data simultaneously, the ID a specific process got after a %Save() might not be the last one in the table.
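Illustrated with a hypothetical persistent class:

    // the ID is assigned by the system at save time
    set obj = ##class(MyApp.Person).%New()
    set sc = obj.%Save()
    write obj.%Id(), !   // under concurrency this is not necessarily the highest ID in the table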
Yaron Munz · Aug 27, 2019
Hello, you may find documentation on how to work with streams here: https://docs.intersystems.com/iris20191/csp/docbook/DocBook.UI.Page.cls?KEY=GOBJ_propstream
In BPL you can add a "code" element where you create and populate your stream with data, or you can add a "call" element to call a classmethod where you implement the code.
For example, to create a stream and write some data into it (if you need to have this stream data available for other components of the BPL, it is best to use a %context property for it):

    set stream = ##class(%Stream.GlobalCharacter).%New()
    do stream.Write("some text")
    do stream.Write(%request.anyproperty)
    do stream.Write(%context.anyproperty)

In your request/response messages to pass to the BO, you will have to add a stream property:

    Property MyProp As %Stream.GlobalCharacter;