Hey everyone.

I have noticed that my backup mirror is warning about MirrorDatabaseLatencyTime (the time is 3000 ms, which is exactly the warnvalue of 3000). While I look into what may be causing this latency between the two servers, I was wondering whether reducing the size of the journal files would improve this value in any way.

My assumption is that reducing the file size would mean journal files are created more frequently, but that each smaller file would be faster to transfer and apply.

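For reference, the journal file size threshold can be read and changed programmatically; a minimal sketch, run in the %SYS namespace, assuming the relevant Config.Journal property is FileSizeLimit (in MB) as in recent Caché versions:

; run in the %SYS namespace
Set sc = ##class(Config.Journal).Get(.p)
Write "Current journal file size limit (MB): ", p("FileSizeLimit"), !
Set p("FileSizeLimit") = 512             ; hypothetical smaller limit, in MB
Set sc = ##class(Config.Journal).Modify(.p)

Whether smaller files actually move the latency metric is worth verifying separately, since mirror journal data is transferred continuously rather than one completed file at a time.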

Hi,

We are using Caché 2017.2.1. I would like to retrieve data from the journal for a killed global. Let's say we have a global ^EMP(123) with data and some child nodes, and it has been killed with the KILL command for some reason; we don't know who executed this or when. My questions are below.

1) Can we get back the data of the killed global from the journal files? Is it possible or not?

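If journaling was enabled for that database and the relevant journal file has not been purged, the KILL records can at least be located, along with the process and time that produced them. A minimal sketch, where the journal path is a placeholder and property names such as ProcessID and OldValue should be checked against the %SYS.Journal.SetKillRecord class reference for your version:

Set jrn = ##class(%SYS.Journal.File).%OpenId("/path/to/journal/file")   ; placeholder path
Set rec = jrn.FirstRecord
While $IsObject(rec) {
    ; SET and KILL records are instances of %SYS.Journal.SetKillRecord
    If rec.%IsA("%SYS.Journal.SetKillRecord"), rec.TypeName = "KILL", rec.GlobalNode [ "EMP" {
        Write rec.TimeStamp, "  pid ", rec.ProcessID, "  ", rec.GlobalNode, !
        ; rec.OldValue holds the previous node value in the cases where it was journaled
    }
    Set rec = rec.Next
}

For actually restoring journaled data, the ^JRNRESTO journal restore utility is the usual route.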

Currently we are using HealthShare 2015.2.2 and are looking to upgrade to the latest version in the next month.

From what I understand, we have to upgrade to 2017 or 2018 before going to 2019.1, since 2019.1 is on the IRIS platform.

In trying to outline my steps in the upgrade process, I came up with a couple of questions:

  • Can a Full System Backup from 2015.2.2 be restored into 2019.1?
  • Do I have to restore the 2015.2.2 backup into 2017 or 2018, and then do the 2019.1 conversion?

Has anyone had experience with this, or should I open a ticket with the WRC?


I'm a DBA supporting Caché databases on AIX. I have written shell scripts for monitoring journaling status, database sizes, and the license expiration date.

We recently got a new Caché instance on Windows. I'm curious whether anyone has written database monitoring scripts on Windows using PowerShell or any other scripting language.

If yes, please share the details.


Thanks & Regards,

Bharath Nunepalli.

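A minimal ObjectScript sketch of the kind of checks described above (current journal file and database size); something like this could be saved as a routine and driven from PowerShell or Task Scheduler through the instance's csession executable. The class and method names are from memory and should be verified against the class reference for your version, and the database directory is a placeholder.

; run in the %SYS namespace
Write "Current journal file: ", ##class(%SYS.Journal.System).GetCurrentFileName(), !
Set db = ##class(SYS.Database).%OpenId("C:\InterSystems\Cache\mgr\user\")   ; placeholder directory
If $IsObject(db) Write "Database size (MB): ", db.Size, !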

Hello, community!

I've stumbled on some unexpected behavior and decided to check with you whether this is normal. Basically, I'm rebuilding indices and the result is not journaled (which leads to missing indices on the shadow server).

The $ZV is "Cache for UNIX (Red Hat Enterprise Linux for x86-64) 2015.2.1 (Build 705U) Mon Aug 31 2015 16:53:38 EDT"

I have an example class:

Class tmp.A Extends %Persistent
{

Index IP1 On P1;

Property P1 As %String;

}

For example, there is one object which has P1 = 1, so

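A minimal way to reproduce and check this (a sketch; it assumes default storage, where the index global would be ^tmp.AI, and journaling enabled for the database):

Set obj = ##class(tmp.A).%New()
Set obj.P1 = 1
Write obj.%Save(), !                  ; stores the object and its IP1 index entry
Kill ^tmp.AI                          ; wipe the index global (name per default storage)
Do ##class(tmp.A).%BuildIndices()     ; rebuild the index
; then inspect the current journal file (e.g. with ^JRNDUMP) to see whether
; the index SETs produced by %BuildIndices were journaled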

I'm working on a task where I need to apply journal file records to another database. I can't use the Journal.Restore class methods, as I need to perform some data transformation, so I'm reading the journal file record by record using the %SYS.Journal.Record API.

It seems that there are only a few journal records that I need to process, namely:

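A sketch of such a record-by-record loop, where the journal path, the "SRC" global name, and the transformation step are all placeholders:

Set jrn = ##class(%SYS.Journal.File).%OpenId("/path/to/journal/file")
Set rec = jrn.FirstRecord
While $IsObject(rec) {
    ; data-carrying SET and KILL records are instances of %SYS.Journal.SetKillRecord
    If rec.%IsA("%SYS.Journal.SetKillRecord"), rec.GlobalNode [ "SRC" {
        If rec.TypeName = "SET" {
            Set value = rec.NewValue
            ; ...transform value / remap the node here, then SET it in the target database...
        }
        ElseIf rec.TypeName = "KILL" {
            ; ...apply the corresponding KILL to the target database...
        }
    }
    Set rec = rec.Next
}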

Currently, we are running 2014.1 on two different servers (OpenVMS, RHEL). The plan is to transition from OpenVMS to RHEL, but our Write Daemon is in a Troubled state on both servers.

On the OpenVMS server, we have a WIJ file that is 26 GB and can grow to 40 GB (the size of the database cache). Since it hasn't grown to 40 GB, we don't believe the size of the WIJ file is the issue.

What else should we be looking at regarding the performance of the Write Daemon?

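As a starting point for gathering evidence, the standard performance utilities show how busy the write daemon actually is; a sketch, run from the %SYS namespace:

Do ^GLOSTAT      ; global and physical I/O statistics
Do ^mgstat       ; interval metrics, including write daemon queue size (WDQsz) and phase (WDphase)
Do ^pButtons     ; collects a full performance report over a chosen interval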

I'm purging a lot of management data from an Ensemble production, which is creating hundreds of GB of journal files. Has anybody succeeded in disabling journaling for an Ensemble purge? The user interface doesn't have an option for this, but I'm thinking you might be able to identify the process and externally disable journaling on it.

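One possibility to test outside production: run the purge from a job that disables journaling for itself using the %NOJRN routine. This is only a sketch; the Ens.MessageHeader Purge arguments shown are assumptions to verify against your Ensemble version, non-journaled changes will not reach mirrors or shadows and cannot be rolled back, and it is worth checking whether KILL operations (which a purge mostly consists of) are still journaled even with %NOJRN in effect.

; run in the production's namespace
Do DISABLE^%NOJRN                                               ; journaling off for this process only
Set sc = ##class(Ens.MessageHeader).Purge(.deleted, 30, 1, 1)   ; assumed arguments: days to keep, keep integrity, purge bodies too
Do ENABLE^%NOJRN                                                ; journaling back on for this process
Write "Purged ", deleted, " message headers", !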