Please excuse my ignorance. I am trying to identify which areas of the System Dashboard (for Caché 2010.2) are best to review for database performance issues. The system seems to be running slower than usual, and I am trying to find the best way to identify the cause.
The following are captures from the System Dashboard.
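For reference, this is the kind of check I have been running so far: a quick sample of the same figures the dashboard shows, taken from a terminal in the %SYS namespace. It is only a sketch and assumes the SYS.Stats.Dashboard class and the property names below (GloRefsPerSec, DiskReads, DiskWrites, CacheEfficiency) are available in this version; comparing a few samples taken minutes apart can show whether global references or disk I/O are out of line.

    set dash = ##class(SYS.Stats.Dashboard).Sample()
    write "Global refs/sec:  ", dash.GloRefsPerSec, !
    write "Disk reads:       ", dash.DiskReads, !
    write "Disk writes:      ", dash.DiskWrites, !
    write "Cache efficiency: ", dash.CacheEfficiency, !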
I recently discovered the Monitoring Activity Volume feature in IRIS and I was amazed by it. So I put it to work in one of our productions. It is nice how easy it is to set up, and how many possibilities come with it.
But there's something weird about the numbers. One of the BPs is reporting a processing time of more than 6 seconds:
When we write unit tests for Caché ObjectScript code using %UnitTest.TestCase, what is the best way to measure code coverage?
So, let's say my unit test hits all 10 lines of code of a method in a given class; coverage for that method should be 100%. But using line-by-line monitoring (%Monitor.System.LineByLine) I get the wrong percentage, because it also counts comment/documentation lines as code. So in practice we can never reach 100% code coverage with this API.
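The rough workaround I have been considering is to count only non-comment lines when computing the percentage. The sketch below assumes %Monitor.System.LineByLine has already been started for the routine, that its Result class query returns one row per line with the execution count in the first column, and that pRoutine is the compiled INT routine name; treat it as an outline rather than a finished utility.

    ClassMethod EstimateCoverage(pRoutine As %String) As %Numeric
    {
        set executable = 0, covered = 0, line = 0
        set stmt = ##class(%SQL.Statement).%New()
        do stmt.%PrepareClassQuery("%Monitor.System.LineByLine", "Result")
        set rs = stmt.%Execute(pRoutine)
        while rs.%Next() {
            set line = line + 1
            set src = $text(@("+"_line_"^"_pRoutine))
            set body = $zstrip(src, "<W")
            // skip blank lines and comment-only lines
            if (body="")||($extract(body)=";") { continue }
            set executable = executable + 1
            // first column of the Result row is assumed to be the execution count
            if +rs.%GetData(1) > 0 set covered = covered + 1
        }
        quit $select(executable = 0: 0, 1: covered / executable * 100)
    }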
I created a task in the Management Portal Task Manager that uses Ens.Util.Tasks.Purge. The task setup includes email notification for both the completion email and the error email.
This task is giving an error and no email is generated:
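To see the full error text, I have been running the same purge interactively in a terminal rather than through the Task Manager. This is only a sketch, with the property values assumed from the Ens.Util.Tasks.Purge class; adjust TypesToPurge and the retention to match the task definition.

    set task = ##class(Ens.Util.Tasks.Purge).%New()
    set task.TypesToPurge = "messages"      // assumed value; match your task settings
    set task.NumberOfDaysToKeep = 30
    set task.BodiesToo = 1                  // purge message bodies as well
    set task.KeepIntegrity = 1              // keep messages from incomplete sessions
    set sc = task.OnTask()
    if 'sc write $system.Status.GetErrorText(sc), !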
I've been trying for a while to come up with a set of tools to monitor the health of a mirror set and either email a nightly report on the status of the mirror or flag issues in real time: making sure that all the databases are caught up and that all the mirror members are online.
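This is roughly the shape of what I have so far, shown only as a sketch. It assumes the SYS.Mirror class in %SYS exposes a MemberStatusList query with MemberName and Status columns (check the exact names in your version), and the addresses and mail host below are placeholders.

    Class Demo.MirrorReport Extends %RegisteredObject
    {

    /// Nightly mirror status report (sketch).
    ClassMethod Run() As %Status
    {
        new $namespace
        set $namespace = "%SYS"
        set body = ""
        set stmt = ##class(%SQL.Statement).%New()
        set sc = stmt.%PrepareClassQuery("SYS.Mirror", "MemberStatusList")
        quit:$$$ISERR(sc) sc
        set rs = stmt.%Execute()
        while rs.%Next() {
            set body = body _ rs.%Get("MemberName") _ " : " _ rs.%Get("Status") _ $char(13,10)
        }
        // Email the report
        set msg = ##class(%Net.MailMessage).%New()
        set msg.From = "mirror-monitor@example.com"
        do msg.To.Insert("ops@example.com")
        set msg.Subject = "Nightly mirror status"
        do msg.TextData.Write(body)
        set smtp = ##class(%Net.SMTP).%New()
        set smtp.smtpserver = "mail.example.com"
        quit smtp.Send(msg)
    }

    }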
I need to develop a tool to determine what data is being consumed by a certain process, in order to gather all the data needed to build an automated test scenario.
For example, some user process will pull data from ^GLOBAL(1)="dataString", ^GLOBAL(2)="dataString2", ^GLOBAL1(1)="data1String", ^GLOBAL2(4)="data2String4". Amidst all the other data in these globals, I want to ignore everything that was not used by the user process and capture only the specific keys it used.
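The tracing part is the open question. Once the consumed references are known, capturing them into a fixture is the easy step; a sketch of that capture is below, where the reference list and the ^TestFixture global name are purely illustrative.

    set refs = $listbuild("^GLOBAL(1)", "^GLOBAL(2)", "^GLOBAL1(1)", "^GLOBAL2(4)")
    for i=1:1:$listlength(refs) {
        set ref = $list(refs, i)
        // copy just the nodes the process touched into a snapshot global
        if $data(@ref) set ^TestFixture(ref) = @ref
    }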
Can the Caché Monitor (^MONMGR) and System Monitor be configured to also send 'OK' messages? After the first "bad" email, you are left wondering whether things are still broken when, in fact, normal operation has been restored, sometimes within seconds.
Internally we use Splunk for monitoring applications and the network.
Does Ensemble have a way of exposing internal metrics and/or a way of exposing custom built metrics?
I've used Deepsee dashboards in the past to monitor Apache Tomcat/Apache Camel/hawtio using JMX rest calls. This is the other way around and ideally I'd like to expose metrics on:
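For the Splunk side, one option I have been sketching is a small custom REST service that an HTTP collector can poll. This is only a sketch: it assumes a version with dynamic objects (Caché 2016.2+/IRIS) and uses a count of today's Ens.MessageHeader rows as a stand-in for whatever metrics you actually want to expose.

    Class Demo.Metrics Extends %CSP.REST
    {

    XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
    {
    <Routes>
      <Route Url="/metrics" Method="GET" Call="GetMetrics" />
    </Routes>
    }

    /// Return a small JSON document of metrics; extend with queue depths,
    /// error counts, etc. as needed.
    ClassMethod GetMetrics() As %Status
    {
        set metrics = {}
        set metrics.timestamp = $zdatetime($ztimestamp, 3)
        // rough count of messages created today (assumes the Ens.MessageHeader table)
        set sql = "SELECT COUNT(*) AS Total FROM Ens.MessageHeader WHERE TimeCreated >= CURRENT_DATE"
        set rs = ##class(%SQL.Statement).%ExecDirect(, sql)
        do rs.%Next()
        set metrics.messagesToday = rs.%Get("Total")
        write metrics.%ToJSON()
        quit $$$OK
    }

    }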
I have a problem enabling SNMP monitoring on Caché.
I installed the NET-SNMP 5.7.2 package on HP-UX from the HP Software Center and enabled the agentX protocol in snmpd.cfg.
When I enabled full debugging on Caché and NET-SNMP, I discovered that the packets sent and received on the two sides are not the same; some bytes differ. I think the problem is the default charset for the TCP/IP connection, which on our system is set to CP1250 instead of the default RAW. The result is that Caché notifications are not visible from snmpwalk, etc.
In a Business Operation, how do we know which source sent the current request when multiple requests arrive at the same time?
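What I have tried so far is reading it from the message header of the request currently being processed. A sketch is below; it assumes ..%RequestHeader is populated inside the operation's handler (as it normally is) and that SourceConfigName on the header identifies the sending configuration item.

    Class Demo.ExampleOperation Extends Ens.BusinessOperation
    {

    /// Handler sketch: log which configuration item sent the current request.
    Method ProcessRequest(pRequest As Ens.Request, Output pResponse As Ens.Response) As %Status
    {
        set source = ..%RequestHeader.SourceConfigName
        $$$TRACE("Request received from: " _ source)
        set pResponse = ##class(Ens.Response).%New()
        quit $$$OK
    }

    XData MessageMap
    {
    <MapItems>
      <MapItem MessageType="Ens.Request">
        <Method>ProcessRequest</Method>
      </MapItem>
    </MapItems>
    }

    }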
My group needs to be able to monitor items/tasks and let a non-Management-Portal user see the monitoring. Is it possible to run DeepSee queries on production items? I feel like I should not have to recreate the production environment or the Task Manager just to query the items that are running and their states (like "successful" or "send email").
Also, I need to log custom events for each task, and I'm running into difficulties with the Task Manager in this regard; hence the question about using the production instead, but querying it.
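This is as far as I have gotten without the portal, shown only as a sketch. It assumes the Ens_Config.Item SQL projection and Ens.Director.GetProductionStatus() behave as in the class reference, and "MyPkg.Production" is a placeholder name.

    set sc = ##class(Ens.Director).GetProductionStatus(.prodName, .state)
    write "Production: ", prodName, "  state code: ", state, !
    set sql = "SELECT Name, Enabled FROM Ens_Config.Item WHERE Production = ?"
    set rs = ##class(%SQL.Statement).%ExecDirect(, sql, "MyPkg.Production")
    while rs.%Next() {
        write rs.%Get("Name"), "  enabled=", rs.%Get("Enabled"), !
    }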
Has anyone done any kind of integration with Dynatrace, which is a JVM transaction monitoring tool? Our organization uses this extensively with our Java and .Net applications and we wanted to know if it is even possible.
Alerts are messages generated by production components. InterSystems IRIS automatically writes the alerts to a log file and sends them to the production component named Ens.Alert. If your production does not have a component named Ens.Alert, then InterSystems IRIS writes alerts to the log file but does not send them to any component. The component named Ens.Alert can be of any class. The most frequently used classes for Ens.Alert are:
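Whatever class you choose, a custom alert processor is also possible. The minimal sketch below shows the idea: any business host named "Ens.Alert" in the production receives Ens.AlertRequest messages, and here it just logs the text, though it could route to email, SMS, and so on.

    Class Demo.AlertProcessor Extends Ens.BusinessProcess
    {

    Method OnRequest(pRequest As Ens.AlertRequest, Output pResponse As Ens.Response) As %Status
    {
        $$$LOGINFO("Alert from " _ pRequest.SourceConfigName _ ": " _ pRequest.AlertText)
        quit $$$OK
    }

    }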
Can someone direct me to where in the documentation we can find how consumption may be calculated for global storage?
Caché Version: 2010.1
Operating System: HP OpenVMS 8.4
EDIT: After receiving some responses, it seems I was unclear in my initial inquiry. I am looking to determine our rate of storage consumption, but I am having some difficulty doing that.
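What I have been attempting is to sample the database file sizes on a schedule and derive the rate from successive samples. A sketch of that daily Task Manager job is below; ^DBSizeHistory and the directory list are placeholders, and %File.GetFileSize simply returns the file size in bytes.

    Class Demo.DBSizeHistory Extends %RegisteredObject
    {

    /// Record each database file's size once a day (run from the Task Manager).
    ClassMethod RecordSizes()
    {
        set today = $zdate($horolog, 3)
        set dirs = $listbuild("/cache/mgr/user/", "/cache/mgr/appdata/")
        for i=1:1:$listlength(dirs) {
            set dir = $list(dirs, i)
            set bytes = ##class(%File).GetFileSize(dir _ "CACHE.DAT")
            set ^DBSizeHistory(dir, today) = bytes
        }
    }

    }

Subtracting one day's sample from the next gives bytes consumed per day, which can then be averaged over a week or a month.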
I'm very new to InterSystems, so I'm seeking some suggestions here. My query: I'd like to set up monitoring of the local IRIS database size using SolarWinds, preferably via REST API connections. Could someone help with this? Thank you in advance.
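From what I have gathered so far, recent IRIS versions expose a built-in REST endpoint at /api/monitor/metrics that SolarWinds could poll directly. If a custom payload is needed instead, something like the sketch below could back a small REST service; it assumes the Config.Databases table and the SYS.Database class (with its Size property, in MB) in %SYS behave as in the class reference.

    new $namespace
    set $namespace = "%SYS"
    set result = []
    set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT Name, Directory FROM Config.Databases")
    while rs.%Next() {
        set db = ##class(SYS.Database).%OpenId(rs.%Get("Directory"))
        continue:db=""
        do result.%Push({"name": (rs.%Get("Name")), "sizeMB": (db.Size)})
    }
    write result.%ToJSON()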
We want to monitor an Ensemble production and send custom email alerts based on some rules. For example, if we normally receive 1 message per second and suddenly receive 5 or more messages per second, we want to send an email alert. And if tomorrow we no longer want this check, we want to be able to disable it through Ensemble business rules.
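One approach we have been sketching is to run the check as a scheduled business service instead of a rule: it counts the messages created in the last minute and sends an Ens.AlertRequest to Ens.Alert when a threshold is exceeded. Assumptions in this sketch: the Ens.MessageHeader table, $SYSTEM.SQL.DATEADD, and an Ens.Alert component already configured in the production; the 300-per-minute threshold (roughly 5 messages per second) is only an example, and disabling the check would simply mean disabling this item in the production.

    Class Demo.RateWatcher Extends Ens.BusinessService
    {

    Parameter ADAPTER = "Ens.InboundAdapter";

    /// Runs on each CallInterval tick (e.g. every 60 seconds).
    Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
    {
        set since = $system.SQL.DATEADD("second", -60, $zdatetime($ztimestamp, 3))
        set sql = "SELECT COUNT(*) AS C FROM Ens.MessageHeader WHERE TimeCreated >= ?"
        set rs = ##class(%SQL.Statement).%ExecDirect(, sql, since)
        do rs.%Next()
        quit:rs.%Get("C")<=300 $$$OK
        set alert = ##class(Ens.AlertRequest).%New()
        set alert.AlertText = "Message rate spike: " _ rs.%Get("C") _ " messages in the last minute"
        quit ..SendRequestAsync("Ens.Alert", alert)
    }

    }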
I installed the Community edition of InterSystems IRIS on a large AWS EC2 instance to do some testing. I installed SAM, and when I try to "Add a new cluster" I receive the following: "ERROR #5005: Cannot open file '/config/prometheus/isc_tmp_yml_file.yml'"