Article
Anastasia Dyubaylo · Apr 7, 2023
Hi Community!
We know that sometimes you may need to find info or people on our Community! To make it easier, here is a post on how to use the different kinds of searches:
find something on the Community
find something in your own posts
find a member by name
➡️ Find some info on the Community
Use the search bar at the top of every page - a quick DC search. Enter a word or phrase, or a tag, or a DC member name, and you'll get the results in a drop-down list:
Also, you can press Enter or click the magnifying glass icon, and the search results page will open.
On this page, you can refine your results:
You can choose whether to search only in Questions, Articles, etc., or only in your own posts.
You can look for posts of a particular member.
You can look for posts with specific tags.
You can set a time range and sort the results by date or relevance.
➡️ Find something in your own posts
Go to your profile, choose Posts on the left-hand side, and after the page refreshes just write what you want to find in the search bar:
➡️ Find a Community member
If you know their name or e-mail, open the Menu in the top left:
and click on Members:
This will open a table with all the members of the Community; at the top of it there is a Search box:
Hope you'll find these explanations useful.
Happy searching! ;)
Article
Mikhail Khomenko · May 15, 2017
Prometheus is one of the monitoring systems adapted for collecting time-series data.
Its installation and initial configuration are relatively easy. The system has a built-in graphic subsystem called PromDash for visualizing data, but the developers recommend using a free third-party product called Grafana. Prometheus can monitor a lot of things (hardware, containers, various DBMSs), but in this article I would like to look at the monitoring of a Caché instance (to be exact, it will be an Ensemble instance, but the metrics will come from Caché). If you are interested, read along.
In our extremely simple case, Prometheus and Caché will live on a single machine (Fedora Workstation 24 x86_64). Caché version:
%SYS>write $zv
Cache for UNIX (Red Hat Enterprise Linux for x86-64) 2016.1 (Build 656U) Fri Mar 11 2016 17:58:47 EST
Installation and configuration
Let's download a suitable Prometheus distribution package from the official site and save it to the /opt/prometheus folder.
Unpack the archive, modify the template config file according to our needs, and launch Prometheus. By default, Prometheus displays its logs right in the console, so we will redirect its activity records to a log file.
Launching Prometheus
# pwd
/opt/prometheus
# ls
prometheus-1.4.1.linux-amd64.tar.gz
# tar -xzf prometheus-1.4.1.linux-amd64.tar.gz
# ls
prometheus-1.4.1.linux-amd64  prometheus-1.4.1.linux-amd64.tar.gz
# cd prometheus-1.4.1.linux-amd64/
# ls
console_libraries  consoles  LICENSE  NOTICE  prometheus  prometheus.yml  promtool
# cat prometheus.yml
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.

scrape_configs:
  - job_name: 'isc_cache'
    metrics_path: '/metrics/cache'
    static_configs:
    - targets: ['localhost:57772']
# ./prometheus > /var/log/prometheus.log 2>&1 &
[1] 7117
# head /var/log/prometheus.log
time="2017-01-01T09:01:11+02:00" level=info msg="Starting prometheus (version=1.4.1, branch=master, revision=2a89e8733f240d3cd57a6520b52c36ac4744ce12)" source="main.go:77"
time="2017-01-01T09:01:11+02:00" level=info msg="Build context (go=go1.7.3, user=root@e685d23d8809, date=20161128-09:59:22)" source="main.go:78"
time="2017-01-01T09:01:11+02:00" level=info msg="Loading configuration file prometheus.yml" source="main.go:250"
time="2017-01-01T09:01:11+02:00" level=info msg="Loading series map and head chunks..." source="storage.go:354"
time="2017-01-01T09:01:11+02:00" level=info msg="23 series loaded." source="storage.go:359"
time="2017-01-01T09:01:11+02:00" level=info msg="Listening on :9090" source="web.go:248"
The prometheus.yml configuration is written in YAML, which doesn't tolerate tab characters, so use spaces only. We have already mentioned that metrics will be downloaded from http://localhost:57772, and we'll be sending requests to /metrics/cache (the name of the application is arbitrary), i.e. the destination address for collecting metrics will be http://localhost:57772/metrics/cache. A "job=isc_cache" label will be added to each metric. A label is, very roughly, the equivalent of WHERE in SQL. In our case it isn't needed, but it becomes useful with more than one server: for example, server (and/or instance) names can be stored in labels, which you can then use to parameterize queries for drawing graphs.
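For example, once several instances are scraped, a label selector in the Prometheus query language can restrict a graph to a single job; an illustrative query (the metric name matches those exported later in this article):

isc_cache_dashboard_glo_refs_per_sec{job="isc_cache"}

Let's make sure that Prometheus is working (we can see the port it's listening on in the output above – 9090):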
A web interface opens, which means that Prometheus is working. However, it doesn't see the Caché metrics yet (let's check by clicking Status → Targets):
Preparing metrics
Our task is to make metrics available to Prometheus in a suitable format at http://localhost:57772/metrics/cache. We'll be using the REST capabilities of Caché because of their simplicity. Note that Prometheus only "understands" numeric metrics, so we will not export string ones. To obtain the metrics, we will use the API of the SYS.Stats.Dashboard class; Caché itself uses these metrics to display the System toolbar:
Example of the same in the Terminal:
%SYS>set dashboard = ##class(SYS.Stats.Dashboard).Sample()

%SYS>zwrite dashboard
dashboard=<OBJECT REFERENCE>[2@SYS.Stats.Dashboard]
+----------------- general information ---------------
|      oref value: 2
|      class name: SYS.Stats.Dashboard
| reference count: 2
+----------------- attribute values ------------------
|  ApplicationErrors = 0
|        CSPSessions = 2
|    CacheEfficiency = 2385.33
|      DatabaseSpace = "Normal"
|          DiskReads = 14942
|         DiskWrites = 99278
|       ECPAppServer = "OK"
|      ECPAppSrvRate = 0
|      ECPDataServer = "OK"
|     ECPDataSrvRate = 0
|            GloRefs = 272452605
|      GloRefsPerSec = "70.00"
|            GloSets = 42330792
|     JournalEntries = 16399816
|       JournalSpace = "Normal"
|      JournalStatus = "Normal"
|         LastBackup = "Mar 26 2017 09:58AM"
|     LicenseCurrent = 3
|  LicenseCurrentPct = 2
. . .
The USER namespace will be our sandbox. To begin with, let's create a REST application /metrics. To add some very basic security, let's protect the endpoint with a password and associate the web application with a resource; let's call it PromResource. We need to disable public access to the resource, so let's do the following:
%SYS>write ##class(Security.Resources).Create("PromResource", "Resource for Metrics web page", "")
1
Our web app settings:
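The same settings can also be applied programmatically instead of through the portal. Here is a minimal sketch, assuming the Security.Applications property names of this Caché version (AutheEnabled = 32 selects password authentication; the dispatch class my.Metrics is defined later in this article):

%SYS>set props("NameSpace") = "USER"
%SYS>set props("DispatchClass") = "my.Metrics"
%SYS>set props("AutheEnabled") = 32
%SYS>set props("Resource") = "PromResource"
%SYS>write ##class(Security.Applications).Create("/metrics", .props)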
We will also need a user with access to this resource. The user should also be able to read from our database (USER in our case) and save data to it. Apart from this, the user will need read rights to the CACHESYS system database, since we will be switching to the %SYS namespace later in the code. We'll follow the standard scheme, i.e. create a PromRole role with these rights and then a PromUser user assigned to this role. For the password, let's use "Secret":
%SYS>write ##class(Security.Roles).Create("PromRole","Role for PromResource","PromResource:U,%DB_USER:RW,%DB_CACHESYS:R")
1
%SYS>write ##class(Security.Users).Create("PromUser","PromRole","Secret")
1
It is this PromUser user that we will use for authentication in the Prometheus config. Once done, we'll make Prometheus re-read its config by sending a SIGHUP signal to its process.
A safer config
# cat /opt/prometheus/prometheus-1.4.1.linux-amd64/prometheus.yml
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.

scrape_configs:
  - job_name: 'isc_cache'
    metrics_path: '/metrics/cache'
    static_configs:
    - targets: ['localhost:57772']
    basic_auth:
      username: 'PromUser'
      password: 'Secret'
#
# kill -SIGHUP $(pgrep prometheus)  # or kill -1 $(pgrep prometheus)
Prometheus can now successfully pass authentication for using the web application with metrics.
Metrics will be provided by the my.Metrics request processing class. Here is the implementation:
Class my.Metrics Extends %CSP.REST
{
Parameter ISCPREFIX = "isc_cache";
Parameter DASHPREFIX = {..#ISCPREFIX_"_dashboard"};
XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/cache" Method="GET" Call="getMetrics"/>
</Routes>
}
/// Output should obey the Prometheus exposition formats. Docs:
/// https://prometheus.io/docs/instrumenting/exposition_formats/
///
/// The protocol is line-oriented. A line-feed character (\n) separates lines.
/// The last line must end with a line-feed character. Empty lines are ignored.
ClassMethod getMetrics() As %Status
{
set nl = $c(10)
do ..getDashboardSample(.dashboard)
do ..getClassProperties(dashboard.%ClassName(1), .propList, .descrList)
for i=1:1:$ll(propList) {
set descr = $lg(descrList,i)
set propertyName = $lg(propList,i)
set propertyValue = $property(dashboard, propertyName)
// Prometheus is a time-series database,
// so if a metric is empty (for example, the backup metrics) or non-numeric,
// we just omit it.
if ((propertyValue '= "") && ('$match(propertyValue, ".*[-A-Za-z ]+.*"))) {
set metricsName = ..#DASHPREFIX_..camelCase2Underscore(propertyName)
set metricsValue = propertyValue
// Write a description (help) for each metric
// in the format that Prometheus requires.
// Multiline descriptions have to be joined into one string.
write "# HELP "_metricsName_" "_$replace(descr,nl," ")_nl
write metricsName_" "_metricsValue_nl
}
}
write nl
quit $$$OK
}
ClassMethod getDashboardSample(Output dashboard)
{
new $namespace
set $namespace = "%SYS"
set dashboard = ##class(SYS.Stats.Dashboard).Sample()
}
ClassMethod getClassProperties(className As %String, Output propList As %List, Output descrList As %List)
{
new $namespace
set $namespace = "%SYS"
set propList = "", descrList = ""
set properties = ##class(%Dictionary.ClassDefinition).%OpenId(className).Properties
for i=1:1:properties.Count() {
set property = properties.GetAt(i)
set propList = propList_$lb(property.Name)
set descrList = descrList_$lb(property.Description)
}
}
/// Converts a metric name in camel case to a lower-case name with underscores.
/// Sample: input = WriteDaemon, output = _write_daemon
ClassMethod camelCase2Underscore(metrics As %String) As %String
{
set result = metrics
set regexp = "([A-Z])"
set matcher = ##class(%Regex.Matcher).%New(regexp, metrics)
while (matcher.Locate()) {
set result = matcher.ReplaceAll("_"_"$1")
}
// To lower case
set result = $zcvt(result, "l")
// _e_c_p (_c_s_p) to _ecp (_csp)
set result = $replace(result, "_e_c_p", "_ecp")
set result = $replace(result, "_c_s_p", "_csp")
quit result
}
}
Let's use the console to check that our efforts have not been in vain (the --silent flag is added so that curl doesn't clutter the output with its progress bar):
# curl --user PromUser:Secret --silent -XGET 'http://localhost:57772/metrics/cache' | head -20
# HELP isc_cache_dashboard_application_errors Number of application errors that have been logged.
isc_cache_dashboard_application_errors 0
# HELP isc_cache_dashboard_csp_sessions Most recent number of CSP sessions.
isc_cache_dashboard_csp_sessions 2
# HELP isc_cache_dashboard_cache_efficiency Most recently measured cache efficiency (Global references / (physical reads + writes))
isc_cache_dashboard_cache_efficiency 2378.11
# HELP isc_cache_dashboard_disk_reads Number of physical block read operations since system startup.
isc_cache_dashboard_disk_reads 15101
# HELP isc_cache_dashboard_disk_writes Number of physical block write operations since system startup
isc_cache_dashboard_disk_writes 106233
# HELP isc_cache_dashboard_ecp_app_srv_rate Most recently measured ECP application server traffic in bytes/second.
isc_cache_dashboard_ecp_app_srv_rate 0
# HELP isc_cache_dashboard_ecp_data_srv_rate Most recently measured ECP data server traffic in bytes/second.
isc_cache_dashboard_ecp_data_srv_rate 0
# HELP isc_cache_dashboard_glo_refs Number of Global references since system startup.
isc_cache_dashboard_glo_refs 288545263
# HELP isc_cache_dashboard_glo_refs_per_sec Most recently measured number of Global references per second.
isc_cache_dashboard_glo_refs_per_sec 273.00
# HELP isc_cache_dashboard_glo_sets Number of Global Sets and Kills since system startup.
isc_cache_dashboard_glo_sets 44584646
We can now check the same in the Prometheus interface:
And here is the list of our metrics:
We won't dwell on viewing the metrics in Prometheus itself. You can select the necessary metric and click the "Execute" button, then select the "Graph" tab to see the graph (shown here: cache efficiency):
Visualization of metrics
For visualization purposes, let's install Grafana. For this article, I chose installation from a tarball, though there are other installation options, from packages to a container. Let's perform the following steps (after creating the /opt/grafana folder and switching to it); a sketch of the commands is shown below:
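A sketch of those steps, assuming a Grafana 4.x tarball of that era (the download URL is a placeholder; take the current link from the Grafana downloads page):

# cd /opt/grafana
# wget <grafana-tarball-url>
# tar -xzf grafana-*.linux-x64.tar.gz
# cd grafana-*/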
Let's leave the config unchanged for now. As the last step, we launch Grafana in the background. We'll be saving Grafana's log to a file, just as we did with Prometheus:
# ./bin/grafana-server > /var/log/grafana.log 2>&1 &
By default, Grafana’s web interface is accessible via port 3000. Login/password: admin/admin.
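If port 3000 is already in use, it can be changed in the tarball's conf/custom.ini, which overrides conf/defaults.ini (a minimal sketch, assuming the Grafana 4.x config layout):

[server]
http_port = 3000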
Detailed instructions on making Prometheus work with Grafana are available here. In short, we need to add a new Data Source of the Prometheus type and select direct or proxy access:
Once done, we need to add a dashboard with the necessary panels. A test sample of the dashboard is publicly available, along with the code of the metrics collection class. A dashboard can simply be imported into Grafana (Dashboards → Import):
We'll get the following after import:
Save the dashboard:
The time range and refresh period can be selected in the top right corner:
Examples of monitoring types
Let's test the monitoring of calls to globals:
USER>for i=1:1:1000000 {set ^prometheus(i) = i}

USER>kill ^prometheus
We can see that the number of references to globals per second has increased, while cache efficiency dropped (the ^prometheus global hadn't been cached yet):
Let's check our license usage. To do this, let's create a primitive CSP page called PromTest.csp in the USER namespace:
<html>
<head><title>Prometheus Test Page</title></head>
<body>Monitoring works fine!</body>
</html>
And request it a number of times (assuming that the /csp/user application is not password-protected):
# ab -n77 http://localhost:57772/csp/user/PromTest.csp
We'll see the following picture for license usage:
Conclusions
As we can see, implementing monitoring functionality is not hard at all. Even after these few initial steps we can obtain important information about the state of the system, such as license usage, the efficiency of global caching, and application errors. We used the SYS.Stats.Dashboard class for this tutorial, but other classes in the SYS, %SYSTEM, and %SYS packages also deserve attention. You can also write your own class that supplies custom metrics for your application – for instance, the number of documents of a particular type; a sketch of such a class follows below. Some useful metrics will eventually be compiled into a separate template for Grafana.
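As an illustration of that last point, here is a minimal sketch of such a custom metrics class, in the same style as my.Metrics above; the global ^myapp.Docs and the metric name are invented for the example:

Class my.AppMetrics Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/app" Method="GET" Call="getAppMetrics"/>
</Routes>
}

/// Expose an application-level counter in the Prometheus exposition format.
ClassMethod getAppMetrics() As %Status
{
    set nl = $c(10)
    // ^myapp.Docs("invoice") is a hypothetical counter maintained by the application
    set count = $get(^myapp.Docs("invoice"), 0)
    write "# HELP myapp_invoice_documents Number of invoice documents."_nl
    write "myapp_invoice_documents "_count_nl
    write nl
    quit $$$OK
}

}

Registered as a web application in the same way as /metrics, it would be scraped by adding one more job to prometheus.yml.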
To be continued
If you are interested in learning more about this, I will write more on the subject. Here are my plans:
Preparing a Grafana template with metrics for the logging daemon. It would be nice to make some sort of a graphical equivalent of the ^mgstat tool – at least for some of its metrics.
Password protection for web applications is good, but it would be nice to verify the possibility of using certificates.
Use of Prometheus, Grafana and some exporters for Prometheus as Docker containers.
Use of discovery services for automatically adding new Caché instances to the Prometheus monitoring list. It's also where I'd like to demonstrate (in practice) how convenient Grafana and its templates are. This is something like dynamic panels, where metrics for a particular selected server are shown, all on the same Dashboard.
Prometheus Alert Manager.
Prometheus configuration settings related to the duration of data storage, as well as possible optimizations for systems with a large number of metrics and a short statistics collection interval.
Various subtleties and nuances that come up along the way.
Links
During the preparation of this article, I visited a number of useful sites and watched quite a few videos:
Prometheus project website
Grafana project website
The blog of Prometheus developer Brian Brazil
Tutorial on DigitalOcean
Some videos from Robust Perception
Many videos from a conference devoted to Prometheus
Thank you for reading all the way down to this line!

💡 This article is considered an InterSystems Data Platform Best Practice.

Hi Mikhail, very nice! And very useful. Have you done any work on templating, e.g. so one or more systems may be selected to be displayed on one dashboard?

Hi, Murray! Thanks for your comment! I'll try to describe templating soon; I should add it to my plan. For now, an article about ^mgstat is almost ready (in my head), so stay tuned for more.

Hi, Murray! You can read the continuation of the article here; templates are used there.

Hi Mikhail, very nice! I am configuring the process here. Where are the metric values stored? What kind of metrics can be added via SetSensor? Only counters, or also gauges and histograms?

Metrics in this approach are stored in Prometheus. Since Prometheus is a time-series database, you can store any numeric metric there, either a counter or a gauge.

Hmm... OK, I understand that part, but if I extend %SYS.Monitor.SAM.Abstract (https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_rest) and the SetTimer method, do you know where it is stored?

I'm not sure, but I think the SAM implementation is based on the System Monitor (https://docs.intersystems.com/iris20194/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_healthmon) in some fashion. Could you clarify your task? Do you want to know the exact name of the global where the samples are stored? For what? The link you've shared describes how to expose a custom numeric metric that can be read by Prometheus, for instance, and then stored there.
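Regarding the SetSensor question above: the linked documentation describes building a custom sensor class for the IRIS /api/monitor REST service. A rough sketch of that pattern follows (the class and method names mirror that documentation page, but the metric name and global are invented, and exact signatures may vary by IRIS version):

/// Hypothetical custom sensor class in the pattern described in the
/// GCM_rest documentation page linked above.
Class my.SamSensors Extends %SYS.Monitor.SAM.Abstract
{

/// Prefix prepended to the metric names this class reports.
Parameter PRODUCT = "myapp";

/// Called by the monitor framework; register each numeric metric here.
Method GetSensors() As %Status
{
    // "orders_waiting" and ^myapp("queue","count") are invented for the example;
    // SetSensor records a named numeric value (a counter or a gauge).
    do ..SetSensor("orders_waiting", $get(^myapp("queue","count"), 0))
    quit $$$OK
}

}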
Announcement
Anastasia Dyubaylo · Mar 23, 2018
Hi Community!

Please welcome the InterSystems Developer Community SubReddit!

Here you will find the most interesting announcements from the InterSystems Developer Community: useful articles, discussions, announcements, and events.

A little about Reddit: it is a social news aggregation, web content rating, and discussion website.

See what the DC SubReddit looks like:

Subscribe to the DC SubReddit and maybe it will become one of your favorites!
Announcement
Anastasia Dyubaylo · Oct 16, 2017
Hi, Community!

We've launched a Facebook Page for the InterSystems Developer Community!

Follow it to keep up with what happens on the InterSystems Developer Community: useful articles, answers, announcements, hot discussions, and best practices based on InterSystems IRIS, Caché, Ensemble, DeepSee, and iKnow.

The permalink to the DC Facebook Page can also be found on the right of every DC page:

Follow us now to be in the know, and don't forget to like the page!
Announcement
Michelle Spisak · Oct 17, 2017
Learning Services Live Webinars are back! At this year's Global Summit, InterSystems debuted the InterSystems IRIS Data Platform™, a single, comprehensive product that provides capabilities spanning data management, interoperability, transaction processing, and analytics. InterSystems IRIS sets a new level of performance for the rapid development and deployment of data-rich and mission-critical applications. Now is your chance to learn more!

Joe Lichtenberg, Director of Product and Industry Marketing for InterSystems, presents "Introducing InterSystems IRIS Data Platform", a high-level description of the business drivers and capabilities behind InterSystems IRIS.

The webinar recording is available now!
Announcement
Anastasia Dyubaylo · Oct 26, 2017
Hi, Community!
Please find a new session recording from Global Summit 2017:
An InterSystems Guide to the Data Galaxy
This video explains the concept of an open analytics platform for the enterprise.
@Benjamin.Deboe describes how InterSystems technology pairs with industry standards and open-source technology to provide a solid platform for analytics.
You can also see the additional resources here.
Don't forget to subscribe to the InterSystems Developers YouTube Channel
Enjoy!
Article
David Loveluck · Nov 8, 2017
Application Performance Monitoring: Tools in InterSystems Technology

Back in August, in preparation for Global Summit, I published a brief explanation of Application Performance Management (APM). To follow up on that, I have written, and will be publishing over the coming weeks, a series of articles on APM.

One major element of APM is the construction of a historic record of application activity, performance, and resource usage. Crucially for APM, the measurement starts with the application and what users are doing with it. By relating everything to business activity, you can focus on improving the level of service provided to users and the value to the line of business that is ultimately paying for the application.

Ideally, an application includes instrumentation that reports on activity in business terms, such as 'display the operating theater floor plan' or 'register student for course', and gives the count, the response time, and the resources used for each on an hourly or daily basis. However, many applications don't have this capability, and you have to make do with the closest approximation you can.

There are many tools (some expensive, some open source) available to monitor applications, ranging from JavaScript injection for monitoring user experience to middleware and network probes for measuring application communication. These articles will focus on the tools that are available within InterSystems products. I will describe how I have used these tools to manage the performance of applications and improve customer experiences.

The tools described include:

CSP Page Statistics
SQL Query Statistics
Ensemble Activity Monitor

Even if you do have good application instrumentation, additional system monitoring can provide valuable insights, and I will include an explanation of how to configure and use:

Caché History Monitor

I will also expand my earlier explanation of APM, the reasons for monitoring performance, and the different audiences you are trying to help with the information you gather.

Added a link to 'Using the Caché History Monitor'.
Announcement
Evgeny Shvarov · Dec 24, 2017
Hi, Community!

I'm pleased to announce that this December 2017 marks 2 years of the InterSystems Developer Community up and running! Together we did a lot this year, and a lot more is planned for the next one!

Our Community is growing: in November we had 3,700 registered members (2,200 last November), and 13,000 web users visited the site in November 2017 (7,000 last year).

Thank you for using it, thanks for making it useful, thanks for your knowledge, experience, and passion!

And may I ask you to share in the comments the article or question which was most helpful for you this year?

Happy Birthday, Developer Community!

And to start it off: for me the most helpful article this year was REST FORMS Queries – yes, I'm using REST FORMS a lot, thanks @Eduard.Lebedyuk! Another is Search InterSystems documentation using iKnow and iFind technologies. Some helpful questions were mine (of course ;): How to find duplicates in a large text field, Storage Schema in VCS: to Store Or Not to Store?, and How to get the measure for the last day in a month in DeepSee.

There were many interesting articles and discussions this year. I'd like to thank all of you who participated and helped our community grow. @Murray.Oldfield's series on InterSystems Data Platforms Capacity Planning and Performance was a highly informative read.
Article
Developer Community Admin · Oct 21, 2015
InterSystems Caché 2015.1 soars from 6 million to more than 21 million end-user database accesses per second on the Intel® Xeon® processor E7 v2 family, compared to Caché 2013.1 on the Intel® Xeon® processor E5 family.

Overview

With data volumes soaring and the opportunities to derive value from data rising, database scalability has become a crucial challenge for a wide range of industries. In healthcare, the rising demand for healthcare services and significant changes in the regulatory and business climates can make the challenges particularly acute. How can organizations scale their databases in an efficient and cost-effective way?

The InterSystems Caché 2015.1 data platform offers a solution. Identified as a Leader in the Gartner Magic Quadrant for Operational Database Management Systems, Caché combines advanced data management, integration, and analytics. Caché 2015.1 is optimized to take advantage of modern multi-core architectures and represents a new generation of ultra-high-performance database technology. Running on the Intel® Xeon® processor E7 v2 family, Caché 2015.1 provides a robust, affordable solution for scalable, data-intensive computing.

To examine the scalability of Caché 2015.1, InterSystems worked with performance engineers at Epic, whose electronic medical records (EMRs) and other healthcare applications are deployed by some of the world's largest hospitals, delivery systems, and other healthcare organizations. The test team found that Caché 2015.1 with Enterprise Cache Protocol® (ECP®) technology on the Intel Xeon processor E7 v2 family achieved more than 21 million end-user database accesses per second (known in the Caché environment as Global References per Second, or GREFs) while maintaining excellent response times. This was more than triple the load level of 6 million GREFs achieved by Caché 2013.1 on the Intel® Xeon® processor E5 family.

"The scalability and performance improvements of Caché version 2015.1 are terrific. Almost doubling the scalability, this version provides a key strategic advantage for our user organizations who are pursuing large-scale medical informatics programs as well as aggressive growth strategies in preparation for the volume-to-value transformation in healthcare."
– Carl Dvorak, President, Epic
Article
Developer Community Admin · Oct 21, 2015
Providing a reliable infrastructure for rapid, unattended, automated failover

Technology Overview

Traditional availability and replication solutions often require substantial capital investments in infrastructure, deployment, configuration, software licensing, and planning. Caché Database Mirroring (Mirroring) is designed to provide an economical solution for rapid, reliable, robust, automatic failover between two Caché systems, making mirroring the ideal automatic-failover high-availability solution for the enterprise.

In addition to providing an availability solution for unplanned downtime, mirroring offers the flexibility to incorporate certain planned downtimes on a particular Caché system while minimizing the overall SLAs for the organization. Combining InterSystems Enterprise Cache Protocol (ECP) application servers with mirroring provides an additional level of availability. Application servers allow processing to seamlessly continue on the new system once the failover is complete, thus greatly minimizing workflow and user disruption. Configuring the two mirror members in separate data centers offers additional redundancy and protection from catastrophic events.

Key Features and Benefits

Economical high-availability database solution with automatic failover
Redundant components minimize shared-resource-related risks
Logical data replication minimizes risks of carry-forward physical corruption
Provides a solution for both planned and unplanned downtime
Provides business continuity benefits via a geographically dispersed disaster recovery configuration
Provides Business Intelligence and reporting benefits via a centralized Enterprise Data Warehouse configuration

Traditional availability solutions that rely on shared resources (such as shared disk) are often susceptible to a single point of failure with respect to that shared resource. Mirroring reduces that risk by maintaining independent components on the primary and backup mirror systems. Further, by utilizing logical data replication, mirroring reduces the potential risks associated with physical replication, such as out-of-order updates and carry-forward corruption, which are possible with other replication technologies such as SAN-based replication.

Finally, mirroring allows for a special Async Member, which can be configured to receive updates from multiple mirrors across the enterprise. This allows a single system to act as a comprehensive enterprise data store, enabling – through the use of InterSystems DeepSee – real-time business intelligence that uses enterprise-wide data. The async member can also be deployed in a Disaster Recovery model in which a single mirror can update up to six geographically dispersed async members; this model provides a robust framework for distributed data replication, thus ensuring business continuity benefits to the organization. The async member can also be configured as a traditional reporting system so that application reporting can be offloaded from the main production system.
Article
Developer Community Admin · Oct 21, 2015
Introduction

To overcome the performance limitations of traditional relational databases, applications – ranging from those running on a single machine to large, interconnected grids – often use in-memory databases to accelerate data access. While in-memory databases and caching products increase throughput, they suffer from a number of limitations, including lack of support for large data sets, excessive hardware requirements, and limits on scalability.

InterSystems Caché is a high-performance object database with a unique architecture that makes it suitable for applications that typically use in-memory databases. Caché's performance is comparable to that of in-memory databases, but Caché also provides:

Persistence – data is not lost when a machine is turned off or crashes
Rapid access to very large data sets
The ability to scale to hundreds of computers and tens of thousands of users
Simultaneous data access via SQL and objects: Java, C++, .NET, etc.

This paper explains why Caché is an attractive alternative to in-memory databases for companies that need high-speed access to large amounts of data.
Announcement
Janine Perkins · Feb 2, 2016
Do you need to quickly build a web page to interact with your database?
Take a look at these two courses to learn how Zen Mojo can help you display collections and make your collections respond to user interactions.
Displaying Collections and Using the Zen Mojo Documentation
Learn the steps for displaying a collection of Caché data on a Zen Mojo page, find crucial information in the Zen Mojo documentation, and find sample code in the Widget Reference. Learn More.
Zen Mojo: Handling Events and Updating Layouts
This is an entirely hands-on course devoted to event handling and updating the display of a Zen Mojo page in response to user interaction. Learn to create a master-detail view that responds to user selections. Learn More.
Question
Mike Kadow · Jan 15, 2018
How can I access the InterSystems Class Database with the Atelier IDE? Say I want access to the Samples database and namespace.

I'm not clear what you mean by "access the Class Database". Connect Atelier to the SAMPLES namespace and you'll be able to view/edit classes from the SAMPLES database. Please expand on what else you're trying to do.

John, maybe I was asking two separate things. When I bring up the InterSystems documentation, there is an option to go to the Class Reference. When I select that option I can see all the classes in the InterSystems database; that is what I really want to do. I put in the comment about Samples as an afterthought, not really thinking that it doesn't apply to the other question.

OK, I realize I am not being clear; often I say things before I think them through. When I bring up the InterSystems documentation, I can select the Class Reference option. From Studio I can look up classes that are in the Class Reference option. I tried to do the same thing in Atelier and was unable to find the command to browse through all the classes I see in the Class Reference option. That is what I am trying to do. I hope that is clear.

You can also see this information in the Atelier Documentation view as you are moving focus within a class. If you do not see this view, you can launch it by selecting Window > Show View > Other > Atelier > Atelier Documentation > Open. For example, I opened Sample.Person on my local Atelier client, selected the tab at the bottom for Atelier Documentation, then clicked on "%Populate" in the list of superclasses. Now I can see this in the Atelier Documentation view:

Hey, Nicole, that's excellent! Whatever I click on shows up in the "Atelier Documentation" tab. Thanks for the hint!

Easier: in the Server Explorer tab, click ADD NEW Server Connection (green cross).

OK, you are looking for something different from what I understood. The CLASS REFERENCE to DocBook seems not to be directly available as it is in Studio, only by external access to the documentation. Part of it is available if you have a class in your editor and move your cursor over a class name: you get a transient class description that you can pin down by clicking or pressing <F2>. It's pretty similar to the DocBook version, EXCEPT that you have no further references (e.g. data types or %Status or ...), so it's not multi-level navigation as in the browser! For illustration, I did this for %Persistent. For %Populate or %XML.Adaptor you have to do it again and again.