Announcement
Emily Geary · Aug 7, 2024
The InterSystems Certification Team is building an InterSystems TrakCare Integration certification exam and is looking for Subject Matter Experts (SMEs) from our community to help write and review questions. You, as a valued InterSystems community member, know the challenges of working with our technology and what it takes to be successful at your job. A work assignment will typically involve writing 15 assigned questions and reviewing 15 questions assigned directly to you.
Proposed Project Work Dates: Work assignments will be distributed by the Certification Team through September 15, 2024.
Here are the details:
Action Item
Details
Contact InterSystems Certification
Write to certification@intersystems.com to express your interest in the Certification Subject Matter Expert Program. Tell us that you are interested in being an InterSystems TrakCare Integration SME (an individual with at least one year of experience with InterSystems TrakCare Integration tasks).
Complete project profile - External Participants
If you are an external volunteer looking to participate, a team member will send you a profile form to determine whether your areas of expertise align with open projects.
Accept
If you are selected for an exam development opportunity, a team member will email you a Non-Disclosure Agreement requiring your signature.
Train
After receiving your signed document, and before beginning to write questions, you will be asked to watch a short training video on question-item writing.
Participate
Once onboarded, the Certification Team will send you information regarding your first assignment. This will include:
an invitation to join Certiverse, our new test delivery platform, as an item writer and reviewer
an item writing assignment, which usually consists of the submission of 15 scenario-based questions
an alpha testing assignment, which usually consists of reviewing 15 items written by your peers
You will typically be given one month to complete the assignment.
Subject Matter Experts are eligible for an SME badge based on successful completion of their exam development participation. SMEs are also awarded the InterSystems TrakCare Integration certification if they write questions for all KSA Groups and their questions are accepted.
Interested in participating? Email certification@intersystems.com now!
KSA Group
KSA
Target Item
1. Demonstrates mastery of foundational concepts required for TrakCare integrations
1. Accesses TrakCare system and locates test patients and data
Describes components on the EPR
Explains how Edition provides region-specific functionality
Uses TrakCare to perform patient and episode lookup, and to perform other basic operations
2. Examines and configures basic TrakCare settings
Describes important items in the Configuration Manager (e.g. Site Code, System Paths, etc.)
Identifies steps to configure integration-related code tables
Toggles TrakCare features
3. Examines and interprets TrakCare data model and data definitions, and performs basic SQL queries
Examines and interprets data definitions and relationships between tables using TrakCare Data Dictionary UI
Examines and interprets data definitions, relationships, and global storages with InterSystems class reference function
Performs basic SQL queries on patient, episode, and order items
4. Examines and interprets integration security configuration settings
Describes TrakCare application security model
Examines security settings using the Management Portal in InterSystems IRIS
5. Interprets HL7 V2 requirements
Describes common HL7 V2 message types (e.g., ADT, SIU, REF, and ORM/ORU)
Correlates TrakCare triggering events with HL7 V2 message types
Describes mapping between HL7 V2 data and TrakCare data model
6. Interprets HL7 FHIR requirements
Names common HL7 FHIR resources
Correlates HL7 FHIR resources with TrakCare data model
7. Identifies elements in an SDA
Identifies important elements within SDA structure
Examines SDA definition by using the InterSystems class reference guide
Correlates SDA elements with HL7 V2 message segments and fields
2. Uses the HealthCare Messaging Framework (HMF) for TrakCare integrations
1. Demonstrates mastery of foundational HMF concepts
Uses the HMF (TrakCare/SDA3) Clinical Summary
Uses the HMF External Search and Merge workflow
Uses the HMF FHIR Manager
2. Designs and configures HMF productions
Identifies steps to define and configure HMF according to the integration solution design/architecture
Examines existing HMF configuration settings
Lists roles of each generated production (System, Router, Gateway)
Prepares HMF productions and interfaces according to the integration solution design
3. Develops and customizes HMF productions
Formulates development and customization plans
Develops custom inbound/outbound methods
Enhances generated DTL code
Creates extensions
Enables/disables Event Triggers/productions using Integration Manager
Configures Outbound Query interfaces and makes process calls to search patient information from an external system.
Creates additional SDA elements (extensions) at container and patient levels
4. Deploys HMF solutions
Prepares HMF adapter-based productions and interfaces
Contrasts local versus remote deployments
5. Manages and monitors HMF productions
Examines integration events and SDA content using Integration History UI
Views configuration settings (e.g., ports, URLs, file paths, etc.) for business services and business processes generated in each production (System, Router, and Gateway)
Identifies production status using the Production Manager and InterSystems IRIS Message Viewer
Examines message workflow using Visual Trace in the InterSystems IRIS Management Portal for each production
6. Troubleshoots HMF productions
Examines and interprets low-level HMF trace details in ^zTRAK("HMF") global
Examines HMF production settings
Formulates resolution strategies
3. Configures TrakCare for external access and access to 3rd party applications
1. Configures inbound data access
Uses external SQL tools (e.g., WinSQL) to perform SQL queries
2. Configures outbound data access
Creates and runs SQL queries against external tables or views
3. Uses SOAP web services
Writes InterSystems ObjectScript code to invoke external SOAP web services
Uses SOAP Wizard to create proxy clients
Uses industry-standard tools such as SoapUI, Postman, and cURL to make SOAP calls to TrakCare
4. Links to External Viewer
Configures External Viewer for TrakCare to launch into external PACS
Associates External Viewer to receiving location
5. Launches into 3rd party application with context
Names TrakCare in-process variables
Creates and configures custom charts
6. Uses TrakCare REST APIs
Retrieves TrakCare API Swagger documentation
Configures TrakCare REST APIs using the API Manager
Audits REST API history
Announcement
Cindy Olsen · May 5, 2023
Effective May 16, documentation for versions of InterSystems Caché® and InterSystems Ensemble® prior to 2017.1 will only be available in PDF format on the InterSystems documentation website. Local instances of these versions will continue to present content dynamically.
Announcement
Anastasia Dyubaylo · Jun 17, 2020
Hey Developers,
We're pleased to invite you to join the next InterSystems IRIS 2020.1 Tech Talk: Using InterSystems Managed FHIR Service in the AWS Cloud on June 30 at 10:00 AM EDT!
In this InterSystems IRIS 2020.1 Tech Talk, we’ll focus on using InterSystems Managed FHIR Service in the AWS Cloud. We’ll start with an overview of FHIR, which stands for Fast Healthcare Interoperability Resources, and is a next generation standards framework for working with healthcare data.
You'll learn how to:
provision the InterSystems IRIS FHIR server in the cloud;
integrate your own data with the FHIR server;
use SMART on FHIR applications and enterprise identity, such as Active Directory, with the FHIR server.
We will discuss an API-first development approach using the InterSystems IRIS FHIR server. Plus, we’ll cover the scalability, availability, security, regulatory, and compliance requirements that using InterSystems FHIR as a managed service in the AWS Cloud can help you address.
Speakers: 🗣 @Patrick.Jamieson3621, Product Manager - Health Informatics Platform, InterSystems 🗣 @Anton.Umnikov, Senior Cloud Solution Architect, InterSystems
Date: Tuesday, June 30, 2020
Time: 10:00 AM EDT
➡️ JOIN THE TECH TALK!
Announcement
Daniel Palevski · Nov 27, 2024
InterSystems announces General Availability of InterSystems IRIS, InterSystems IRIS for Health, and HealthShare Health Connect 2024.3
The 2024.3 release of InterSystems IRIS® data platform, InterSystems IRIS® for Health, and HealthShare® Health Connect is now Generally Available (GA).
Release Highlights
In this release, you can expect a host of exciting updates, including:
Much faster extension of database and WIJ files
Ability to resend messages from Visual Trace
Enhanced Rule Editor capabilities
Vector search enhancements
and more.
Please share your feedback through the Developer Community so we can build a better product together.
Documentation
Details on all the highlighted features are available via the links below:
InterSystems IRIS 2024.3 documentation, release notes, and the Upgrade Checklist.
InterSystems IRIS for Health 2024.3 documentation, release notes, and the Upgrade Checklist.
Health Connect 2024.3 documentation, release notes, and the Upgrade Checklist.
In addition, check out the upgrade information for this release.
Early Access Programs (EAPs)
There are many EAPs available now. Check out this page and register for those you are interested in.
How to get the software?
As usual, Continuous Delivery (CD) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format.
Classic installation packages
Installation packages are available from the WRC's Continuous Delivery Releases page for InterSystems IRIS, InterSystems IRIS for Health, and Health Connect. Kits can also be found on the Evaluation Services website.
Availability and Package Information
This release comes with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms document.
Installation packages and preview keys are available from the WRC's preview download site or through the evaluation services website.
The build number for this Continuous Delivery release is: 2024.3.0.217.0.
Container images are available from the InterSystems Container Registry. Containers are tagged as both "2024.3" and "latest-cd".
Announcement
Fabiano Sanches · Mar 19, 2024
The 2024.1 release of InterSystems IRIS® for Health™ and HealthShare® Health Connect is now Generally Available (GA).
❗This announcement does not apply to InterSystems IRIS®
Release Highlights
In this release, you can expect a host of exciting updates, including:
Support for SMART on FHIR 2.0.0
FHIR R4 object model generation
Improved performance of FHIR queries
Removal of the Private Web Server (PWS)
and more.
Please share your feedback through the Developer Community so we can build a better product together.
Documentation
Details on all the highlighted features are available via the links below:
InterSystems IRIS for Health 2024.1 documentation, release notes and the Upgrade Checklist.
HealthShare Health Connect 2024.1 documentation, release notes and the Upgrade Checklist.
In addition, check out this link for upgrade information related to this release.
Early Access Programs (EAPs)
There are many EAPs available now. Check out this page and register for those you are interested in.
How to get the software?
As usual, Extended Maintenance (EM) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format.
Classic installation packages
Installation packages are available from the WRC's Extended Maintenance Releases page for InterSystems IRIS for Health, and from the HealthShare Full Kits page for HealthShare Health Connect. Kits can also be found on the Evaluation Services website. InterSystems IRIS Studio is still available in the release, and you can get it from the WRC's Components distribution page.
Availability and Package Information
This release comes with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms document.
Installation packages and preview keys are available from the WRC's preview download site or through the evaluation services website (use the flag "Show Preview Software" to get access to the 2024.1).
The build number for this release is: 2024.1.0.263.0.
Container images are available from the InterSystems Container Registry. Containers are tagged as both "2024.1" and "latest-em".

I am not seeing HealthShare Health Connect 2024.1 listed under the HealthShare Full Kits. Am I missing something?

@Scott.Roth - there was an issue found with the HSHC 2024.1 kit, so it was pulled down and a corrected kit should be available in the near future. So sorry for the inconvenience.

What was the correction? We have b263 and wonder if we should 'upgrade' already?

@Ian.Minshall, @Scott.Roth Please read this post w.r.t. HSHC 2024.1. A new kit is available now. https://community.intersystems.com/post/apr-8-2024-%E2%80%93-alert-upgrades-fail-healthshare%C2%AE-health-connect-instances-not-licensed-hl7%C2%AE-fhir%C2%AE

Thanks, I did receive an email, downloaded the new kit, and upgraded our DEV environment yesterday to start evaluating.

Great!!
Announcement
Daniel Palevski · Mar 26
InterSystems Announces General Availability of InterSystems IRIS, InterSystems IRIS for Health, and HealthShare Health Connect 2025.1
The 2025.1 release of InterSystems IRIS® data platform, InterSystems IRIS® for Health™, and HealthShare® Health Connect is now Generally Available (GA). This is an Extended Maintenance (EM) release.
Release Highlights
In this exciting release, users can expect several new features and enhancements, including:
Advanced Vector Search Capabilities
A new disk-based Approximate Nearest Neighbor (ANN) index significantly accelerates vector search queries, yielding sub-second responses across millions of vectors. Access the following exercise to learn more - Vectorizing and Searching Text with InterSystems SQL.
Enhanced Business Intelligence
Automatic dependency analysis in IRIS BI Cube building and synchronization, ensuring consistency and integrity across complex cube dependencies.
Improved SQL and Data Management
Introduction of standard SQL pagination syntax (LIMIT ... OFFSET ..., OFFSET ... FETCH ...); a brief sketch follows the SQL items below.
New LOAD SQL command for simplified bulk import of DDL statements.
Enhanced ALTER TABLE commands to convert between row and columnar layouts seamlessly.
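As a rough illustration of the new pagination syntax mentioned above, here is a minimal ObjectScript sketch. The Demo.Person table and its columns are hypothetical, and the two statements assume the standard LIMIT/OFFSET and OFFSET/FETCH forms; each is intended to return rows 11-20 ordered by Name.

// Minimal sketch of the announced standard pagination syntax.
// Demo.Person and its columns are hypothetical examples.
Set q1 = "SELECT ID, Name FROM Demo.Person ORDER BY Name LIMIT 10 OFFSET 10"
Set q2 = "SELECT ID, Name FROM Demo.Person ORDER BY Name OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY"
For i = 1:1:2 {
    Set rs = ##class(%SQL.Statement).%ExecDirect(, $Select(i=1:q1, 1:q2))
    If rs.%SQLCODE < 0 { Write "SQLCODE: ", rs.%SQLCODE, ! Continue }
    While rs.%Next() { Write rs.%Get("Name"), ! }
}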
Optimized Database Operations
Reduced journal record sizes for increased efficiency.
Faster database compaction, particularly for databases with lots of big string content.
Increased automation when adding new databases to a mirror.
New command-line utility for ECP management tasks.
Strengthened Security Compliance
Support for cryptographic libraries compliant with FIPS 140-3 standards.
Modernized Interoperability UI
Opt-in to a revamped Production Configuration and DTL Editor experience, featuring source control integration, VS Code compatibility, enhanced filtering, split-panel views, and more. Please see this Developer Community article for more information about how to opt-in and provide feedback.
Expanded Healthcare Capabilities
Efficient bulk FHIR ingestion and scheduling, including integrity checks and resource management.
Enhanced FHIR Bulk Access and improved FHIR Search Operations.
New Developer Experience Features
Embedded Python support within the DTL Editor, allowing Python-skilled developers to leverage the InterSystems platform more effectively. Watch the following video to learn more - Using Embedded Python in the BPL and DTL Editors.
Enhanced Observability with OpenTelemetry
Introduction of tracing capabilities in IRIS for detailed observability into web requests and application performance.
Please share your feedback through the Developer Community so we can build a better product together.
Documentation
Details on all the highlighted features are available via the links below:
InterSystems IRIS 2025.1 documentation and release notes.
InterSystems IRIS for Health 2025.1 documentation and release notes.
Health Connect 2025.1 documentation and release notes.
In addition, check out the upgrade impact checklist for an easily navigable overview of all changes you need to be aware of when upgrading to this release.
In particular, please note that InterSystems IRIS 2025.1 introduces a new journal file format version, which is incompatible with earlier releases and therefore imposes certain limitations on mixed-version mirror setups. See the corresponding documentation for more details.
Early Access Programs (EAPs)
There are many EAPs available now. Check out this page and register for those you are interested in.
Download the Software
As usual, Extended Maintenance (EM) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format.
Classic Installation Packages
Installation packages are available from the WRC's InterSystems IRIS page for InterSystems IRIS and InterSystems IRIS for Health, and the WRC's HealthShare page for Health Connect. Kits can also be found on the Evaluation Services website.
Availability and Package Information
This release comes with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms document.
The build number for this Extended Maintenance release is 2025.1.0.223.0.
Container images are available from the InterSystems Container Registry. Containers are tagged as both "2025.1" and "latest-em".
Announcement
Bob Kuszewski · May 15, 2024
InterSystems is pleased to announce the general availability of:
InterSystems IRIS Data Platform 2024.1.0.267.2
InterSystems IRIS for Health 2024.1.0.267.2
HealthShare Health Connect 2024.1.0.267.2
This release adds support for the Ubuntu 24.04 operating system. Ubuntu 24.04 includes Linux kernel 6.8 and security improvements, along with installer and user interface improvements. InterSystems IRIS IntegratedML is not yet available on Ubuntu 24.04.
Additionally, this release addresses two defects for all platforms:
A fix for some SQL queries using “NOT %INLIST” returning incorrect results. We previously issued an alert on this error.
A fix for incomplete stack traces in certain circumstances.
How to get the software
As usual, Extended Maintenance (EM) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms page.
Classic installation packages
Installation packages are available from the WRC's Extended Maintenance Releases page. Kits can also be found on the Evaluation Services website.
Containers
Container images for both Enterprise and Community Editions of InterSystems IRIS and IRIS for Health and all corresponding components are available from the InterSystems Container Registry web interface.
Containers are tagged as both "2024.1" and "latest-em".
Article
Developer Community Admin · Oct 21, 2015
InterSystems Caché 2015.1 soars from 6 million to more than 21 million end-user database accesses per second on the Intel® Xeon® processor E7 v2 family compared to Caché 2013.1 on the Intel® Xeon® processor E5 family

Overview

With data volumes soaring and the opportunities to derive value from data rising, database scalability has become a crucial challenge for a wide range of industries. In healthcare, the rising demands for healthcare services and significant changes in the regulatory and business climates can make the challenges particularly acute. How can organizations scale their databases in an efficient and cost-effective way?

The InterSystems Caché 2015.1 data platform offers a solution. Identified as a Leader in the Gartner Magic Quadrant for Operational Database Management Systems, Caché combines advanced data management, integration, and analytics. Caché 2015.1 is optimized to take advantage of modern multi-core architectures and represents a new generation of ultra-high-performance database technology. Running on the Intel® Xeon® processor E7 v2 family, Caché 2015.1 provides a robust, affordable solution for scalable, data-intensive computing.

To examine the scalability of Caché 2015.1, InterSystems worked with performance engineers at Epic, whose electronic medical records (EMRs) and other healthcare applications are deployed by some of the world’s largest hospitals, delivery systems, and other healthcare organizations. The test team found that Caché 2015.1 with Enterprise Cache Protocol® (ECP®) technology on the Intel Xeon processor E7 v2 family achieved more than 21 million end-user database accesses per second (known in the Caché environment as Global References per Second, or GREFs) while maintaining excellent response times. This was more than triple the load levels of 6 million GREFs achieved by Caché 2013.1 on the Intel® Xeon® processor E5 family.

"The scalability and performance improvements of Caché version 2015.1 are terrific. Almost doubling the scalability, this version provides a key strategic advantage for our user organizations who are pursuing large-scale medical informatics programs as well as aggressive growth strategies in preparation for the volume-to-value transformation in healthcare." – Carl Dvorak, President, Epic
Article
Developer Community Admin · Oct 21, 2015
Providing a reliable infrastructure for rapid, unattended, automated failover

Technology Overview

Traditional availability and replication solutions often require substantial capital investments in infrastructure, deployment, configuration, software licensing, and planning. Caché Database Mirroring (Mirroring) is designed to provide an economical solution for rapid, reliable, robust, automatic failover between two Caché systems, making mirroring the ideal automatic failover high-availability solution for the enterprise.

In addition to providing an availability solution for unplanned downtime, mirroring offers the flexibility to incorporate certain planned downtimes on a particular Caché system while minimizing the overall SLAs for the organization. Combining InterSystems Enterprise Cache Protocol (ECP) application servers with mirroring provides an additional level of availability. Application servers allow processing to seamlessly continue on the new system once the failover is complete, thus greatly minimizing workflow and user disruption. Configuring the two mirror members in separate data centers offers additional redundancy and protection from catastrophic events.

Key Features and Benefits

Economical high availability database solution with automatic failover
Redundant components minimize shared-resource related risks
Logical data replication minimizes risks of carry-forward physical corruption
Provides a solution for both planned and unplanned downtime
Provides business continuity benefits via a geographically dispersed disaster recovery configuration
Provides Business Intelligence and reporting benefits via a centralized Enterprise Data Warehouse configuration

Traditional availability solutions that rely on shared resources (such as shared disk) are often susceptible to a single point of failure with respect to that shared resource. Mirroring reduces that risk by maintaining independent components on the primary and backup mirror systems. Further, by utilizing logical data replication, mirroring reduces the potential risks associated with physical replication, such as out-of-order updates and carry-forward corruption, which are possible with other replication technologies such as SAN-based replication.

Finally, mirroring allows for a special Async Member, which can be configured to receive updates from multiple mirrors across the enterprise. This allows a single system to act as a comprehensive enterprise data store, enabling - through the use of InterSystems DeepSee - real-time business intelligence that uses enterprise-wide data. The async member can also be deployed in a Disaster Recovery model in which a single mirror can update up to six geographically-dispersed async members; this model provides a robust framework for distributed data replication, thus ensuring business continuity benefits to the organization. The async member can also be configured as a traditional reporting system so that application reporting can be offloaded from the main production system.
Article
Developer Community Admin · Oct 21, 2015
Introduction
To overcome the performance limitations of traditional relational databases, applications - ranging from those running on a single machine to large, interconnected grids - often use in-memory databases to accelerate data access. While in-memory databases and caching products increase throughput, they suffer from a number of limitations including lack of support for large data sets, excessive hardware requirements, and limits on scalability.
InterSystems Caché is a high-performance object database with a unique architecture that makes it suitable for applications that typically use in-memory databases. Caché's performance is comparable to that of in-memory databases, but Caché also provides:
Persistence - data is not lost when a machine is turned off or crashes
Rapid access to very large data sets
The ability to scale to hundreds of computers and tens of thousands of users
Simultaneous data access via SQL and objects: Java, C++, .NET, etc.
This paper explains why Caché is an attractive alternative to in-memory databases for companies that need high-speed access to large amounts of data.
Article
Murray Oldfield · Mar 8, 2016
Your application is deployed and everything is running fine. Great, hi-five! Then out of the blue the phone starts to ring off the hook – it’s users complaining that the application is sometimes ‘slow’. But what does that mean? Sometimes? What tools do you have and what statistics should you be looking at to find and resolve this slowness? Is your system infrastructure up to the task of the user load? What infrastructure design questions should you have asked before you went into production? How can you capacity plan for new hardware with confidence and without over-spec'ing? How can you stop the phone ringing? How could you have stopped it ringing in the first place?
A list of other posts in this series is here
This will be a journey
This is the first post in a series that will explore the tools and metrics available to monitor, review and troubleshoot systems performance, as well as system and architecture design considerations that affect performance. Along the way we will head off down quite a few tracks to understand performance for Caché, operating systems, hardware, virtualization and other areas that become topical from your feedback in the comments.
We will follow the feedback loop in which performance data gives us a lens to view the advantages and limitations of the applications and infrastructure that are deployed, and then feeds back into better design and capacity planning.
It should go without saying that you should be reviewing performance metrics constantly; it is unfortunate how often customers are surprised by performance problems that would have been visible for a long time, if only they had been looking at the data. But of course the question is - what data? We will start the journey by collecting some basic Caché and system metrics so we can get a feel for the health of your system today. In later posts we will dive into the meaning of key metrics.
There are many options available for system monitoring – from within Caché and from external tools – and we will explore a lot of them in this series.
To start we will look at my favorite go-to tool for continuous data collection which is already installed on every Caché system – ^pButtons.
To make sure you have the latest copy of pButtons please review the following post:
https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-how-update-pbuttons
Collecting system performance metrics - ^pButtons
The Caché pButtons utility generates a readable HTML performance report from log files it creates. Performance metrics output by pButtons can easily be extracted, charted and reviewed.
Data collected in the pButtons html file includes:
Caché setup: configuration, drive mappings, etc.
mgstat: Caché performance metrics - most values are average per second.
Unix: vmstat and iostat: Operating system resource and performance metrics.
Windows: performance monitor: Windows resource and performance metrics.
Other metrics that will be useful.
pButtons data collection has very little impact on system performance; the metrics are already being collected by the system, and pButtons simply packages them for easy filing and transport.
To keep a baseline, for trend analysis, and for troubleshooting, it is good practice to collect a 24-hour pButtons run (midnight to midnight) every day for a complete business cycle. A business cycle could be a month or more, for example to capture data from end-of-month processing. If you do not have any other external performance monitoring or collection, you can run pButtons year-round.
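As a minimal sketch of that daily cycle, using the pButtons entry points covered step by step later in this post (the "24hours" profile is one of the default profiles listed below):

%SYS>do run^pButtons("24hours") ; start a 24-hour collection using the default 24-hour profile
%SYS>do Collect^pButtons ; run later, after the 24 hours plus the grace period, to collate the logs into the HTML report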
The following key points should be noted:
Change the log directory to a location away from production data to store the accumulated output files and avoid disk-full problems!
Run an operating system script, or otherwise compress and archive the pButtons files regularly; this is especially important on Windows, as the files can be large.
Review the data regularly!
In the event of a problem needing immediate analysis, pButtons data can be previewed (collected immediately) while metrics continue to be stored for collection at the end of the day's run.
For more information on pButtons, including previewing, stopping a run, and adding custom data gathering, please see the Caché Monitoring Guide in the most recent Caché documentation:
http://docs.intersystems.com
The pButtons HTML file data can be separated and extracted (to CSV files, for example) for processing into graphs or other analysis by scripting or simple cut and paste. We will see examples of the output in graphs in the next post.
Of course, if you have urgent performance problems, contact the WRC.
Schedule 24 hour pButtons data collection
^pButtons can be started manually from the terminal prompt or scheduled. To schedule a 24-hour daily collection:
1. Start Caché terminal, switch to %SYS namespace and run pButtons manually once to set up pButtons file structures:
%SYS>d ^pButtons
Current log directory: /db/backup/benchout/pButtonsOut/
Available profiles:
1 12hours - 12 hour run sampling every 10 seconds
2 24hours - 24 hour run sampling every 10 seconds
3 30mins - 30 minute run sampling every 1 second
4 4hours - 4 hour run sampling every 5 seconds
5 8hours - 8 hour run sampling every 10 seconds
6 test - A 5 minute TEST run sampling every 30 seconds
Select option 6, the 5 minute TEST run sampling every 30 seconds. Note that your numbering may be different, but the test option should be obvious.
During the run, run Collect^pButtons (as shown below); you will see information including the run id, in this case “20160303_1851_test”.
%SYS>d Collect^pButtons
Current Performance runs:
20160303_1851_test ready in 6 minutes 48 seconds
nothing available to collect at the moment.
%SYS>
Notice that this 5 minute run has 6 minutes and 48 seconds to go? pButtons adds a 2 minute grace period to all runs to allow time for collection and collation of the logs into html format.
2. IMPORTANT! Change the pButtons log output directory – the default output location is the <cache install path>/mgr folder. For example, on UNIX setting the log directory may look like this:
do setlogdir^pButtons("/somewhere_with_lots_of_space/perflogs/")
Ensure Caché has write permissions for the directory and there is enough disk space available for accumulating the output files.
3. Create a new 24-hour profile with 30-second intervals by running the following (the arguments are the profile name, a description, the sample interval in seconds, and the number of samples – 30 seconds × 2880 samples = 24 hours):
write $$addprofile^pButtons("My_24hours_30sec","24 hours 30 sec interval",30,2880)
Check the profile has been added to pButtons:
%SYS>d ^pButtons
Current log directory: /db/backup/benchout/pButtonsOut/
Available profiles:
1 12hours - 12 hour run sampling every 10 seconds
2 24hours - 24 hour run sampling every 10 seconds
3 30mins - 30 minute run sampling every 1 second
4 4hours - 4 hour run sampling every 5 seconds
5 8hours - 8 hour run sampling every 10 seconds
6 My_24hours_30sec- 24 hours 30 sec interval
7 test - A 5 minute TEST run sampling every 30 seconds
select profile number to run:
Note: You can vary the collection interval – 30 seconds is fine for routine monitoring. I would not go below 5 seconds for a routine 24 hour run (…”,5,17280), as the output files can become very large because pButtons collects data at every tick of the interval. If you are troubleshooting a particular time of day and want more granular data, use one of the default profiles or create a new custom profile with a shorter time period, for example 1 hour with a 5 second interval (…”,5,720). Multiple pButtons runs can happen at the same time, so you could have a short pButtons with a 5 second interval running at the same time as the 24-hour pButtons.
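For instance, a hypothetical 1 hour / 5 second profile (the profile name below is just an example) could be created with the same $$addprofile^pButtons call used in step 3 and then started alongside the daily run:

%SYS>write $$addprofile^pButtons("My_1hour_5sec","1 hour 5 sec interval",5,720) ; name, description, interval in seconds, number of samples
%SYS>do run^pButtons("My_1hour_5sec") ; start the short, more granular collection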
4. Tip: For UNIX sites, review the disk commands. The default parameters used with the 'iostat' command may not include disk response times. First, display which disk commands are currently configured:
%SYS>zw ^pButtons("cmds","disk")
^pButtons("cmds","disk")=2
^pButtons("cmds","disk",1)=$lb("iostat","iostat ","interval"," ","count"," > ")
^pButtons("cmds","disk",2)=$lb("sar -d","sar -d ","interval"," ","count"," > ")
To collect disk statistics, edit the command syntax as appropriate for your UNIX installation. Note the trailing space. Here are some examples:
LINUX: set $li(^pButtons("cmds","disk",1),2)="iostat -xt "
AIX: set $li(^pButtons("cmds","disk",1),2)="iostat -sadD "
VxFS: set ^pButtons("cmds","disk",3)=$lb("vxstat","vxstat -g DISKGROUP -i ","interval"," -c ","count"," > ")
You can create very large pButtons html files by having both the iostat and sar commands running. For regular performance reviews I usually use only iostat. To configure only one command:
set ^pButtons("cmds","disk")=1
More details on configuring pButtons are in the online documentation.
5. Schedule pButtons to start at midnight in the Management Portal > System Operation > Task Manager:
Namespace: %SYS
Task Type: RunLegacyTask
ExecuteCode: Do run^pButtons("My_24hours_30sec")
Task Priority: Normal
User: superuser
How often: Once daily at 00:00:01
Collecting pButtons data
pButtons shipped with more recent versions of InterSystems data platforms includes automatic collection. To manually collect and collate the data into an html file, run the following command in the %SYS namespace to generate any outstanding pButtons html output files:
do Collect^pButtons
The html file will be in the logdir you set at step 2 (if you did not set it, go and do it now!). Otherwise the default location is the <Caché install dir>/mgr folder.
Files are named <hostname_instance_Name_date_time_profileName.html> e.g. vsan-tc-db1_H2015_20160218_0255_test.html
Windows Performance Monitor considerations
If the operating system is Windows, then Windows Performance Monitor (perfmon) can be used to collect data in sync with the other metrics collected. On older Caché distributions of pButtons, Windows perfmon needs to be configured manually. If there is demand in the post comments, I will write a post about creating a perfmon template to define the performance counters to monitor and schedule it to run for the same period and interval as pButtons.
Summary
This post got us started collecting some data to look at. Later in the week I will start to look at some sample data and what it means. You can follow along with data you have collected on your own systems. See you then.
http://docs.intersystems.com

Great article Murray. I'm looking forward to reading subsequent ones.

Thanks for this Murray. I am just sending it to our system manager! (We don't run pButtons all the time but perhaps we should.)

Thank you, Murray! Great article! People! See also other articles in the InterSystems Data Platform Blog and subscribe so as not to miss new ones.

When that's not enough, there are a ton of tools to go deeper and look at the OS side of things: http://www.brendangregg.com/linuxperf.html

A nice intro, moving on to part two. Thank you.

This one is so popular it's showing up three times in the posting list. Will try to figure out why...

Just adding my 2c to "4. Tip for UNIX sites": all necessary unix/linux packages should be installed before the first invocation of
Do run^pButtons(<any profile>)
otherwise some commands may be missed in ^pButtons("cmds"). I recently faced this on a server where sysstat wasn't installed: the `sar -d` and `sar -u` commands were absent. If you decide to install it later (`sudo yum install sysstat` in my case), ^pButtons("cmds") will not be updated automatically without a little help from you: just kill it before calling run^pButtons().
This applies at least to pButtons v.1.15c-1.16c and v.5 (which recently appeared on ftp://ftp.intersys.com/pub/performance/), in Caché 2015.1.2.

Excellent article. I have a question about options for running pButtons. Please note: it has been a while since I last used pButtons, but I can attest that it is a very useful tool. My question: is there an option to run pButtons in "basic mode" versus "advanced mode"? I thought I recalled such a feature, but cannot seem to remember how to select the run mode. From what I recall, basic mode collects less data/information than advanced mode. This can be helpful for support teams and others when they only need the high-level details. Thanks! And I look forward to reading the next 5 articles in this series.

Basic and advanced mode were in an old version of another tool named ^Buttons. With ^pButtons you have an option to reduce the number of OS commands being performed, as shown in Tip #4.

It is a pretty good idea to run pButtons all the time. That way you know you'll have data for any acutely weird performance behavior at the time of the problem. The documentation has an example of setting up a 24 hour pButtons run every week at a specific time using Task Manager: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_pbuttons#GCM_pButtons_runsmp You can also just set the task to run every day rather than weekly. If you're worried about space, the reports don't take up too much room, and (since they're fairly repetitive) they zip pretty well.

Fantastic article. Congrats.

Thanks :)

Hi @Murray.Oldfield, we have such a need for Caché on Windows OS. Really appreciate it if you can help do one. Thx!
Question
sansa stark · Aug 24, 2016
Hi All, I have installed Caché 5.0 but the terminal does not open. Does anyone know the default username and password for Caché 5.0?

Try username SYS, password XXX. You can change the password using Control Panel | Security | User accounts (TRM: account). Thanks.

As I remember, you could create a user TRM or TELNET, with routine %PMODE, and get access without authentication. I don't remember how it was in 5.0, but currently the default login/password is _SYSTEM/SYS
Announcement
Janine Perkins · Oct 12, 2016
Learn to successfully design your HL7 production. After you have built your first HL7 production in a test environment, you can start applying what you have learned and begin building in a development environment. Take this course to learn to create namespaces and databases for your production, apply recommended design principles to your production, and use appropriate naming conventions in your production. Learn More.