InterSystems IRIS for Health


Hi,

I have a permanent job opportunity that would ideally suit someone from Europe who wants to experience the UK (initially London). The requirements are fluency in Caché Objects, SQL, XML, and integration, fluent English, and the self-motivation and desire to experience London.

I am NOT a recruitment agency, and I will offer personal help with relocation and with the cultural differences involved in moving countries.

I do not offer a relocation package, but I will offer advice, guidance, and help with relocating.

Last reply 2 March 2019 · 496 views · 0 rating

Breaking news!

InterSystems just announced the availability of the InterSystems IRIS for Health™ Data Platform across the Amazon Web Services, Google Cloud, and Microsoft Azure marketplaces.

With access to the InterSystems unified data platform on all three major cloud providers, developers and customers have the flexibility to rapidly build and scale the digital applications driving the future of care on the platform of their choice.

To learn more, please follow this link.

0 replies · 244 views · +1 rating

The Amazon Web Services (AWS) Cloud provides a broad set of infrastructure services, such as compute resources, storage options, and networking that are delivered as a utility: on-demand, available in seconds, with pay-as-you-go pricing. New services can be provisioned quickly, without upfront capital expense. This allows enterprises, start-ups, small and medium-sized businesses, and customers in the public sector to access the building blocks they need to respond quickly to changing business requirements.

 

Updated: 15 October 2019

Last reply 13 February 2019 · 2,257 views · +13 rating

Hi Community!

I'm pleased to announce that we've just launched a new product in the family of InterSystems Data Platforms:

InterSystems IRIS for Health

InterSystems IRIS for Health is the world’s first and only data platform engineered specifically for healthcare. It empowers you to rapidly create and scale the industry’s next breakthrough applications.

0 replies · 290 views · +2 rating

In the last post we scheduled 24-hour collections of performance metrics using pButtons. In this post we are going to look at a few of the key metrics being collected and how they relate to the underlying system hardware. We will also start to explore the relationship between Caché (or any InterSystems Data Platform) metrics and system metrics, and how you can use these metrics to understand the daily beat rate of your systems and diagnose performance problems.
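
As a taste of the kind of analysis the post covers, here is a minimal, hypothetical Python sketch that scans a Linux vmstat extract (such as the one captured in a pButtons report) and flags intervals where the run queue exceeds the core count; the file name and core count are assumptions for illustration, not part of the original post.

    # Minimal sketch, assuming a plain-text vmstat extract saved from a
    # pButtons report; "vmstat.txt" and CPU_CORES are illustrative values.
    CPU_CORES = 8  # assumed number of CPU cores on the host under review

    def busy_intervals(path="vmstat.txt"):
        """Return vmstat rows whose run queue ('r', column 1) exceeds cores."""
        flagged = []
        with open(path) as f:
            for line in f:
                fields = line.split()
                # Data rows start with an integer; skip headers/blank lines.
                if not fields or not fields[0].isdigit():
                    continue
                if int(fields[0]) > CPU_CORES:
                    flagged.append(line.rstrip())
        return flagged

    for row in busy_intervals():
        print(row)

A run queue that sits persistently above the core count is one of the classic CPU saturation signals this kind of review looks for.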

Last reply 24 March 2018 · 2,602 views · +17 rating

I am often asked by customers, vendors or internal teams to explain CPU capacity planning for large production databases running on VMware vSphere.

In summary, there are a few simple best practices to follow when sizing CPU for large production databases (a small worked sketch follows the list):

  • Plan for one vCPU per physical CPU core.
  • Consider NUMA and ideally size VMs to keep CPU and memory local to a NUMA node.
  • Right-size virtual machines. Add vCPUs only when needed.
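
To make the first two rules concrete, here is a minimal Python sketch with made-up host figures; the socket and core counts are assumptions for illustration, not recommendations from the article.

    # Minimal sizing sketch; the host figures below are illustrative only.
    sockets = 2            # physical CPU sockets in the ESXi host
    cores_per_socket = 12  # physical cores per socket (ignore hyper-threads)

    physical_cores = sockets * cores_per_socket

    # Rule 1: one vCPU per physical core caps production DB VM sizing.
    max_vcpus = physical_cores

    # Rule 2: keep a VM within one NUMA node so CPU and memory stay local.
    numa_friendly_vcpus = cores_per_socket

    print(f"Physical cores on host:       {physical_cores}")
    print(f"vCPU ceiling for the DB VM:   {max_vcpus}")
    print(f"NUMA-friendly VM size (vCPU): {numa_friendly_vcpus}")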

Generally this leads to a couple of common questions:

Last reply 8 November 2017 · 4,508 views · +5 rating

This post shows you an approach to sizing shared memory requirements for database applications running on InterSystems data platforms, including global and routine buffers, gmheap, and locksize, as well as some performance tips you should consider when configuring servers and when virtualizing Caché applications. As ever, when I talk about Caché I mean the whole data platform (Ensemble, HealthShare, iKnow, and Caché).
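
As a back-of-envelope illustration of the components named above, here is a hypothetical Python sketch; every figure is a placeholder, and the post itself gives the actual sizing rules.

    # Back-of-envelope sketch; all values below are placeholders.
    global_buffers_mb = 16384  # database (global) cache
    routine_buffers_mb = 512   # routine (code) cache
    gmheap_mb = 256            # generic memory heap
    locksize_mb = 64           # lock table

    shared_mb = (global_buffers_mb + routine_buffers_mb
                 + gmheap_mb + locksize_mb)
    shared_mb = int(shared_mb * 1.05)  # assumed 5% margin for other structures

    print(f"Approximate shared memory segment: {shared_mb} MB "
          f"({shared_mb / 1024:.1f} GB)")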


A list of the other posts in this series is here.

Last reply 22 August 2017 · 6,477 views · +22 rating

Revised: 12 February 2018

While this article is about InterSystems IRIS, it also applies to Caché, Ensemble, and HealthShare distributions.

Introduction

Memory is managed in pages. The default page size is 4KB on Linux systems. Red Hat Enterprise Linux 6, SUSE Linux Enterprise Server 11, and Oracle Linux 6 introduced a method, known as HugePages, to provide an increased page size of 2MB or 1GB, depending on system configuration.

At first, HugePages had to be assigned at boot time and, if not managed or calculated appropriately, could result in wasted resources. As a result, various Linux distributions introduced Transparent HugePages, enabled by default with the 2.6.38 kernel, as a means of automating the creation, management, and use of HugePages. Prior kernel versions may have this feature as well, but it may not be set to [always] and is potentially set to [madvise].

Transparent Huge Pages (THP) is a Linux memory management system that reduces the overhead of Translation Lookaside Buffer (TLB) lookups on machines with large amounts of memory by using larger memory pages. However, in current Linux releases THP can only map individual process heap and stack space.
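
Here is a small, hypothetical Python sketch of the two checks that usually follow from this: reading the current THP setting from /sys/kernel/mm/transparent_hugepage/enabled on Linux, and estimating how many 2MB HugePages a shared memory segment of a given size would need. The 16GB segment size is an assumption for illustration.

    # Hypothetical helper; Linux-only path, illustrative segment size.
    import math

    def thp_setting(path="/sys/kernel/mm/transparent_hugepage/enabled"):
        # The active value is shown in brackets, e.g. "[always] madvise never".
        with open(path) as f:
            return f.read().strip()

    def hugepages_needed(shared_memory_mb, hugepage_mb=2):
        # Round up so the whole segment fits (the vm.nr_hugepages target).
        return math.ceil(shared_memory_mb / hugepage_mb)

    print("THP setting:", thp_setting())
    print("2MB HugePages for a 16GB segment:", hugepages_needed(16384))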

Last reply 22 February 2017 · 3,065 views · +4 rating

One of the great availability and scaling features of Caché is the Enterprise Cache Protocol (ECP). With consideration during application development, distributed processing using ECP allows a scale-out architecture for Caché applications. Application processing can scale to very high rates, from a single application server up to the combined processing power of 255 application servers, with no application changes.

Last reply 8 December 2016 · 1,625 views · +9 rating

This post provides guidelines for configuration, system sizing, and capacity planning when deploying Caché 2015 and later in a VMware ESXi 5.5 and later environment.

0 replies · 3,957 views · +10 rating

The other Technology Architects and I often have to explain to customers and vendors Caché IO requirements and the way Caché applications use storage systems. The following tables are useful when discussing the typical Caché IO profile and requirements for a transactional database application. The original tables were created by Mark Bolinsky.

In future posts I will be discussing more about storage IO, so I am also posting these tables now as a reference for those articles.

Last reply 13 November 2016 · 1,690 views · +9 rating

This is a list, in order, of all the posts in the data platforms capacity planning and performance series, along with a general list of my other posts. I will update it as new posts in the series are added.


Capacity Planning and Performance Series

Generally, posts build on previous ones, but you can also just dive into subjects that look interesting.

0 replies · 3,739 views · +12 rating

Your application is deployed and everything is running fine. Great, high-five! Then, out of the blue, the phone starts to ring off the hook: users are complaining that the application is sometimes ‘slow’. But what does that mean? Sometimes? What tools do you have, and what statistics should you be looking at, to find and resolve this slowness? Is your system infrastructure up to the task of the user load? What infrastructure design questions should you have asked before you went into production? How can you capacity plan for new hardware with confidence and without over-spec'ing? How can you stop the phone ringing? How could you have stopped it ringing in the first place?

Last reply 30 September 2016 · 2,741 views · +18 rating

Updated: 2 August 2018

This article provides a sample reference architecture for robust, high-performing, and highly available applications based on InterSystems technologies, applicable to Caché, Ensemble, HealthShare, TrakCare, and associated embedded technologies such as DeepSee, iKnow, Zen, and Zen Mojo.

Azure has two deployment models for creating and working with resources: Azure Classic and Azure Resource Manager (ARM). The information detailed in this article is based on the ARM model.

Last reply 8 July 2016 · 8,153 views · +11 rating