Announcement
Michelle Spisak · Oct 17, 2017
Learning Services Live Webinars are back! At this year’s Global Summit, InterSystems debuted InterSystems IRIS Data Platform™, a single, comprehensive product that provides capabilities spanning data management, interoperability, transaction processing, and analytics. InterSystems IRIS sets a new level of performance for the rapid development and deployment of data-rich and mission-critical applications. Now is your chance to learn more! Joe Lichtenberg, Director of Product and Industry Marketing for InterSystems, presents "Introducing InterSystems IRIS Data Platform", a high-level description of the business drivers and capabilities behind InterSystems IRIS. The webinar recording is available now!
Announcement
Anastasia Dyubaylo · Oct 26, 2017
Hi, Community!
Please find a new session recording from Global Summit 2017:
An InterSystems Guide to the Data Galaxy
This video introduces the concept of an open analytics platform for the enterprise.
@Benjamin.Deboe tells how InterSystems technology pairs up with industry standards and open-source technology to provide a solid platform for analytics.
You can also see the additional resources here.
Don't forget to subscribe to the InterSystems Developers YouTube Channel
Enjoy!
Article
David Loveluck · Nov 8, 2017
Application Performance Monitoring Tools in InterSystems Technology
Back in August, in preparation for Global Summit, I published a brief explanation of Application Performance Management (APM). To follow up on that, I have written and will be publishing over the coming weeks a series of articles on APM.
One major element of APM is the construction of a historic record of application activity, performance, and resource usage. Crucially for APM, the measurement starts with the application and what users are doing with it. By relating everything to business activity, you can focus on improving the level of service provided to users and the value to the line of business that is ultimately paying for the application.
Ideally, an application includes instrumentation that reports on activity in business terms, such as ‘display the operating theater floor plan’ or ‘register student for course’, and gives the count, response time, and resources used for each on an hourly or daily basis. However, many applications don’t have this capability, and you have to make do with the closest approximation you can.
There are many tools (some expensive, some open source) available to monitor applications, ranging from JavaScript injection for monitoring user experience to middleware and network probes for measuring application communication. These articles will focus on the tools that are available within InterSystems products. I will describe how I have used these tools to manage the performance of applications and improve customer experiences.
The tools described include:
- CSP Page Statistics
- SQL Query Statistics
- Ensemble Activity Monitor
Even if you do have good application instrumentation, additional system monitoring can provide valuable insights, and I will include an explanation of how to configure and use:
- Caché History Monitor
I will also expand my earlier explanation of APM, the reasons for monitoring performance, and the different audiences you are trying to help with the information you gather.
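As an aside, here is a minimal sketch of what that kind of business-level instrumentation could look like if you have to add it yourself. It is written in Scala purely for illustration, and every name in it (ApmSketch, recordAction) is hypothetical rather than part of any InterSystems tool:

import java.time.Instant
import java.time.temporal.ChronoUnit
import scala.collection.mutable

object ApmSketch {
  // (business action, hour bucket) -> (call count, total response time in ms)
  private val stats =
    mutable.Map.empty[(String, Instant), (Long, Long)].withDefaultValue((0L, 0L))

  // Wrap a unit of business work, e.g. "register student for course",
  // and accumulate its count and elapsed time into the current hour bucket.
  def recordAction[T](action: String)(work: => T): T = {
    val start = System.nanoTime()
    try work
    finally {
      val elapsedMs = (System.nanoTime() - start) / 1000000
      val hour = Instant.now().truncatedTo(ChronoUnit.HOURS)
      val (count, total) = stats((action, hour))
      stats((action, hour)) = (count + 1, total + elapsedMs)
    }
  }

  // Print one line per action and hour: count and average response time.
  def report(): Unit =
    stats.toSeq
      .sortBy { case ((action, hour), _) => (hour.toString, action) }
      .foreach { case ((action, hour), (count, total)) =>
        println(f"$hour  $action%-40s count=$count%6d  avgMs=${total.toDouble / count}%8.1f")
      }
}

Usage would look like ApmSketch.recordAction("display the operating theater floor plan") { /* handle the request */ }. The point is simply that counts and response times are keyed by business activity and time bucket, which is the same shape of data the tools listed above expose for you.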
Announcement
Evgeny Shvarov · Dec 24, 2017
Hi, Community!
I'm pleased to announce that this December 2017 marks 2 years of the InterSystems Developer Community up and running! Together we did a lot this year, and a lot more is planned for the next year!
Our Community is growing: in November we had 3,700 registered members (2,200 last November), and 13,000 web users visited the site in November 2017 (7,000 last year).
Thank you for using it, thanks for making it useful, and thanks for your knowledge, experience, and passion!
And may I ask you to share in the comments the article or question that was most helpful for you this year?
Happy Birthday, Developer Community!
And to start: for me, the most helpful article this year was REST FORMS Queries - yes, I'm using REST FORMS a lot, thanks @Eduard.Lebedyuk! Another is Search InterSystems documentation using iKnow and iFind technologies.
Three helpful questions were mine (of course ;): How to find duplicates in a large text field, Storage Schema in VCS: to Store Or Not to Store?, and How to get the measure for the last day in a month in DeepSee.
There were many interesting articles and discussions this year. I'd like to thank all of you who participated and helped our community grow. @Murray.Oldfield's series on InterSystems Data Platforms Capacity Planning and Performance was a highly informative read.
Article
Developer Community Admin · Oct 21, 2015
InterSystems Caché 2015.1 soars from 6 million to more than 21 million end-user database accesses per second on the Intel® Xeon® processor E7 v2 family, compared to Caché 2013.1 on the Intel® Xeon® processor E5 family.
Overview
With data volumes soaring and the opportunities to derive value from data rising, database scalability has become a crucial challenge for a wide range of industries. In healthcare, the rising demands for healthcare services and significant changes in the regulatory and business climates can make the challenges particularly acute. How can organizations scale their databases in an efficient and cost-effective way?
The InterSystems Caché 2015.1 data platform offers a solution. Identified as a Leader in the Gartner Magic Quadrant for Operational Database Management Systems,1 Caché combines advanced data management, integration, and analytics. Caché 2015.1 is optimized to take advantage of modern multi-core architectures and represents a new generation of ultra-high-performance database technology. Running on the Intel® Xeon® processor E7 v2 family, Caché 2015.1 provides a robust, affordable solution for scalable, data-intensive computing.
To examine the scalability of Caché 2015.1, InterSystems worked with performance engineers at Epic, whose electronic medical records (EMRs) and other healthcare applications are deployed by some of the world’s largest hospitals, delivery systems, and other healthcare organizations. The test team found that Caché 2015.1 with Enterprise Cache Protocol® (ECP®) technology on the Intel Xeon processor E7 v2 family achieved more than 21 million end-user database accesses per second (known in the Caché environment as Global References per Second, or GREFs) while maintaining excellent response times. This was more than triple the load level of 6 million GREFs achieved by Caché 2013.1 on the Intel® Xeon® processor E5 family.
"The scalability and performance improvements of Caché version 2015.1 are terrific. Almost doubling the scalability, this version provides a key strategic advantage for our user organizations who are pursuing large-scale medical informatics programs as well as aggressive growth strategies in preparation for the volume-to-value transformation in healthcare."
– Carl Dvorak, President, Epic
Article
Developer Community Admin · Oct 21, 2015
Providing a reliable infrastructure for rapid, unattended, automated failover
Technology Overview
Traditional availability and replication solutions often require substantial capital investments in infrastructure, deployment, configuration, software licensing, and planning. Caché Database Mirroring (Mirroring) is designed to provide an economical solution for rapid, reliable, robust, automatic failover between two Caché systems, making mirroring the ideal automatic failover high-availability solution for the enterprise.
In addition to providing an availability solution for unplanned downtime, mirroring offers the flexibility to incorporate certain planned downtimes on a particular Caché system while minimizing the impact on the organization's overall SLAs. Combining InterSystems Enterprise Cache Protocol (ECP) application servers with mirroring provides an additional level of availability. Application servers allow processing to seamlessly continue on the new system once the failover is complete, thus greatly minimizing workflow and user disruption. Configuring the two mirror members in separate data centers offers additional redundancy and protection from catastrophic events.
Key Features and Benefits
- Economical high-availability database solution with automatic failover
- Redundant components minimize shared-resource related risks
- Logical data replication minimizes risks of carry-forward physical corruption
- Provides a solution for both planned and unplanned downtime
- Provides business continuity benefits via a geographically dispersed disaster recovery configuration
- Provides business intelligence and reporting benefits via a centralized Enterprise Data Warehouse configuration
Traditional availability solutions that rely on shared resources (such as shared disk) are often susceptible to a single point of failure with respect to that shared resource. Mirroring reduces that risk by maintaining independent components on the primary and backup mirror systems. Further, by utilizing logical data replication, mirroring reduces the potential risks associated with physical replication, such as out-of-order updates and carry-forward corruption, which are possible with other replication technologies such as SAN-based replication.
Finally, mirroring allows for a special async member, which can be configured to receive updates from multiple mirrors across the enterprise. This allows a single system to act as a comprehensive enterprise data store, enabling - through the use of InterSystems DeepSee - real-time business intelligence that uses enterprise-wide data. The async member can also be deployed in a disaster recovery model in which a single mirror can update up to six geographically dispersed async members; this model provides a robust framework for distributed data replication, thus ensuring business continuity benefits to the organization. The async member can also be configured as a traditional reporting system so that application reporting can be offloaded from the main production system.
Article
Developer Community Admin · Oct 21, 2015
Introduction
To overcome the performance limitations of traditional relational databases, applications - ranging from those running on a single machine to large, interconnected grids - often use in-memory databases to accelerate data access. While in-memory databases and caching products increase throughput, they suffer from a number of limitations, including lack of support for large data sets, excessive hardware requirements, and limits on scalability.
InterSystems Caché is a high-performance object database with a unique architecture that makes it suitable for applications that typically use in-memory databases. Caché's performance is comparable to that of in-memory databases, but Caché also provides:
- Persistence - data is not lost when a machine is turned off or crashes
- Rapid access to very large data sets
- The ability to scale to hundreds of computers and tens of thousands of users
- Simultaneous data access via SQL and objects: Java, C++, .NET, etc.
This paper explains why Caché is an attractive alternative to in-memory databases for companies that need high-speed access to large amounts of data.
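As an illustration of the last point, a minimal Scala sketch of SQL access to Caché over JDBC might look like the following. It assumes a local instance with the SAMPLES namespace, the default superserver port 1972, and default credentials - all assumptions you would adjust for a real system:

import java.sql.DriverManager

object CacheSqlSketch {
  def main(args: Array[String]): Unit = {
    // Caché JDBC driver and URL format; adjust host, port, namespace and
    // credentials to match your own instance.
    Class.forName("com.intersys.jdbc.CacheDriver")
    val conn = DriverManager.getConnection(
      "jdbc:Cache://localhost:1972/SAMPLES", "_SYSTEM", "SYS")
    try {
      val rs = conn.createStatement().executeQuery(
        "SELECT TOP 5 Name, Age FROM Sample.Person ORDER BY Age DESC")
      while (rs.next()) println(s"${rs.getString("Name")} (${rs.getInt("Age")})")
    } finally conn.close()
  }
}

The same Sample.Person rows are also reachable as objects from the Java, C++, and .NET bindings, which is the simultaneous SQL and object access the paper refers to.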
Announcement
Janine Perkins · Feb 2, 2016
Do you need to quickly build a web page to interact with your database?
Take a look at these two courses to learn how Zen Mojo can help you display collections and make your collections respond to user interactions.
Displaying Collections and Using the Zen Mojo Documentation
Learn the steps for displaying a collection of Caché data on a Zen Mojo page, find crucial information in the Zen Mojo documentation, and find sample code in the Widget Reference. Learn More.
Zen Mojo: Handling Events and Updating Layouts
This is an entirely hands-on course devoted to event handling and updating the display of a Zen Mojo page in response to user interaction. Learn to create a master-detail view that responds to user selections. Learn More.
Question
Mike Kadow · Jan 15, 2018
How can I access the InterSystems Class Database with the Atelier IDE? Say I want access to the Samples database and namespace?

I'm not clear what you mean by "access the Class Database". Connect Atelier to the SAMPLES namespace and you'll be able to view/edit classes from the SAMPLES database. Please expand on what else you're trying to do.

John, maybe I was asking two separate things. When I bring up the InterSystems documentation, there is an option to go to the Class Reference. When I select that option I can see all classes in the InterSystems database; that is what I really want to do. I put in the comment about Samples as an afterthought, not realizing it does not apply to the other question.

OK, I realize I am not being clear; often I say things before I think them through. When I bring up the InterSystems documentation, I can select the Class Reference option. From Studio I can look up the classes that are in the Class Reference. I tried to do the same thing in Atelier and was unable to find a command to browse through all the classes I see in the Class Reference. That is what I am trying to do. I hope that is clear.

You can also see this information in the Atelier Documentation view as you move focus within a class. If you do not see this view, you can launch it by selecting Window > Show View > Other > Atelier > Atelier Documentation > Open. For example, I opened Sample.Person on my local Atelier client, selected the tab at the bottom for Atelier Documentation, then clicked on "%Populate" in the list of superclasses. Now I can see this in the Atelier Documentation view.

Hey, Nicole, that's excellent! Whatever I click on shows up in the "Atelier Documentation" tab. Thanks for the hint!

Easier: in the Server Explorer tab, click to add a new server connection (green cross).

OK, you are looking for something different than I understood. The Class Reference in DocBook does not seem to be directly available as in Studio - only by external access to the documentation. Part of it is available if you have a class in your editor and move your cursor over a class name: you get a volatile class description that you can nail down by clicking or pressing <F2>. It's pretty similar to the DocBook version, EXCEPT that you have no further references (e.g. data types, %Status, ...), so it's not multi-level navigation like in the browser. For illustration, I have done this for %Persistent. For %Populate or %XML.Adaptor you have to do it again and again.
Article
Benjamin De Boe · Jan 31, 2018
With the release of InterSystems IRIS, we're also making available a nifty bit of software that allows you to get the best out of your InterSystems IRIS cluster when working with Apache Spark for data processing, machine learning and other data-heavy fun. Let's take a closer look at how we're making your life as a Data Scientist easier, as you're probably facing tough big data challenges already, just from the influx of job offers in your inbox!
What is Apache Spark?
Together with the technology itself, we're also launching an exciting new learning package called the InterSystems IRIS Experience, an immersive combination of fast-paced courses, crisp videos and hands-on labs in our hosted learning environment. The first of those focuses exclusively on our connector for Apache Spark, so let's not reinvent the introduction wheel here and refer you to the course for a broader introduction. It's 100% free, after all!
In (very!) short, Apache Spark offers you an abstract object representation of a potentially massive dataset. As a developer, you just call methods on this Dataset interface like filter(), map() and orderBy() and Spark will make sure they are executed efficiently, leveraging the servers in your Spark cluster by parallelizing the work as much as possible. It also comes with a growing set of libraries for Machine Learning, streaming and analyzing graph data that leverage this dataset paradigm.
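For a flavour of what that looks like in code, here is a tiny Scala sketch using Spark's Dataset API; the input path and column names are invented for the example:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("dataset-demo").getOrCreate()
import spark.implicits._

// Hypothetical input: one JSON record per sale with "amount" and "region" fields.
val sales = spark.read.json("/data/sales.json")

val topRegions = sales
  .filter($"amount" > 1000)                   // keep large transactions only
  .map(row => row.getAs[String]("region"))    // project down to the region
  .groupBy("value").count()                   // a Dataset[String] exposes its column as "value"
  .orderBy($"count".desc)

topRegions.show(10)

Spark decides how to distribute the filter, map, and aggregation across the cluster; you only describe what you want done with the Dataset.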
Why combine Spark with InterSystems IRIS?
Spark isn't just good at data processing and attracting large crowds at open source conferences; it's also good at allowing smart database vendors to participate in this drive to efficiency through its Data Source API. While the Dataset object abstracts the user from any complexities of the underlying data store (which could be pretty crude in the case of a file system), it does offer the database vendors on the other side of the object a chance to declare how those complexities (which we call features!) may be leveraged by Spark to improve overall efficiency. For example, a filter() call on the Dataset object could be forwarded to an underlying SQL database in the form of a WHERE clause. This predicate pushdown mechanism means the compute work is pushed closer to the data, allowing greater overall efficiency and throughput, and part of building a connector for Apache Spark is registering all core functions that can be pushed down into the data source.
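A hedged way to see this pushdown from the Spark side (continuing with the SparkSession from the sketch above, and with placeholder connection details and an invented Amount column) is to look at the physical plan of a filtered read:

// Read a table through the generic JDBC source; the InterSystems IRIS
// connector uses a different format string and options (see below), but the
// pushdown idea is the same. URL and driver class here are assumptions.
val massiveSales = spark.read.format("jdbc")
  .option("url", "jdbc:IRIS://localhost:51773/USER")       // placeholder URL
  .option("driver", "com.intersystems.jdbc.IRISDriver")    // assumed driver class
  .option("dbtable", "BigData.MassiveSales")
  .load()

val bigSales = massiveSales.filter("Amount > 10000")

// explain() prints the physical plan; with predicate pushdown you should see
// something like "PushedFilters: [GreaterThan(Amount,10000)]" instead of a
// full table scan followed by an in-Spark filter.
bigSales.explain()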
Besides this predicate pushdown, which many other databases support for Spark, our InterSystems IRIS Connector offers a few other unique advantages:
The connector is designed to work well with sharding, our new option for horizontal scalability. More specifically, if you're building a Dataset based on a sharded table, we'll make sure Spark slaves connect directly to the shard servers to read the data in parallel, rather than stand in line to pipe the data through the shard master.
As part of our new container deployment options, we're also offering a container that has both InterSystems IRIS and Apache Spark included. This means that when setting up a (sharded) cluster of these using ICM, you'll automatically have a Spark slave running alongside each data shard, allowing them to exploit data locality when reading or writing sharded tables, avoiding any network overhead. Note that if you set up your Spark and InterSystems IRIS clusters manually, with a Spark slave running on each server that has an InterSystems IRIS instance, you'll also benefit from this.
When reading data, the connector can implicitly partition the data being read by exploiting the same mechanism our SQL query optimizer uses in %PARALLEL mode. In this mode, multiple connections to the same InterSystems IRIS instance are opened to read the data in parallel, increasing throughput. With the basics in place already, you'll see further speedups coming in InterSystems IRIS 2018.2.
Also starting with InterSystems IRIS 2018.2, you'll be able to export predictive models built with SparkML to InterSystems IRIS with a single iscSave() method call. This will automatically generate a PMML class on the database side with native code to run the model in InterSystems IRIS in real time or batch scenarios.
Getting started with the InterSystems IRIS Connector is easy, as it's plug-compatible with the default JDBC connector that ships with Spark. So any Spark program that started with
var dataset = spark.read.format("jdbc")
.option("dbtable", "BigData.MassiveSales")
can now become
import com.intersys.spark._
var dataset = spark.read.format("iris")
.option("dbtable","BigData.MassiveSales")
Those are just the bare essentials to get you started with InterSystems IRIS as the data store behind your Apache Spark cluster. The rest will only be constrained by your Data Science imagination, coffee supply and the 24h in a typical day. Come and try it yourself in the InterSystems IRIS Experience for Big Data Analytics!

Can someone please explain how to get started with InterSystems IRIS - how to download it and how to work with it?

Contact your sales rep.
Announcement
Evgeny Shvarov · Feb 12, 2018
Hi, Community!
In two days, there will be a webinar with analysts Steve Duplessie and Mike Leone from the Enterprise Strategy Group and Joe Lichtenberg, Director of Marketing for Data Platforms at InterSystems. They will present their recent research on operational and analytics workloads on a unified data platform and discuss the top database deployment and infrastructure challenges that organizations are struggling with, including managing data growth and database size and meeting database performance requirements. Joe Lichtenberg will also introduce attendees to the company’s latest product, InterSystems IRIS Data Platform. Join us!
Building Smarter, Faster, and Scalable Data-Rich Applications for Businesses that Operate in Real-Time
Announcement
Evgeny Shvarov · Feb 14, 2018
Hi, Community!
Just a small announcement: the Community is growing, and we have just reached 4,000 registered members!
You can track the public DC analytics in the DeepSee dashboards under the Community->Analytics menu.
Announcement
Mike Morrissey · Mar 7, 2018
The InterSystems FHIR® Sandbox is a virtual testing environment that combines HealthShare technology with synthetic patient data and open source and commercial SMART on FHIR apps, to allow users to play with FHIR functionality.
The sandbox is designed to enable developers and innovators to connect and test their own DSTU2 apps to multi-source health records hosted by the latest version of HealthShare. Share your experience with others or ask questions here in the FHIR Implementers Group. Click here to access the InterSystems FHIR® Sandbox.