Article
Mark Bolinsky · Mar 21, 2017

InterSystems IRIS and Caché Application Consistent Backups with Azure Backup

Database systems have very specific backup requirements that, in enterprise deployments, call for forethought and planning. For database systems, the operational goal of a backup solution is to create a copy of the data in a state equivalent to when the application is shut down gracefully. Application-consistent backups meet these requirements, and Caché provides a set of APIs that facilitate integration with external solutions to achieve this level of backup consistency.

These APIs are ExternalFreeze and ExternalThaw. ExternalFreeze temporarily pauses writes to disk, and during this period Caché commits changes in memory. The backup operation must complete during this window and be followed by a call to ExternalThaw, which engages the write daemons to write the cached updates in the global buffer pool (database cache) to disk and resumes normal Caché database write daemon operation. The process is transparent to user processes within Caché. The specific API class methods are:

##Class(Backup.General).ExternalFreeze()
##Class(Backup.General).ExternalThaw()

These APIs, in conjunction with the new capability of Azure Backup to execute a script before and after a snapshot operation, provide a comprehensive backup solution for deployments of Caché on Azure. The pre/post scripting capability of Azure Backup is currently available only on Linux VMs.

Prerequisites

At a high level, there are three steps you need to perform before you can back up a VM using Azure Backup:

1. Create a Recovery Services vault.
2. Install the latest version of the VM Agent.
3. Check network access to the Azure services from your VM.

The Recovery Services vault manages the backup goals, the policies, and the items to protect. You can create a Recovery Services vault via the Azure Portal or via scripting using PowerShell. Azure Backup requires an extension that runs in your VM and is controlled by the Linux VM agent; the latest version of the agent is also required. The extension interacts with the external-facing HTTPS endpoints of Azure Storage and the Recovery Services vault. Secure access to those services from the VM can be configured using a proxy and network rules in an Azure Network Security Group. For more information about these steps, visit Prepare your environment to back up Resource Manager-deployed virtual machines.

Pre and Post Scripting Configuration

The ability to call a script before and after the backup operation is included in the latest version of the Azure Backup extension (Microsoft.Azure.RecoveryServices.VMSnapshotLinux). For information about how to install the extension, please check the detailed feature documentation. By default, the extension includes sample pre and post scripts, located in your Linux VM at:

/var/lib/waagent/Microsoft.Azure.RecoveryServices.VMSnapshotLinux-1.0.9110.0/main/tempPlugin

They need to be copied to the following locations, respectively:

/etc/azure/prescript.sh
/etc/azure/postScript.sh

You can also download the script templates from GitHub. For Caché, prescript.sh is where a call to the ExternalFreeze API can be implemented, and postScript.sh should contain the implementation that executes ExternalThaw.
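If it helps, the copy step can be scripted along these lines (a minimal sketch: it assumes the sample files in tempPlugin carry the same names as their destinations, and that the extension version in the path matches the one cited above):

# create the target directory and copy the samples into place (assumed file names)
sudo mkdir -p /etc/azure
src=/var/lib/waagent/Microsoft.Azure.RecoveryServices.VMSnapshotLinux-1.0.9110.0/main/tempPlugin
sudo cp "$src/prescript.sh" /etc/azure/prescript.sh
sudo cp "$src/postScript.sh" /etc/azure/postScript.sh
# restrict access so only root can read or modify the backup hooks
sudo chmod 700 /etc/azure/prescript.sh /etc/azure/postScript.sh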
The following is a sample prescript.sh implementation for Caché.

#!/bin/bash
# variables used for returning the status of the script
success=0
error=1
warning=2
status=$success
log_path="/etc/preScript.log" # path of log file
printf "Logs:\n" > $log_path
# TODO: Replace <CACHE INSTANCE> with the name of the running instance
csession <CACHE INSTANCE> -U%SYS "##Class(Backup.General).ExternalFreeze()" >> $log_path
status=$?
if [ $status -eq 5 ]; then
    echo "SYSTEM IS FROZEN"
    printf "SYSTEM IS FROZEN\n" >> $log_path
elif [ $status -eq 3 ]; then
    echo "SYSTEM FREEZE FAILED"
    printf "SYSTEM FREEZE FAILED\n" >> $log_path
    status=$error
    csession <CACHE INSTANCE> -U%SYS "##Class(Backup.General).ExternalThaw()"
fi
exit $status

The following is a sample postScript.sh implementation for Caché.

#!/bin/bash
# variables used for returning the status of the script
success=0
error=1
warning=2
status=$success
log_path="/etc/postScript.log" # path of log file
printf "Logs:\n" > $log_path
# TODO: Replace <CACHE INSTANCE> with the name of the running instance
csession <CACHE INSTANCE> -U%SYS "##Class(Backup.General).ExternalThaw()"
status=$?
if [ $status -eq 5 ]; then
    echo "SYSTEM IS UNFROZEN"
    printf "SYSTEM IS UNFROZEN\n" >> $log_path
elif [ $status -eq 3 ]; then
    echo "SYSTEM UNFREEZE FAILED"
    printf "SYSTEM UNFREEZE FAILED\n" >> $log_path
    status=$error
fi
exit $status

Executing a Backup

In the Azure Portal, you can trigger the first backup by navigating to the Recovery Services vault. Note that the VM snapshot itself should take only a few seconds, regardless of whether it is the first or a subsequent backup. The data transfer of the first backup takes longer, but it starts only after the post-script has thawed the database, so it has no impact on the time between the pre- and post-scripts. It is highly recommended to regularly restore your backups in a non-production setting and perform database integrity checks to ensure your data protection operations are effective. For more information about how to trigger the backup and other topics such as backup scheduling, please check Back up Azure virtual machines to a Recovery Services vault.

I see this was written in March 2017. By chance, has this ability to freeze/thaw Caché on Windows VMs in Azure been implemented yet? Can a brief description of why this cannot be performed on Windows VMs in Azure be given? Thanks for the excellent research and information, always appreciated.

Hi Dean - thanks for the comment. There are no changes required from a Caché standpoint; however, Microsoft would need to add similar functionality to Windows to allow Azure Backup to call a script within the target Windows VM, as it does with Linux. Once Microsoft provides that capability, the scripting from Caché would be exactly the same on Windows, except for using .BAT syntax rather than Linux shell scripting. Microsoft may already have this capability?
I'll have to look to see if they have extended it to Windows as well.
Regards,
Mark B-

Microsoft only added this functionality to Linux VMs to get around the lack of a VSS-equivalent technology in Linux. They expect Windows applications to be compatible with VSS. We have previously opened a request for InterSystems to add VSS support to Caché, but I don't believe progress has been made on it. Am I right in understanding that IF we are happy with crash-consistent backups, then as long as a backup solution is a point-in-time snapshot of the whole disk system (including journals and database files), said backup solution should be safe to use with Caché? Obviously application-consistent is better than crash-consistent, but with the WIJ in there we should be safe.

We are receiving more and more requests for VSS integration, so there may be some movement on it; however, no guarantees or commitments at this time. In regards to the alternative of a crash-consistent backup: yes, it would be safe as long as the databases, WIJ, and journals are all included and have a consistent point-in-time snapshot. The databases in the backup archive may be "corrupt", and only after starting Caché, once the WIJ and journals have been applied, will they be physically accurate. Just like you said - a crash-consistent backup, and the WIJ recovery is key to the successful recovery. I will post back if I hear of changes coming with VSS integration.

Thanks for the reply Mark, that confirms our understanding. Glad we're not the only people asking for VSS support!

For those watching this thread: we have introduced VSS integration starting with version 2018.1. Here is a link to our VSS support announcement.

Hi all,
Please note that these scripts are also usable with IRIS. In each of the 'pre' and 'post' scripts you only need to change each of the "csession <CACHE INSTANCE> ..." references to "iris <IRIS INSTANCE> ..."
Regards,
Mark B-
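Concretely, the adapted calls would look something like the following (a hedged sketch based on the note above; <IRIS INSTANCE> remains a placeholder, and the full command form for opening an IRIS terminal session is "iris session <instance>"):

# IRIS adaptation of the Caché freeze/thaw calls from the scripts above
iris session <IRIS INSTANCE> -U%SYS "##Class(Backup.General).ExternalFreeze()" >> $log_path
iris session <IRIS INSTANCE> -U%SYS "##Class(Backup.General).ExternalThaw()"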
Article
Luca Ravazzolo · Sep 21, 2017

InterSystems Cloud Manager and Containers at GS2017 XP-Lab

Last week saw the launch of the InterSystems IRIS Data Platform in sunny California. For the engaging eXPerience Labs (XP-Labs) training sessions, my first customer and favourite department (Learning Services) was working hard assisting and supporting us all behind the scenes. Before the event, Learning Services set up the most complicated part of public cloud :) "credentials-for-free" for a smooth and fast experience for all our customers at the summit. They did extensive testing before the event so that we could all spin up cloud infrastructures to test the various new features of the new InterSystems IRIS data platform without glitches. The reason they were so agile, nimble & fast in setting up all those complex environments is that they used technologies we provided straight out of our development furnace. OK, I'll be honest: our Online Education Manager, Douglas Foster, and his team have worked hard too and deserve a special mention. :-)

Last week, at our Global Summit 2017, we had nine XP-Labs over three days. More than 180 people had the opportunity to test-drive new products & features. The labs were repeated each day of the summit, and customers had the chance to follow the training courses with a BYOD approach, as everything worked (and works, in the online training courses that will be provided at https://learning.intersystems.com/) inside a browser. Here is the list of the XP-Labs given, and some facts:

1) Build your own cloud

Cloud is about taking advantage of the on-demand resources available and the scalability, flexibility, and agility that they offer. The XP-Lab focused on the process of quickly defining and creating a multi-node infrastructure on GCP. Using InterSystems Cloud Manager, students provisioned a multi-node infrastructure with a dynamically configured InterSystems IRIS data platform cluster that they could test by running a few commands. They also had the opportunity to unprovision it all with one single command, without having to click all over a time-consuming web portal. I think it is important to understand that each student was actually creating her or his own virtual private cloud (VPC), with her or his dedicated resources and her or his dedicated InterSystems IRIS instances. Everybody was independent of each other. Every student had her or his own cloud solution. There was no sharing of resources.

Numbers: we had more than a dozen students per session. Each student had her or his own VPC with 3 compute-nodes. With the largest class of 15 people, we ended up with 15 individual clusters. That made a total of 45 compute-nodes provisioned during the class, with 45 InterSystems IRIS instances running & configured in small shard clusters. There were a total of 225 storage volumes. Respecting our best practices, we provide default volumes for a sharded DB, the JRN & WIJ files, and the Durable %SYS feature (more on this in another post later), plus the default boot OS volume.

2) Hands-On with Spark

Apache Spark is an open-source cluster-computing framework that is gaining popularity for analytics, particularly predictive analytics and machine learning. In this XP-Lab, students used InterSystems' connector for Apache Spark to analyze data that was spread over a multi-node sharded architecture of the new InterSystems IRIS data platform.

Numbers: 42 Spark clusters were pre-provisioned by one person (thank you, Douglas, again). Each cluster consisted of 3 compute-nodes, for a total of 126 node instances.
There were 630 storage volumes, for a total of 6.3TB of storage used. The InterSystems person who pre-provisioned the clusters ran multiple InterSystems Cloud Manager instances in parallel to pre-provision all 42 clusters. The same Cloud Manager tool was also used to reset the InterSystems IRIS containers (stop/start/drop_table) and, at the end of the summit, to unprovision/destroy all clusters so as to avoid unnecessary charges.

3) RESTful FHIR & Messaging in Health Connect

Students used Health Connect messaging and FHIR data models to transform and search for clinical data. Various transformations were applied to various messages.

Numbers: two paired containers per student were used for this class. On one container we provided the web-based Eclipse Orion editor, and on the other the actual Health Connect instance. The containers ran over 6 different nodes managed by the Docker Swarm orchestrator.

Q&A

So how did our team achieve all the above? How were they able to run all those training labs on the Google Cloud Platform? Did you know there was a backup plan (you never know in the cloud) to run on AWS? And did you know we could just as easily run on Microsoft Azure? How could all those infrastructures & instances run and be configured so quickly, within practical lab sessions of no more than 20 minutes? Furthermore, how can we quickly and efficiently remove hundreds or thousands of resources without wasting hours clicking on web portals?

As you must have gathered by now, our Online Education team used the new InterSystems Cloud Manager to define, create, provision, deploy and unprovision the cloud infrastructures and the services running on top of them. Secondly, everything customers saw, touched & experienced ran in containers. What else these days? :-)

Summary

InterSystems Cloud Manager is a public, private and on-premises cloud tool that allows you to provision the infrastructure and configure and run InterSystems IRIS data platform instances. Out of the box, Cloud Manager supports the top three public IaaS providers (AWS, GCP and Azure), but it can also assist you with a private and/or on-premises solution, as it supports the VMware vSphere API and pre-existing server nodes (either virtual or physical).

When I said "out of the box" above, I did not lie :) InterSystems Cloud Manager comes packaged in a container, so you do not have to install anything, configure any software, or set any variable in your environment. You just run the container, and you're ready to provision your cloud. Don't forget your credentials, though ;-)
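As a rough illustration of that workflow (a sketch only: the image name and tag are hypothetical, and the definitions.json/defaults.json configuration files are assumed to already describe your target cluster and credentials):

# launch the ICM container (image name and tag are illustrative)
docker run -it intersystems/icm:latest
# inside the container, with defaults.json and definitions.json in place:
icm provision      # carve out the cloud infrastructure (VPC, compute nodes, volumes)
icm run            # deploy and configure InterSystems IRIS containers on those nodes
icm ps             # check the state of the deployed containers across the cluster
icm unprovision    # tear everything down again to avoid unnecessary charges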
The InterSystems Cloud Manager, although in the infancy of its MVP (minimum viable product) version, has already proven itself. It allows us to run on and test various IaaS providers quickly, provision a solution on-premises, or just carve out a cloud infrastructure according to our definition. I like to define it as a "battery included but swappable" solution. If you already have an installation and configuration solution developed with configuration management (CM) tools (Ansible, Puppet, Chef, Salt or others), and perhaps you want to test an alternative cloud provider, Cloud Manager allows you to create just the cloud infrastructure while you still build your systems with your CM tool. Just be careful of the unavoidable system drift over time. On the other hand, if you want to start embracing a more DevOps-style approach, appreciate the difference between the build phase and the run phase of your artefact, become more agile, and support multiple deliveries and possibly deployments per day, you can use InterSystems' containers together with the Cloud Manager. The tool can provision and configure both the new InterSystems IRIS data platform sharded cluster and traditional architectures (ECP client-servers + data server, with or without InterSystems Mirroring).

At the summit, we also had several technical sessions on Docker containers and two on the Cloud Manager tool itself. All sessions registered a full house. I also heard that many other sessions were packed. I was particularly impressed with the Docker container introductory session on Sunday afternoon, where I counted 75 people. I don't think we could have fitted anybody else in the room. I thought people would have gone to the swimming pool :) Instead, I think we had a clear sign telling us that our customers like innovation and are keen to learn.

Below is a picture depicting how our Learning Services department allowed us to test-drive the Cloud Manager at the XP-Lab. They ran a container based on the InterSystems Cloud Manager, and they added an nginx web server so that we can HTTP-connect to it. The web server delivers a simple single page that loads a browser-based editor (Eclipse Orion), and at the bottom of the screen the student is connected directly to the shell (GoTTY via WebSocket) of the same container, so that she or he can run the provisioning & deployment commands. This training container, with all these goodies :), runs on a cloud - of course - and, thanks to the pre-installed InterSystems Cloud Manager, students can provision and deploy a cluster solution on any cloud (just provide credentials).

To learn more about InterSystems Cloud Manager, here is an introductory video https://learning.intersystems.com/course/view.php?id=756 and the Global Summit session https://learning.intersystems.com/mod/page/view.php?id=2864. For InterSystems & containers, here are some of the sessions from GS2017:
https://learning.intersystems.com/course/view.php?id=685
https://learning.intersystems.com/course/view.php?id=696
https://learning.intersystems.com/course/view.php?id=737

--

When would the Experience Labs be available on learning.intersystems.com?

We are currently working to convert the experience labs into online experiences. The FHIR Experience will be ready in the next two weeks, closely followed by the Spark Experience. We have additional details to work out for the Build Your Own Cloud experience, as it runs by building in our InterSystems cloud and can consume a lot of resources, but we expect to get that worked out in the next 4-6 weeks. Thanks Luca for the mention above, but it was a large team effort, with several people from the Online Learning team as well as product managers, sales engineers, etc.

@Eduard Lebedyuk & all: you can find a "getting started" course at https://learning.intersystems.com/
HTH
Announcement
Evgeny Shvarov · Sep 11, 2017

InterSystems Global Summit 2017 Key Notes Live Stream

Hi, Community! The Global Summit 2017 keynote session will start in two hours, at 9:00 AM (PT). Here is the link for live streaming. Join!

Text your questions about the Global Summit 2017 keynotes to +16179968827 to get them answered.

Is there a recording of this for people who couldn't watch in real time?

Hi, Mike! It will be posted on YouTube in a few days; we will make an announcement here.

Any update on when the videos will be posted to YouTube, specifically ones like the keynotes?

You can also find the InterSystems Global Summit keynote presentations in a dedicated Global Summit 2017 playlist on the InterSystems Developers YouTube Channel:
InterSystems Global Summit Keynote - Part 1
InterSystems Global Summit Keynote - Part 2
Enjoy!
Announcement
Evgeny Shvarov · Oct 24, 2017

InterSystems Developer Meetup in Cambridge 25th of October 2017

Hi, Community!

We are having an InterSystems Developer Meetup tomorrow, the 25th of October, in the CIC.

What is it?

It's an open evening event to:
- learn more about InterSystems products and new technology features;
- discuss them with other developers in your area and with developers and engineers from InterSystems Corporation;
- network with developers of innovative solutions.

Why attend?

If you are new to the InterSystems data platform, the Meetup is a great way to learn more and get a direct impression. You can hear about the new features and best practices of InterSystems products and discuss your tasks with experienced developers who have already used them successfully, or with InterSystems employees. If you are already using InterSystems products, it's a great way to meet in person other developers who are making and supporting solutions on the InterSystems Data Platform in your region, and to discuss your problems and questions with InterSystems developers and engineers directly.

Why attend tomorrow?

Come tomorrow because we have a great AGENDA!

6:00 pm InterSystems IRIS: Sharding and Scalability
We just launched the new data platform InterSystems IRIS, which comes with a sharding feature. Tomorrow Jeff Miller, one of the sharding developers, will describe how you can benefit from it, and you can ask him how it works.

6:30 pm Optimize Your Workflow with Atelier 1.1
And! Hope you've heard a lot already about our new IDE, Atelier! Tomorrow you can hear an update on how Atelier can help you develop InterSystems solutions more effectively, and you can talk directly to Atelier developer [@Michelle.Stolwyk].

7:30 pm Clustering Options for High Availability and Scalability
Also, the InterSystems Data Platform is known for its high availability features. [@Oren.Wolf], InterSystems product manager, will give a session with more details on InterSystems high availability solutions.

How to find the place?

It's in the Cambridge Innovation Center, One Broadway, Cambridge, MA. Come at 5:30 pm, bring your ID, come up to the 5th floor, and join us in the Venture Cafe. Join us for food, beverages, and networking, and discuss powerful new InterSystems solutions with other developers in the Boston metro area. See the live stream recording! Join the Live Stream today and ask your questions online!

Thanks for this, Evgeny. It doesn't look like I'll be driving down tonight given the weather here in Maine, so I'll be participating via live stream!

Sure, Jeff! Hope you can make the next one. Prepare your questions! )

InterSystems IRIS Data Platform: Sharding and Scalability by [@Jeff.Miller]

And here's the complete recording: https://www.youtube.com/watch?v=J3QLibe15xs [including a 30 min break]

Yep, we will post a remastered version soon )

I've posted the 'Clustering options for high availability and scalability' slides on SlideShare (here)

Here's my slide deck - The Power Boost of Atelier!

It has private access for now and is available only to you.

Fixed!

Hi! Here is the remastered version of the Meetup live stream recording.
Announcement
Evgeny Shvarov · Oct 31, 2017

Key Notes Videos From InterSystems Global Summit 2017

Hi, Community!

See the keynote videos from Global Summit 2017, with the new InterSystems IRIS Data Platform announcement:
InterSystems Global Summit Keynote - Part 1
InterSystems Global Summit Keynote - Part 2
Article
Developer Community Admin · Oct 21, 2015

Performance Comparison of InterSystems Caché and Oracle in a Data Mart Application

Abstract

A global provider of mobile telecommunications software tested the performance of InterSystems Caché and Oracle as the database in a simulated data mart application. They found Caché to be 41% faster than Oracle at building a data mart. When testing the response time to SQL queries of the data mart, Caché's performance ranged from 1.8 times to 513 times faster than Oracle's.

Introduction

Telecommunications companies, because they generate and must analyze enormous amounts of information, are among the most demanding database users in the world. To make business intelligence solutions practicable, telecommunications firms typically select key pieces of raw data to be loaded into a "data mart", where the data is indexed and aggregated in various ways before being made available for analysis. Even so, the data marts in question may be hundreds of gigabytes in size. Database performance, both in the creation of the data mart and in the query response time of the data mart, is critical to the timely analysis of information, and ultimately to the ability of the enterprise to identify and act upon changes in its business environment.

This paper presents the results of comparative performance tests between InterSystems Caché and Oracle. They were performed by a global provider of mobile telecommunications software as it evaluated database technology for incorporation into new business intelligence applications relating to mobile phone usage.
Article
Developer Community Admin · Oct 21, 2015

Using InterSystems Caché for Securely Storing Credit Card Data

Introduction

In today's world, an ever-increasing number of purchases and payments are being made by credit card. Although merchants and service providers who accept credit cards have an obligation to protect customers' sensitive information, the software solutions they use may not support "best practices" for securing credit card information. To help combat this issue, a security standard for credit card information has been developed and is being widely adopted.

The Payment Card Industry (PCI) Data Security Standard (DSS) is a set of guidelines for securely handling credit card information. Among its provisions are recommendations for storing customer information in a database. This paper will outline how software vendors can take advantage of the InterSystems Caché database - now and in the future - to comply with data storage guidelines within the PCI DSS.
Announcement
Steve Brunner · Jan 31, 2018

InterSystems IRIS Data Platform 2018.1.0 Release

InterSystems is pleased to announce that InterSystems IRIS Data Platform 2018.1.0 is now released. This press release was issued this morning.

InterSystems IRIS is a complete, unified data platform that makes it faster and easier to build real-time, data-rich applications. It mirrors our design philosophy that software should be interoperable, reliable, intuitive, and scalable. For information about InterSystems IRIS, please visit our website here. You'll find out why InterSystems IRIS is the first complete data platform suitable for transactional, analytic, and hybrid transactional-analytic applications. See how our InterSystems Cloud Manager enables open and rapid deployment on public and private clouds, bare metal, or virtual machines. Review our vertical and horizontal scaling, to ensure high performance regardless of your workloads, data sizes, or concurrency needs. Discover how to use familiar tools and languages like Java and REST to interoperate with all your data.

To interactively learn more about InterSystems IRIS, we've introduced the InterSystems IRIS Experience, a new environment that allows guided and open access to the InterSystems IRIS Data Platform for exploring its powerful features. The Experience offers you the ability to solve challenges and build solutions using real-world data sets. With compelling scenarios, such as big data analytics and predicting patterns of financial fraud, developers and organizations can experience the power of InterSystems IRIS firsthand. You can also review the complete online documentation.

What happens with Zen in IRIS? %ZEN classes are hidden in the Class Reference, and there is nothing in the documentation. Is it deprecated? Did anything else also disappear since Caché/Ensemble?

- no DOCBOOK
- no SAMPLES
- no JDBC driver in the install, but there is C:\InterSystems\IRIS\dev\java\lib\JDK18\isc-jdbc-3.0.0.jar .....
- and a very small icon for csystray.exe in Windows

Hi Robert,

DocBook has now moved fully online, which is what the mgmt portal will link to: http://docs.intersystems.com/iris

SAMPLES included quite a few outdated examples and was also not appropriate for many non-dev deployments, so we've moved to a different model there, posting the most relevant ones on GitHub, giving us more flexibility to provide updates and new ones: https://github.com/intersystems?q=samples

JDBC driver: to what extent is this different from the past? It's always just been available as a jarfile, as is customary for JDBC drivers. We do hope to be able to post it through Maven repositories in the near future though.

Small icons: yeah, to make our installer and (more importantly) the container images more lightweight, we had to economize on space. Next to the removal of DocBook and Samples, using smaller icons also reduces the size in bytes ;) ;)

InterSystems IRIS is giving us the opportunity to adopt a contemporary deployment model, where we were somewhat restricted by long-term backwards compatibility commitments with Caché & Ensemble. Some of these will indeed catch your eye and might even feel a little strange at first, but we really believe the new model makes developing and deploying applications easier and faster. Of course, we're open to feedback on all of these evolutions, and this is a good channel to hear from you.

Thanks!
benjamin

Hi Dmitry,

Zen is indeed no longer a central piece of our application development strategy. We'll support it for some time to come (your Zen app still works on IRIS), but our focus is on providing a fast and scalable data management platform rather than GUI libraries.
In that sense, you may already have noticed that recent courses we published on the topic of application development focus on leveraging the right technologies to connect to the backend (i.e. REST) and suggest using best-of-breed third-party technologies (i.e. Angular) for web development.

InterSystems IRIS is a new product where we're taking advantage of our Caché & Ensemble heritage. It's meant to address today's challenges when building critical applications, and we've indeed leveraged a number of capabilities from those products, but also added a few significant new ones like containers, cloud & horizontal scalability. We'll shortly be providing an overview of elements to check for Caché & Ensemble customers that would like to migrate to InterSystems IRIS (i.e. differences in supported platforms), but please don't consider this merely an upgrade. You may already have noticed the installer doesn't support upgrading anyhow.

Thanks,
benjamin

Hi Ben,
- I like the idea of external samples. That definitely offers more flexibility.
- DOCUMATIC is unchanged and works locally! That's important. OK
- JDBC: it isn't visible in the Custom Install. You only see xDBC -> ODBC. Not an issue, rather a surprise. The .jar files are where they used to be before.
I'm really happy that we can finally get out of the old chains imposed by 40 years (DSM-11 and others) of backward compatibility.
Robert

Hooray :) What date is the Zen end-of-life planned for (i.e. when is it supported until, and when will it be removed)?

Hi Ben,
I just installed a few samples as in the description. CONGRATULATIONS!
- Not only do I get what I want and leave the rest aside,
- I also can split samples by subject into several DBs and namespaces without being trapped by (hidden) dependencies.
I think this makes life for training and teaching significantly easier! And it allows central bug fixing independent of core release dates. A great idea! Thanks!

And anybody can even offer their own examples. Right!! A major step forward!

Regarding samples, see also the InterSystems IRIS documentation: Downloading Samples for Use with InterSystems IRIS

What about Health Connect clients - does IRIS include components that are part of HealthShare Elite?
Thanks
Yury

Hi, I need IRIS - how do I download it?

Do you have login credentials for WRC Online at http://wrc.intersystems.com/? Once you have logged in, use the "Online Distributions" link. You'll need to contact your InterSystems account manager to request a license key. Alternatively, get your hands on InterSystems IRIS in the cloud here.
Announcement
Steve Brunner · Jun 5, 2018

InterSystems IRIS Data Platform 2018.1.1 Release

InterSystems is pleased to announce the release of InterSystems IRIS Data Platform 2018.1.1. This is our first follow-up to InterSystems IRIS 2018.1.0, released earlier this year. InterSystems IRIS is a unified data platform for building scalable multi-workload, multi-model data management applications with native interoperability and an open analytics platform. As a reminder, you can learn more about InterSystems IRIS Data Platform from these Learning Services resources.

Hundreds of fixes have been made in this release:
- Many changes that will greatly reduce future compatibility issues.
- Improvements to the documentation, user interface, and packaging to standardize naming of InterSystems IRIS components.
- Significant reliability improvements to our sharded query processing architecture, as well as unifying all application code into a single database. This unification eliminates redundancy and the potential for inconsistency, and provides a foundation for sharding and distributed processing across all our data models.

Please read this document for complete details about the supported platforms, including cloud platforms and Docker containers. For those of you who want to get hands-on quickly, we urge you to try out our InterSystems IRIS Experience. You'll notice there is a brand new Java experience. The build corresponding to this release is 2018.1.1.643.0.

To all the InterSystems engineers: great job, and I am happy to use this product soon in my solutions.

Great, but where is Caché 2018 (or at least the field test)?

Are there any release notes? I can't seem to find any.
Regards
George
www.georgejames.com

Caché and Ensemble 2018.1 Field Test is coming soon this summer. Unlike most maintenance releases, we've not created separate release notes for this release. Relevant information has been added directly to the main documents.

GREAT! We're waiting for Caché 2018 to upgrade all of our instances, because 2017 has issues with ODBC.

"...2017 has issues with ODBC" - Hi Kurt, could you drop a few words on the subject: what kind of issues does it have? (Just started moving the clients to 2017.2.1...)

Hello,
We're having issues with our Delphi client where it receives empty datasets. Response from WRC: the problem was that Delphi switches from read uncommitted to read committed mode, and we had a problem there with joins on null values. The fix will be included in the next maintenance kits for 2017.2.x. The change already went into 2018.1, so it will be in all versions 2018+.

Has InterSystems yet published any guidance to help existing Caché or Ensemble users move their code onto IRIS? I haven't yet found that kind of document at https://docs.intersystems.com/

Yes, we maintain an adoption guide that covers exactly that purpose. In order to be able to properly follow up on questions you'd have, we're making it available through your technical account team (sales engineer or TAM) rather than shipping it with the product.

Maybe worth stating that in the product docs?

Hi, what issues are you having with ODBC? We have a lot of clients connecting to our Cache DB through ODBC.
Announcement
Janine Perkins · Apr 20, 2017

Featured InterSystems Online Course: Designing Productions Non-Healthcare

Design a production in a development environment using best practices. After you have built your first production in a test environment, you can start applying what you have learned and begin building in a development environment. Take the Designing Productions Non-Healthcare course to learn additional information to enable you to successfully design your production. Much of the information in this course is considered best practices.Learn More.
Article
Evgeny Shvarov · May 7, 2017

InterSystems iKnow analytics against Developer Community posts

Hi, Community!

Hope you know and use the Developer Community Analytics page, which is built with InterSystems DeepSee and DeepSee Web. We are playing with InterSystems iKnow analytics against Developer Community posts and have introduced a new dashboard, which shows the top 60 concepts for all the posts:

Or play with filters and see the top concepts for the Atelier tag:

Or the top concepts of a particular member:

Click on a concept on the left to see the related articles on the Developer Community. Here is a small gif showing how it works:

Next we plan to introduce concepts on Answers too, and to fix tags and introduce new tags according to the concept stats. Your ideas and feedback would be much appreciated.

Hi Evgeny,
Nice work! Maybe you can enhance the interface by also including an iKnow-based KPI in the dashboard, exposing the similar or related entities for the concept clicked in the heat map. You can subclass this generic KPI and specify the query you want it to invoke, and then use it as the data source for a table widget. Let me know if I can help.
thanks,
benjamin

Thank you, Benjamin! Yes, similar and related concepts are in the roadmap too. Thanks for the useful link!
Announcement
Janine Perkins · May 24, 2017

New InterSystems Online Course: Health Insight Data Flow

Take this course to learn how data flows from HealthShare Information Exchange to Health Insight, along with the details of that data flow.

Learn how to:
- Relate a clinical scenario supported by Health Insight to its internal data structures and processes.
- Identify the main data management components of HealthShare Information Exchange and Health Insight.
- Describe the details of the data flow between HealthShare Information Exchange and Health Insight.
- Differentiate between HL7 and CCD data handling in HealthShare Information Exchange.
- Recognize configuration points in the system and how they affect system performance.
- Define the HealthShare Information Exchange internal data structures and how they are used.

Audience: HealthShare customers. This course is for anyone who customizes or supports Health Insight, as well as power users of Health Insight who need an understanding of its technical details.

Learn More.
Announcement
Evgeny Shvarov · Jun 30, 2017

Get Your Free Registration on InterSystems Global Summit 2017!

Hi, Community!

Hope you have already put the visit to InterSystems Global Summit 2017 in your schedule; it will take place on the 10th-13th of September at the remarkable JW Marriott Desert Springs Resort and Spa. This year we have the Experience Lab, the Unconference, and 50 more sessions covering performance, cloud, scalability, FHIR, high availability, and other solutions and best practices. Global Summit is the most effective way to learn what's new and what the most powerful practices are for making successful solutions with InterSystems technology.

Today is the last day for early bird $999 tickets. But! You can get a free-of-charge ticket on the InterSystems Global Masters Advocate Hub! There are numerous ways to earn points: write articles or answer questions, publish testimonials or provide referrals, or simply watch and read articles and share them on social networks. To join Global Masters, leave a comment on this post and we'll send you a personal invite link.

Note! To allow us to recognize your contribution to the Developer Community in Global Masters, register with the same email you use on the Developer Community. Also, Community moderators get free tickets to Global Summit. This year they are [@Eduard.Lebedyuk], [@John.Murray], and [@Dmitry.Maslennikov]. See you at InterSystems Global Summit 2017!

Hi, Community! Just want to share the good news. Early bird registration for $999 has been extended until the 14th of July, and we also have $200 and $300 discounts for you on Global Masters.

See the agenda of daily sessions at the Solution Developers Conference.
Announcement
Anastasia Dyubaylo · Aug 14, 2018

New Video: Deploying Shards Using InterSystems Cloud Manager

Hi Community!

The new video "Deploying Shards Using InterSystems Cloud Manager" is available now on InterSystems Developers YouTube: In this demonstration, you will learn how to deploy a sharded cluster using InterSystems Cloud Manager (ICM), including defining the deployment, provisioning the architecture, and deploying and managing services.

What's more, you are very welcome to watch all the videos about ICM in the dedicated InterSystems Cloud Manager playlist. Don't forget to subscribe to our InterSystems Developers YouTube Channel. Enjoy and stay tuned!
Article
Niyaz Khafizov · Aug 3, 2018

The way to launch Jupyter Notebook + Apache Spark + InterSystems IRIS

Hi all. Today we are going to install Jupyter Notebook and connect it to Apache Spark and InterSystems IRIS.

Note: I have done the following on Ubuntu 18.04, Python 3.6.5.

Introduction

If you are looking for a well-known, widespread notebook that is especially popular among Python users, as an alternative to Apache Zeppelin, you should choose Jupyter Notebook. Jupyter Notebook is a very powerful and great data science tool. It has a big community and a lot of additional software and integrations. Jupyter Notebook allows you to create and share documents that contain live code, equations, visualizations, and narrative text. Uses include data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more. And most importantly, it has a big community that will help you solve the problems you face.

Check requirements

If something doesn't work, look at the "Possible problems and solutions" paragraph at the bottom.

First of all, ensure that you have Java 8 (java -version returns "1.8.x"). Next, download Apache Spark and unzip it. After that, run the following in the terminal:

pip3 install jupyter
pip3 install toree
jupyter toree install --spark_home=/path_to_spark/spark-2.3.1-bin-hadoop2.7 --interpreters=PySpark --user

Now, open the terminal and run vim ~/.bashrc. Paste the following at the bottom (these are environment variables):

export JAVA_HOME=/usr/lib/jvm/<installed Java 8>
export PATH="$PATH:$JAVA_HOME/bin"
export SPARK_HOME=/path to spark/spark-2.3.1-bin-hadoop2.7
export PATH="$PATH:$SPARK_HOME/bin"
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"

And run source ~/.bashrc.

Check that it works

Now, let us launch Jupyter Notebook. Run pyspark in the terminal. Open the returned URL in your browser. It should look something like the image below:

Click on New, choose Python 3, and paste the following code into a paragraph:

import sys
print(sys.version)
sc

Your output should look like this:

Stop Jupyter using Ctrl-C in the terminal.

Note: to add custom jars, just move the desired jars into $SPARK_HOME/jars.

We want to work with intersystems-jdbc and intersystems-spark (and we will also need a jpmml library). Let us copy the required jars into Spark. Run the following in the terminal:

sudo cp /path to intersystems iris/dev/java/lib/JDK18/intersystems-jdbc-3.0.0.jar /path to spark/spark-2.3.1-bin-hadoop2.7/jars
sudo cp /path to intersystems iris/dev/java/lib/JDK18/intersystems-spark-1.0.0.jar /path to spark/spark-2.3.1-bin-hadoop2.7/jars
sudo cp /path to jpmml/jpmml-sparkml-executable-version.jar /path to spark/spark-2.3.1-bin-hadoop2.7/jars

Ensure that it works.
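A quick sanity check at this point (a small sketch; it assumes $SPARK_HOME was exported in ~/.bashrc as above, so the same placeholder paths apply):

# confirm the copied drivers landed in Spark's jars directory
ls "$SPARK_HOME/jars" | grep -iE "intersystems|jpmml"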
Run pyspark in the terminal again and run the following code (from the previous article):

from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans
from pyspark.ml import Pipeline
from pyspark.ml.feature import RFormula
from pyspark2pmml import PMMLBuilder

dataFrame = spark.read.format("com.intersystems.spark")\
    .option("url", "IRIS://localhost:51773/NAMESPACE")\
    .option("user", "dev")\
    .option("password", "123")\
    .option("dbtable", "DataMining.IrisDataset").load()  # load iris dataset

(trainingData, testData) = dataFrame.randomSplit([0.7, 0.3])  # split the data into two sets
assembler = VectorAssembler(inputCols=["PetalLength", "PetalWidth", "SepalLength", "SepalWidth"], outputCol="features")  # add a new column with features

kmeans = KMeans().setK(3).setSeed(2000)  # clustering algorithm that we use

pipeline = Pipeline(stages=[assembler, kmeans])  # first, the passed data will run against assembler, and after that against kmeans
modelKMeans = pipeline.fit(trainingData)  # pass training data

pmmlBuilder = PMMLBuilder(sc, dataFrame, modelKMeans)
pmmlBuilder.buildFile("KMeans.pmml")  # create pmml model

My output:

The output file is a jpmml KMeans model. Everything works!

Possible problems and solutions

- command not found: 'jupyter': run vim ~/.bashrc; add export PATH="$PATH:~/.local/bin" at the bottom; run source ~/.bashrc in the terminal. If that doesn't help, reinstall pip3 and jupyter.
- env: 'jupyter': No such file or directory: in ~/.bashrc, set export PYSPARK_DRIVER_PYTHON=/home/.../.local/bin/jupyter.
- TypeError: 'JavaPackage' object is not callable: check that the required .jar file is in /.../spark-2.3.1-bin-hadoop2.7/jars; restart the notebook.
- Java gateway process exited before sending the driver its port number: your Java version should be 8 (it probably works with Java 6/7 too, but I didn't check); echo $JAVA_HOME should return a Java 8 path. If not, change the path in ~/.bashrc. Run sudo update-alternatives --config java in the terminal and choose the proper Java version; do the same with sudo update-alternatives --config javac.
- PermissionError: [Errno 13] Permission denied: '/usr/local/share/jupyter': add --user at the end of your command in the terminal.
- Error executing Jupyter command 'toree': [Errno 2] No such file or directory: run the command without sudo.
- A specific error may appear if you use system variables like PYSPARK_SUBMIT_ARGS and other Spark/PySpark variables, or because of changes to /.../spark-2.3.1-bin-hadoop2.7/conf/spark-env.sh. Delete these variables and check spark-env.sh.

Links

Jupyter
Apache Toree
Apache Spark
Load a ML model into InterSystems IRIS
K-Means clustering of the Iris Dataset
The way to launch Apache Spark + Apache Zeppelin + InterSystems IRIS

💡 This article is considered an InterSystems Data Platform Best Practice.