Article
Evgeny Shvarov · Jul 28, 2019

How to Learn InterSystems IRIS on InterSystems Developers community? Part 1

Hi Developers! Recently I was asked, "How can a beginner in InterSystems technologies learn from InterSystems Developers Community content to improve their developer skills?" This is a really good question with several answers, so I decided to write this post in the hope it will be useful for developers. So, how do you learn the InterSystems data platforms (IRIS, IRIS for Health) from InterSystems Developers Community content if you are a beginner?

Beginner and Tutorial tags. First, it is worth checking the articles with the Beginner and Tutorial tags. Developers who write articles add these tags when they cover the basics or provide tutorials for particular InterSystems IRIS features.

InterSystems Best Practices. Next, if you are looking for more serious material, check the Best Practices tag. Every week InterSystems product management selects a new article and adds the Best Practices tag to it, so if you subscribe to the tag you get a new InterSystems best practice every week.

Views and Votes. Another way to find helpful posts is to pay attention to what developers on the community vote for and read most. If a developer finds an article helpful, they vote it up, so we can consider the articles with the most votes to be the most useful. This is the filter for the most-voted articles. The articles with the most views can be considered useful too, because not all developers vote (and voting requires registration) while all reads are counted, so the most-viewed articles may be not only the most popular but also useful. Questions with accepted answers are often very useful as well, because they describe a solved problem. As with articles, you can filter questions with accepted answers by votes and by views and hopefully find your answer, or at least a helpful read.

Ask your questions! Of course, the best way to learn and gain experience with a technology is to practice with it, run into problems, and solve them. You will have questions along the way, so don't hesitate to ask them on InterSystems Developers - sometimes you will get answers even before you finish typing the question!

Resources for InterSystems Beginners. Of course, it is worth mentioning the general resources for developers who are new to InterSystems data platforms: Documentation, Online Learning, the Try InterSystems IRIS sandbox, classroom learning, and InterSystems Developers Videos. Developers, you are very welcome to suggest other ways and advice on how to learn InterSystems technology on the Developer Community in the comments below. In addition to the learning opportunities for beginners listed above, here are two programs on the Online Learning platform: Getting Started with ObjectScript and Getting Started with Interoperability. There is more information about the two programs in this post: Get Started with InterSystems IRIS. The programs include videos, courses, learning paths, and other content -- and are great for onboarding! See also: Getting Started with InterSystems IRIS for Coders and Getting Started with InterSystems IRIS for Implementers.
Announcement
Andreas Dieckow · Oct 10, 2019

Full kit versions of InterSystems IRIS and InterSystems IRIS for Health for Developers!

InterSystems is pleased to announce a new Developer Download site providing full kit versions of InterSystems IRIS Community Edition and InterSystems IRIS for Health Community Edition. These are available free of charge for application development use. You can download directly from the InterSystems Developer Community by selecting Download InterSystems IRIS. Those instances include a free built-in 13-month license. They are limited to 10GB of user data, will run on machines with up to 8 cores, support 5 concurrent connections, and are supported for application development. Available Platforms: Red Hat, Ubuntu, SUSE, Windows and macOS. InterSystems IRIS and InterSystems IRIS for Health are also available in container format from the Docker Hub. Please check here for information on how to get started, visit the InterSystems IRIS Data Platform page and the InterSystems IRIS for Health page on our website to learn more about our products, and visit the Developer resource page to dig deeper into development. If you previously registered for an InterSystems Login account (e.g. for the Developer Community or WRC Direct), you can use those credentials to gain access to the Developer Download. Why is there no Container option? Hi Angelo! This is a very good question. But there are community versions in containers on Docker; the image names are, for InterSystems IRIS: store/intersystems/iris-community:2019.3.0.309.0 and for InterSystems IRIS for Health: store/intersystems/irishealth:2019.3.0.308.0-community. Check this template app to start using this immediately. Hi Evgeny, thanks for the information, even though I already knew that, but everything produced should be on the download site. I second having the containers available in the same place. Also, could you show the version number available before downloading, and are these production releases or previews? David, Thanks for your feedback! In general, Developer Download always makes available the latest GA version of our two Community Edition products. In this case we wanted to be able to launch by Global Summit and so we went with the Preview since 2019.1.1 kits were not full GA yet. There are ongoing discussions about the pros/cons of making containers available here rather than people just fetching them directly from Docker. We'll let you know the final decision! We will post container kits here as well. Those are released versions and not previews; they can be used for development, but not for deployments. Thanks for the clarification. The fact that the current download is slightly different from the norm due to a Global Summit launch does highlight the need for a brief explanation of the version you are downloading though. David - I completely agree. It's an excellent suggestion and we'll get it put in place. Hi David, thanks for this suggestion. File names that include full version strings are now visible prior to download. Thanks for adding that; in the case of a preview it might be worth adding this to the "Notes" section to save cross-referencing with release notifications or the usual downloads page. I was just able to install IRIS on an Apple MacBook Pro, with Parallels and Windows 10 Pro. I installed the current FOIA VistA with about 8.2 GB of data. Everything worked well until getting to the Taskman and RPC Broker startup. This has always been an issue with the developer version from InterSystems. It's almost enough users to bring up Taskman. Not nearly enough to bring up Taskman and the RPC Broker. It's great to have IRIS Studio.
Everything seems just like Caché with a new name. Thanks InterSystems. David - FYI, the full Release Version of 2019.1.1 Community Editions is now available at Download.InterSystems.com I just installed the IRIS for Windows (x86-64) 2021.1 (Build 215U) Wed Jun 9 2021 12:15:53 EDT [Health:3.3.0] ... and the license key expires 10/30/2021. Where can I get a new license, or a newer version? I read that it should be for 13 months. I could barely get started in 40 days... ExpirationDate=10/30/2021
Question
Kranthi kumar Merugu · Nov 6, 2020

Need to know the difference between InterSystems Cache DB and InterSystems IRIS

Hi Team, What is the difference between InterSystems Caché DB and InterSystems IRIS? I am looking for Caché DB installation details, but getting IRIS only everywhere. We are looking for a POC to test it. Thanks, Kranthi. The scope and the dimension of possibilities are much wider with IRIS than they were with Caché. Very generally speaking: there is nothing in Caché that you can't do with IRIS. The only thing you might eventually miss are some ancient compatibility hooks back to the previous millennium. Also, support for some outdated operating system versions is gone. In addition to what @Robert.Cemper1003 said, InterSystems IRIS is much faster than Caché in SQL, it has officially released Docker images, it has IRIS Interoperability, IRIS BI is included, and it has a Community Edition which gives you a way to develop and evaluate your solution on IRIS, as long as it uses less than 10GB, without ordering a license. And IRIS has a package manager, ZPM, which gives you a universal and robust way to deploy your solutions. I would also add better JSON support with %JSON.Adapter (the lack of which currently causes a lot of struggle for us on the older version). Hi Kranthi, I believe that you are just finding information about IRIS because it is something that is "in fashion". Part of the Summit was about this; if you look at the developer community, basically everyone just talks about it. In a way, we are almost unconsciously induced to migrate to IRIS. It is a fact that they have differences, as friends have already commented; however, in general, it is more of the same. For differences between InterSystems Caché and the InterSystems IRIS Data Platform I suggest you have a look here. Specifically you can find there a table comparing the products (including InterSystems Ensemble), as well as a document going into detail about various new features and capabilities. If you want to perform a PoC for a new system, definitely use InterSystems IRIS. At a high level you can find the feature differences between Caché, Ensemble and IRIS here: https://www.intersystems.com/wp-content/uploads/2019/05/InterSystems_IRIS_Data_Platform_Comparison_chart.pdf
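Since %JSON.Adapter came up above as one of the practical differences, here is a minimal sketch of the pattern (note that the class is spelled %JSON.Adaptor in the class reference); the Demo.Person class and its properties are made up, and status checks are omitted for brevity:

/// Hypothetical class: extending %JSON.Adaptor adds %JSONImport()/%JSONExport() methods.
Class Demo.Person Extends (%Persistent, %JSON.Adaptor)
{

Property Name As %String;

Property City As %String;

/// Round-trip an object through JSON and print the results.
ClassMethod Demo()
{
    set p = ..%New()
    set p.Name = "Kranthi"
    set p.City = "Hyderabad"
    do p.%JSONExportToString(.json)   // e.g. {"Name":"Kranthi","City":"Hyderabad"}
    write json, !

    set p2 = ..%New()
    do p2.%JSONImport(json)           // populate a new object from the JSON string
    write p2.Name, !
}

}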
Question
Kevin Johnson · Sep 10, 2020

Are InterSystems Global Masters and the InterSystems Global Masters Gamification Platform the same?

Hello there Community, I am so pumped and curious about this platform. The more I search about it, the deeper it goes, just like a maze. I followed this link https://community.intersystems.com/post/global-masters-advocate-hub-start-here and it took me to a page where I learnt more about what Global Masters is and also more about the Advocate Hub. But then I came across a very confusing thing named the Global Masters Gamification Platform. I was denied permission when I tried to access it. (Screenshot attached.) Well, my question is, are Global Masters and the Global Masters Gamification Platform the same, or are they two different platforms? If they are different I would like to know how to gain access to it as well. Hoping to hear from you all soon. Regards. @Olga.Zavrazhnova2637 , it looks like we have a bad link at https://community.intersystems.com/post/global-masters-advocate-hub-start-here (leading to the certificate error shown). @Kevin.Johnson3273 see https://intersystems.influitive.com/users/sign_in instead of the link in that article. Hi Kevin, thank you for your question. Global Masters and the Global Masters Gamification Platform are the same. @Timothy.Leavitt thank you! The link was broken indeed - already fixed it 👌 Good day to all! Hello @Kevin Johnson, To answer your question: yes, you are correct, both are the same; the only difference is the name. Good day.
Announcement
Anastasia Dyubaylo · Sep 29, 2023

[Video] InterSystems Cloud Services - InterSystems IRIS SQL & IntegratedML

Hey Developers, Enjoy watching the new video on InterSystems Developers YouTube: ⏯ InterSystems Cloud Services - InterSystems IRIS SQL & IntegratedML @ Global Summit 2023 InterSystems is rolling out cloud services and the first of these includes InterSystems IRIS Cloud SQL and InterSystems IRIS Cloud IntegratedML. You'll learn what services are now available and what creative developers have done with them. Presenters:🗣 @Bhavya.Kandimalla, Sales Engineer, InterSystems🗣 @Mengqi.Li, Sales Engineer, InterSystems Enjoy this video and stay tuned for more! 👍
Announcement
Bob Kuszewski · Jul 31, 2024

InterSystems announces General Availability of InterSystems IRIS, InterSystems IRIS for Health, & HealthShare Health Connect 2024.2

The 2024.2 releases of InterSystems IRIS Data Platform, InterSystems IRIS for Health, and HealthShare Health Connect are now Generally Available (GA). RELEASE HIGHLIGHTS 2024.2 is a Continuous Delivery (CD) release. Many updates and enhancements have been added in this release: Enhancing Developer Experience Studio removal - 2024.2 Windows installations do not include the Studio IDE, and upgrading an existing instance removes Studio from the instance's bin directory. Developers who wish to keep using Studio should download the 2024.1 Studio independent component from the WRC component distribution page. Foreign Tables are now fully supported - With the 2024.2 release, we have addressed feedback from early access users, including better metadata management, improved predicate pushdown, and further alignment with the LOAD DATA command, which ingests rather than projects external data into IRIS SQL tables. Flexible Python Runtime for Microsoft Windows - Customers running Microsoft Windows are now able to select the Python runtime used for Embedded Python. Enhancing AI & Analytics InterSystems IRIS BI - Added standard KPI plugins to calculate standard deviation and variance based on measures from fact tables, as well as better integration with PowerBI connector privileges, so that the PowerBI user has access to all cubes that are public or not restricted by a resource. InterSystems Reports - New version of Logi Report, 2024.1 SP2, including enhancements to PDF exports. For additional details, see the Logi Report release notes. Enhancing Interoperability and FHIR NEW HL7 DTL Generator: A new addition to the Productivity Toolkit intended to help users reduce time-to-value when a) migrating from other vendors or b) building new DTLs. Based on source and target message pairs, the DTL Generator creates a skeleton DTL with simple transformation logic for the interface engineer to validate and review, adding any more complex logic that is required. X12 / CMS0057 Support: In anticipation of the upcoming CMS0057 Prior Authorization requirements, support for additional schema packages includes HIPAA 4010, HIPAA 5010, and HIPAA 6020. Additionally, support for X12 long segments is now included. New FHIR Configuration REST API (Control Plane): Configuration for multi-namespace FHIR servers is made easier with a single REST handler that supports the entire instance. A new control plane endpoint (BaseURL: /csp/fhir-management/api) is available, offering enhanced security by allowing only namespace-authorized users access to the control plane REST API. Enhanced IRIS OAuth Server Customization: Customizing IRIS for Health OAuth is now simpler with an out-of-the-box supported and documented framework, with IRIS OAuth server customization logic embedded in an IRIS for Health setting. Default values for "Customization Options" are populated when selecting the appropriate OAuth class hook. Improved UI for FHIR Server Configuration & Bulk FHIR Configuration: The improved FHIR Server Configuration Dashboard enhances the user's experience by taking advantage of updated back-end APIs for configuring a FHIR server. The improved Bulk FHIR Coordinator enhances the user's experience by creating an intuitive workflow when configuring Bulk FHIR servers. Platform Updates IRIS 2024.2 now supports Ubuntu 24.04, to go along with support for Ubuntu 22.04. Containers are now based on Ubuntu 24.04.
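To illustrate the Embedded Python side of the Flexible Python Runtime item above: from the developer's point of view nothing changes, since ObjectScript calls into Embedded Python the same way regardless of which runtime the instance is configured to use. A minimal sketch (the module choices are arbitrary):

// Import Python modules via Embedded Python; the same ObjectScript code runs
// regardless of which Python runtime the instance has been configured to use.
set sys = ##class(%SYS.Python).Import("sys")
write sys.version, !                   // shows which Python runtime is in use

set math = ##class(%SYS.Python).Import("math")
write math.sqrt(2), !                  // 1.4142135623730951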
DOCUMENTATION Details on all the highlighted features are available through the links below: InterSystems IRIS 2024.2 documentation, release notes, and deprecated & discontinued technologies and features. InterSystems IRIS for Health 2024.2 documentation, release notes, and deprecated & discontinued technologies and features. Here's the always-handy InterSystems IRIS upgrade checklist for 2024.2. HOW TO GET THE SOFTWARE As usual, Continuous Delivery (CD) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms page. Classic installation packages Installation packages are available from the WRC's Continuous Delivery Releases page. Additionally, kits can also be found on the Evaluation Services website. InterSystems IRIS Studio is still available in the release, and you can get it from the WRC's Components distribution page. Containers Container images for both Enterprise and Community Editions of InterSystems IRIS and IRIS for Health and all corresponding components are available from the InterSystems Container Registry web interface. ✅ The build number for this release is 2024.2.0.247.0, except for the OpenSSL 1.0 AIX kits, which are 2024.2.0.247.1. If you're in the ICR, containers are tagged as "latest-cd". > For additional details, see the Logi Report release notes. This link "https://docs-report.zendesk.com/hc/en-us/articles/24548641487373-Report-v24-1-Release-Notes#SP2" looks wrong. If possible, could you share the correct URL for us, please? Thank you for making us aware. I have escalated this internally and I hope it can be corrected quickly. My apologies. The correct URL is https://community.intersystems.com/post/intersystems-reports-version-241-release-announcement Windows users who need to use IPM/ZPM to install packages that pull in Python packages may want to hold back from adopting 2024.2 until https://github.com/intersystems/ipm/issues/540 gets closed and an updated IPM ships.
Announcement
Developer Community Admin · Dec 2, 2015

InterSystems Atelier Field Test

InterSystems Atelier Field Test is now available. Supported customers who have credentials to access the WRC can find the client download here: https://wrc.intersystems.com/wrc/BetaPortalCloud2.csp Thank you for your work! The Atelier field test is really cool. Just two questions: - Is there any chance to extend the 15-day test period? It would be great (I can't test it full time). - Can we access the Atelier Ensemble Test server with Studio? I've tried, but I couldn't. It would be very helpful to compare working with Atelier and Studio at the same time with the same code. Kind regards! Since you work on the client with Atelier, and only use the server for deployment/compilation/debugging, it does not matter if you need to restart the server every two days, and it allows bug fixes to be deployed automatically. If you export to XML from Atelier and use source control on a system using Studio, then you should see the code automatically refresh in Studio when you open the file. The test is not limited to two weeks. That was simply the expiration on the license. We updated the beta facility with a new version and a more generous license. Happy coding! I can confirm that it doesn't matter when my cloud instance goes away; I just go back to the portal and deploy the new version, and bugs that I found over the last few weeks are usually simply fixed. Is there a way to force my "instance" to expire/update without waiting the two weeks? What's the right support channel to use for the cloud-based Atelier FT server? I log in at https://wrc.intersystems.com/wrc/BetaPortalCloud.csp and click the "Launch Server" button. The button disappears immediately and is replaced by text saying "Your Atelier Server has been launched, it will run for the next (expired)." and when I click on the xxx-.isc.appsembler.com/csp/sys/UtilHome.csp URL I am offered, I get a page reading " No Application Configured This domain is not associated with an application. " John - The WRC is handling any calls with regard to the beta cloud portal. I saw this and reached out to the cloud provider, as the issue is coming from their side. I will let you know as soon as I hear back. Hi Bill, When will the official release be out? We have been waiting since 2015, after our 2014 conference when we heard it would be done by the end of 2015. Really excited not to work on Windows anymore and also not to use Studio over remote connections, as it chews bandwidth and tends to be really slow. Please read the announcement here: https://community.intersystems.com/post/announcement-about-cach%C3%A9-20162-and-20163-field-test-programs We have been holding updates of Atelier to finish refactoring to conform to the changes outlined there. Kits are being published for 16.2 today and a new Atelier kit will be available in "Check for Updates" today or tomorrow. Any progress on this yet? My 1.0.90 Atelier's "Check for Updates" still tells me it finds none. I've already updated my 2016.2 FT instance to the latest you've published (build 721), so I expect my Atelier won't play well with that until it gets updated too. A new kit will be posted this week, probably today. Updated just now and received build 232. Thanks.
Announcement
Evgeny Shvarov · Mar 7, 2017

InterSystems GitHub Topics

Hi! Recently GitHub introduced topics for projects, so you can add topics to your InterSystems-related projects to make them better categorized, more visible, and searchable. Here is a list of good examples for your projects (some of them are clickable already): intersystems, intersystems-cache, intersystems-ensemble, intersystems-healthshare, healthshare, intersystems-iknow, iknow, intersystems-deepsee, deepsee, cache-objectscript, csp, intersystems-zen, zen. If you have any good ideas for topics, or are already using some, please share them here in the comments - better with a working link. Thank you in advance! It's called topics. A tag is a git pointer to a commit (usually to mark a release point). Thanks Eduard! Fixed and changed ) If you are looking for projects in Caché ObjectScript with sources in UDL, you can find them with the cacheobjectscript-udl topic on GitHub. And if you have some open projects on GitHub with sources in UDL, please mark them with the cacheobjectscript-udl topic. Thank you in advance!

Question
Ponnumani Gurusamy · Sep 20, 2016

InterSystems iKnow concepts

How do you retrieve unstructured data using the iKnow concept in Caché? Could you give a real-time example of these concepts? Here are some articles on the community about that: Free Text Search by Kyle Baxter, and the iKnow demo apps (1, 2, 3, 4) by Benjamin DeBoe. I want to take a moment here and advise you to be very careful with iKnow. iKnow is NOT a solution; it is a way for you to develop your own solutions (much like Caché and Ensemble, actually). While iKnow can give structure to your free-text fields, it cannot tell you what to do with that information once you gather it. So before implementing iKnow and developing a solution, you need to know what it is you want to look for, the purpose of putting the iKnow structure on your data, and what you are going to display or show off once you get it.
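The advice above is about what to do with iKnow output rather than how to call it, but as a very small taste of the mechanics, here is a sketch of indexing one piece of free text and listing the top concepts. The domain name, external reference, and text are made up, status checks are omitted, and the exact loader call may differ between versions - check the %SYSTEM.iKnow and %iKnow.Queries class reference before relying on it:

// Create an iKnow domain (assumed not to exist yet) and remember its id
set domain = ##class(%iKnow.Domain).Create("DemoDomain")
set domainId = domain.Id

// Index an ad-hoc string under an arbitrary external reference
do $SYSTEM.iKnow.IndexString("DemoDomain", "note1", "Patient complains of severe headache and blurred vision since Monday.")

// List the most frequent concepts iKnow identified
do ##class(%iKnow.Queries.EntityAPI).GetTop(.result, domainId)
set i = ""
for {
    set i = $order(result(i))  quit:i=""
    write $listget(result(i), 2), !   // the concept (entity) value
}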
Question
li yu · Jul 22, 2016

How to install InterSystems Caché?

As Title! For a simplified version, see NewBie's Corner Session 1 Bob, I fully agree with you about the more complete installation guide. However, for someone who is new, does not understand the terms used, and just wants to see what Caché is all about on their own PC, I feel the more compact install guide may be a better fit. And if they have trouble or questions, they can always consult the more complete guide. I just wanted to add some additional details re the $ set file/attrib command mentioned above for OpenVMS kits to clarify which command is the appropriate one to use and when. In order to make the file sizes of the OpenVMS Caché kits smaller to save on storage space and speed up transfer times, from Caché version 2015.2.0 upwards we use the OpenVMS backup qualifier "/DATA_FORMAT=COMPRESS", which can make the kits around 50% smaller in size. So for Caché versions prior to 2015.2.0, to set the correct file attributes use: $ set file/attrib=(rfm:fix,lrl=32256) and for versions of Caché from 2015.2.0 upwards use: $ set file/attrib=(rfm:var,mrs:32256,lrl=32256,rat=none) Thank you, Eric. I should have been more specific about the new file attribute command. Internally we tend to focus on the most recent releases; I will remember next time that there are plenty of sites using older kits. I agree, Mike, I wouldn't discourage anyone from just trying it out. But even the simplest installation of Caché presents choices that have lots of significant implications, for example 8-bit vs. Unicode and security settings. It doesn't add a lot of overhead to have the documentation open so you can read along with the procedure as you execute it, and at the least it may save you the trouble of having to uninstall and start over again. Please provide a link for cache-2016.1.0.656.0-win_x86.exe Adam, you can download it directly from your WRC account. Here's the documentation detailing the process. Or do you have more specific questions? The documentation that Eduard provided has all of the details for installing Caché, but the simple answer, for someone looking for a quick tip, is here: We provide distribution packages on our distribution page: https://wrc.intersystems.com/wrc/Distribution.csp Log in with your WRC Direct account and download the package that is the correct version you want, and that is correct for your operating system. The packages are named such that their version and OS should be clear. Windows: The package will be an executable like so: cache-2016.1.0.656.0-win_x64.exe Download this to a location on your system that is not in the pathway where you want to install. Double-click and it will start the installation wizard. From there it will prompt you with questions about how you want to install Caché. If you are not sure of some of those answers, the document that Eduard pointed out will have lots of answers. When the installation is complete there will be a blue cube on the dashboard from which you can access Caché. Unix and Mac: The package will be a zipped tar file like so: cache-2016.1.0.656.0-ppc64.tar.gz Download it and place it in a pathway where you are not installing Caché. Unzip this with: #gunzip cache-2016.1.0.656.0-ppc64.tar.gz then untar it like so: #tar -xvf cache-2016.1.0.656.0-ppc64.tar When done it will expand several files that are part of the distribution. Start the installation script like so: #./cinstall It will prompt you with questions about the install. If you have questions about this you can refer to the documentation that Eduard posted.
When done it will tell you the URL to use to access the instance with the System Management Portal. You can use csession <instance name> to get into Caché. VMS: This is similar in install to Unix, but opening the kit up is a bit different. The package will be a .bck file like so: cache-2016.1.0.657.0-alphavms.bck Set the attributes of the file correctly so that it can be opened: $ SET FILE/ATTRIB=(RFM:FIX,LRL=32256) cache-2016.1.0.657.0-alphavms.bck Expand the .bck file like so: backup cache-2016.1.0.657.0-alphavms.bck/save DISK:[dir...] Note, you will have to put in the appropriate disk and directory designation. The ... following the directory name is necessary so that it properly expands the directories. You will have to get into the directories that it creates to get to the cinstall executable. There, type @cinstall in order to start the installation. Again it will prompt for answers just like with Unix. Hope this helps. While this overview is accurate and helpful, I strongly recommend consulting the Caché Installation Guide before beginning the installation process, and having it at hand while installing. There are a number of things to consider for each platform (Windows, OpenVMS, UNIX®/Linux, and Mac) before you install; the install guide covers general preparation in the chapter for each platform, and also includes appendixes on system parameters for OpenVMS and UNIX®/Linux, preparing for Caché security, and file system and storage recommendations. (There is also a chapter on upgrading Caché.) There are also a lot of options to choose among during the installation process and it is useful to have the explanations available when deciding. Finally, the install guide offers alternatives to interactive installation, including unattended installation on Windows and on UNIX®/Linux systems and the use of the %Installer class with an installation manifest. OpenVMS note: the SET FILE/ATTRIB command cited by Richard, which may be necessary to restore the install package attributes that were stripped when it was conveyed to the target system, was recently updated, as follows: $ SET file/attrib=(rfm:var,mrs:32256,lrl=32256,rat=none) cache-2016.1.0.657.0-alphavms.bck Hi Eric, I hope you're well; you may not remember me from Systime field engineering 1980-1986, then ICM until 2007. You worked with Neil Turner & Brian Lindley, I recall. Could you drop me a line at meakin2003 at gmail.com or call me on 07802 491155 to discuss some OVMS support work we're about to undertake? It involves my working with Chris Walker, whom you may recall adapted the Vax780 to address 4Mb before DEC released the 785. We need more guys like him and you right now. Many thanks, Colin.
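Since the comment above mentions the %Installer class with an installation manifest as an alternative to interactive installation, here is a minimal sketch of what such a manifest class can look like. The namespace, database, and directory names are made up; treat it as a starting point rather than a complete installer:

/// Minimal %Installer manifest sketch: creates a namespace and database.
Class Demo.Installer
{

XData setup [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <Namespace Name="DEMO" Create="yes" Code="DEMO" Data="DEMO">
    <Configuration>
      <Database Name="DEMO" Dir="C:\InterSystems\Cache\mgr\demo" Create="yes"/>
    </Configuration>
  </Namespace>
</Manifest>
}

/// Generated entry point; call as: do ##class(Demo.Installer).setup()
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
    quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "setup")
}

}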
Question
Ankita JAin · May 31, 2017

Unit test in InterSystems with Jenkins

Hi, I am stuck with a unit test failure with InterSystems. In case of a unit test failure, the build in Jenkins is succeeding while it should fail. In Caché programming I am using the %UnitTest.Manager class and the DebugRunTestCase method within it. I'm able to link Studio with Jenkins. I want to fail my build in Jenkins if any of the test cases fail. Could anyone help? Please describe how you call Caché from Jenkins. Please tell us very clearly how to exit the Caché terminal via the command line so that the batch script can fail the Jenkins job. Please tell us if it is feasible in InterSystems. We have tried all the methods from the InterSystems documentation library but still could not make progress (the Jenkins job is not getting triggered based on the boolean value passed from Caché). Please suggest. Hi Ankita, Would you be able to provide some more details about how you currently run the unit tests from your batch script? Regards, Caza Hi Caza, We are facing the following issues:
1. We are getting the boolean value for the passed and failed unit test cases in InterSystems Caché, but we are not able to assign this boolean value to a variable in the batch script (r3 in this case).
2. @ECHO %FAILUREFLAG% is not giving any output to us; can you help with that also?
Please advise on these problems or suggest an alternative approach. The batch script code is here:

:: Switch output mode to utf8 - for comfortable log reading
@chcp 65001
@SET WORKSPACE="https://github.com/intersystems-ru/CacheGitHubCI.git"
:: Check the presence of the variable initialized by Jenkins
@IF NOT DEFINED WORKSPACE EXIT 1
@SET SUCCESSFLG =%CD%\successflag.txt
@SET FAILUREFLAG =%CD%\failureflag.txt
@ECHO %FAILUREFLAG%
@DEL "%SUCCESSFLG%"
@DEL "%FAILUREFLAG%"
:: The assembly may end with different results
:: Let the presence of a specific file in the directory tell us about the problems with the assembly
:: %CD% - [C]urrent [D]irectory is a system variable
:: it contains the name of the directory in which the script is run (bat)
:: Delete the bad completion file from the previous start
::@DEL "%ERRFLAG%"
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: CREATING CACHE CONTROL SCRIPT
:: line by line in the build.cos command to manage the Cache
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: If the Cache is installed with normal or increased security
:: then in the first two lines the name and password of the user Cache
@ECHO user>build.cos
@ECHO password>>build.cos
:: Go to the necessary NAMESPACE
@ECHO zn "MYNAMESPACE" >>build.cos
@ECHO do ##class(Util.SourceControl).Init() >>build.cos
@ECHO zn "User" >>build.cos
@ECHO set ^^UnitTestRoot ="C:\test" >>build.cos
@ECHO do ##class(%%UnitTest.Manager).RunTest("myunittest","/nodelete") >>build.cos
@Echo set i=$order(^^UnitTest.Result(""),-1) >>build.cos
@Echo write i >>build.cos
@Echo set unitResults=^^UnitTest.Result(i,"myunittest") >>build.cos
@ECHO Set r3 = $LISTGET(unitResults,1,1) >>build.cos
@ECHO if r3'= 0 set file = "C:/unittests/successflag.txt" o file:("NWS") u file do $SYSTEM.OBJ.DisplayError(r3) >>build.cos
@ECHO if r3'= 1 set file = "C:/unittests/failureflag.txt" o file:("NWS") u file do $SYSTEM.OBJ.DisplayError(r3) >>build.cos
::@ECHO if 'r3 do $SYSTEM.Process.Terminate(,1)
::@ECHO halt
@IF EXIST "%FAILUREFLAG%" EXIT 1
::@ECHO do ##class(%%UnitTest.Manager).RunTest("User") >>build.cos
::@ECHO set ut = do ##class(%%UnitTest.Manager).RunTest("User") >>build.cos
::set var = 1 >>build.coss
:: If it did not work out, we will show the error to see it in the logs of Jenkins
::@ECHO if ut'= 1 do $SYSTEM.OBJ.DisplayError(sc) >>build.cos
::echo %var%
@Echo On
@ECHO zn "%%SYS" >>build.cos
@ECHO set pVars("Namespace")="MYDEV" >>build.cos
@ECHO set pVars("SourceDir")="C:\source\MYNAMESPACE\src\cls\User" >>build.cos
@ECHO do ##class(User.Import).setup(.pVars)>>build.cos
:: Download and compile all the sources in the assembly directory;
:: %WORKSPACE% - Jenkins variable
::@ECHO set sc = $SYSTEM.OBJ.ImportDir("%WORKSPACE%", "*.xml", "ck",.Err, 1) >>build.cos
:: If it did not work out, we will show the error to see it in the logs of Jenkins
::@ECHO if sc'= 1 do $SYSTEM.OBJ.DisplayError(sc) >>build.cos
:: and from the cos script create an error flag file to notify the bat script
::@ECHO if sc'= 1 set file = "%ERRFLAG%" o file:("NWS") u file do $SYSTEM.OBJ.DisplayError(sc) with file >>build.cos
:: Finish the Cache process
::@ECHO if sc'= 1 set file = "%ERRFLAG%" o file:("NWS") u file do $SYSTEM.OBJ.DisplayError(sc) with file >>build.cos
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: Call the Cache control program and pass the generated script to it
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
C:\InterSystems\HealthShare\bin\cache.exe -s C:\InterSystems\HealthShare\mgr -U %%SYS <%CD%\build.cos
:: If the error file was created during the cos script execution - we notify Jenkins about this
::@IF EXIST "%ERRFLAG%" EXIT 1

Hi Ankita, I found a couple of issues in the script that might affect your end results:
- the folder C:\unittests doesn't exist (at least not on my computer); unless the value of the WORKSPACE env variable is C:\unittests, you have to ensure the folder exists (you can create it either using batch mkdir or using the COS ##class(%File).CreateDirectoryChain() method)
- what is stored in the ^UnitTest.Result(i,"myunittest") global is not a status code but a numeric value, so I would suggest replacing Do $system.OBJ.DisplayError(r3) with a simple write command, like this:
@ECHO if r3'= 0 set file = "C:/unittests/successflag.txt" o file:("NWS") u file write r3 >>build.cos
@ECHO if r3'= 1 set file = "C:/unittests/failureflag.txt" o file:("NWS") u file write r3 >>build.cos
Regarding "@ECHO %FAILUREFLAG%" - make sure there are no spaces before or after the = character in the following two commands:
@SET SUCCESSFLG =%CD%\successflag.txt
@SET FAILUREFLAG =%CD%\failureflag.txt
When I did a copy/paste of the example script I ended up with a space character before the = character. Can you try these changes and let me know how you go? Cheers, Caza Thanks for your response, it worked. If you add to your batch script something like: do ##class(com.isc.UnitTest.Manager).OutputResultsXml("c:/unittests/unit_test_results.xml") then you could pass on the unit_test_results.xml file to the JUnit plugin from Jenkins. This will give you useful reports, duration and individual results breakdown, etc. For example: https://support.smartbear.com/testcomplete/docs/_images/working-with/integration/jenkins/trend-graph.png Yay! You can create a Windows batch file where you can write commands to call the Caché terminal using cterm.exe. Here are object and SQL approaches to get the unit test status. One gotcha on Windows is making sure the encoding of your .scr file is UTF-8. Windows thinks a .scr file is a screensaver file and can give it a weird encoding. Once I switched it to UTF-8 it worked as described in the documentation.
I don't know how you call the Caché method from Jenkins, but in any case you can use $SYSTEM.Process.Terminate in your Caché script to exit with an exit status. Something like this:

set tSC=##class(%UnitTest.Manager).DebugRunTestCase(....)
if 'tSC do $SYSTEM.Process.Terminate(,1)
halt

I suggest you use csession or cterm to call Caché code; then you should get the exit code and send it to Jenkins, which will be recognized by Jenkins as an error and will fail the job. Hi, thanks Dmitry Maslennikov for the response. Could you please help us clarify the query below: is there any method/way through which we can get to know whether any of the unit test cases are failing (we are getting a URL of the CSP page with the report, which has the passed/failed status), as we need to send this failure status to Jenkins for the build to fail (wherein we have achieved this part by making the build fail/succeed based on a hardcoded boolean)? Hi, We use something like the below to output the unit test results to an XML file in JUnit format.

/// Extend %UnitTest manager to output unit test results in JUnit format.
/// This relies on the fact that unit test results are stored in the <b>^UnitTest.Result</b> global. Results displayed on CSP pages come from this global.
Class com.isc.UnitTest.Manager Extends %UnitTest.Manager
{

ClassMethod OutputResultsXml(pFileName As %String) As %Status
{
    set File=##class(%File).%New(pFileName)
    set i=$order(^UnitTest.Result(""),-1)
    if i="" quit $$$OK // no results
    kill ^||TMP // results global
    set suite=""
    for {
        set suite=$order(^UnitTest.Result(i,suite)) quit:suite=""
        set ^||TMP("S",suite,"time")=$listget(^UnitTest.Result(i,suite),2)
        set case=""
        for {
            set case=$order(^UnitTest.Result(i,suite,case)) quit:case=""
            if $increment(^||TMP("S",suite,"tests"))
            set ^||TMP("S",suite,"C",case,"time")=$listget(^UnitTest.Result(i,suite),2)
            set method=""
            for {
                set method=$order(^UnitTest.Result(i,suite,case,method)) quit:method=""
                set ^||TMP("S",suite,"C",case,"M",method,"time")=$listget(^UnitTest.Result(i,suite,case,method),2)
                set assert=""
                for {
                    set assert=$order(^UnitTest.Result(i,suite,case,method,assert)) quit:assert=""
                    if $increment(^||TMP("S",suite,"assertions"))
                    if $increment(^||TMP("S",suite,"C",case,"assertions"))
                    if $increment(^||TMP("S",suite,"C",case,"M",method,"assertions"))
                    if $listget(^UnitTest.Result(i,suite,case,method,assert))=0 {
                        if $increment(^||TMP("S",suite,"failures"))
                        if $increment(^||TMP("S",suite,"C",case,"failures"))
                        if $increment(^||TMP("S",suite,"C",case,"M",method,"failures"))
                        set ^||TMP("S",suite,"C",case,"M",method,"failure")=$get(^||TMP("S",suite,"C",case,"M",method,"failure"))_$listget(^UnitTest.Result(i,suite,case,method,assert),2)_": "_$listget(^UnitTest.Result(i,suite,case,method,assert),3)_$char(13,10)
                    }
                }
                if ($listget(^UnitTest.Result(i,suite,case,method))=0) && ('$data(^||TMP("S",suite,"C",case,"M",method,"failures"))) {
                    if $increment(^||TMP("S",suite,"failures"))
                    if $increment(^||TMP("S",suite,"C",case,"failures"))
                    if $increment(^||TMP("S",suite,"C",case,"M",method,"failures"))
                    set ^||TMP("S",suite,"C",case,"M",method,"failure")=$get(^||TMP("S",suite,"C",case,"M",method,"failure"))_$listget(^UnitTest.Result(i,suite,case,method),3)_": "_$listget(^UnitTest.Result(i,suite,case,method),4)_$char(13,10)
                }
            }
            if $listget(^UnitTest.Result(i,suite,case))=0 && ('$data(^||TMP("S",suite,"C",case,"failures"))) {
                if $increment(^||TMP("S",suite,"failures"))
                if $increment(^||TMP("S",suite,"C",case,"failures"))
                if $increment(^||TMP("S",suite,"C",case,"M",case,"failures"))
                set ^||TMP("S",suite,"C",case,"M",case,"failure")=$get(^||TMP("S",suite,"C",case,"M",case,"failure"))_$listget(^UnitTest.Result(i,suite,case),3)_": "_$listget(^UnitTest.Result(i,suite,case),4)_$char(13,10)
            }
        }
    }
    do File.Open("WSN")
    do File.WriteLine("<?xml version=""1.0"" encoding=""UTF-8"" ?>")
    do File.WriteLine("<testsuites>")
    set suite=""
    for {
        set suite=$order(^||TMP("S",suite)) quit:suite=""
        do File.Write("<testsuite")
        do File.Write(" name="""_$zconvert(suite,"O","XML")_"""")
        do File.Write(" assertions="""_$get(^||TMP("S",suite,"assertions"))_"""")
        do File.Write(" time="""_$get(^||TMP("S",suite,"time"))_"""")
        do File.Write(" tests="""_$get(^||TMP("S",suite,"tests"))_"""")
        do File.WriteLine(">")
        set case=""
        for {
            set case=$order(^||TMP("S",suite,"C",case)) quit:case=""
            do File.Write("<testsuite")
            do File.Write(" name="""_$zconvert(case,"O","XML")_"""")
            do File.Write(" assertions="""_$get(^||TMP("S",suite,"C",case,"assertions"))_"""")
            do File.Write(" time="""_$get(^||TMP("S",suite,"C",case,"time"))_"""")
            do File.Write(" tests="""_$get(^||TMP("S",suite,"C",case,"tests"))_"""")
            do File.WriteLine(">")
            set method=""
            for {
                set method=$order(^||TMP("S",suite,"C",case,"M",method)) quit:method=""
                do File.Write("<testcase")
                do File.Write(" name="""_$zconvert(method,"O","XML")_"""")
                do File.Write(" assertions="""_$get(^||TMP("S",suite,"C",case,"M",method,"assertions"))_"""")
                do File.Write(" time="""_$get(^||TMP("S",suite,"C",case,"M",method,"time"))_"""")
                do File.WriteLine(">")
                if $data(^||TMP("S",suite,"C",case,"M",method,"failure")) {
                    do File.Write("<failure type=""cache-error"" message=""Cache Error"">")
                    do File.Write($zconvert(^||TMP("S",suite,"C",case,"M",method,"failure"),"O","XML"))
                    do File.WriteLine("</failure>")
                }
                do File.WriteLine("</testcase>")
            }
            do File.WriteLine("</testsuite>")
        }
        do File.WriteLine("</testsuite>")
    }
    do File.WriteLine("</testsuites>")
    do File.Close()
    kill ^||TMP
    quit $$$OK
}

}
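Pulling the suggestions in this thread together, here is a minimal sketch of a single wrapper method that a Jenkins batch script could invoke (via csession/cterm, as above): it runs the suite, reads the pass/fail flag from ^UnitTest.Result the same way the script does, and ends the process with a non-zero exit status on failure. The class name and default suite name are hypothetical, and ^UnitTestRoot is assumed to already point at the test directory:

/// Hypothetical CI entry point: run a test suite and make the process exit status
/// reflect the result, so the calling batch script (and therefore Jenkins) can fail the build.
Class Demo.CI.RunTests
{

ClassMethod Run(pSuite As %String = "myunittest")
{
    do ##class(%UnitTest.Manager).RunTest(pSuite, "/nodelete")

    // Index of the most recent run in the ^UnitTest.Result global
    set i = $order(^UnitTest.Result(""), -1)

    // As discussed above, the first list item of the suite node is a numeric pass/fail flag (0 = failed)
    set passed = +$listget($get(^UnitTest.Result(i, pSuite)), 1)

    if 'passed do $SYSTEM.Process.Terminate(,1)  // exit status 1 -> the batch script / Jenkins job fails
    halt                                         // normal exit, status 0
}

}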
Article
Benjamin De Boe · Sep 19, 2017

Horizontal Scalability with InterSystems IRIS

Last week, we announced the InterSystems IRIS Data Platform, our new and comprehensive platform for all your data endeavours, whether transactional, analytical, or both. We've included many of the features our customers know and love from Caché and Ensemble, but in this article we'll shed a little more light on one of the new capabilities of the platform: SQL sharding, a powerful new feature in our scalability story. Should you have exactly 4 minutes and 41 seconds, take a look at this neat video on scalability. If you can't find your headphones and don't trust our soothing voiceover will please your co-workers, just read on! Scaling up and out Whether it's processing millions of stock trades a day or treating tens of thousands of patients a day, a data platform supporting those businesses should be able to cope with those large scales transparently. Transparently means that developers and business users shouldn't worry about those numbers and can concentrate on their core business and applications, with the platform taking care of the scale aspect. For years, Caché has supported vertical scalability, where advancements in hardware are taken advantage of transparently by the software, efficiently leveraging very high core counts and vast amounts of RAM. This is called scaling up, and while a good upfront sizing effort can get you a perfectly balanced system, there's an inherent limit to what you can achieve on a single system in a cost-effective way. In comes horizontal scalability, where the workload is spread over a number of separate servers working in a cluster, rather than a single one. Caché has supported ECP Application Servers as a means to scale out for a while already, but InterSystems IRIS now also adds SQL sharding. What's new? So what's the difference between ECP Application Servers and the new sharding capability? In order to understand how they differ, let's take a closer look at workloads. A workload may consist of tens of thousands of small devices continuously writing small batches of data to the database, or just a handful of analysts issuing analytical queries each spanning GBs of data at a time. Which one has the larger scale? Hard to tell, just like it's hard to say whether a fishing rod or a beer keg is larger. Workloads have more than one dimension and therefore scaling to support them needs a little more subtlety too. In a rough simplification, let's consider the following components in an application workload: N represents the user workload and Q the query size. In our earlier examples, the first workload has a high N but low Q and the latter low N but high Q. ECP Application Servers are very good at helping support a large N, as they allow partitioning the application users across different servers. However, they don't necessarily help as much if the dataset gets very large and the working set doesn't fit in a single machine's memory. Sharding addresses large Q, allowing you to partition the dataset across servers, with work also being pushed down to those shard servers as much as possible. SQL Sharding So what does sharding really do? It's a SQL capability that will split the data in a sharded table into disjoint sets of rows that are stored on the shard servers. When connecting to the shard master, you'll still see this table as if it were a single table that contains all the data, but queries against it are split into shard-local queries that are sent to all shard servers.
There, the shard servers calculate the results based on the data they have stored locally and send their results back to the shard master. The shard master aggregates these results, performs any relevant combination logic, and returns the results back to the application. While this system is trivial for a simple SELECT * FROM table, there's a lot of smart logic under the hood that ensures you can use (almost) any SQL query and a maximum amount of work gets pushed to the shards to maximize parallelism. The shard key, which defines which rows go where, should be chosen based on the query patterns you anticipate. Most importantly, if you can ensure that tables often JOINed together are sharded along the same keys, the JOINs can be fully resolved at the shard level, giving you the high performance you're looking for. Of course this is only a teaser and there is much more to explore, but the essence is what's pictured above: SQL sharding is a new recipe in the book of highly scalable dishes you can cook up with InterSystems IRIS. It's complementary to ECP Application Servers and focuses on challenging dataset sizes, making it a good fit for many analytical use cases. Like ECP app servers, it's entirely transparent to the application and has a few more creative architectural variations for very specific scenarios. Where can I learn more? Recordings from the following Global Summit 2017 sessions on the topic are available on http://learning.intersystems.com: What's Lurking in Your Data Lake, a technical overview of scalability & sharding in particular; We Want More! Solving Scalability, an overview of relevant use cases demanding a highly scalable platform. See also this resource guide on InterSystems IRIS on learning.intersystems.com for more on the other capabilities of the new platform. If you'd like to give sharding a try on your particular use case, check out http://www.intersystems.com/iris and fill out the form at the bottom to apply for our early adopter program, or watch out for the field test version due later this year. Great article! awesome But what will this do with Object storage/access or (direct) global storage/access? Is this data SQL-only? Hi Herman, We're supporting SQL only in this first release, but are working hard to add Object and other data models in the future. Sharding any globals is unfortunately not possible, as we need some level of abstraction (such as SQL tables or Objects) to hook into in order to automate the distribution of data and work to shards. This said, if your SQL (or soon Object) based application has the odd direct global reference to a "custom" global (not related to a sharded table), we'll still support that by just mapping those to the shard master database. Thanks, benjamin Can we infer from this that sharding can be applied to globals that have been mapped to classes (thus providing SQL access)? Hi Warlin, I'm not sure whether you have something specific in mind, but it sort of works the other way around. You shard a table and, under the hood, invisible to application code, the table's data gets distributed to globals in the data shards. You cannot shard globals. thanks, benjamin Let's say I have an orders global with the following structure: ^ORD(<ID>)=customerId~locationId..... And I create a mapping class for this global: MyPackage.Order. Can I use sharding over this table? To my understanding the structure of your global is irrelevant in this context. If you want to use sharding, forget about ALL global access. Your only access works over SQL (at least at the moment; objects may follow in some future). It's the decision of the sharding logic where and how data are stored in globals. If you ignore this and continue with direct global access you have a good chance to break it. I understand the accessing part, but by creating a class mapping I'm enabling SQL access to the existing global. I guess the question is more about whether sharding will be able to properly partition (shard) SQL tables that are the result of global mapping. Are there any constraints on how the %Persistent class (and the storage) is defined in order for it to work with sharding? Should they all use %CacheStorage or can they use %CacheSQLStorage (as with mappings)? If you have a global structure that you mapped a class to afterwards, that data is already in one physical database and therefore not sharded or shardable. Sharding really is a layer in between your SQL accesses and the physical storage, and it expects you not to touch that physical storage directly. So yes, you can still picture what that global structure looks like and under certain circumstances (and when we're not looking ;-) ) read from those globals, but new records have to go through INSERT statements (or %New in a future version), and can never go against the global directly. We currently only support sharding for %CacheStorage. There have been so many improvements in that model over the past 5-10 years that there aren't many reasons left to choose %CacheSQLStorage for new SQL/Object development. The only likely reason would be that you still have legacy global structures to start from, but as explained above, that's not a scenario we can support with sharding. Maybe a nice reference in this context is that of one of our early adopters, who was able to migrate their existing SQL-based application to InterSystems IRIS in less than a day without any code changes, so they could use the rest of the day to start sharding a few of their tables and were ready to scale before dinner, so to speak. With the release of InterSystems IRIS, we're also publishing a few great new online courses on the technologies that come with it. That includes two on sharding, so don't hesitate to check them out!
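To make the "transparent to the application" point above a little more concrete, here is a minimal dynamic-SQL sketch: the query below looks exactly like a query against an ordinary table, whether or not Demo.Orders (a made-up table, assumed to have been created with a CREATE TABLE statement that includes a shard key on CustomerId) is sharded. The shard master splits the work across the shard servers and aggregates their partial results behind the scenes.

// Querying a (possibly sharded) table with ordinary dynamic SQL; nothing shard-specific in the code.
set stmt = ##class(%SQL.Statement).%New()
set sc = stmt.%Prepare("SELECT CustomerId, SUM(Amount) AS Total FROM Demo.Orders GROUP BY CustomerId")
if 'sc { do $SYSTEM.Status.DisplayError(sc) quit }
set rs = stmt.%Execute()
while rs.%Next() {
    write rs.%Get("CustomerId"), ": ", rs.%Get("Total"), !
}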