Announcement
Pete Greskoff · Jun 25, 2019

June 25, 2019 – Advisory: Memory Leak in InterSystems IRIS

InterSystems has corrected a memory leak in applications that pass an argument by reference to a formal parameter that accepts a variable number of arguments. This problem exists for:

- InterSystems IRIS Data Platform – all currently released versions
- InterSystems IRIS for Health – all currently released versions
- HealthShare Health Connect 2019.1.0

If this defect occurs, the process partition will eventually be exhausted, resulting in a <STORE> error. The defect occurs if application code calls a subroutine, passing an argument by reference to a parameter that accepts a variable number of arguments using the ... syntax. For background on these topics and more examples of code that uses them, see the "Variable Number of Parameters" and "Passing By Reference" sections in the "Callable User-defined Code Modules" chapter of Using ObjectScript in the documentation (Docs.InterSystems.com).

Here is an example to demonstrate the defect:

    test // CDS3148 test
        set (var1,var2,var3)=0
        do sub(var1,.var2,var3)
        quit
    sub(arg1,args...)
        quit

Running the routine repeatedly shows the partition memory reported by $S (i.e., $STORAGE) shrinking:

    USER>for i=1:1:1000 { do ^test } write $S
    268301128
    USER>for i=1:1:1000 { do ^test } write $S
    268276552
    USER>

This subroutine call would also demonstrate the defect:

    do sub(var1,var2,.var3)

But this one would not:

    do sub(.var1,var2,var3)

The correction for this defect is identified as CDS3148. It will be included in all future product releases. It is also available via Ad hoc distribution from the InterSystems Worldwide Response Center (WRC).

If you have any questions regarding this alert, please contact the Worldwide Response Center.
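For readers who want to check whether their own code is exposed before applying the Ad hoc, the advisory's own technique of watching $STORAGE generalizes to a small before/after test harness. This is a hypothetical sketch only - the routine and label names are invented for illustration - built from the demo code above:

    leakcheck ; hypothetical harness - watches partition memory around a suspect call
        new before,i,var1,var2,var3
        set before=$STORAGE
        for i=1:1:1000 {
            set (var1,var2,var3)=0
            do sub(var1,.var2,var3)  ; by-reference argument passed to a varargs parameter
        }
        ; on an affected version this difference grows with every run
        write "bytes consumed: ",before-$STORAGE,!
        quit
    sub(arg1,args...)
        quit

On a corrected instance, repeated calls should report a stable (near-zero) difference rather than a steadily growing one.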
Announcement
Anastasia Dyubaylo · Jul 23, 2019

InterSystems is nominated for the Computable Awards 2019! Help us to WIN!

Hi Everyone!

InterSystems HealthShare is nominated for the Computable Awards 2019! The Unified Health Record that we implemented together with our partner Itzos at the St. Maartens Clinic has a chance to become the "ICT Project of the Year in Healthcare". A great honour, but of course we also want to win. We need 4,000 votes, so we'd like to ask you to vote!

The voting process is a bit complicated, but we created a step-by-step guide to make it easier:

1. Register on the Computable website.
2. You will receive an email from Computable with a link. Click on the link and you will get to the voting page.
3. Use CTRL/COMMAND + F to search for InterSystems on the page, or just scroll down to the award category near the bottom, "IT Project of the Year in Healthcare." Vote for the second group.
4. Done? Awesome! You will get a "Thank you" email from Computable Awards.

We're counting on you! Help InterSystems to win! Please share, like and send as a direct message to your customers, friends and prospects. In addition, we have prepared a special voting challenge on the InterSystems Global Masters Advocacy Hub. Please complete it and get a good amount of extra points. Stay tuned!

For what it's worth, it looks like it's possible to vote without subscribing to any of the updates - I filled in my information, left all four subscription check-boxes blank, selected "verstuur", and received an email with a link to vote.

Hi Samuel, this is such an awesome correction! Thank you!! I edited the instructions :)

Hi, I filled in my information, received an email with a link to vote, and I have voted for InterSystems. Thanks!

Hi Uthman, thanks for your support!

Done!

Hi Esther, thanks for your attention to us!

Thanks to all who voted for us, on behalf of the Benelux team!
Announcement
Janine Perkins · Apr 20, 2017

Featured InterSystems Online Course: Designing Productions Non-Healthcare

Design a production in a development environment using best practices. After you have built your first production in a test environment, you can start applying what you have learned and begin building in a development environment. Take the Designing Productions Non-Healthcare course to learn additional information that will enable you to successfully design your production. Much of the information in this course is considered best practice. Learn More.
Announcement
Evgeny Shvarov · Jun 30, 2017

Get Your Free Registration on InterSystems Global Summit 2017!

Hi, Community!

Hope you have already put the visit to InterSystems Global Summit 2017 in your schedule - it will take place on 10-13 of September at the remarkable JW Marriott Desert Springs Resort and Spa. This year we have the Experience Lab, The Unconference, and 50 more sessions regarding performance, cloud, scalability, FHIR, high availability and other solutions and best practices. Global Summit is the most effective way to learn what's new and what the most powerful practices are for making successful solutions with InterSystems technology.

Today is the last day for early bird $999 tickets. But! You can get a free-of-charge ticket on the InterSystems Global Masters Advocacy Hub! There are numerous ways to earn the points: write articles or answer questions, publish testimonials or provide referrals, or simply watch and read articles and share them on social networks. To join Global Masters, leave your comment in this post and we'll send you a personal invite link.

Note! To allow us to recognize your contribution to the Developer Community in Global Masters, register with the same email you have on the Developer Community. Also, Community moderators get free tickets to Global Summit. This year they are [@Eduard.Lebedyuk], [@John.Murray], and [@Dmitry.Maslennikov].

See you at InterSystems Global Summit 2017!

Hi, Community! Just want to share the good news: early bird registration for $999 has been prolonged until the 14th of July, and we also have $200 and $300 discounts for you on Global Masters. See the agenda of daily sessions on the Solution Developers Conference.
Announcement
Evgeny Shvarov · Oct 31, 2017

Keynote Videos From InterSystems Global Summit 2017

Hi, Community!

See the keynote videos from Global Summit 2017, with the new InterSystems IRIS Data Platform announcement:

InterSystems Global Summit Keynote - Part 1
InterSystems Global Summit Keynote - Part 2
Announcement
Evgeny Shvarov · Oct 24, 2017

InterSystems Developer Meetup in Cambridge 25th of October 2017

Hi, Community!

We are having an InterSystems Developer Meetup tomorrow, the 25th of October, in CIC.

What is it? It's an open evening event to:

- learn more about InterSystems products and new technology features;
- discuss them with other developers in your area and with developers and engineers from InterSystems Corporation;
- network with developers of innovative solutions.

Why attend? If you are new to the InterSystems data platform, the Meetup is a great way to learn more and get a direct impression. You can hear about the new features and best practices of InterSystems products and discuss your tasks with experienced developers who have already used them successfully, or with InterSystems employees. If you are already using InterSystems products, it's a great way to meet in person other developers who are making and supporting solutions on the InterSystems Data Platform in your region, and to discuss your problems and questions with InterSystems developers and engineers directly.

Why attend tomorrow? Come tomorrow because we have a great AGENDA!

6:00 pm - InterSystems IRIS: Sharding and Scalability. We just launched our new data platform, InterSystems IRIS, which comes with a sharding feature. Tomorrow Jeff Miller, one of the sharding developers, will describe how you can benefit from it, and you can ask him how it works.

6:30 pm - Optimize Your Workflow with Atelier 1.1. And! Hope you've heard a lot already about our new IDE, Atelier! Tomorrow you can hear an update on how Atelier can help you develop InterSystems solutions more effectively, and you can talk directly to Atelier developer [@Michelle.Stolwyk].

7:30 pm - Clustering Options for High Availability and Scalability. InterSystems Data Platform is also known for its high availability features. [@Oren.Wolf], InterSystems product manager, will give a session with more details on InterSystems high availability solutions.

How to find the place? It's in the Cambridge Innovation Center, One Broadway, Cambridge, MA. Come at 5:30 pm, bring your ID, come up to the 5th floor and join us in the Venture Cafe. Join us for food, beverages, and networking, and discuss powerful new InterSystems solutions with other developers in the Boston metro area.

See the live stream recording!

Join the Live Stream today and ask your questions online!

Thanks for this, Evgeny. It doesn't look like I'll be driving down tonight given the weather here in Maine, so I'll be participating via live stream!

Sure, Jeff! Hope you can make the next one. Prepare your questions! )

InterSystems IRIS Data Platform: Sharding and Scalability by [@Jeff.Miller]

And here's the complete recording: https://www.youtube.com/watch?v=J3QLibe15xs [including 30 min break]

Yep, we will post a remastered version soon )

I've posted the 'Clustering options for high availability and scalability' slides on SlideShare (here).

Here's my slide deck - The Power Boost of Atelier!

It has private access for now, and is available only to you, yet.

Fixed!

Hi! Here is the remastered version of the Meetup live stream recording.
Announcement
Evgeny Shvarov · Mar 1, 2016

InterSystems Global Summit 2016 Free Registration Contest. And the winner is...

Hi Community!

I'm pleased to announce that the winner of the Global Summit Free Registration Contest is... Dmitry Maslennikov!

To win the prize, Dmitry published 3 posts and 13 comments in two weeks. Thanks to your votes for his posts and comments, Dmitry gathered the maximum number of points. We award Dmitry a Free Registration promo code for InterSystems Global Summit 2016 and cover the expenses for a 4-night stay in the Waldorf Astoria Arizona Biltmore. Welcome to InterSystems Global Summit 2016!

Thanks a lot

Congrats!

Congratulations! I am looking forward to meeting you at the Global Summit. Stefan

So first off, I think Dmitry deserved to win, but I do have a question. In this post, dated Feb 19th, I had 55 points. On the 26th I posted asking about it and my score was at 50. 4 days later my final score is still 50 (after a lot more posts). What's the deal? Edit: Just realized that posts to the Developer Community Feedback forum didn't count, which explains why my points didn't go up. But it still doesn't explain why they went down. :)

Scott - I am sure Evgeny will provide some clarification here once he gets online. Thanks for your patience!

Hi, Scott! You are right! The explanation is very simple - we had a bug in the formula. Sorry about this. But the final leaderboard is quite right! We give points for posting, commenting, and votes on your posts and comments. And yes, we filter out the Announcements and Developer Community Feedback groups.

Scott! I want to invite you to participate in the Second Developer Community Contest! Hope we will not show errors in the leaderboard this time :)
Article
Murray Oldfield · Apr 27, 2016

InterSystems Data Platforms and performance - Part 5 Monitoring with SNMP

# InterSystems Data Platforms and performance - Part 5 Monitoring with SNMP

In previous posts I have shown how it is possible to collect historical performance metrics using pButtons. I go to pButtons first because I know it is installed with every Data Platforms instance (Ensemble, Caché, …). However, there are other ways to collect, process and display Caché performance metrics in real time, either for simple monitoring or, more importantly, for much more sophisticated operational analytics and capacity planning. One of the most common methods of data collection is to use SNMP (Simple Network Management Protocol).

SNMP is a standard way for Caché to provide management and monitoring information to a wide variety of management tools. The Caché online documentation includes details of the interface between Caché and SNMP. While SNMP should 'just work' with Caché, there are some configuration tricks and traps. It took me quite a few false starts and help from other folks here at InterSystems to get Caché to talk to the operating system SNMP master agent, so I have written this post so you can avoid the same pain.

In this post I will walk through the set up and configuration of SNMP for Caché on Red Hat Linux; you should be able to use the same steps for other \*nix flavours. I am writing the post using Red Hat because Linux can be a little more tricky to set up - on Windows, Caché automatically installs a DLL to connect with the standard Windows SNMP service, so it should be easier to configure.

Once SNMP is set up on the server side you can start monitoring using any number of tools. I will show monitoring using the popular PRTG tool, but there are many others - [Here is a partial list.](https://en.wikipedia.org/wiki/Comparison_of_network_monitoring_systems)

Note the Caché and Ensemble MIB files are included in the `Caché_installation_directory/SNMP` folder; the files are `ISC-CACHE.mib` and `ISC-ENSEMBLE.mib`.

#### Previous posts in this series:

- [Part 1 - Getting started on the Journey, collecting metrics.](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-1)
- [Part 2 - Looking at the metrics we collected.](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-2)
- [Part 3 - Focus on CPU.](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-3-focus-cpu)
- [Part 4 - Looking at memory.](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-part-4-looking-memory)

# Start here...

Start by reviewing Monitoring Caché Using SNMP in the [Caché online documentation](http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_snmp).

## 1. Caché configuration

Follow the steps in the _Managing SNMP in Caché_ section of the [Caché online documentation](http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_snmp) to enable the Caché monitoring service and configure the Caché SNMP subagent to start automatically at Caché startup.

Check that the Caché SNMP subagent process is running, for example by looking at the OS process list:

    ps -ef | grep SNMP
    root      1171  1097  0 02:26 pts/1    00:00:00 grep SNMP
    root     27833     1  0 00:34 pts/0    00:00:05 cache -s/db/trak/hs2015/mgr -cj -p33 JOB^SNMP

That's all - Caché configuration is done!

## 2. Operating system configuration

There is a little more to do here. First check that the snmpd daemon is installed and running; if not, install and start snmpd.
Check snmpd status with:

    service snmpd status

Start or stop snmpd with:

    service snmpd start|stop

If snmp is not installed then you will have to install it as per your OS instructions, for example:

    yum -y install net-snmp net-snmp-utils

## 3. Configure snmpd

As detailed in the Caché documentation, on Linux systems the most important task is to verify that the SNMP master agent on the system is compatible with the Agent Extensibility (AgentX) protocol (Caché runs as a subagent) and that the master is active and listening for connections on the standard AgentX TCP port 705.

This is where I ran into problems. I made some basic errors in the `snmpd.conf` file that meant the Caché SNMP subagent was not communicating with the OS master agent. The following sample `/etc/snmp/snmpd.conf` file has been configured to start agentX and provide access to the Caché and Ensemble SNMP MIBs.

_Note: you will have to confirm whether the following configuration complies with your organisation's security policies._

At a minimum the following lines must be edited to reflect your system set up. For example, change:

    syslocation "System_Location"

to

    syslocation "Primary Server Room"

Also edit at least the following two lines:

    syscontact "Your Name"
    trapsink Caché_database_server_name_or_ip_address public

Edit or replace the existing `/etc/snmp/snmpd.conf` file to match the following:

    ###############################################################################
    #
    # snmpd.conf:
    #   An example configuration file for configuring the NET-SNMP agent with Cache.
    #
    #   This has been used successfully on Red Hat Enterprise Linux and running
    #   the snmpd daemon in the foreground with the following command:
    #
    #   /usr/sbin/snmpd -f -L -x TCP:localhost:705 -c ./snmpd.conf
    #
    #   You may want/need to change some of the information, especially the
    #   IP address of the trap receiver if you expect to get traps. I've also seen
    #   one case (on AIX) where we had to use the "-C" option on the snmpd command
    #   line, to make sure we were getting the correct snmpd.conf file.
    #
    ###############################################################################

    ###########################################################################
    # SECTION: System Information Setup
    #
    #   This section defines some of the information reported in
    #   the "system" mib group in the mibII tree.

    # syslocation: The [typically physical] location of the system.
    #   Note that setting this value here means that when trying to
    #   perform an snmp SET operation to the sysLocation.0 variable will make
    #   the agent return the "notWritable" error code. IE, including
    #   this token in the snmpd.conf file will disable write access to
    #   the variable.
    #   arguments: location_string
    syslocation "System Location"

    # syscontact: The contact information for the administrator
    #   Note that setting this value here means that when trying to
    #   perform an snmp SET operation to the sysContact.0 variable will make
    #   the agent return the "notWritable" error code. IE, including
    #   this token in the snmpd.conf file will disable write access to
    #   the variable.
    #   arguments: contact_string
    syscontact "Your Name"

    # sysservices: The proper value for the sysServices object.
    #   arguments: sysservices_number
    sysservices 76

    ###########################################################################
    # SECTION: Agent Operating Mode
    #
    #   This section defines how the agent will operate when it
    #   is running.

    # master: Should the agent operate as a master agent or not.
    #   Currently, the only supported master agent type for this token
    #   is "agentx".
    #   arguments: (on|yes|agentx|all|off|no)
    master agentx
    agentXSocket tcp:localhost:705

    ###########################################################################
    # SECTION: Trap Destinations
    #
    #   Here we define who the agent will send traps to.

    # trapsink: A SNMPv1 trap receiver
    #   arguments: host [community] [portnum]
    trapsink Caché_database_server_name_or_ip_address public

    ###############################################################################
    # Access Control
    ###############################################################################

    # As shipped, the snmpd demon will only respond to queries on the
    # system mib group until this file is replaced or modified for
    # security purposes. Examples are shown below about how to increase the
    # level of access.
    #
    # By far, the most common question I get about the agent is "why won't
    # it work?", when really it should be "how do I configure the agent to
    # allow me to access it?"
    #
    # By default, the agent responds to the "public" community for read
    # only access, if run out of the box without any configuration file in
    # place. The following examples show you other ways of configuring
    # the agent so that you can change the community names, and give
    # yourself write access to the mib tree as well.
    #
    # For more information, read the FAQ as well as the snmpd.conf(5)
    # manual page.

    ####
    # First, map the community name "public" into a "security name"
    #         sec.name        source          community
    com2sec   notConfigUser   default         public

    ####
    # Second, map the security name into a group name:
    #         groupName       securityModel   securityName
    group     notConfigGroup  v1              notConfigUser
    group     notConfigGroup  v2c             notConfigUser

    ####
    # Third, create a view for us to let the group have rights to:
    # Make at least snmpwalk -v 1 localhost -c public system fast again.
    #         name            incl/excl       subtree         mask(optional)
    # access to 'internet' subtree
    view      systemview      included        .1.3.6.1
    # access to the Caché and Ensemble MIBs
    view      systemview      included        .1.3.6.1.4.1.16563.1
    view      systemview      included        .1.3.6.1.4.1.16563.2

    ####
    # Finally, grant the group read-only access to the systemview view.
    #         group           context sec.model sec.level prefix read       write notif
    access    notConfigGroup  ""      any       noauth    exact  systemview none  none

After editing the `/etc/snmp/snmpd.conf` file, restart the snmpd daemon:

    service snmpd restart

Check the snmpd status and note that AgentX has been started - see the status line: __Turning on AgentX master support.__

    sh-4.2# service snmpd restart
    Redirecting to /bin/systemctl restart snmpd.service
    sh-4.2# service snmpd status
    Redirecting to /bin/systemctl status snmpd.service
    ● snmpd.service - Simple Network Management Protocol (SNMP) Daemon.
       Loaded: loaded (/usr/lib/systemd/system/snmpd.service; disabled; vendor preset: disabled)
       Active: active (running) since Wed 2016-04-27 00:31:36 EDT; 7s ago
     Main PID: 27820 (snmpd)
       CGroup: /system.slice/snmpd.service
               └─27820 /usr/sbin/snmpd -LS0-6d -f

    Apr 27 00:31:36 vsan-tc-db2.iscinternal.com systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon....
    Apr 27 00:31:36 vsan-tc-db2.iscinternal.com snmpd[27820]: Turning on AgentX master support.
    Apr 27 00:31:36 vsan-tc-db2.iscinternal.com snmpd[27820]: NET-SNMP version 5.7.2
    Apr 27 00:31:36 vsan-tc-db2.iscinternal.com systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..
    sh-4.2#

After restarting snmpd you must restart the Caché SNMP subagent using the `^SNMP` routine:

    %SYS>do stop^SNMP()
    %SYS>do start^SNMP(705,20)

The operating system snmpd daemon and the Caché subagent should now be running and accessible.

## 4. Testing MIB access

MIB access can be checked from the command line with the following commands. `snmpget` returns a single value:

    snmpget -mAll -v 2c -c public vsan-tc-db2 .1.3.6.1.4.1.16563.1.1.1.1.5.5.72.50.48.49.53
    SNMPv2-SMI::enterprises.16563.1.1.1.1.5.5.72.50.48.49.53 = STRING: "Cache for UNIX (Red Hat Enterprise Linux for x86-64) 2015.2.1 (Build 705U) Mon Aug 31 2015 16:53:38 EDT"

And `snmpwalk` will 'walk' the MIB tree or a branch:

    snmpwalk -m ALL -v 2c -c public vsan-tc-db2 .1.3.6.1.4.1.16563.1.1.1.1
    SNMPv2-SMI::enterprises.16563.1.1.1.1.2.5.72.50.48.49.53 = STRING: "H2015"
    SNMPv2-SMI::enterprises.16563.1.1.1.1.3.5.72.50.48.49.53 = STRING: "/db/trak/hs2015/cache.cpf"
    SNMPv2-SMI::enterprises.16563.1.1.1.1.4.5.72.50.48.49.53 = STRING: "/db/trak/hs2015/mgr/"
    etc etc

There are also several Windows and \*nix clients available for viewing system data. I use the free iReasoning MIB Browser. You will have to load the `ISC-CACHE.mib` file into the client so it knows the structure of the MIB. The following image shows the iReasoning MIB Browser on OSX.

![free iReasoning MIB Browser](https://community.intersystems.com/sites/default/files/inline/images/mib_browser_0.png)

## Including in Monitoring tools

This is where there can be wide differences in implementation. The choice of monitoring or analytics tool I will leave up to you. _Please leave comments on the post detailing the tools you use and the value you get from them for monitoring and managing your systems. This will be a big help for other community members._

Below is a screen shot from the popular _PRTG_ Network Monitor showing Caché metrics. The steps to include Caché metrics in PRTG are similar to those for other tools.

![PRTG Monitoring tool](https://community.intersystems.com/sites/default/files/inline/images/prtg_0.png)

### Example workflow - adding the Caché MIB to a monitoring tool.

#### Step 1. Make sure you can connect to the operating system MIBs.

A tip is to do your trouble-shooting against the operating system, not Caché. It is most likely that monitoring tools already know about and are preconfigured for common operating system MIBs, so help from vendors or other users may be easier to get. Depending on the monitoring tool you choose, you may have to add an SNMP 'module' or 'application'; these are generally free or open source. I found the vendor instructions pretty straightforward for this step. Once you are monitoring the operating system metrics, it's time to add Caché.

#### Step 2. Import the `ISC-CACHE.mib` and `ISC-ENSEMBLE.mib` into the tool so that it knows the MIB structure.

The steps here will vary; for example, PRTG has a 'MIB Importer' utility. The basic steps are to open the text file `ISC-CACHE.mib` in the tool and import it to the tool's internal format. For example, Splunk uses a Python format, etc.

_Note:_ I found the PRTG tool timed out if I tried to add a sensor with all the Caché MIB branches. I assume it was walking the whole tree and timed out for some metrics like process lists. I did not spend time troubleshooting this; instead, I worked around the problem by only importing the performance branch (cachePerfTab) from the `ISC-CACHE.mib`. Once imported/converted, the MIB can be reused to collect data from other servers in your network.
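If you hit problems like the sensor timeout just described and suspect the Caché side rather than the tool, the subagent's log at `install-dir/mgr/SNMP.log` can be made much more verbose. A minimal terminal sketch - the port (705) and timeout (20) are the same values used in the restart step above, and the debug node is the one discussed in the Q&A at the end of this post:

    %SYS>set ^SYS("MONITOR","SNMP","DEBUG")=1
    %SYS>do stop^SNMP()
    %SYS>do start^SNMP(705,20)

With the flag set, details about each SNMP message received and sent are written to `SNMP.log`; setting the node back to 0 and restarting the subagent again should turn the extra logging off.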
The above PRTG graphic shows a Sensor Factory sensor combining multiple sensors into one chart.

# Summary

There are many monitoring, alerting and some very smart analytics tools available - some free, others with licences for support - and with many and varied features. You must monitor your system and understand what activity is normal, and what activity falls outside normal and must be investigated. SNMP is a simple way to expose Caché and Ensemble metrics.

I was asked a couple of questions offline, so the following is to answer them:

_Q1. In your article, why do you say it is necessary to change the information strings in snmpd.conf (i.e. syslocation/syscontact)?_

A1. What I mean is that you should change syslocation and syscontact to reflect your site, but leaving them as the defaults in the sample will not stop SNMP working with this sample `snmpd.conf` file.

_Q2. You also mention basic errors you made in configuring it; which were these? It might be helpful to mention the debugging facilities for snmp `(^SYS("MONITOR","SNMP","DEBUG"))` as well._

A2. One problem was misconfiguring the security settings in `snmpd.conf`. Following the example above will get you there. I also spun my wheels with what turned out to be a spelling (or case) error on the line `agentXSocket tcp:localhost:705`. In the end I figured out the problem was to do with agentX not starting by looking at the logs written to the `install-dir/mgr/SNMP.log` file. Caché logs any problems encountered while establishing a connection or answering requests in the `SNMP.log`. You should also check `cconsole.log` and the logs for snmpd in the OS. On Windows, iscsnmp.dll logs any errors it encounters in %System%\System32\snmpdbg.log (on a 64-bit Windows system, this file is in the SysWOW64 subdirectory). As pointed out in Fabian's question, more information can be logged to the SNMP.log if you set ^SYS("MONITOR","SNMP","DEBUG")=1 in the %SYS namespace and restart the ^SNMP Caché subagent process. This logs details about each message received and sent. Thanks for the questions. MO

BTW: although I have not tried implementing them - given the high number of healthcare applications using Caché, I thought it may be of interest that Paessler PRTG has developed new sensors for monitoring medical equipment that communicate via HL7 and DICOM: https://www.paessler.com/blog/2016/04/13/all-about-prtg/ehealth-sensors-know-what-is-going-on-in-your-medical-it

I've installed PRTG on a Windows 10 system... enabled SNMP services... I've configured the SNMP Service with community "public" and destination "127.0.0.1"... PRTG is able to see and graph the system statistics... OK. Then I imported ISC-Cache.mib with the Paessler MIB Importer, OK, and "Save for PRTG Network Monitor"... everything seems fine, but then, where is it supposed to be? When I go to PRTG NM I cannot see anything related to Caché... no clue about the library that I supposedly just imported... The S of SNMP means Simple... so I'm pretty sure I'm missing something really basic here, but I don't know how to go on.

Found it... I just had to go to add sensors to a group, windows type and snmp library... there it is, the oidlib I had just imported!!

When using containers (IRIS), how do you use the host to proxy the snmp calls? Is there a way to have the container use the host's IP (and port 705) to do its reporting?

Hi Jay, for the docker run command look at the --net=host flag. Also, SAM might be of interest to you. See the recent announcement and a user case. I hope it's helpful.

Do we have a similar topic for Caché on Windows? Thanks!
Announcement
Janine Perkins · Apr 27, 2016

Featured InterSystems Online Course: DeepSee Analyzer Basics

Learn the fundamentals of how and when to use the DeepSee Analyzer.

DeepSee Analyzer Basics

This course describes typical use cases for the Analyzer and shows how to access the Analyzer from the Management Portal. You will learn how to use drag and drop to create simple queries, and then refine these queries by sorting, applying filters, and drilling down. Learn More.
Article
Gevorg Arutiunian · Nov 21, 2016

Caché Localization Manager or i18N in InterSystems Caché

Caché Localization Manager

CLM is a tool for localization/internationalization/adding multi-language support to a project based on InterSystems Caché.

Imagine that you have a ready project where all the content is in Russian, and you need to add an English localization to it. You wrap all your strings into resources, translate them into English and call the necessary resource for Russian or English when necessary. Nothing tricky, if you think about it. But what if there are lots of strings and there are mistakes in Russian (or English)? What if you need to localize in more than one language - say, ten? This is exactly the kind of project where you should use CLM. It will help you localize the entire content of your project into the necessary language and retain the possibility to correct entries.

CLM allows you to do the following:

- Add a new localization.
- Delete a localization.
- Export a localization.
- Import a localization.
- View two tables at a time.
- Conveniently switch between namespaces.
- Check spelling.

Let's "look under the hood" now

Caché has a standard approach to implementing i11n using the $$$TEXT macro:

    $$$TEXT("Text", "Domain", "Language")

where:

- Text - the text to be used for localization in the future.
- Domain - modules in your applications.
- Language - the language of "Text".

If you use $$$TEXT in COS code, data is added to the ^CacheMsg global during class compilation. And this is the global that CLM works with. In ^CacheMsg, everything is identical to $$$TEXT, you just add "ID" as the text hash (a short usage sketch follows at the end of this section):

    ^CacheMsg("Domain", "Language", "ID") = "Text"

If you are using CSP, then the use of $$$TEXT in CSP will look as follows:

    <csp:text id="14218931" domain="HOLEFOODS">Date Of Sale</csp:text>

Installation

First of all, you need to download the Installer class from GitHub and import it into any convenient namespace in Caché. I will use the USER namespace. Once done, open the terminal and switch to the USER namespace. To start the installation, enter the following command in the terminal:

    USER> do ##class(CLM.Installer).setup()

Installation process:

You can make sure the application is installed correctly by following this link: http://localhost:57772/csp/clm/index.csp (localhost:57772 - the path to your server).

Settings

CLM uses Yandex to perform the translation. You will need to obtain a Yandex API key to let it work. It's free, but requires registration.

Let's now deal with spellchecking. CLM uses Caché Native Access for the SpellCheck implementation. CNA was written for calling external functions from dynamic libraries, such as .dll or .so. SpellCheck works with the Hunspell library and needs two files for spellchecking. The first file is a dictionary containing words; the second one contains affixes and defines the meaning of special marks (flags) in the dictionary.

How words are checked: all words are packed by CLM and sent to Hunspell via CNA, where CNA converts them into a language that Hunspell understands. Hunspell checks every word, finds the root form and all possible variations, and returns the result.

But where do we get all these dictionaries and libraries?

- CNA: use an available release or build it on your own.
- Hunspell: same thing, use an available version or download the sources to make your own build.
- We will also need a standard C language library. In Windows, it is located here: C:\Windows\System32\msvcrt.dll.
- Dictionaries can be downloaded here.
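Here is the usage sketch promised above. It is hypothetical - the routine name is invented, the macro is written exactly as this article presents it, and the include file is an assumption (message macros are conventionally made available via %occMessages) - and it simply ties together the $$$TEXT call and the ^CacheMsg entry that compilation creates:

    #include %occMessages ; assumption: include that defines the message macros
    demo ; minimal sketch of the $$$TEXT mechanism described above
        ; compiling this routine adds an entry keyed by a hash ID, e.g.:
        ;   ^CacheMsg("HOLEFOODS", "en", <ID>) = "Date Of Sale"
        ; (domain and language mirror the CSP example above)
        write $$$TEXT("Date Of Sale", "HOLEFOODS", "en"),!
        quit

After compiling, `zwrite ^CacheMsg("HOLEFOODS","en")` in the terminal should show the generated entry - the same data CLM reads and writes when you manage localizations.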
At the moment, 51 languages are supported: Albanian, Arabian, Armenian, Azeri, Belarusian, Bosnian, Basque, Bulgarian, Catalan, Croatian, Czech, Chinese, Danish, Dutch, English, Estonian, Esperanto, Finnish, French, Georgian, German, Greek, Hebrew, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kazan, Korean, Latin, Latvian, Lithuanian, Macedonian, Malay, Maltese, Norwegian, Polish, Portuguese, Brazil, Romanian, Russian, Spanish, Serbian, Slovak, Slovenian, Swedish, Thai, Turkish, Ukrainian, Vietnamese.

The entire configuration process boils down to entering the paths to everything you got before. Open CLM in a browser. There is a "Set Paths" button in the upper right corner. Click it and you'll see the next window. Use it to enter the required paths. Here's what I got:

Demonstration of a registration form localization

Link to demonstration. Password and login: demo.

Your critique, comments and suggestions are very welcome. The source code and instructions are available on GitHub under an MIT license.

An interesting article, but I'm puzzled by your use of the term "i11n". I'm familiar with "i18n" as an abbreviation for Internationalization (because there are 18 letters between the "i" and the "n"). Likewise, I understand "L10n" as standing for Localization. Some googling suggests that "i11n" is short for Introspection. Or have I missed something?

Hi, John! i11n here stands for "internationalization".

i18n - internationalization, but i11n is something else.

True. Title is fixed.
Announcement
Olga Zavrazhnova · Jan 9, 2020

InterSystems Open Exchange Survey 2019 - 7 questions that matter!

Hi Community!

Thank you for being a part of the InterSystems Open Exchange! We want to know what you like about Open Exchange and how we can make it better in 2020. Could you please go through this short survey to let us know what you think?

➡️ Open Exchange Survey 2019 (2 minutes, 7 questions)

Your answers are very important to us!

Sincerely,
Your InterSystems Developer Community Team
Announcement
Anastasia Dyubaylo · Jan 24, 2020

New Video: What Developers Love About InterSystems IRIS

Hi Community,

A new video, recorded by @Benjamin.DeBoe, is available on InterSystems Developers YouTube:

⏯ What Developers Love About InterSystems IRIS

InterSystems Product Manager @Benjamin.DeBoe talks about what developers love about the InterSystems IRIS Data Platform - the data and the code are "next to one another", making your code very efficient.

Try InterSystems IRIS: https://www.intersystems.com/try

Enjoy watching the video! 👍🏼
Announcement
Jeff Fried · Jan 27, 2020

InterSystems IRIS and IRIS for Health 2020.1 preview is published

Preview releases are now available for the 2020.1 version of InterSystems IRIS and IRIS for Health!

Kits and container images are available via the WRC's preview download site. The build number for these releases is 2020.1.0.199.0. (Note: the first release was build 197, updated to 199 on 2/12/20.)

InterSystems IRIS Data Platform 2020.1 has many new capabilities including:

- Kernel performance enhancements, including reduced contention for blocks and cache lines
- Universal Query Cache - every query (including embedded & class ones) now gets saved as a cached query
- Universal Shard Queue Manager - for scale-out of query load in sharded configurations
- Selective Cube Build - to quickly incorporate new dimensions or measures
- Security improvements, including hashed password configuration
- Improved TSQL support, including JDBC support
- Dynamic Gateway performance enhancements
- Spark connector update
- MQTT support in ObjectScript

(NOTE: this preview build does not include the TLS 1.3 and OpenLDAP updates, which are planned for General Availability.)

InterSystems IRIS for Health 2020.1 includes all of the enhancements of InterSystems IRIS. In addition, this release includes:

- In-place conversion to IRIS for Health
- HL7 Productivity Toolkit, including Migration Tooling and Cloverleaf conversion
- X12 enhancements
- FHIR R4 base standard support

As this is an EM (Extended Maintenance) release, customers may want to know the differences between 2020.1 and 2019.1. These are listed in the release notes:

- InterSystems IRIS 2020.1 release notes
- IRIS for Health 2020.1 release notes

Draft documentation can be found here:

- InterSystems IRIS 2020.1 documentation
- IRIS for Health 2020.1 documentation

The platforms on which InterSystems IRIS and IRIS for Health 2020.1 are supported for development and production are detailed in the Supported Platforms document.

Jeffrey, thank you for the info. Do you already know that the Supported Platforms document link is broken? (404)

Hi Jeff! What are the Docker image tags for the Community Editions?

I've just uploaded the Community Editions to the Docker Store (2/13 - updated with the new preview build):

    docker pull store/intersystems/iris-community:2020.1.0.199.0
    docker pull store/intersystems/irishealth-community:2020.1.0.199.0

Thanks, Steve! Will native install kits be available for the Community Editions as well?

Yes, full kit versions of the 2020.1 Community Edition Preview are available through the WRC download site as well.

I'm getting this error when I attempt to access the link ...

Jeffery, if you don't use that link and first log into the WRC application at https://wrc.intersystems.com/wrc/enduserhome.csp, can you then go to https://wrc.intersystems.com/wrc/coDistribution2.csp and select Preview? Some customers have had problems with the distribution pages because their site restricts access to some JS code we get from a third party.

I get the same result using your suggested method, Brendan. I'm not technically a customer; I work for a Services Partner of ISC. I am a DC Moderator though (if that carries any weight), so it would be nice to keep abreast of the new stuff.

OK, I needed to do one more click: your org does not have a support contract, so you can't have access to these pages, sorry. Maybe Learning Services could help you out, but I can't grant you access to the kits on the WRC.

Hello, I took this for a spin and noticed that the new Prometheus metrics are not available on it like they were in 2019.4 (ie: https://community.intersystems.com/post/monitoring-intersystems-iris-using-built-rest-api).
Am I missing something, or is the metrics API still under consideration to make it into this build?

The correct link is https://docs.intersystems.com/iris20201/csp/docbook/platforms/index.html. I fixed the typo in the post. Thanks for pointing that out!

Seems to be there for me...

Hello Jeffrey, we're currently working on IRIS for Health 2020.1 build 197, and we were wondering what fixes or additions went into the latest build 199. InterSystems used to publish all fixes with each FT build version; is there such a list? Thank you, Yuriy

The Preview has been updated with build 2020.1.0.199.0. This includes a variety of changes, primarily corrections for issues found under rare conditions in install, upgrade, and certain distributed configurations. None of these changes impacts any published API. Thank you for working with the preview and for your feedback!

Hi Yuriy - thanks for pointing this out. We did not prepare a list for this, but I did make a comment on this thread, including verifying that none of these changes impacts any published API. If there is a change resolving an issue you reported through the WRC, you'll see that it is resolved via the normal process. We will be publishing detailed change notes with the GA release. -Jeff
Article
Mark Bolinsky · Mar 6, 2020

InterSystems IRIS for Health 2020.1 HL7 Benchmark

Introduction

InterSystems has recently completed a performance and scalability benchmark of IRIS for Health 2020.1, focusing on HL7 version 2 interoperability. This article describes the observed throughput for various workloads, and also provides general configuration and sizing guidelines for systems where IRIS for Health is used as an interoperability engine for HL7v2 messaging.

The benchmark simulates workloads that have been designed to closely match live environments. The details of the simulation are described in the Workload Description and Methodology section. The tested workloads comprised HL7v2 Patient Administration (ADT) and Observation Result (ORU) payloads and included transformations and re-routing.

The 2020.1 version of IRIS for Health has demonstrated a sustained throughput of over 2.3 billion (total inbound and outbound) messages per day with a commodity server using the 2nd Generation Intel® Xeon® Scalable Processors and Intel® Optane™ SSD DC P4800X Series SSD storage. These results have more than doubled the scalability from the prior Ensemble 2017.1 HL7v2 throughput benchmarking.

Throughout these tests, IRIS for Health was configured to preserve first-in/first-out (FIFO) ordering, and to fully persist messages and queues for each inbound and outbound message. By persisting the queues and messages, IRIS for Health provides data protection in the event of a system crash, and full search and resend capabilities for historic messages.

Further, configuration guidelines are discussed in the sections below, which will assist you in choosing an appropriate configuration and deployment to adequately meet your workload's performance and scalability requirements. The results demonstrate that IRIS for Health is capable of satisfying extreme messaging throughput on commodity hardware, in most cases allowing a single small server to provide HL7 interoperability for an entire organization.

Overview of Results

Three workloads were used to represent different aspects of HL7 interoperability activity:

- T1 workload: uses simple pass-through of HL7 messages, with one outbound message for each inbound message. The messages were passed directly from the Ensemble Business Service to the Ensemble Business Operation, without a routing engine. No routing rules were used and no transformations were executed. One HL7 message instance was created in the database per inbound message.
- T2 workload: uses a routing engine to modify an average of 4 segments of the inbound message and route it to a single outbound interface (1-to-1 with a transform). For each inbound message, one data transformation was executed and two HL7 message objects were created in the database.
- T4 workload: uses a routing engine to route separately modified messages to each of four outbound interfaces. On average, 4 segments of the inbound message were modified in each transformation (1 inbound to 4 outbound with 4 transforms). For each inbound message, four data transformations were executed, four messages were sent outbound, and five HL7 message objects were created in the database.

The three workloads were run on a physical 48-core system with two Intel® Scalable Gold 6252 processors and two 750GB Intel® Optane™ SSD DC P4800X SSD drives, running Red Hat Enterprise Linux 8. The data is presented as the number of messages per second (and per hour) inbound, the number per second (and per hour) outbound, as well as the total messages (inbound plus outbound) in a 10-hour day.
Additionally, CPU utilization is presented as a measure of available system resources at a given level of throughput.

Scalability Results

Table-1: Summary of throughput of the four workloads on this tested hardware configuration
* Combined workload with a 25% T1 / 25% T2 / 50% T4 workload mix

Workload Description and Methodology

The tested workloads included HL7v2 Patient Administration (ADT) and Observation Result (ORU) messages, which had an average size of 1.2KB and an average of 14 segments. Roughly 4 segments were modified by the transformations (for the T2 and T4 workloads). The tests represent 48 to 128 inbound and 48 to 128 outbound interfaces receiving and sending messages over TCP/IP. In the T1 workload, four separate namespaces each with 16 interfaces were used; the T2 workload used three namespaces each with 16 interfaces; the T4 workload used four namespaces each with 32 interfaces; and the final "mixed workload" used three namespaces, with 16 interfaces for the T1 workload, 16 for the T2 workload, and 32 for the T4 workload.

The scalability was measured by gradually increasing traffic on each interface to find the highest throughput with acceptable performance criteria. For the performance to be acceptable, the messages must be processed at a sustained rate, with no queuing, no measurable delays in the delivery of messages, and average CPU usage remaining below 80%.

Previous testing has demonstrated that the type of HL7 message used is not significant to the performance or scalability of Ensemble; the significant factors are the number of inbound messages, the size of inbound and outbound messages, the number of new messages created in the routing engine, and the number of segments modified. Additionally, previous testing has shown that processing individual fields of an HL7 message in a data transformation is not usually significant to performance. The transformations in these tests used fairly straightforward assignments to create new messages. Note that complex processing (such as use of extensive SQL queries in a data transformation) may cause results to vary. Previous testing has also verified that rules processing is not usually significant. The routing rule sets used in these tests averaged 32 rules, with all rules being simple. Note that extremely large or complex rule sets may cause results to vary.

Hardware Server Configuration

The tests utilized a server with 2nd Generation Intel® Scalable Gold 6252 "Cascade Lake" processors providing 48 cores @ 2.1GHz on a 2-socket system (24 cores per socket), with 192 GB DDR4-2933 DRAM and a 10Gb Ethernet network interface. The Red Hat Enterprise Linux Server 8 operating system was used for this test with InterSystems IRIS for Health 2020.1.

Disk Configuration

Messages passing through IRIS for Health are fully persisted to disk. In the case of this test, two 750GB Intel® Optane™ SSD DC P4800X SSD drives internal to the system were used, splitting the databases onto one drive and the journals onto another. In addition, to ensure a real-world comparison, synchronous commit is enabled on the journals to force data durability.

For the T4 workload as described previously in this document, each inbound HL7 message generates roughly 50KB of data, which can be broken down as described in Table 2. Transaction journals are typically kept online for less time than message data or logs, and this should be taken into account when calculating the total disk space required.
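As a rough, hypothetical sizing sketch using that 50KB-per-message figure (the daily volume and retention numbers below are invented assumptions, not part of the benchmark; Table 2 follows with the detailed breakdown):

    sizing ; back-of-the-envelope disk estimate for a T4-style workload
        set perMsgKB=50            ; from Table 2: total KB generated per inbound message
        set inboundPerDay=10000000 ; assumption: 10 million inbound messages per day
        set retentionDays=30       ; assumption: messages and journals kept 30 days
        set dailyGB=perMsgKB*inboundPerDay/1024/1024
        write "per day:  ",$fnumber(dailyGB,",",0)," GB",!
        write "retained: ",$fnumber(dailyGB*retentionDays/1024,",",2)," TB",!
        quit

With these assumed numbers the sketch yields roughly 477 GB per day and about 14 TB retained; substitute your own volumes, purge schedule, and journal retention, per the guidance below.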
Table 2: Disk Requirement per inbound HL7 T4 Message

    Contributor            Data Requirement
    Segment Data           4.5 KB
    HL7 Message Object     2 KB
    Message Header         1.0 KB
    Routing Rule Log       0.5 KB
    Transaction Journals   42 KB
    Total                  50 KB

Recall that the T4 workload used a routing engine to route separately modified messages to each of four outbound interfaces. On average, 4 segments of the inbound message were modified in each transformation (1 inbound to 4 outbound with 4 transforms). For each inbound message, four data transformations were executed, four messages were sent outbound, and five HL7 message objects were created in the database.

When configuring systems for production utilization, net requirements should be calculated by considering the daily inbound volumes as well as the purging schedule for HL7 messages and the retention policy for journal files. Additionally, appropriate journal file space should be configured on the system so as to prevent the journal disk volumes from filling up. The journal files should reside on physically separate disks from the database files, for both performance and reliability considerations.

Conclusion

The InterSystems IRIS for Health HL7v2 message throughput demonstrated in these tests illustrates the massive throughput capabilities of a modest 2-socket commodity server configuration, able to support the most demanding message workloads of any organization. Additionally, InterSystems is committed to constantly improving performance and scalability from version to version, along with taking advantage of the latest server and cloud technologies.

The following graph provides an overview and comparison of the increase in throughput from the previous Ensemble 2015.1 and Ensemble 2017.1 benchmarks - run with the Intel® E5-2600 v3 (Haswell) processors and the 1st Generation Intel® Scalable Platinum Series (Skylake) processors respectively - to the latest results with the 2nd Generation Intel® Scalable Gold Series (Cascade Lake) processors running IRIS for Health 2020.1.

Graph-1: Message throughput (in millions) per 10-hour day on a single server

InterSystems IRIS for Health continues to raise the bar on interoperability throughput from version to version, along with offering flexibility in connectivity capabilities. As the above graph shows, message throughput has increased significantly: in the case of T2 workloads it has doubled since 2017, and compared to 2015 it has more than tripled in the same 10-hour window, sustaining over 2.3 billion total messages per 24 hours. Another key indicator of the advancements in IRIS for Health is the throughput improvement in the more complex T2 and T4 workloads, which incorporate transformations and routing rules, as opposed to the pure pass-through operation of the T1 workload. InterSystems is available to discuss solutions for your organization's interoperability needs.

Hi Mark! These are impressive results! Do you have a link to the previous Ensemble 2017.1 HL7v2 throughput benchmarking results?

They used to be available on our website, but have since been removed since the results were from 3 years ago. The summary results from 2015 and 2017 have been included in Graph-1 above in this new report for comparison. Thanks.

Hi Mark Bolinsky, I am a student and I need the HL7 benchmark to test the process that I implemented. Please, can you help me get it?
Announcement
Anastasia Dyubaylo · Mar 30, 2020

MIT COVID19 Challenge Virtual Hackathon: Join with InterSystems!

Hey Developers!

Want to beat the COVID-19 pandemic with InterSystems and MIT? Take part in the MIT COVID19 Challenge: a 48-hour virtual hackathon with the goal of developing solutions that address the most pressing technical, social, and financial issues caused by the COVID-19 outbreak. It's your chance to build a solution on InterSystems IRIS for the COVID-19 crisis!

Participants will form teams on Friday, April 3rd to generate solutions, including proofs of concept and a preliminary vision for execution. On Sunday, April 5th, teams will reconvene to present their solutions.

InterSystems, as a partner of the MIT COVID-19 Challenge virtual hackathon, will provide templates, mentors and hosting resources to help teams build and deploy their solutions with InterSystems IRIS for Health.

➡️ Apply to participate now!

You'll be able to form teams, and you'll get our mentors' support during the hackathon, along with a set of templates to build on IRIS for Health effectively!

Stay tuned!