Announcement
Evgeny Shvarov · Jun 30, 2017

Get Your Free Registration for InterSystems Global Summit 2017!

Hi, Community! We hope you have already put the visit to InterSystems Global Summit 2017 in your schedule: it takes place on September 10-13 at the remarkable JW Marriott Desert Springs Resort and Spa. This year we have the Experience Lab, the Unconference, and 50 more sessions covering performance, cloud, scalability, FHIR, high availability, and other solutions and best practices. Global Summit is the most effective way to learn what's new and which practices make solutions with InterSystems technology successful.

Today is the last day for early-bird $999 tickets. But! You can get a free ticket via the InterSystems Global Masters Advocate Hub! There are numerous ways to earn points: write articles or answer questions, publish testimonials or provide referrals, or simply watch and read articles and share them on social networks. To join Global Masters, leave a comment on this post and we'll send you a personal invite link. Note: to let us recognize your Developer Community contribution in Global Masters, register with the same email you use on the Developer Community.

Also, Community moderators get free tickets to Global Summit. This year they are [@Eduard.Lebedyuk], [@John.Murray], and [@Dmitry.Maslennikov]. See you at InterSystems Global Summit 2017!

Hi, Community! Just want to share the good news: early-bird registration for $999 has been extended until July 14, and we also have $200 and $300 discounts for you on Global Masters. See the agenda of daily sessions at the Solution Developers Conference.
Announcement
Evgeny Shvarov · Oct 31, 2017

Keynote Videos From InterSystems Global Summit 2017

Hi, Community! See the keynote videos from Global Summit 2017, featuring the new InterSystems IRIS Data Platform announcement:

InterSystems Global Summit Keynote - Part 1
InterSystems Global Summit Keynote - Part 2
Announcement
Evgeny Shvarov · Oct 24, 2017

InterSystems Developer Meetup in Cambridge, October 25, 2017

Hi, Community! We are holding an InterSystems Developer Meetup tomorrow, October 25, at the CIC.

What is it? It's an open evening event to:

- learn more about InterSystems products and new technology features;
- discuss them with other developers in your area and with developers and engineers from InterSystems Corporation;
- network with developers of innovative solutions.

Why attend? If you are new to the InterSystems data platform, the Meetup is a great way to learn more and get a first-hand impression. You can hear about the new features and best practices of InterSystems products and discuss your tasks with experienced developers who have already used them successfully, or with InterSystems employees. If you are already using InterSystems products, it's a great way to meet in person other developers who build and support solutions on the InterSystems Data Platform in your region, and to discuss your problems and questions directly with InterSystems developers and engineers.

Why attend tomorrow? Come tomorrow because we have a great AGENDA!

6:00 pm - InterSystems IRIS: Sharding and Scalability. We just launched our new data platform, InterSystems IRIS, which comes with a sharding feature. Tomorrow Jeff Miller, one of the sharding developers, will describe how you can benefit from it, and you can ask him how it works.

6:30 pm - Optimize Your Workflow with Atelier 1.1. We hope you've already heard a lot about our new IDE, Atelier! Tomorrow you can hear an update on how Atelier can help you develop InterSystems solutions more effectively, and you can talk directly to Atelier developer [@Michelle.Stolwyk].

7:30 pm - Clustering Options for High Availability and Scalability. InterSystems Data Platform is also known for its high availability features. [@Oren.Wolf], an InterSystems product manager, will give a session with more details on InterSystems high availability solutions.

How to find the place? It's in the Cambridge Innovation Center, One Broadway, Cambridge, MA. Come at 5:30 pm, bring your ID, go up to the 5th floor, and join us in the Venture Cafe. Join us for food, beverages, and networking, and discuss powerful new InterSystems solutions with other developers in the Boston metro area.

See the live stream recording! Join the live stream today and ask your questions online!

Thanks for this, Evgeny. It doesn't look like I'll be driving down tonight given the weather here in Maine, so I'll be participating via live stream!

Sure, Jeff! Hope you can make the next one. Prepare your questions! )

InterSystems IRIS Data Platform: Sharding and Scalability by [@Jeff.Miller]. And here's the complete recording: https://www.youtube.com/watch?v=J3QLibe15xs [ including 30 min break ]

Yep, we will post a remastered version soon )

I've posted the 'Clustering options for high availability and scalability' slides on SlideShare (here).

Here's my slide deck - The Power Boost of Atelier! It has private access for now, and is available only to you.

Fixed!

Hi! Here is the remastered version of the Meetup live stream recording.
Announcement
Evgeny Shvarov · Mar 1, 2016

InterSystems Global Summit 2016 Free Registration Contest. And the winner is...

Hi Community! I'm pleased to announce that the winner of the Global Summit Free Registration Contest is... Dmitry Maslennikov!

The final leaderboard is: to win the prize, Dmitry published 3 posts and 13 comments in two weeks. Thanks to your votes for his posts and comments, Dmitry gathered the maximum number of points. We award Dmitry a free registration promo code for InterSystems Global Summit 2016 and will cover the expenses of a 4-night stay at the Waldorf Astoria Arizona Biltmore. Welcome to InterSystems Global Summit 2016!

Thanks a lot

Congrats!

Congratulations! I am looking forward to meeting you at the Global Summit. Stefan

So first off, I think Dmitry deserved to win, but I do have a question. In this post, dated Feb 19th, I had 55 points. On the 26th I posted asking about it and my score was at 50. 4 days later and my final score is still 50 (after a lot more posts). What's the deal? edit: Just realized that posts to the Developer Community Feedback forum didn't count, which explains why my points didn't go up. But it still doesn't explain why they went down. :)

Scott - I am sure Evgeny will provide some clarification here once he gets online. Thanks for your patience!

Hi, Scott! You are right! The explanation is very simple: we had a bug in the formula. Sorry about this. But the final leaderboard is quite right! We give points for posting, commenting, and votes on your posts and comments. And yes, we filter out the Announcements and Developer Community Feedback groups.

Scott! I want to invite you to participate in the Second Developer Community Contest! Hope we will not have errors in the leaderboard this time :)
Article
Murray Oldfield · Apr 27, 2016

InterSystems Data Platforms and performance - Part 5 Monitoring with SNMP

# InterSystems Data Platforms and performance - Part 5 Monitoring with SNMP

In previous posts I have shown how it is possible to collect historical performance metrics using pButtons. I go to pButtons first because I know it is installed with every Data Platforms instance (Ensemble, Caché, …). However, there are other ways to collect, process and display Caché performance metrics in real time, either for simple monitoring or, more importantly, for much more sophisticated operational analytics and capacity planning. One of the most common methods of data collection is to use SNMP (Simple Network Management Protocol).

SNMP is a standard way for Caché to provide management and monitoring information to a wide variety of management tools. The Caché online documentation includes details of the interface between Caché and SNMP. While SNMP should 'just work' with Caché, there are some configuration tricks and traps. It took me quite a few false starts and help from other folks here at InterSystems to get Caché to talk to the operating system SNMP master agent, so I have written this post so you can avoid the same pain.

In this post I will walk through the set-up and configuration of SNMP for Caché on Red Hat Linux; you should be able to use the same steps for other \*nix flavours. I am writing the post using Red Hat because Linux can be a little more tricky to set up - on Windows, Caché automatically installs a DLL to connect with the standard Windows SNMP service, so it should be easier to configure.

Once SNMP is set up on the server side you can start monitoring using any number of tools. I will show monitoring using the popular PRTG tool, but there are many others - [here is a partial list.](https://en.wikipedia.org/wiki/Comparison_of_network_monitoring_systems)

Note the Caché and Ensemble MIB files are included in the `Caché_installation_directory/SNMP` folder; the files are: `ISC-CACHE.mib` and `ISC-ENSEMBLE.mib`.

#### Previous posts in this series:

- [Part 1 - Getting started on the Journey, collecting metrics.](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-1)
- [Part 2 - Looking at the metrics we collected.](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-2)
- [Part 3 - Focus on CPU.](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-3-focus-cpu)
- [Part 4 - Looking at memory.](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-part-4-looking-memory)

# Start here...

Start by reviewing Monitoring Caché Using SNMP in the [Caché online documentation](http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_snmp).

## 1. Caché configuration

Follow the steps in the _Managing SNMP in Caché_ section of the [Caché online documentation](http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_snmp) to enable the Caché monitoring service and configure the Caché SNMP subagent to start automatically at Caché startup.

Check that the Caché process is running, for example, look at the OS process list:

```
ps -ef | grep SNMP
root      1171  1097  0 02:26 pts/1    00:00:00 grep SNMP
root     27833     1  0 00:34 pts/0    00:00:05 cache -s/db/trak/hs2015/mgr -cj -p33 JOB^SNMP
```

That's all, Caché configuration is done!

## 2. Operating system configuration

There is a little more to do here. First check that the snmpd daemon is installed and running. If not, then install and start snmpd.
Check snmpd status with:

```
service snmpd status
```

Start or stop snmpd with:

```
service snmpd start|stop
```

If SNMP is not installed, you will have to install it as per your OS instructions, for example:

```
yum -y install net-snmp net-snmp-utils
```

## 3. Configure snmpd

As detailed in the Caché documentation, on Linux systems the most important task is to verify that the SNMP master agent on the system is compatible with the Agent Extensibility (AgentX) protocol (Caché runs as a subagent) and that the master is active and listening for connections on the standard AgentX TCP port 705.

This is where I ran into problems. I made some basic errors in the `snmpd.conf` file that meant the Caché SNMP subagent was not communicating with the OS master agent. The following sample `/etc/snmp/snmpd.conf` file has been configured to start AgentX and provide access to the Caché and Ensemble SNMP MIBs.

_Note: you will have to confirm whether the following configuration complies with your organisation's security policies._

At a minimum the following lines must be edited to reflect your system set-up. For example, change:

```
syslocation "System_Location"
```

to

```
syslocation "Primary Server Room"
```

Also edit at least the following two lines:

```
syscontact "Your Name"
trapsink Caché_database_server_name_or_ip_address public
```

Edit or replace the existing `/etc/snmp/snmpd.conf` file to match the following:

```
###############################################################################
#
# snmpd.conf:
#   An example configuration file for configuring the NET-SNMP agent with Cache.
#
#   This has been used successfully on Red Hat Enterprise Linux and running
#   the snmpd daemon in the foreground with the following command:
#
#   /usr/sbin/snmpd -f -L -x TCP:localhost:705 -c ./snmpd.conf
#
#   You may want/need to change some of the information, especially the
#   IP address of the trap receiver if you expect to get traps. I've also seen
#   one case (on AIX) where we had to use the "-C" option on the snmpd command
#   line, to make sure we were getting the correct snmpd.conf file.
#
###############################################################################

###########################################################################
# SECTION: System Information Setup
#
#   This section defines some of the information reported in
#   the "system" mib group in the mibII tree.

# syslocation: The [typically physical] location of the system.
#   Note that setting this value here means that when trying to
#   perform an snmp SET operation to the sysLocation.0 variable will make
#   the agent return the "notWritable" error code. IE, including
#   this token in the snmpd.conf file will disable write access to
#   the variable.
#   arguments: location_string

syslocation "System Location"

# syscontact: The contact information for the administrator
#   Note that setting this value here means that when trying to
#   perform an snmp SET operation to the sysContact.0 variable will make
#   the agent return the "notWritable" error code. IE, including
#   this token in the snmpd.conf file will disable write access to
#   the variable.
#   arguments: contact_string

syscontact "Your Name"

# sysservices: The proper value for the sysServices object.
#   arguments: sysservices_number

sysservices 76

###########################################################################
# SECTION: Agent Operating Mode
#
#   This section defines how the agent will operate when it
#   is running.

# master: Should the agent operate as a master agent or not.
#   Currently, the only supported master agent type for this token
#   is "agentx".
#   arguments: (on|yes|agentx|all|off|no)

master agentx
agentXSocket tcp:localhost:705

###########################################################################
# SECTION: Trap Destinations
#
#   Here we define who the agent will send traps to.

# trapsink: A SNMPv1 trap receiver
#   arguments: host [community] [portnum]

trapsink Caché_database_server_name_or_ip_address public

###############################################################################
# Access Control
###############################################################################

# As shipped, the snmpd daemon will only respond to queries on the
# system mib group until this file is replaced or modified for
# security purposes. Examples are shown below about how to increase the
# level of access.
#
# By far, the most common question I get about the agent is "why won't
# it work?", when really it should be "how do I configure the agent to
# allow me to access it?"
#
# By default, the agent responds to the "public" community for read
# only access, if run out of the box without any configuration file in
# place. The following examples show you other ways of configuring
# the agent so that you can change the community names, and give
# yourself write access to the mib tree as well.
#
# For more information, read the FAQ as well as the snmpd.conf(5)
# manual page.

####
# First, map the community name "public" into a "security name"
#       sec.name        source          community
com2sec notConfigUser   default         public

####
# Second, map the security name into a group name:
#       groupName       securityModel   securityName
group   notConfigGroup  v1              notConfigUser
group   notConfigGroup  v2c             notConfigUser

####
# Third, create a view for us to let the group have rights to:
# Make at least snmpwalk -v 1 localhost -c public system fast again.
#       name            incl/excl       subtree         mask(optional)
# access to 'internet' subtree
view    systemview      included        .1.3.6.1
# access to the Caché and Ensemble MIBs
view    systemview      included        .1.3.6.1.4.1.16563.1
view    systemview      included        .1.3.6.1.4.1.16563.2

####
# Finally, grant the group read-only access to the systemview view.
#       group           context sec.model sec.level prefix read       write  notif
access  notConfigGroup  ""      any       noauth    exact  systemview none   none
```

After editing the `/etc/snmp/snmpd.conf` file, restart the snmpd daemon:

```
service snmpd restart
```

Check the snmpd status; note that AgentX has been started - see the status line: __Turning on AgentX master support.__

```
sh-4.2# service snmpd restart
Redirecting to /bin/systemctl restart snmpd.service
sh-4.2# service snmpd status
Redirecting to /bin/systemctl status snmpd.service
● snmpd.service - Simple Network Management Protocol (SNMP) Daemon.
   Loaded: loaded (/usr/lib/systemd/system/snmpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-04-27 00:31:36 EDT; 7s ago
 Main PID: 27820 (snmpd)
   CGroup: /system.slice/snmpd.service
           └─27820 /usr/sbin/snmpd -LS0-6d -f

Apr 27 00:31:36 vsan-tc-db2.iscinternal.com systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon....
Apr 27 00:31:36 vsan-tc-db2.iscinternal.com snmpd[27820]: Turning on AgentX master support.
Apr 27 00:31:36 vsan-tc-db2.iscinternal.com snmpd[27820]: NET-SNMP version 5.7.2
Apr 27 00:31:36 vsan-tc-db2.iscinternal.com systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..
sh-4.2#
```
After restarting snmpd you must restart the Caché SNMP subagent using the `^SNMP` routine:

```
%SYS>do stop^SNMP()
%SYS>do start^SNMP(705,20)
```

The operating system snmpd daemon and the Caché subagent should now be running and accessible.

## 4. Testing MIB access

MIB access can be checked from the command line with the following commands. `snmpget` returns a single value:

```
snmpget -mAll -v 2c -c public vsan-tc-db2 .1.3.6.1.4.1.16563.1.1.1.1.5.5.72.50.48.49.53
SNMPv2-SMI::enterprises.16563.1.1.1.1.5.5.72.50.48.49.53 = STRING: "Cache for UNIX (Red Hat Enterprise Linux for x86-64) 2015.2.1 (Build 705U) Mon Aug 31 2015 16:53:38 EDT"
```

And `snmpwalk` will 'walk' the MIB tree or branch:

```
snmpwalk -m ALL -v 2c -c public vsan-tc-db2 .1.3.6.1.4.1.16563.1.1.1.1
SNMPv2-SMI::enterprises.16563.1.1.1.1.2.5.72.50.48.49.53 = STRING: "H2015"
SNMPv2-SMI::enterprises.16563.1.1.1.1.3.5.72.50.48.49.53 = STRING: "/db/trak/hs2015/cache.cpf"
SNMPv2-SMI::enterprises.16563.1.1.1.1.4.5.72.50.48.49.53 = STRING: "/db/trak/hs2015/mgr/"
etc etc
```

There are also several Windows and \*nix clients available for viewing system data. I use the free iReasoning MIB Browser. You will have to load the ISC-CACHE.MIB file into the client so it knows the structure of the MIB. The following image shows the iReasoning MIB Browser on OSX.

![free iReasoning MIB Browser](https://community.intersystems.com/sites/default/files/inline/images/mib_browser_0.png)

## Including in Monitoring tools

This is where there can be wide differences in implementation. The choice of monitoring or analytics tool I will leave up to you. _Please leave comments to the post detailing the tools and value you get from them for monitoring and managing your systems. This will be a big help for other community members._

Below is a screenshot from the popular _PRTG_ Network Monitor showing Caché metrics. The steps to include Caché metrics in PRTG are similar to other tools.

![PRTG Monitoring tool](https://community.intersystems.com/sites/default/files/inline/images/prtg_0.png)

### Example workflow - adding the Caché MIB to a monitoring tool.

#### Step 1.

Make sure you can connect to the operating system MIBs. A tip is to do your troubleshooting against the operating system, not Caché. It is most likely that monitoring tools already know about and are preconfigured for common operating system MIBs, so help from vendors or other users may be easier to get. Depending on the monitoring tool you choose, you may have to add an SNMP 'module' or 'application'; these are generally free or open source. I found the vendor instructions pretty straightforward for this step. Once you are monitoring the operating system metrics, it's time to add Caché.

#### Step 2.

Import the `ISC-CACHE.mib` and `ISC-ENSEMBLE.mib` into the tool so that it knows the MIB structure. The steps here will vary; for example, PRTG has a 'MIB Importer' utility. The basic steps are to open the text file `ISC-CACHE.mib` in the tool and import it to the tool's internal format. For example, Splunk uses a Python format, etc.

_Note:_ I found the PRTG tool timed out if I tried to add a sensor with all the Caché MIB branches. I assume it was walking the whole tree and timed out for some metrics like process lists. I did not spend time troubleshooting this; instead, I worked around the problem by only importing the performance branch (cachePerfTab) from the `ISC-CACHE.mib`. Once imported/converted, the MIB can be reused to collect data from other servers in your network.
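If a monitoring tool connects but shows no Caché data, it is worth confirming that the subagent is running and turning on the debug logging covered in the Q&A at the end of this article. A minimal Caché terminal sketch, assuming the same AgentX port and timeout used in the restart example above (run it in the %SYS namespace):

```
%SYS>do stop^SNMP()                        ; stop the Caché subagent
%SYS>set ^SYS("MONITOR","SNMP","DEBUG")=1  ; log each message sent/received to install-dir/mgr/SNMP.log
%SYS>do start^SNMP(705,20)                 ; restart the subagent: AgentX TCP port 705, 20-second timeout
```

Remember to set the debug node back to 0 once the problem is found, as the log grows with every request.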
The PRTG graphic above uses a Sensor Factory sensor to combine multiple sensors into one chart.

# Summary

There are many monitoring, alerting, and some very smart analytics tools available - some free, others licensed with support - with many and varied functionality. You must monitor your system and understand what activity is normal, and what activity falls outside normal and must be investigated. SNMP is a simple way to expose Caché and Ensemble metrics.

I was asked a couple of questions offline, so the following is to answer them:

_Q1. In your article, why do you say it is necessary to change information strings in snmpd.conf (i.e. syslocation/syscontact)?_

A1. What I mean is that you should change syslocation and syscontact to reflect your site, but leaving them as the defaults in the sample will not stop SNMP working using this sample `snmpd.conf` file.

_Q2. You also mention basic errors you made in configuring it; which were these? It might be helpful to mention the debugging facilities for SNMP `(^SYS("MONITOR","SNMP","DEBUG"))` as well?_

A2. One problem was misconfiguring the security settings in `snmpd.conf`. Following the example above will get you there. I also spun my wheels with what turned out to be a spelling (or case) error on the line `agentXSocket tcp:localhost:705`. In the end I figured out the problem was to do with AgentX not starting by looking at the logs written to the `install-dir/mgr/SNMP.log` file. Caché logs any problems encountered while establishing a connection or answering requests in the `SNMP.log`. You should also check `cconsole.log` and the logs for snmpd in the OS. On Windows, iscsnmp.dll logs any errors it encounters in %System%\System32\snmpdbg.log (on a 64-bit Windows system, this file is in the SysWOW64 subdirectory). As pointed out in Fabian's question, more information can be logged to the SNMP.log if you set ^SYS("MONITOR","SNMP","DEBUG")=1 in the %SYS namespace and restart the ^SNMP Caché subagent process. This logs details about each message received and sent. Thanks for the questions. MO

BTW, although I have not tried implementing them, due to the high number of healthcare applications using Caché I thought it may be of interest that Paessler PRTG has developed new sensors for monitoring medical equipment that communicate via HL7 and DICOM. https://www.paessler.com/blog/2016/04/13/all-about-prtg/ehealth-sensors-know-what-is-going-on-in-your-medical-it

I've installed PRTG on a Windows 10 system... enabled SNMP services... I've configured the SNMP Service with community "public" and destination "127.0.0.1"... PRTG is able to see and graph the system statistics... OK. Then I imported ISC-Cache.mib with the Paessler MIB Importer, OK, and "Save for PRTG Network Monitor"... everything seems fine, but then, where is it supposed to be? When I go to PRTG NM I cannot see anything related to Caché... no clue about the library that I supposedly just imported... The S of SNMP means Simple... so I'm pretty sure I'm missing something really basic here, but I don't know how to go on.

Found it... I just had to go to add sensors to a group, type Windows and SNMP library... there it is, the oidlib I had just imported!!

When using containers (IRIS), how do you use the host to proxy the SNMP calls? Is there a way to have the container use the host's IP (and port 705) to do its reporting?

Hi Jay, for the docker run command look at the --net=host flag. Also, SAM might be of interest to you. See the recent announcement and a user case. I hope it's helpful.

Do we have a similar topic for Caché on Windows? Thanks!
Announcement
Janine Perkins · Apr 27, 2016

Featured InterSystems Online Course: DeepSee Analyzer Basics

Learn the fundamentals of how and when to use the DeepSee Analyzer.

DeepSee Analyzer Basics

This course describes typical use cases for the Analyzer and shows how to access the Analyzer from the Management Portal. You will learn how to use drag and drop to create simple queries, and then refine these queries by sorting, applying filters, and drilling down. Learn More.
Article
Gevorg Arutiunian · Nov 21, 2016

Caché Localization Manager or i18N in InterSystems Caché

Caché Localization Manager (CLM) is a tool for localization/internationalization/adding multi-language support to a project based on InterSystems Caché.

Imagine that you have a finished project where all the content is in Russian, and you need to add an English localization to it. You wrap all your strings into resources, translate them into English, and call the necessary resource for Russian or English when needed. Nothing tricky, if you think about it. But what if there are lots of strings and there are mistakes in the Russian (or English)? What if you need to localize into more than one language - say, ten? This is exactly the kind of project where you should use CLM. It will help you localize the entire content of your project into the necessary language and retain the possibility to correct entries.

CLM allows you to do the following:

- Add a new localization.
- Delete a localization.
- Export a localization.
- Import a localization.
- View two tables at a time.
- Conveniently switch between namespaces.
- Check spelling.

Let's "look under the hood" now. Caché has a standard approach to implementing i18n using the $$$TEXT macro:

$$$TEXT("Text", "Domain", "Language")

where:

- Text - the text to be used for localization in the future.
- Domain - the module in your application.
- Language - the language of "Text".

If you use $$$TEXT in COS code, data is added to the ^CacheMsg global during class compilation. And this is the global that CLM works with. In ^CacheMsg, everything is identical to $$$TEXT; you just add an "ID", which is the text hash (a short class sketch is given below):

^CacheMsg("Domain", "Language", "ID") = "Text"

If you are using CSP, then the use of $$$TEXT in CSP will look as follows:

<csp:text id="14218931" domain="HOLEFOODS">Date Of Sale</csp:text>

Installation

First of all, you need to download the Installer class from GitHub and import it into any convenient namespace in Caché. I will use the USER namespace. Once done, open the terminal and switch to the USER namespace. To start the installation, enter the following command in the terminal:

USER> do ##class(CLM.Installer).setup()

Installation process: you can make sure the application is installed correctly by following this link: http://localhost:57772/csp/clm/index.csp (localhost:57772 - the path to your server).

Settings

CLM uses Yandex to perform the translation. You will need to obtain a Yandex API key to let it work. It's free, but needs registration.

Let's now deal with spellchecking. CLM uses Caché Native Access (CNA) for the SpellCheck implementation. CNA was written for calling external functions from dynamic libraries, such as .dll or .so. SpellCheck works with the Hunspell library and needs two files for spellchecking. The first file is a dictionary containing words; the second one contains affixes and defines the meaning of special marks (flags) in the dictionary.

How words are checked: all words are packed by CLM and sent to Hunspell via CNA, where CNA converts them into a form that Hunspell understands. Hunspell checks every word, finds the root form and all possible variations, and returns the result.

But where do we get all these dictionaries and libraries?

- CNA: use an available release or build it on your own.
- Hunspell: same thing, use an available version or download the sources and make your own build.
- We will also need a standard C language library. In Windows, it is located here: C:\Windows\System32\msvcrt.dll.
- Dictionaries can be downloaded here.
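Before moving on, here is a minimal, hypothetical sketch of the $$$TEXT mechanism described above (the class and method names are invented for illustration; the domain and text are borrowed from the CSP example). Note that the macro lives in the %occMessages include file and that macro names are case-sensitive, so in class code it is conventionally written $$$Text:

```
Include %occMessages

Class Demo.I18N
{

/// Compiling this class adds the string to ^CacheMsg("HOLEFOODS", "en", <id>),
/// which is exactly the global that CLM reads and edits.
ClassMethod SaleDateCaption() As %String
{
    quit $$$Text("Date Of Sale", "HOLEFOODS", "en")
}

}
```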
At the moment, 51 languages are supported: Albanian, Arabian, Armenian, Azeri, Basque, Belarusian, Bosnian, Brazil, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Finnish, French, Georgian, German, Greek, Hebrew, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kazan, Korean, Latin, Latvian, Lithuanian, Macedonian, Malay, Maltese, Norwegian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, and Vietnamese.

The entire configuration process boils down to entering the paths to everything you got before. Open CLM in a browser. There is a "Set Paths" button in the upper right corner. Click it and you'll see the next window. Use it to enter the required paths. Here's what I got:

Demonstration of a registration form localization: link to demonstration. Password and login: demo.

Your critique, comments, and suggestions are very welcome. The source code and instructions are available on GitHub under an MIT license.

An interesting article, but I'm puzzled by your use of the term "I11n". I'm familiar with "I18n" as an abbreviation for Internationalization (because there are 18 letters between the "I" and the "n"). Likewise, I understand "L10n" as standing for Localization. Some Googling suggests that "I11n" is short for Introspection. Or have I missed something?

Hi, John! i11n here stands for "internationalization".

i18n is internationalization, but i11n is something else.

True. Title is fixed.
Announcement
Olga Zavrazhnova · Jan 9, 2020

InterSystems Open Exchange Survey 2019 - 7 questions that matter!

Hi Community! Thank you for being a part of the InterSystems Open Exchange! We want to know what you like about Open Exchange and how we can make it better in 2020. Could you please go through this short survey, which will let us know what you think? ➡️ Open Exchange Survey 2019 (2 minutes, 7 questions) Your answers are very important to us! Sincerely, Your InterSystems Developer Community Team
Announcement
Anastasia Dyubaylo · Jan 24, 2020

New Video: What Developers Love About InterSystems IRIS

Hi Community, A new video, recorded by @Benjamin.DeBoe, is available on InterSystems Developers YouTube: ⏯ What Developers Love About InterSystems IRIS InterSystems Product Manager @Benjamin.DeBoe talks about what developers love about the InterSystems IRIS Data Platform: the data and code live "next to one another", making your code very efficient. Try InterSystems IRIS: https://www.intersystems.com/try Enjoy watching the video! 👍🏼
Announcement
Jeff Fried · Jan 27, 2020

InterSystems IRIS and IRIS for Health 2020.1 preview is published

Preview releases are now available for the 2020.1 version of InterSystems IRIS and IRIS for Health! Kits and container images are available via the WRC's preview download site. The build number for these releases is 2020.1.0.199.0. (Note: the first release was build 197, updated to 199 on 2/12/20.)

InterSystems IRIS Data Platform 2020.1 has many new capabilities, including:

- Kernel performance enhancements, including reduced contention for blocks and cache lines
- Universal Query Cache - every query (including embedded & class ones) now gets saved as a cached query
- Universal Shard Queue Manager - for scale-out of query load in sharded configurations
- Selective Cube Build - to quickly incorporate new dimensions or measures
- Security improvements, including hashed password configuration
- Improved TSQL support, including JDBC support
- Dynamic Gateway performance enhancements
- Spark connector update
- MQTT support in ObjectScript

(NOTE: this preview build does not include the TLS 1.3 and OpenLDAP updates, which are planned for General Availability.)

InterSystems IRIS for Health 2020.1 includes all of the enhancements of InterSystems IRIS. In addition, this release includes:

- In-place conversion to IRIS for Health
- HL7 Productivity Toolkit, including migration tooling and Cloverleaf conversion
- X12 enhancements
- FHIR R4 base standard support

As this is an EM (Extended Maintenance) release, customers may want to know the differences between 2020.1 and 2019.1. These are listed in the release notes: InterSystems IRIS 2020.1 release notes, IRIS for Health 2020.1 release notes. Draft documentation can be found here: InterSystems IRIS 2020.1 documentation, IRIS for Health 2020.1 documentation. The platforms on which InterSystems IRIS and IRIS for Health 2020.1 are supported for development and production are detailed in the Supported Platforms document.

Jeffrey, thank you for the info. Do you already know that the Supported Platforms document link is broken? (404)

Hi Jeff! What are the Docker image tags for Community Editions?

I've just uploaded the Community Editions to the Docker Store (2/13: updated with the new preview build): docker pull store/intersystems/iris-community:2020.1.0.199.0 docker pull store/intersystems/irishealth-community:2020.1.0.199.0

Thanks, Steve! Will native install kits be available for the Community Editions as well?

Yes, full kit versions of the 2020.1 Community Edition Preview are available through the WRC download site as well.

I'm getting this error when I attempt to access the link ...

Jeffery, if you don't use that link and first log into the WRC application: https://wrc.intersystems.com/wrc/enduserhome.csp can you then go to: https://wrc.intersystems.com/wrc/coDistribution2.csp and select Preview? Some customers have had problems with the distribution pages because their site restricts access to some JS code we get from a third party.

I get the same result using your suggested method, Brendan. I'm not technically a customer; I work for a Services Partner of ISC. I am a DC Moderator though (if that carries any weight), so it would be nice to keep abreast of the new stuff.

OK, I needed to do one more click: your org does not have a support contract, so you can't have access to these pages, sorry. Maybe Learning Services could help you out, but I can't grant you access to the kits on the WRC.

Hello, I took this for a spin and noticed that the new Prometheus metrics are not available on it like they were in 2019.4 (ie: https://community.intersystems.com/post/monitoring-intersystems-iris-using-built-rest-api). Am I missing something, or is the metrics API still under consideration to make it into this build?

The correct link is https://docs.intersystems.com/iris20201/csp/docbook/platforms/index.html. I fixed the typo in the post. Thanks for pointing that out!

Seems to be there for me...

Hello Jeffrey, We're currently working on IRIS for Health 2020.1 build 197, and we were wondering what fixes or additions went into the latest build 199. InterSystems used to publish all fixes with each FT build version; is there such a list? Thank you, Yuriy

The Preview has been updated with build 2020.1.0.199.0. This includes a variety of changes, primarily corrections for issues found under rare conditions in install, upgrade, and certain distributed configurations. None of these changes impacts any published API. Thank you for working with the preview and for your feedback!

Hi Yuriy - Thanks for pointing this out. We did not prepare a list for this, but I did make a comment on this thread, including verifying that none of these changes impacts any published API. If there is a change resolving an issue you reported through the WRC, you'll see that this is resolved via the normal process. We will be publishing detailed change notes with the GA release. -Jeff
Article
Mark Bolinsky · Mar 6, 2020

InterSystems IRIS for Health 2020.1 HL7 Benchmark

Introduction

InterSystems has recently completed a performance and scalability benchmark of IRIS for Health 2020.1, focusing on HL7 version 2 interoperability. This article describes the observed throughput for various workloads, and also provides general configuration and sizing guidelines for systems where IRIS for Health is used as an interoperability engine for HL7v2 messaging.

The benchmark simulates workloads that have been designed to closely match live environments. The details of the simulation are described in the Workload Description and Methodology section. The tested workloads comprised HL7v2 Patient Administration (ADT) and Observation Result (ORU) payloads and included transformations and re-routing.

The 2020.1 version of IRIS for Health has demonstrated a sustained throughput of over 2.3 billion (total inbound and outbound) messages per day with a commodity server using the 2nd Generation Intel® Xeon® Scalable Processors and Intel® Optane™ SSD DC P4800X Series SSD storage. These results have more than doubled the scalability from the prior Ensemble 2017.1 HL7v2 throughput benchmarking.

Throughout these tests, IRIS for Health was configured to preserve first-in/first-out (FIFO) ordering, and to fully persist messages and queues for each inbound and outbound message. By persisting the queues and messages, IRIS for Health provides data protection in the event of a system crash, and full search and resend capabilities for historic messages.

Further, configuration guidelines are discussed in the sections below, which will assist you in choosing an appropriate configuration and deployment to adequately meet your workload's performance and scalability requirements. The results demonstrate that IRIS for Health is capable of satisfying extreme messaging throughput on commodity hardware, in most cases allowing a single small server to provide HL7 interoperability for an entire organization.

Overview of Results

Three workloads were used to represent different aspects of HL7 interoperability activity:

- T1 workload: uses simple pass-through of HL7 messages, with one outbound message for each inbound message. The messages were passed directly from the Ensemble Business Service to the Ensemble Business Operation, without a routing engine. No routing rules were used and no transformations were executed. One HL7 message instance was created in the database per inbound message.
- T2 workload: uses a routing engine to modify an average of 4 segments of the inbound message and route it to a single outbound interface (1-to-1 with a transform). For each inbound message, one data transformation was executed and two HL7 message objects were created in the database.
- T4 workload: uses a routing engine to route separately modified messages to each of four outbound interfaces. On average, 4 segments of the inbound message were modified in each transformation (1 inbound to 4 outbound with 4 transforms). For each inbound message, four data transformations were executed, four messages were sent outbound, and five HL7 message objects were created in the database.

The three workloads were run on a physical 48-core system with two Intel® Scalable Gold 6252 processors and two 750GB Intel® Optane™ SSD DC P4800X drives, running Red Hat Enterprise Linux 8. The data is presented as the number of messages per second (and per hour) inbound, the number per second (and per hour) outbound, as well as the total messages (inbound plus outbound) in a 10-hour day.
Additionally, CPU utilization is presented as a measure of available system resources at a given level of throughput.

Scalability Results

Table-1: Summary of throughput of the four workloads on this tested hardware configuration:

* Combined workload with a 25% T1 / 25% T2 / 50% T4 workload mix

Workload Description and Methodology

The tested workloads included HL7v2 Patient Administration (ADT) and Observation Result (ORU) messages, which had an average size of 1.2KB and an average of 14 segments. Roughly 4 segments were modified by the transformations (for the T2 and T4 workloads). The tests represent 48 to 128 inbound and 48 to 128 outbound interfaces receiving and sending messages over TCP/IP. In the T1 workload, four separate namespaces each with 16 interfaces were used; the T2 workload used three namespaces each with 16 interfaces; the T4 workload used four namespaces each with 32 interfaces; and the final "mixed" workload used three namespaces, with 16 interfaces for the T1 workload, 16 for the T2 workload, and 32 for the T4 workload.

The scalability was measured by gradually increasing traffic on each interface to find the highest throughput with acceptable performance criteria. For the performance to be acceptable, the messages must be processed at a sustained rate, with no queuing and no measurable delays in delivery of messages, and the average CPU usage must remain below 80%.

Previous testing has demonstrated that the type of HL7 message used is not significant to the performance or scalability of Ensemble; the significant factors are the number of inbound messages, the size of inbound and outbound messages, the number of new messages created in the routing engine, and the number of segments modified. Additionally, previous testing has shown that processing individual fields of an HL7 message in a data transformation is not usually significant to performance. The transformations in these tests used fairly straightforward assignments to create new messages. Note that complex processing (such as use of extensive SQL queries in a data transformation) may cause results to vary. Previous testing has also verified that rules processing is not usually significant. The routing rule sets used in these tests averaged 32 rules, with all rules being simple. Note that extremely large or complex rule sets may cause results to vary.

Hardware Server Configuration

The tests utilized a server with 2nd Generation Intel® Scalable Gold 6252 "Cascade Lake" processors providing 48 cores @ 2.1GHz on a 2-socket system (24 cores per socket), with 192 GB DDR4-2933 DRAM and a 10Gb Ethernet network interface. The Red Hat Enterprise Linux Server 8 operating system was used for this test with InterSystems IRIS for Health 2020.1.

Disk Configuration

Messages passing through IRIS for Health are fully persisted to disk. In this test, two 750GB Intel® Optane™ SSD DC P4800X drives internal to the system were used, with the databases on one drive and the journals on the other. In addition, to ensure a real-world comparison, synchronous commit was enabled on the journals to force data durability.

For the T4 workload described previously in this document, each inbound HL7 message generates roughly 50KB of data, which can be broken down as described in Table 2. Transaction journals are typically kept online for less time than message data or logs, and this should be taken into account when calculating the total disk space required.
Table 2: Disk Requirement per inbound HL7 T4 Message

| Contributor | Data Requirement |
| --- | --- |
| Segment Data | 4.5 KB |
| HL7 Message Object | 2 KB |
| Message Header | 1.0 KB |
| Routing Rule Log | 0.5 KB |
| Transaction Journals | 42 KB |
| Total | 50 KB |

Recall that the T4 workload used a routing engine to route separately modified messages to each of four outbound interfaces. On average, 4 segments of the inbound message were modified in each transformation (1 inbound to 4 outbound with 4 transforms). For each inbound message, four data transformations were executed, four messages were sent outbound, and five HL7 message objects were created in the database.

When configuring systems for production utilization, net requirements should be calculated by considering the daily inbound volumes as well as the purging schedule for HL7 messages and the retention policy for journal files. For example, at roughly 50 KB per inbound T4 message, 10 million inbound messages per day consume about 500 GB per day across the database and journal volumes before purging. Additionally, appropriate journal file space should be configured on the system so as to prevent the journal disk volumes from filling up. The journal files should reside on physically separate disks from the database files, for both performance and reliability considerations.

Conclusion

The InterSystems IRIS for Health HL7v2 message throughput demonstrated in these tests illustrates massive throughput capability with a modest 2-socket commodity server configuration, enough to support the most demanding message workloads of any organization. Additionally, InterSystems is committed to constantly improving performance and scalability from version to version, along with taking advantage of the latest server and cloud technologies.

The following graph provides an overview and comparison of the increase in throughput from the previous Ensemble 2015.1 and Ensemble 2017.1 benchmarks with the Intel® E5-2600 v3 (Haswell) processors and the Ensemble 2017.1 benchmark with 1st Generation Intel® Scalable Platinum Series (Skylake) processors, respectively, to the latest results with the 2nd Generation Intel® Scalable Gold Series (Cascade Lake) processors running IRIS for Health 2020.1.

Graph-1: Message throughput (in millions) per 10-hour day on a single server

InterSystems IRIS for Health continues to raise the bar on interoperability throughput from version to version, along with offering flexibility in connectivity capabilities. As the above graph shows, the message throughput has increased significantly: in the case of T2 workloads it has doubled since 2017, and compared to 2015 it has more than tripled in the same 10-hour window, sustaining over 2.3 billion total messages per 24 hours. Another key indicator of the advancement of IRIS for Health is the throughput improvement in the more complex T2 and T4 workloads, which incorporate transformations and routing rules, as opposed to the pure pass-through operation of the T1 workload. InterSystems is available to discuss solutions for your organization's interoperability needs.

Hi Mark! These are impressive results! Do you have the link to the previous Ensemble 2017.1 HL7v2 throughput benchmarking results?

They used to be available on our website, but have since been removed since the results were from 3 years ago. The summary results from 2015 and 2017 have been included in Graph-1 above in this new report for comparison. Thanks.

Hi Mark Bolinsky; I am a student, and I need the HL7 benchmark to test the process that I implemented. Please, can you help me get it?
Announcement
Anastasia Dyubaylo · Mar 30, 2020

MIT COVID19 Challenge Virtual Hackathon: Join with InterSystems!

Hey Developers! Want to beat the COVID-19 pandemic with InterSystems and MIT? Please take part in the MIT COVID19 Challenge! It's a 48-hour virtual hackathon with the goal of developing solutions that address the most pressing technical, social, and financial issues caused by the COVID-19 outbreak. And it's your chance to build a solution on InterSystems IRIS for the COVID-19 crisis! Participants will form teams on Friday, April 3rd to generate solutions, including proofs of concept and a preliminary vision for execution. On Sunday, April 5th, teams will reconvene to present their solutions. InterSystems, as a partner of the MIT COVID-19 Challenge virtual hackathon, will provide templates, mentors, and hosting resources to help teams build and deploy their solutions with InterSystems IRIS for Health. ➡️ Apply to participate now! You'll be able to form teams, and you'll get our mentors' support during the hackathon, along with a set of templates to build on IRIS for Health effectively! Stay tuned!
Announcement
Anastasia Dyubaylo · Apr 6, 2020

The 2nd Programming Contest: InterSystems IRIS with REST API

Hi Developers! Want to participate again in the competition of creating open-source solutions using the InterSystems IRIS Data Platform? Then we're pleased to announce the second InterSystems IRIS Online Programming Contest! The topic for this contest is InterSystems IRIS with REST API. The contest will again last three weeks: April 13 - May 3, 2020. Also, please join the InterSystems Contests Discord Channel to chat about the contest and technology.

Prizes

We have 2 winning nominations, and there will be more money prizes than in the previous contest!

1. Experts Nomination - winners will be determined by a specially selected jury: 🥇 1st place - $2,000 🥈 2nd place - $1,000 🥉 3rd place - $500

2. Community Nomination - the application that receives the most votes in total: 🥇 1st place - $1,000 🥈 2nd place - $500

If several participants score the same number of votes, they are all considered winners, and the prize money is shared among them. Also, we will provide winners with high-level badges on Global Masters.

Who can participate? Any Developer Community member from any country can participate in the contest, except for InterSystems employees. Create an account!

Contest Period

April 13-26, 2020: Two weeks to upload your applications to Open Exchange (you can also edit your projects during this period). April 27 - May 3, 2020: One week to vote. All winners will be announced on May 4th, 2020.

The Topic

➡️ InterSystems IRIS REST API and web socket applications ⬅️

We will choose the best application built using InterSystems IRIS Community Edition (IRIS CE) or InterSystems IRIS for Health Community Edition (IRIS CE4H) as a backend exposing a REST API or a web sockets interface. And you'll get technology bonuses if you introduce special technology implementations in your application. Learn more about the topic and bonuses here.

Helpful Resources

1. How to submit an application to a contest: Quick example & Video
2. Online Course: Building REST API in InterSystems IRIS
3. Video: How to make the REST API in InterSystems IRIS with Open API spec
4. Article: How to test modules before publishing to Open Exchange

Judgment

Please find the Judgment and Voting Rules for the Contest here. So! Ready. Set. Code. Stay tuned, the post will be updated! ❗️ Please check out the Official Contest Terms here. ❗️

Hey Community! We launched a special hashtag for our programming marathon! Please welcome: #IRIScontest. This hashtag is already on InterSystems Developers' social networks: Twitter, Facebook, Telegram and LinkedIn. Let the world know about the InterSystems IRIS Contest! Please share! 😉

Hi Developers! Only 3 days left before the start of the InterSystems IRIS Programming Contest! You will have 2 weeks (April 13-26) to upload your solutions to Open Exchange and compete for the main prizes. So join our competition and win! 😎

These competitions appear to demand the use of Cache ObjectScript rather than any other language. Is this correct? And if so, why can't other languages be used instead?

Hi Rob! The official name of the language is InterSystems ObjectScript, or just ObjectScript - it's not only for Caché any more ;) "And if so, why can't other languages be used instead?" Every contest has a topic. The first one was about an ObjectScript CLI. The current one is about the InterSystems IRIS REST API or web sockets interface. There is no ObjectScript requirement in this particular contest directly, but it's obviously involved.
Because even if you generate the REST API using the spec-first approach, you get endpoints, but you need ObjectScript to write the logic (a minimal dispatch-class sketch is given at the end of this thread). See examples in the article from @Eduard.Lebedyuk or another from @David.Reche. And next month we are having a Native API contest, where applications using Python, Node.js, .NET, and Java are very welcome.

I updated my reply. Yes, ObjectScript is needed in this contest to implement the REST API business logic.

Introduced the 3rd technology bonus: usage of a spec-first approach! You get the bonus if you generate the REST API in InterSystems IRIS from a Swagger spec using Open API services. Learn more in the documentation. See examples: one, two, three. And this is also a very nice example of spec-first by @Guillaume.Rongier7183!

Hi Community! We start tomorrow! Are you ready to participate in our exciting contest? 🚀 Provided a few details on the requirements, possible application types, and bonuses.

How to apply for the Programming Contest: Log in to Open Exchange and open your applications section. Open the application you want to enter into the contest and click Apply for Contest. Make sure the status is 'Published'. The application will go for review, and if it fits the topic of the contest, it will be listed on the Contest Board.

Hi Rob! We provided more details for the contest - and indeed, you can avoid using ObjectScript in the contest! You can use an already existing API! E.g., we are extremely in need of a frontend for Forms 2.0; something like Vue or React would be great! Or you can use any out-of-the-box IRIS REST API.

So, provided InterSystems IRIS is used as the back-end database for the data persistence of the APIs, the competition allows the use of any other technologies in front of it? E.g. QEWD/Node.js + browser UI?

The competition allows using a REST API on the InterSystems IRIS side. If QEWD/Node.js + browser UI does this, then go ahead.

That depends on your definition of "REST API on the InterSystems IRIS side": that InterSystems IRIS provides the HTTP interface? And/or that the code that does the work of the API is within IRIS and therefore ObjectScript? As far as QEWD is concerned, InterSystems IRIS is simply a persistent JSON store with no other role (though you can still invoke ObjectScript methods and access classes if you want), so a REST API is implemented in JavaScript and handled by Node.js/QEWD.

Sure, you can use InterSystems IRIS in any way that works for your application, but in this contest we want to focus on the REST API on the InterSystems IRIS side. That means you either use any of the internal APIs (Atelier, UIMA, iKnow, DocDB, MGMNT, BI, Monitoring), use any API installed from Open Exchange, or build your own with Swagger and/or ObjectScript.

In that case I'll wait for a different competition :-)

The registration period has already begun! Follow our Contest Board and stay tuned. Waiting for your cool projects!

Hi Developers! We are inviting you to join the InterSystems Contests Discord Channel to quickly discuss all the questions related to technology requirements, bonuses, REST API, spec-first, etc. It has several rooms: REST Contest, Technology Bonuses, Voting. Join! And let's chat about the contest and how to develop a solution and win!

Hey Developers, The first application is already on the Contest Board! @Lorenzo.Scalese with the JSON-Filter project is in the game! And who's next? 😉

Hi Developers, Wonder if you could vote for the solution you like? You can! If you contribute to DC!
Everyone contributing articles, questions, and replies is very welcome to vote in the Community Nomination. The application that receives the most votes in total will win this nomination. Prizes are waiting for you, stay tuned! 🚀

And here is the video which shows how to apply for the contest: Enjoy!

Hey Developers, One more application is already in the game: the isc-apptools-admin project by @MikhailenkoSergey. Full speed ahead! 🔥 Please check out our Contest Board!

Wrote how to test modules before publishing to Open Exchange.

Hey Community, Our Contest Board has been updated again. One more application is participating in the InterSystems IRIS contest! Please welcome iris-history-monitor by @Henrique.GonçalvesDias! And who's next? 😉

Added a video: How to make the REST API in InterSystems IRIS and expose the Open API (Swagger) spec for any InterSystems IRIS REST API. This video shows how to create your own basic CRUD API for InterSystems IRIS using the GitHub template and expose it with an Open API spec. ⏯ Creating CRUD REST API for InterSystems IRIS in 5 minutes. Enjoy!

To those who want to earn the ZPM bonus: here is also the video that describes what ZPM (ObjectScript Package Manager) is, and how to install and use it.

Hi all, You're very welcome to join the InterSystems Developers Discord Channel to discuss all topics and questions related to the InterSystems IRIS Programming Contests (technology requirements, bonuses, REST API, spec-first, etc.). There are lively discussions with InterSystems developers! Join us! 😎

Yeah, +1 on the Contest Board! Check out the new project: REST for Tasks on my Status Report by @Oliver.Wilms 👍🏼

Developers! You have 5 days to submit your application for the InterSystems IRIS Online Contest! Don't hesitate to submit even if you didn't finish it - you'll be able to fix bugs and make improvements during the voting week too!

Want more about ZPM (ObjectScript Package Manager)? Enjoy watching the new video, specially recorded by @Evgeny.Shvarov for the second InterSystems IRIS contest: ⏯ How to Build, Test and Publish a ZPM Package with a REST Application for InterSystems IRIS. This video shows how to build a package with ObjectScript classes and a REST API application from scratch, how to test it, and how to publish it in a test registry and in the InterSystems Open Exchange registry. Stay tuned!

If you have your REST API application and just need to Dockerize it, take advantage of the InterSystems IRIS Docker Kit. This archive, added to any repo with InterSystems IRIS code, makes the repo run on InterSystems IRIS in a Docker container. And you don't need to start from scratch with a template.

Voting for the best application will begin soon! Only 4 days left before the end of registration for the InterSystems IRIS Programming Contest. Don't miss your chance to win! 🏆

Last day to submit your application! Compete with the magnificent seven already in the competition!

Last call! Registration for the InterSystems IRIS Contest ends today! Hurry up and upload your application 😉

The voting has started! Choose your best InterSystems IRIS application!
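For readers wondering what the ObjectScript side of such a REST entry looks like (as mentioned in the spec-first discussion earlier in this thread), here is a minimal, hypothetical dispatch class. The class name and route are invented for illustration, but `%CSP.REST` with a `UrlMap` XData block is the standard mechanism for hand-written REST APIs:

```
Class Demo.REST Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/hello" Method="GET" Call="Hello"/>
</Routes>
}

/// GET /hello returns a small JSON payload built with a dynamic object.
ClassMethod Hello() As %Status
{
    set obj = {"message": "Hello from InterSystems IRIS"}
    write obj.%ToJSON()
    quit $$$OK
}

}
```

Once the class is compiled and a web application is configured with it as the dispatch class, `GET <base-url>/hello` returns the JSON above; the spec-first approach generates a similar dispatch class from an Open API spec, leaving the method bodies for you to implement.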
Article
Evgeny Shvarov · Jan 10, 2020

How to Publish an Application on InterSystems Open Exchange: Part 2

Hi Developers! This post describes how you can publish your InterSystems application on Open Exchange.

What is an InterSystems application? It could be anything built with InterSystems data platforms, or anything that helps you work with them: InterSystems IRIS, InterSystems IRIS for Health, InterSystems HealthShare, InterSystems Ensemble, and InterSystems Caché. This includes tools, frameworks, adapters, examples, and business solutions.

Why publish on Open Exchange? InterSystems Open Exchange is the "App Central" for solutions on InterSystems technology. It's the first place where developers go to look for tools, frameworks, and examples for InterSystems IRIS. Open Exchange also brings added traffic to your solution, which can be converted into leads. We have a set of business development tools for published Open Exchange applications. This definitely makes your InterSystems application more noticeable.

Submitting an application. Suppose you have an open-source library published on GitHub which you want to publish on Open Exchange. For the purpose of a demo, I took the remarkable ObjectScript-Math project by @Peter.Steiwer, which I forked and renamed to objectscript-super-math.

First of all, you need to sign in. You can do this using your DC account or create a new one (which will be suitable for DC too). Once signed in, you'll see the Profile menu available, which has a My Apps section:

Click the New button to submit a new InterSystems application:

OEX has a nice integration with GitHub which allows filling in all the matching fields from the GitHub repo. E.g., let's try with objectscript-super-math:

It imports the name, short description, license, full description, and download URL. To have the application ready for publishing, you just need to pick the InterSystems data platform it works with and the related tags. Also, if there is an article on DC where you discuss the application, add the link to this article.

If you publish not an open-source GitHub app but, e.g., a solution, you need to accurately fill in all the mandatory fields - Name, Download URL, Short Description, License, Tags, Product - plus the optional fields which you think should be filled. You can also add related YouTube videos and two very important checkboxes. We'll talk about these later.

Once you are happy with the fields, you are welcome to Save the application. Before publishing the app, you can set up the app's icon and upload screenshots. After that, once you are ready, send it for approval:

Here you need to submit the version number and the release notes for the version. The release notes section supports Markdown, so you can paste a very informative, richly formatted description of the changes that come with the new release of your application.

This sends the application to Open Exchange management for approval. Once it's approved, the application is published on Open Exchange, you get a notification by email, and the release news is published on Open Exchange News. Next time we'll discuss profile setup, company setup, and some special cases. Stay tuned!

A tip for anyone trying to upload a different image for an already-published app: you need to unpublish it first to make the "Upload image" option appear on the menu. See this GH issue comment. A related tip is that when you send your unpublished-and-modified app for approval, you can override the default next version number that appears in the dialog, setting it to the same as the current version.
You'll still have to enter something in the Release Notes field, but when your changes are subsequently approved, your published app won't show any change in the Version History tab.

John! Thanks for the heads up! It's really not very convenient! We'll fix it.
Article
Evgeny Shvarov · Jan 13, 2020

How to Make Open Exchange Helpful For Your InterSystems Application

Hi Developers! In previous articles, we spoke about how to publish your application. But how do you make Open Exchange work its best for your application? What do you want for your application on Open Exchange? Attention (traffic) and downloads. Let's talk about how Open Exchange can deliver both.

How do I know how many downloads I have?

There is an Analytics tab that is visible to the author of an application. It shows how many times users clicked the Download button. We have a roadmap for this tab: we will soon add monthly download info and some other information, and you are very welcome to suggest what else you want to see here. E.g., we plan to make this download counter publicly available and to allow sorting by downloads.

Prepare your Application's page

Your application's page should contain enough description of what the application does to make users want to click the Download button. How to set up the description? If your application is the facade for a GitHub repo, Open Exchange can import README.md from your GitHub repo. E.g., like here: check the Open Exchange page and the related GitHub repo page. So, in this case, you just need to properly maintain the README.md file in the GitHub repo, and it will be automatically updated on Open Exchange. In case it is a non-GitHub application, you set up the description on your own in the Long Description field. This field supports Markdown, so you have rich formatting tools to describe the purpose, features, installation steps, and usage terms. Here is a good example of a Markdown description on Open Exchange.

Make Regular Releases

If your app is evolving, users notice. Add the feature and don't forget to make a release on Open Exchange. It takes 5 minutes, but it adds the release notes to News, it places your app on top of the main page, and it gets your subscribers notified about the new features you bring to your application.

Post the Related Developer Community article

Can you tell a lot about how to use your application and how people can benefit from its features? That's a great reason to write an article on the Developers Community and highlight the features of your application. Of course, you will add the hyperlinks to your application page inside the text, but you can also feature the application using DC tools. E.g., you can add the "Related Open Exchange Application" link: this adds a special button on top of your article and in the news feed, and a URL to the app at the bottom of the article. See the example. You also want to link the application on Open Exchange back to the article on DC: this makes the Discuss button active on the application page. An average DC article gets about 800 reads, so you can deliver additional traffic to your application through this channel.

Easy installation and setup

If your application's "How to install it" has 12 steps, that probably doesn't help increase the number of downloads. Ideally, installation should be one step. If your application is InterSystems ObjectScript, I recommend packaging and publishing it with the ObjectScript Package Manager. This really turns your installation into one command:

USER:zpm>install your-application

Example.

Use Your Global Masters' "Miles"

Another way to get more traffic to your application page is to use Global Masters points. You may know we have a Global Masters advocates hub where we give points to participants for their contribution to the InterSystems Community.
The ways of contributing include articles, answers, and questions on DC, and applications on Open Exchange (yes, you get GM points for submitting applications). So you probably have some GM points (sign in with your DC or Open Exchange credentials). If you give to the Community, the Community can give back: you can use GM points to get more attention for your Open Exchange application. How?

Google Ads promotion

You can spend some points to set up a $1,000 Google AdWords campaign which InterSystems will cover and manage. Learn more.

App of the week

We have "News blocks" on every page of the Developers Community, and this news block can feature your Open Exchange application. Learn more.

Your Application Screencast On InterSystems Developers YouTube

If you have a screencast on the features of your application, you can get more views and traffic by publishing it on the InterSystems Developers YouTube channel.

And the simplest and most effective approach: add the features which everybody likes, and you get the most attention and downloads )

Choose any of the recipes above, or take all of them, and boost the downloads of your InterSystems application on Open Exchange! If you have any ideas on how we could better help make your Open Exchange applications more noticed and helpful, submit suggestions and bug reports here. Submit your InterSystems apps and stay tuned!
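And for those packaging their app with ZPM as recommended above, the full build-test-publish cycle looks roughly like this in the IRIS terminal. This is a sketch: the folder path and module name are placeholders, and publish targets whichever registry your ZPM client is currently configured to use:

USER>zpm

zpm:USER>load /home/irisowner/dev/my-application/

zpm:USER>test my-application

zpm:USER>publish my-application

The load command picks up the module.xml in the folder, test runs the unit tests declared there, and publish pushes the package to the registry so others can install it with the single install command shown above.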