Announcement
Janine Perkins · Sep 13, 2016

Featured InterSystems Online Course: Building Custom Business Operations

Learn to design, build, implement, and test a new custom business operation in an Ensemble production by taking this online learning course. This course will teach you how to determine when and why to create a custom business operation, design and create a custom business operation for a production, and add a new business operation to a production and configure its settings. Learn More.
Announcement
Evgeny Shvarov · Mar 1, 2016

InterSystems Global Summit 2016 Free Registration Contest. And the winner is...

Hi Community! I'm pleased to announce that the winner of the Global Summit Free Registration Contest is... Dmitry Maslennikov!

The final leaderboard is:

To win the prize, Dmitry published 3 posts and 13 comments in two weeks. Thanks to your votes on his posts and comments, Dmitry gathered the maximum number of points. We award Dmitry a free registration promo code for InterSystems Global Summit 2016 and will cover the expenses of a 4-night stay at the Waldorf Astoria Arizona Biltmore. Welcome to InterSystems Global Summit 2016!

Thanks a lot. Congrats!

Congratulations! I am looking forward to meeting you at the Global Summit. Stefan

So first off, I think Dmitry deserved to win, but I do have a question. In this post, dated Feb 19th, I had 55 points. On the 26th I posted asking about it and my score was at 50. 4 days later my final score is still 50 (after a lot more posts). What's the deal? Edit: Just realized that posts to the Developer Community Feedback forum didn't count, which explains why my points didn't go up. But it still doesn't explain why they went down. :)

Scott - I am sure Evgeny will provide some clarification here once he gets online. Thanks for your patience!

Hi, Scott! You are right! The explanation is very simple - we had a bug in the formula. Sorry about this. But the final leaderboard is quite right! We count points for posting, commenting, and votes on your posts and comments. And yes, we filter out the Announcements and Developer Community Feedback groups.

Scott! I want to invite you to participate in the Second Developer Community Contest! Hope we will not have errors in the leaderboard this time :)
Article
Murray Oldfield · Apr 1, 2016

InterSystems Data Platforms and performance – how to update pButtons.

Previously I showed you how to run pButtons to start collecting the performance metrics that we are looking at in this series of posts.

- [Part 1 - Getting started on the Journey, collecting metrics](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-1)
- [Part 2 - Looking at the metrics we collected](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-2)

## Update: May 2020

_Since this post was written several years ago, we have moved from Caché to IRIS. See the comments for an updated link to the documentation for pButtons (Caché) and SystemPerformance (IRIS), and for a note on how to update your systems to the latest versions of the performance tools._

pButtons is compatible with Caché version 5 and later and is included with recent distributions of InterSystems data platforms (HealthShare, Ensemble and Caché). This post reminds you that you should download and install the latest version of pButtons. The latest version is always available for download: _Update:_ **See the comments below for details.**

To check which version you have installed now, you can run the following:

```
%SYS>write $$version^pButtons()
```

Note 1:

- The current version of pButtons requires a license unit; future distributions will address this requirement.
- With this distribution of pButtons, versioning has changed. The prior version of pButtons was 1.16c; this new distribution is version 5.

Note 2:

- pButtons version 5 also corrects a problem introduced with version 1.16a that could result in prolonged collection times. Version 1.16a was included with Caché 2015.1.0. If you have pButtons version 1.16a through 1.16c, you should download pButtons version 5 from the FTP site.

More detailed information on pButtons is available in the files included with the download and in the online Caché documentation.
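The upgrade advice above (pButtons 1.16a through 1.16c should move to version 5) amounts to a simple string check. A minimal sketch in Python; `needs_upgrade` is a hypothetical helper for illustration, not part of pButtons:

```python
# Hypothetical helper (not part of pButtons): decide whether an installed
# pButtons version string falls in the 1.16a-1.16c range that, per the note
# above, should be upgraded to version 5.
def needs_upgrade(version: str) -> bool:
    v = version.strip().lower()
    if v.startswith("1.16"):
        suffix = v[len("1.16"):]
        # Only 1.16a, 1.16b and 1.16c carry the prolonged-collection bug.
        return suffix in ("a", "b", "c")
    return False

for v in ("1.16a", "1.16c", "5"):
    print(v, needs_upgrade(v))
```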
@Murray.Oldfield The current version of the documentation says, regarding the update (https://cedocs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=GCM_pbuttons#GCM_pButtons_runsmp): "This utility may be updated between releases. The latest version is available on the WRC distribution site under Tools." The xml can indeed be downloaded from the WRC, but there is no word about either the pButtons version or the way to install it. Should I simply import the xml into %SYS?

Hi, yes you can import it using the System Management Portal: System > Classes, then import into %SYS. Here is the version information before:

```
%SYS>write $$version^SystemPerformance()
14
```

After the import, you can see the version information changed. Also note there was a conversion run. The custom profile I had created before the import still existed after the update.

```
%SYS>write $$version^SystemPerformance()
$Id: //iris/2020.1.0/databases/sys/rtn/diagnostic/systemperformance.mac#1 $
%SYS>d ^SystemPerformance
Re-creating command data for new ^SystemPerformance version.
Old command data saved in ^IRIS.SystemPerformance("oldcmds").
Current log directory: /path/path/iris/mgr/
Available profiles:
     1  12hours      - 12 hour run sampling every 10 seconds
     2  24hours      - 24 hour run sampling every 10 seconds
     3  30mins       - 30 minute run sampling every 1 second
     4  4hours       - 4 hour run sampling every 5 seconds
     5  5_mins_1_sec - 5 mins 1 sec
     6  8hours       - 8 hour run sampling every 10 seconds
     7  test         - A 5 minute TEST run sampling every 30 seconds

Select profile number to run: 5
Collection of this sample data will be available in 420 seconds.
The runid for this data is 20200518_094753_5_mins_1_sec.
%SYS>
```

You can also import from the command line:

```
USER>zn "%SYS"
%SYS>do $system.OBJ.Load("/path/SystemPerformance-IRIS-All-2020.1.0-All.xml","ck")

Load started on 05/18/2020 10:02:13
Loading file /path/SystemPerformance-IRIS-All-2020.1.0-All.xml as xml
Imported object code: SystemPerformance
Load finished successfully.
%SYS>
```
Question
Evgeny Shvarov · Apr 5, 2016

How to avoid writing duplicate code in dtl InterSystems Ensemble

Hi! There is a question about Ensemble on Stack Overflow:

"I have the below DTL. In the foreach loop, I am just copying the same code in another part under an if condition. How can I avoid this redundancy? Can I reuse it using a sub-transformation? Here is the DTL class file: https://docs.google.com/document/d/1snJXElyw13hAfb8Lmg5IaySc7md_DE8J40FB79hBaXU/edit?usp=sharing"

Original question.

Hi, Nic! Thank you for your answer! Would you please post it on Stack Overflow too? http://stackoverflow.com/questions/36400699/how-to-avoid-writing-duplicate-code-in-dtl-intersystem-ensemble Thank you in advance!

Maybe I'm missing something, but it seems like you could use an OR condition to eliminate the code duplication: condition='ExcludeInactiveAllergiesAlerts="No" OR flag="No"'
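The fix suggested in the thread is language-agnostic: either merge the two branches with an OR condition, or move the duplicated mapping into one shared routine (in DTL terms, a sub-transformation) called from each branch. A sketch of the idea in Python, with hypothetical field names since the actual DTL is only linked, not shown:

```python
# Shared mapping logic, written once instead of being copy-pasted into
# each branch of the transform. Field names are illustrative only.
def map_allergy(allergy: dict) -> dict:
    return {"code": allergy["code"], "status": allergy.get("status", "Active")}

def transform(source: dict, exclude_inactive: str = "No") -> list:
    target = []
    for allergy in source["allergies"]:
        # The OR condition from the comment above, instead of two branches
        # that each duplicate the mapping code.
        if exclude_inactive == "No" or allergy.get("flag") == "No":
            target.append(map_allergy(allergy))
    return target

print(transform({"allergies": [{"code": "A1", "flag": "No"}]}))
```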
Article
Murray Oldfield · Apr 27, 2016

InterSystems Data Platforms and performance - Part 5 Monitoring with SNMP

# InterSystems Data Platforms and performance - Part 5 Monitoring with SNMP

In previous posts I have shown how it is possible to collect historical performance metrics using pButtons. I go to pButtons first because I know it is installed with every Data Platforms instance (Ensemble, Caché, …). However there are other ways to collect, process and display Caché performance metrics in real time, either for simple monitoring or, more importantly, for much more sophisticated operational analytics and capacity planning. One of the most common methods of data collection is to use SNMP (Simple Network Management Protocol).

SNMP is a standard way for Caché to provide management and monitoring information to a wide variety of management tools. The Caché online documentation includes details of the interface between Caché and SNMP. While SNMP should 'just work' with Caché, there are some configuration tricks and traps. It took me quite a few false starts and help from other folks here at InterSystems to get Caché to talk to the operating system SNMP master agent, so I have written this post so you can avoid the same pain.

In this post I will walk through the set up and configuration of SNMP for Caché on Red Hat Linux; you should be able to use the same steps for other \*nix flavours. I am writing the post using Red Hat because Linux can be a little more tricky to set up - on Windows, Caché automatically installs a DLL to connect with the standard Windows SNMP service, so it should be easier to configure. Once SNMP is set up on the server side you can start monitoring using any number of tools. I will show monitoring using the popular PRTG tool, but there are many others - [Here is a partial list.](https://en.wikipedia.org/wiki/Comparison_of_network_monitoring_systems)

Note: the Caché and Ensemble MIB files are included in the `Caché_installation_directory/SNMP` folder; the files are `ISC-CACHE.mib` and `ISC-ENSEMBLE.mib`.
#### Previous posts in this series:

- [Part 1 - Getting started on the Journey, collecting metrics.](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-1)
- [Part 2 - Looking at the metrics we collected.](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-2)
- [Part 3 - Focus on CPU.](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-3-focus-cpu)
- [Part 4 - Looking at memory.](https://community.intersystems.com/post/intersystems-data-platforms-and-performance-part-4-looking-memory)

# Start here...

Start by reviewing Monitoring Caché Using SNMP in the [Caché online documentation](http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_snmp).

## 1. Caché configuration

Follow the steps in the _Managing SNMP in Caché_ section of the [Caché online documentation](http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_snmp) to enable the Caché monitoring service and configure the Caché SNMP subagent to start automatically at Caché startup. Check that the Caché subagent process is running, for example by looking at the OS process list:

```
ps -ef | grep SNMP
root      1171  1097  0 02:26 pts/1    00:00:00 grep SNMP
root     27833     1  0 00:34 pts/0    00:00:05 cache -s/db/trak/hs2015/mgr -cj -p33 JOB^SNMP
```

That's all; Caché configuration is done!

## 2. Operating system configuration

There is a little more to do here. First check that the snmpd daemon is installed and running; if not, install and start it.

Check snmpd status with:

```
service snmpd status
```

Start or stop snmpd with:

```
service snmpd start|stop
```

If snmpd is not installed then you will have to install it as per your OS instructions, for example:

```
yum -y install net-snmp net-snmp-utils
```

## 3. Configure snmpd
As detailed in the Caché documentation, on Linux systems the most important task is to verify that the SNMP master agent on the system is compatible with the Agent Extensibility (AgentX) protocol (Caché runs as a subagent), and that the master is active and listening for connections on the standard AgentX TCP port 705.

This is where I ran into problems. I made some basic errors in the `snmpd.conf` file that meant the Caché SNMP subagent was not communicating with the OS master agent. The following sample `/etc/snmp/snmpd.conf` file has been configured to start AgentX and provide access to the Caché and Ensemble SNMP MIBs. _Note: you will have to confirm whether the following configuration complies with your organisation's security policies._

At a minimum the following lines must be edited to reflect your system set up. For example, change:

```
syslocation "System_Location"
```

to

```
syslocation "Primary Server Room"
```

Also edit at least the following two lines:

```
syscontact "Your Name"
trapsink Caché_database_server_name_or_ip_address public
```

Edit or replace the existing `/etc/snmp/snmpd.conf` file to match the following:

```
###############################################################################
#
# snmpd.conf:
#   An example configuration file for configuring the NET-SNMP agent with Cache.
#
#   This has been used successfully on Red Hat Enterprise Linux and running
#   the snmpd daemon in the foreground with the following command:
#
#   /usr/sbin/snmpd -f -L -x TCP:localhost:705 -c ./snmpd.conf
#
#   You may want/need to change some of the information, especially the
#   IP address of the trap receiver if you expect to get traps. I've also seen
#   one case (on AIX) where we had to use the "-C" option on the snmpd command
#   line, to make sure we were getting the correct snmpd.conf file.
#
###############################################################################

###########################################################################
# SECTION: System Information Setup
#
#   This section defines some of the information reported in
#   the "system" mib group in the mibII tree.

# syslocation: The [typically physical] location of the system.
#   Note that setting this value here means that when trying to
#   perform an snmp SET operation to the sysLocation.0 variable will make
#   the agent return the "notWritable" error code. IE, including
#   this token in the snmpd.conf file will disable write access to
#   the variable.
#   arguments: location_string

syslocation "System Location"

# syscontact: The contact information for the administrator
#   Note that setting this value here means that when trying to
#   perform an snmp SET operation to the sysContact.0 variable will make
#   the agent return the "notWritable" error code. IE, including
#   this token in the snmpd.conf file will disable write access to
#   the variable.
#   arguments: contact_string

syscontact "Your Name"

# sysservices: The proper value for the sysServices object.
#   arguments: sysservices_number

sysservices 76

###########################################################################
# SECTION: Agent Operating Mode
#
#   This section defines how the agent will operate when it
#   is running.

# master: Should the agent operate as a master agent or not.
#   Currently, the only supported master agent type for this token
#   is "agentx".
#
#   arguments: (on|yes|agentx|all|off|no)

master agentx
agentXSocket tcp:localhost:705

###########################################################################
# SECTION: Trap Destinations
#
#   Here we define who the agent will send traps to.

# trapsink: A SNMPv1 trap receiver
#   arguments: host [community] [portnum]

trapsink Caché_database_server_name_or_ip_address public

###############################################################################
# Access Control
###############################################################################

# As shipped, the snmpd demon will only respond to queries on the
# system mib group until this file is replaced or modified for
# security purposes. Examples are shown below about how to increase the
# level of access.
#
# By far, the most common question I get about the agent is "why won't
# it work?", when really it should be "how do I configure the agent to
# allow me to access it?"
#
# By default, the agent responds to the "public" community for read
# only access, if run out of the box without any configuration file in
# place. The following examples show you other ways of configuring
# the agent so that you can change the community names, and give
# yourself write access to the mib tree as well.
#
# For more information, read the FAQ as well as the snmpd.conf(5)
# manual page.

####
# First, map the community name "public" into a "security name"
#       sec.name        source          community
com2sec notConfigUser   default         public

####
# Second, map the security name into a group name:
#       groupName       securityModel   securityName
group   notConfigGroup  v1              notConfigUser
group   notConfigGroup  v2c             notConfigUser

####
# Third, create a view for us to let the group have rights to:
# Make at least snmpwalk -v 1 localhost -c public system fast again.
#       name            incl/excl       subtree         mask(optional)
# access to 'internet' subtree
view    systemview      included        .1.3.6.1

# access to Cache MIBs Caché and Ensemble
view    systemview      included        .1.3.6.1.4.1.16563.1
view    systemview      included        .1.3.6.1.4.1.16563.2

####
# Finally, grant the group read-only access to the systemview view.
#       group           context sec.model sec.level prefix read       write notif
access  notConfigGroup  ""      any       noauth    exact  systemview none  none
```

After editing the `/etc/snmp/snmpd.conf` file, restart the snmpd daemon:

```
service snmpd restart
```

Check the snmpd status; note that AgentX has been started - see the status line __Turning on AgentX master support.__

```
sh-4.2# service snmpd restart
Redirecting to /bin/systemctl restart snmpd.service
sh-4.2# service snmpd status
Redirecting to /bin/systemctl status snmpd.service
● snmpd.service - Simple Network Management Protocol (SNMP) Daemon.
   Loaded: loaded (/usr/lib/systemd/system/snmpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-04-27 00:31:36 EDT; 7s ago
 Main PID: 27820 (snmpd)
   CGroup: /system.slice/snmpd.service
           └─27820 /usr/sbin/snmpd -LS0-6d -f

Apr 27 00:31:36 vsan-tc-db2.iscinternal.com systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon....
Apr 27 00:31:36 vsan-tc-db2.iscinternal.com snmpd[27820]: Turning on AgentX master support.
Apr 27 00:31:36 vsan-tc-db2.iscinternal.com snmpd[27820]: NET-SNMP version 5.7.2
Apr 27 00:31:36 vsan-tc-db2.iscinternal.com systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..
sh-4.2#
```

After restarting snmpd you must restart the Caché SNMP subagent using the `^SNMP` routine:

```
%SYS>do stop^SNMP()
%SYS>do start^SNMP(705,20)
```

The operating system snmpd daemon and Caché subagent should now be running and accessible.

## 4. Testing MIB access

MIB access can be checked from the command line with the following commands.
`snmpget` returns a single value:

```
snmpget -mAll -v 2c -c public vsan-tc-db2 .1.3.6.1.4.1.16563.1.1.1.1.5.5.72.50.48.49.53
SNMPv2-SMI::enterprises.16563.1.1.1.1.5.5.72.50.48.49.53 = STRING: "Cache for UNIX (Red Hat Enterprise Linux for x86-64) 2015.2.1 (Build 705U) Mon Aug 31 2015 16:53:38 EDT"
```

And `snmpwalk` will 'walk' the MIB tree or a branch:

```
snmpwalk -m ALL -v 2c -c public vsan-tc-db2 .1.3.6.1.4.1.16563.1.1.1.1
SNMPv2-SMI::enterprises.16563.1.1.1.1.2.5.72.50.48.49.53 = STRING: "H2015"
SNMPv2-SMI::enterprises.16563.1.1.1.1.3.5.72.50.48.49.53 = STRING: "/db/trak/hs2015/cache.cpf"
SNMPv2-SMI::enterprises.16563.1.1.1.1.4.5.72.50.48.49.53 = STRING: "/db/trak/hs2015/mgr/"
etc etc
```

There are also several Windows and \*nix clients available for viewing system data. I use the free iReasoning MIB Browser. You will have to load the `ISC-CACHE.mib` file into the client so it knows the structure of the MIB. The following image shows the iReasoning MIB Browser on OS X.

![free iReasoning MIB Browser](https://community.intersystems.com/sites/default/files/inline/images/mib_browser_0.png)

## Including in Monitoring tools

This is where there can be wide differences in implementation. The choice of monitoring or analytics tool I will leave up to you. _Please leave comments on the post detailing the tools you use and the value you get from them for monitoring and managing your systems. This will be a big help for other community members._

Below is a screen shot from the popular _PRTG_ Network Monitor showing Caché metrics. The steps to include Caché metrics in PRTG are similar to other tools.

![PRTG Monitoring tool](https://community.intersystems.com/sites/default/files/inline/images/prtg_0.png)

### Example workflow - adding the Caché MIB to a monitoring tool.

#### Step 1.

Make sure you can connect to the operating system MIBs. A tip is to do your troubleshooting against the operating system, not Caché.
It is most likely that monitoring tools already know about and are preconfigured for common operating system MIBs, so help from vendors or other users may be easier to find. Depending on the monitoring tool you choose, you may have to add an SNMP 'module' or 'application'; these are generally free or open source. I found the vendor instructions pretty straightforward for this step. Once you are monitoring the operating system metrics, it's time to add Caché.

#### Step 2.

Import the `ISC-CACHE.mib` and `ISC-ENSEMBLE.mib` files into the tool so that it knows the MIB structure. The steps here will vary; for example, PRTG has a 'MIB Importer' utility. The basic steps are to open the text file `ISC-CACHE.mib` in the tool and import it to the tool's internal format. For example, Splunk uses a Python format, etc.

_Note:_ I found the PRTG tool timed out if I tried to add a sensor with all the Caché MIB branches. I assume it was walking the whole tree and timed out for some metrics like process lists. I did not spend time troubleshooting this; instead I worked around the problem by only importing the performance branch (cachePerfTab) from `ISC-CACHE.mib`. Once imported/converted, the MIB can be reused to collect data from other servers in your network. The above graphic shows PRTG using a Sensor Factory sensor to combine multiple sensors into one chart.

# Summary

There are many monitoring, alerting and some very smart analytics tools available - some free, others with licences for support and many and varied functionality. You must monitor your system and understand what activity is normal, and what activity falls outside normal and must be investigated. SNMP is a simple way to expose Caché and Ensemble metrics.

I was asked a couple of questions offline, so the following is to answer them:

_Q1. In your article, why do you say it is necessary to change information strings in snmpd.conf (i.e. syslocation/syscontact)?_

A1.
What I mean is that you should change syslocation and syscontact to reflect your site, but leaving them as the defaults in the sample will not stop SNMP working using this sample `snmpd.conf` file.

_Q2. You also mention basic errors you made in configuring it - which were these? It might be helpful to mention the debugging facilities for SNMP `(^SYS("MONITOR","SNMP","DEBUG"))` as well._

A2. One problem was misconfiguring the security settings in `snmpd.conf`; following the example above will get you there. I also spun my wheels with what turned out to be a spelling (or case) error on the line `agentXSocket tcp:localhost:705`. In the end I figured out the problem was to do with AgentX not starting, by looking at the logs written to the `install-dir/mgr/SNMP.log` file. Caché logs any problems encountered while establishing a connection or answering requests in `SNMP.log`. You should also check `cconsole.log` and the snmpd logs in the OS. On Windows, iscsnmp.dll logs any errors it encounters in %System%\System32\snmpdbg.log (on a 64-bit Windows system, this file is in the SysWOW64 subdirectory). As pointed out in Fabian's question, more information can be logged to `SNMP.log` if you set `^SYS("MONITOR","SNMP","DEBUG")=1` in the %SYS namespace and restart the `^SNMP` Caché subagent process. This logs details about each message received and sent. Thanks for the questions. MO

BTW, although I have not tried implementing it - given the high number of healthcare applications using Caché, I thought it may be of interest that Paessler PRTG has developed new sensors for monitoring medical equipment that communicate via HL7 and DICOM: https://www.paessler.com/blog/2016/04/13/all-about-prtg/ehealth-sensors-know-what-is-going-on-in-your-medical-it

I've installed PRTG on a Windows 10 system... enabled SNMP services... I've configured the SNMP Service with community "public" and destination "127.0.0.1"... PRTG is able to see and graph the system statistics...
OK. Then I imported ISC-Cache.mib with the Paessler MIB Importer, OK, and "Save for PRTG Network Monitor"... everything seems fine, but then, where is it supposed to be? When I go to PRTG NM I cannot see anything related to Caché... no clue about the library that I supposedly just imported... The S of SNMP means Simple... so I'm pretty sure I'm missing something really basic here, but I don't know how to go on.

Found it... I just had to go to add sensors to a group, type Windows and SNMP library... there is the oidlib I had just imported!!

When using containers (IRIS), how do you use the host to proxy the SNMP calls? Is there a way to have the container use the host's IP (and port 705) to do its reporting?

Hi Jay, for the docker run command look at the --net=host flag. Also, SAM might be of interest to you. See the recent announcement and a user case. I hope it's helpful.

Do we have a similar topic for Caché on Windows? Thx!
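A side note on working with the `snmpwalk` output format used in this article: before committing to a full monitoring tool, the varbind lines are easy to post-process for ad-hoc charting. A minimal sketch (assuming the `SNMPv2-SMI::<oid> = TYPE: value` line format shown above; not a full SNMP client):

```python
# Parse a line of snmpwalk/snmpget output, e.g.
#   SNMPv2-SMI::enterprises.16563... = STRING: "H2015"
# into an (oid, type, value) tuple. Returns None for non-varbind lines.
def parse_snmp_line(line: str):
    left, _, right = line.partition(" = ")
    if not right:
        return None                      # not a varbind line
    oid = left.split("::")[-1]           # drop the "SNMPv2-SMI::" module prefix
    vtype, _, value = right.partition(": ")
    return oid, vtype, value.strip().strip('"')

sample = 'SNMPv2-SMI::enterprises.16563.1.1.1.1.2.5.72.50.48.49.53 = STRING: "H2015"'
print(parse_snmp_line(sample))
```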
Announcement
Janine Perkins · Apr 27, 2016

Featured InterSystems Online Course: DeepSee Analyzer Basics

Learn the fundamentals of how and when to use the DeepSee Analyzer.

DeepSee Analyzer Basics

This course describes typical use cases for the Analyzer and shows how to access the Analyzer from the Management Portal. You will learn how to use drag and drop to create simple queries, and then refine these queries by sorting, applying filters, and drilling down. Learn More.
Article
Gevorg Arutiunian · Nov 21, 2016

Caché Localization Manager or i18N in InterSystems Caché

Caché Localization Manager

CLM is a tool for localization/internationalization/adding multi-language support to a project based on InterSystems Caché. Imagine that you have a ready project where all the content is in Russian, and you need to add an English localization to it. You wrap all your strings into resources, translate them into English and call the necessary resource for Russian or English when necessary. Nothing tricky, if you think about it. But what if there are lots of strings and there are mistakes in Russian (or English)? What if you need to localize in more than one language - say, ten? This is exactly the kind of project where you should use CLM. It will help you localize the entire content of your project into the necessary language and retain the possibility to correct entries.

CLM allows you to do the following:

- Add a new localization.
- Delete a localization.
- Export a localization.
- Import a localization.
- View two tables at a time.
- Conveniently switch between namespaces.
- Check spelling.

Let's "look under the hood" now

Caché has a standard approach to implementing I11N using the $$$TEXT macro:

$$$TEXT("Text", "Domain", "Language")

where:

- Text — the text to be used for localization in the future.
- Domain — modules in your applications.
- Language — the language of "Text".

If you use $$$TEXT in COS code, data is added to the ^CacheMsg global during class compilation. And this is the global that CLM works with. In ^CacheMsg, everything is identical to $$$TEXT; you just add "ID" as the text hash:

^CacheMsg("Domain", "Language", "ID") = "Text"

If you are using CSP, then the use of $$$TEXT in CSP will look as follows:

<csp:text id="14218931" domain="HOLEFOODS">Date Of Sale</csp:text>

Installation

First of all, you need to download the Installer class from GitHub and import it into any convenient namespace in Caché. I will use the USER namespace. Once done, open the terminal and switch to the USER namespace.
To start the installation, enter the following command in the terminal:

USER> do ##class(CLM.Installer).setup()

Installation process: you can make sure the application is installed correctly by following this link: http://localhost:57772/csp/clm/index.csp (localhost:57772 — the path to your server).

Settings

CLM uses Yandex to perform the translation, so you will need to obtain a Yandex API key to let it work. It's free, but requires registration.

Let's now deal with spellchecking. CLM uses Caché Native Access (CNA) for the SpellCheck implementation. CNA was written for calling external functions from dynamic libraries, such as .dll or .so. SpellCheck works with the Hunspell library and needs two files for spellchecking: the first file is a dictionary containing words, and the second one contains affixes and defines the meaning of special marks (flags) in the dictionary.

How words are checked: all words are packed by CLM and sent to Hunspell via CNA, where CNA converts them into a form that Hunspell understands. Hunspell checks every word, finds the root form and all possible variations, and returns the result.

But where do we get all these dictionaries and libraries?

— CNA: use an available release or build it on your own.
— Hunspell: same thing, use an available version or download the sources to make your own build.
— We will also need a standard C language library. In Windows, it is located here: C:\Windows\System32\msvcrt.dll.
— Dictionaries can be downloaded here.
At the moment, 51 languages are supported: Albanian, Arabian, Armenian, Azeri, Basque, Belarusian, Bosnian, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Finnish, French, Georgian, German, Greek, Hebrew, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kazan, Korean, Latin, Latvian, Lithuanian, Macedonian, Malay, Maltese, Norwegian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, Vietnamese.

The entire configuration process boils down to entering the paths to everything you downloaded before. Open CLM in a browser. There is a "Set Paths" button in the upper right corner. Click it and you'll see the next window. Use it to enter the required paths. Here's what I got:

Demonstration of a registration form localization

Link to demonstration. Password and login: demo.

Your critique, comments and suggestions are very welcome. The source code and instructions are available on GitHub under an MIT license.

An interesting article, but I'm puzzled by your use of the term "I11n". I'm familiar with "I18n" as an abbreviation for internationalization (because there are 18 letters between the "I" and the "n"). Likewise, I understand "L10n" as standing for localization. Some Googling suggests that "I11n" is short for introspection. Or have I missed something?

Hi, John! i11n here stands for "internationalization"

i18n - internationalization, but i11n is something else

True. Title is fixed
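The ^CacheMsg structure described in the article is essentially a three-level key lookup: domain, language, message ID. A sketch of the same idea in Python, with a fallback language for missing translations (hypothetical data, not CLM's implementation):

```python
# Mimics the ^CacheMsg("Domain", "Language", "ID") = "Text" global described
# above, plus a fallback language when a translation is missing.
# The domain/ID values are illustrative only.
messages = {
    ("HOLEFOODS", "en", "14218931"): "Date Of Sale",
    ("HOLEFOODS", "ru", "14218931"): "Дата продажи",
}

def text(domain: str, lang: str, msg_id: str, fallback_lang: str = "en") -> str:
    # Try the requested language first, then fall back.
    for language in (lang, fallback_lang):
        if (domain, language, msg_id) in messages:
            return messages[(domain, language, msg_id)]
    raise KeyError((domain, lang, msg_id))

print(text("HOLEFOODS", "ru", "14218931"))
print(text("HOLEFOODS", "de", "14218931"))  # no German entry: falls back to English
```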
Article
Murray Oldfield · Nov 12, 2016

InterSystems Data Platforms Capacity Planning and Performance Series Index

# Index

This is a list of all the posts in the Data Platforms capacity planning and performance series, in order, plus a general list of my other posts. I will update it as new posts in the series are added.

> You will notice that I wrote some posts before IRIS was released, and they refer to Caché. I will revisit the posts over time, but in the meantime the advice for configuration is generally the same for Caché and IRIS. Some command names may have changed; the most obvious example is that anywhere you see the `^pButtons` command, you can replace it with `^SystemPerformance`.

---

> While some posts are updated to preserve links, others will be marked with strikethrough to indicate that the post is legacy. Generally, I will say, "See: some other post" where appropriate.

---

#### Capacity Planning and Performance Series

Generally, posts build on previous ones, but you can also just dive into subjects that look interesting.

- [Part 1 - Getting started on the Journey, collecting metrics.][1]
- [Part 2 - Looking at the metrics we collected.][2]
- [Part 3 - Focus on CPU.][3]
- [Part 4 - Looking at memory.][4]
- [Part 5 - Monitoring with SNMP.][5]
- [Part 6 - Caché storage IO profile.][6]
- [Part 7 - ECP for performance, scalability and availability.][7]
- [Part 8 - Hyper-Converged Infrastructure Capacity and Performance Planning][8]
- [Part 9 - Caché VMware Best Practice Guide][9]
- [Part 10 - VM Backups and IRIS freeze/thaw scripts][10]
- [Part 11 - Virtualizing large databases - VMware cpu capacity planning][11]

#### Other Posts

This is a collection of posts, generally related to architecture, that I have on the Community.
- [AWS Capacity Planning Review Example.][29] - [Using an LVM stripe to increase AWS EBS IOPS and Throughput.][28] - [YASPE - Parse and chart InterSystems Caché pButtons and InterSystems IRIS SystemPerformance files for quick performance analysis of Operating System and IRIS metrics.][27] - [SAM - Hacks and Tips for set up and adding metrics from non-IRIS targets][12] - [Monitoring InterSystems IRIS Using Built-in REST API - Using Prometheus format.][13] - [Example: Review Monitor Metrics From InterSystems IRIS Using Default REST API][14] - [InterSystems Data Platforms and performance – how to update pButtons.][15] - [Extracting pButtons data to a csv file for easy charting.][16] - [Provision a Caché application using Ansible - Part 1.][17] - [Windows, Caché and virus scanners.][18] - [ECP Magic.][19] - [Markdown workflow for creating Community posts.][20] - [Yape - Yet another pButtons extractor (and automatically create charts)][21] See: [YASPE](https://community.intersystems.com/post/yaspe-yet-another-system-performance-extractor). 
- [How long does it take to encrypt a database?][22]
- [Minimum Monitoring and Alerting Solution][23]
- [LVM PE Striping to maximize Hyper-Converged storage throughput][24]
- [Unpacking pButtons with Yape - update notes and quick guides][25]
- [Decoding Intel processor models reported by Windows][26]

Murray Oldfield
Principal Technology Architect
InterSystems

Follow the community or @murrayoldfield on Twitter

[1]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-1
[2]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-2
[3]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-3-focus-cpu
[4]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-part-4-looking-memory
[5]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-part-5-monitoring-snmp
[6]: https://community.intersystems.com/post/data-platforms-and-performance-part-6-cach%C3%A9-storage-io-profile
[7]: https://community.intersystems.com/post/data-platforms-and-performance-part-7-ecp-performance-scalability-and-availability
[8]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-8-hyper-converged-infrastructure-capacity
[9]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-9-cach%C3%A9-vmware-best-practice-guide
[10]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-vm-backups-and-cach%C3%A9-freezethaw-scripts
[11]: https://community.intersystems.com/post/virtualizing-large-databases-vmware-cpu-capacity-planning
[12]: https://community.intersystems.com/post/sam-hacks-and-tips-set-and-adding-metrics-non-iris-targets
[13]: https://community.intersystems.com/post/monitoring-intersystems-iris-using-built-rest-api
[14]: https://community.intersystems.com/post/example-review-monitor-metrics-intersystems-iris-using-default-rest-api
[15]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-how-update-pbuttons
[16]: https://community.intersystems.com/post/extracting-pbuttons-data-csv-file-easy-charting
[17]: https://community.intersystems.com/post/provision-cach%C3%A9-application-using-ansible-part-1
[18]: https://community.intersystems.com/post/windows-cach%C3%A9-and-virus-scanners
[19]: https://community.intersystems.com/post/ecp-magic
[20]: https://community.intersystems.com/post/markdown-workflow-creating-community-posts
[21]: https://community.intersystems.com/post/yape-yet-another-pbuttons-extractor-and-automatically-create-charts
[22]: https://community.intersystems.com/post/how-long-does-it-take-encrypt-database
[23]: https://community.intersystems.com/post/minimum-monitoring-and-alerting-solution
[24]: https://community.intersystems.com/post/lvm-pe-striping-maximize-hyper-converged-storage-throughput
[25]: https://community.intersystems.com/post/unpacking-pbuttons-yape-update-notes-and-quick-guides
[26]: https://community.intersystems.com/post/decoding-intel-processor-models-reported-windows
[27]: https://community.intersystems.com/post/yaspe-yet-another-system-performance-extractor
[28]: https://community.intersystems.com/post/using-lvm-stripe-increase-aws-ebs-iops-and-throughput
[29]: https://community.intersystems.com/post/aws-capacity-planning-review-example
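The `^pButtons` to `^SystemPerformance` rename noted in the index above looks like this at a terminal prompt. This is a minimal sketch; the profile names and collection durations available depend on what is configured on your instance:

```objectscript
; Caché (pre-IRIS) - run a performance data collection:
%SYS>Do ^pButtons

; InterSystems IRIS - same utility, renamed:
%SYS>Do ^SystemPerformance
```

Both prompt interactively for a collection profile (for example, a 24-hour run) and produce a report file that tools like Yape/YASPE, listed above, can parse and chart.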
Announcement
Janine Perkins · Jan 17, 2017

Featured InterSystems Online Course: Caché ObjectScript Basics

Take this online course to learn the foundations of the Caché ObjectScript language, especially as it relates to creating variables and objects in Caché. You will learn about variables, commands, and operators, as well as how to find more information in the InterSystems DocBook documentation when needed. This course contains instructional videos and exercises. Learn More.
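As a taste of what the course covers, here is a minimal sketch of ObjectScript variables, commands, and operators (the variable names are illustrative, not from the course):

```objectscript
; Set creates a local variable; Write prints to the terminal
Set name = "Community"

; Note: ObjectScript evaluates expressions strictly left to right,
; so 2 + 3 * 4 is (2 + 3) * 4 = 20, not 14 - a common surprise
Set count = 2 + 3 * 4

; _ is the string concatenation operator; ! writes a newline
Write "Hello, "_name_"! count = ", count, !

; $Select is one of many built-in system functions
Write $Select(count > 10: "big", 1: "small"), !
```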
Announcement
Anastasia Dyubaylo · May 15, 2020

InterSystems IRIS 2020.1 Tech Talk: DevOps

Hey Developers,

We're pleased to invite you to join the next InterSystems IRIS 2020.1 Tech Talk: DevOps on June 2nd at 10:00 AM EDT!

In this InterSystems IRIS 2020.1 Tech Talk, we focus on DevOps. We'll talk about InterSystems System Alerting and Monitoring, which offers unified cluster monitoring in a single pane for all your InterSystems IRIS instances. It is built on Prometheus and Grafana, two of the most respected open source offerings available.

Next, we'll dive into the InterSystems Kubernetes Operator, a special controller for Kubernetes that streamlines InterSystems IRIS deployments and management. It's the easiest way to deploy an InterSystems IRIS cluster on-prem or in the cloud, and we'll show how you can configure mirroring, ECP, sharding and compute nodes, and automate it all.

Finally, we'll discuss how to speed test InterSystems IRIS using the open source Ingestion Speed Test. This tool is available on InterSystems Open Exchange for your own testing and benchmarking.

Speakers:
🗣 @Luca.Ravazzolo, InterSystems Product Manager
🗣 @Robert.Kuszewski, InterSystems Product Manager - Developer Experience

Date: Tuesday, June 2, 2020
Time: 10:00 AM EDT

➡️ JOIN THE TECH TALK!

Additional Resources:

- SAM Documentation
- SAM InterSystems-Community Github repo
- Docker Hub SAM

Hi Everyone, We've had a presenter change for this Tech Talk. Please welcome @Robert.Kuszewski as a speaker at the DevOps Webinar! In addition, please check the updated Additional Resources above. Don't forget to register here!

Tomorrow! Don't miss this webinar! 😉 PLEASE REGISTER HERE!

I registered last week and I have gotten the reminder on Sunday. But I cannot get into the session at all - it loops back to the page to save to your calendar.

This event will start in 1.5 hours

My apologies for my human error - I have a new work schedule that I am still getting used to

hi, is there a replay for this? Thx!
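The System Alerting and Monitoring stack described above scrapes metrics that InterSystems IRIS exposes over its built-in monitoring REST API. A minimal sketch of reading that Prometheus-format endpoint from ObjectScript (the host and port are assumptions for a default local install):

```objectscript
; Fetch Prometheus-format metrics from the local IRIS instance
Set req = ##class(%Net.HttpRequest).%New()
Set req.Server = "localhost"     ; assumption: default local instance
Set req.Port = 52773             ; assumption: default web server port
Set sc = req.Get("/api/monitor/metrics")
If $$$ISOK(sc) {
    ; Each line is a metric, e.g. iris_glo_ref_per_sec <value>
    Write req.HttpResponse.Data.Read(1000)
}
```

The same endpoint is what Prometheus itself polls when SAM is configured to monitor an instance.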
Article
Timothy Leavitt · Jun 4, 2020

AppS.REST - a new REST framework for InterSystems IRIS

Over the past year or so, my team (Application Services at InterSystems - tasked with building and maintaining many of our internal applications, and providing tools and best practices for other departmental applications) has embarked on a journey toward building Angular/REST-based user interfaces to existing applications originally built using CSP and/or Zen. This has presented an interesting challenge that may be familiar to many of you - building out new REST APIs to existing data models and business logic.

As part of this process, we've built a new framework for REST APIs, which has been too useful to keep to ourselves. It is now available on the Open Exchange at https://openexchange.intersystems.com/package/apps-rest. Expect to see a few more articles about this over the coming weeks/months, but in the meanwhile, there are good tutorials in the project documentation on GitHub (https://github.com/intersystems/apps-rest).

As an introduction, here are some of our design goals and intentions. Not all of these have been realized yet, but we're well on the way!

#### Rapid Development and Deployment

Our REST approach should provide the same quick start to application development that Zen does, solving the common problems while providing flexibility for application-specific specialized use cases.

- Exposing a new resource for REST access should be just as easy as exposing it as a Zen DataModel.
- Addition/modification of REST resources should involve changes at the level being accessed.
- Exposure of a persistent class over REST should be accomplished by inheritance and minimal overrides, but there should also be support for hand-coding equivalent functionality. (This is similar to %ZEN.DataModel.Adaptor and %ZEN.DataModel.ObjectDataModel.)
- Common patterns around error handling/reporting, serialization/deserialization, validation, etc. should not need to be reimplemented for each resource in each application.
- Support for SQL querying, filtering, and ordering, as well as advanced search capabilities and pagination, should be built-in rather than reimplemented for each application.
- It should be easy to build REST APIs to existing API/library classmethods and class queries, as well as at the object level (CRUD).

#### Security

Security is an affirmative decision at design/implementation time rather than an afterthought.

- When REST capabilities are gained by class inheritance, the default behavior should be to provide NO access to the resource until the developer actively specifies who should receive access and under what conditions.
- Standardized implementations of SQL-related features minimize the surface for SQL injection attacks.
- Design should take into consideration the OWASP API Security Top 10 (see: https://owasp.org/www-project-api-security).

#### Sustainability

Uniformity of application design is a powerful tool for an enterprise application ecosystem. Rather than accumulating a set of diverse hand-coded REST APIs and implementations, we should have similar-looking REST APIs throughout our portfolio. This uniformity should lead to:

- Common debugging techniques
- Common testing techniques
- Common UI techniques for connecting to REST APIs
- Ease of developing composite applications accessing multiple APIs

The set of endpoints and the format of object representations provided/accepted over REST should be well-defined, such that we can automatically generate API documentation (e.g., Swagger/OpenAPI) based on these endpoints. Based on industry-standard API documentation, we should be able to generate portions of client code (e.g., TypeScript classes corresponding to our REST representations) using third-party/industry-standard tools.

Awesome!

@Timothy.Leavitt this is amazing! I'll be making use of it in my application :) I was looking into the OpenExchange description, and in the Tutorial and User Guide, I think the links are broken. I got a "Not found" message when I try to access the URLs.
https://openexchange.intersystems.com/docs/sample-phonebook.md
https://openexchange.intersystems.com/docs/user-guide.md

Thank you for your interest, and for pointing out that issue. I saw it after publishing and fixed it in GitHub right away. The Open Exchange updates from GitHub at midnight, so it should be all set now.

> minimum platform version of InterSystems IRIS 2018.1

Porting old apps with a framework available on the new version of the platform (IRIS) only - no contradictions here? :) Is there something fundamental preventing the framework from being used on Caché too?

Maybe I'm wrong, but I think the minimum requirement is there because you don't have %JSON.Adaptor on Caché.

%JSON.Adaptor is missing in Caché, but %JSON.Formatter was backported half a year ago; it is available on Open Exchange.

@Henrique.GonçalvesDias is right - that's the reason for the minimum requirement. IMO, getting an old app running on the new version of the platform is a relatively small effort compared to a Zen -> Angular migration (for example).
Hi @Timothy.Leavitt, I'm testing AppS.REST to create a new application, following the Tutorial and Sample steps in GitHub. I created a dispatch class:

```objectscript
Class NPM.REST.Handler Extends AppS.REST.Handler
{

ClassMethod AuthenticationStrategy() As %Dictionary.CacheClassname
{
    Quit ##class(AppS.REST.Authentication.PlatformBased).%ClassName(1)
}

ClassMethod GetUserResource(pFullUserInfo As %DynamicObject) As AppS.REST.Authentication.PlatformUser
{
    Quit ##class(AppS.REST.Authentication.PlatformUser).%New()
}

}
```

And a simple persistent class:

```objectscript
Class NPM.Model.Task Extends (%Persistent, %Populate, %JSON.Adaptor, AppS.REST.Model.Adaptor)
{

Parameter RESOURCENAME = "task";

Property RowID As %String(%JSONFIELDNAME = "_id", %JSONINCLUDE = "outputonly") [ Calculated, SqlComputeCode = {Set {*} = {%%ID}}, SqlComputed, Transient ];

Property TaskName As %String(%JSONFIELDNAME = "taskName");

/// Checks the user's permission for a particular operation on a particular record.
/// <var>pOperation</var> may be one of:
/// CREATE
/// READ
/// UPDATE
/// DELETE
/// QUERY
/// ACTION:<action name>
/// <var>pUserContext</var> is supplied by <method>GetUserContext</method>
ClassMethod CheckPermission(pID As %String, pOperation As %String, pUserContext As AppS.REST.Authentication.PlatformUser) As %Boolean
{
    Quit ((pOperation = "QUERY") || (pOperation = "READ") || (pOperation = "CREATE") || (pOperation = "UPDATE"))
}

}
```

But when I try the REST API using Postman (GET: http://localhost:52773/csp/npm-app-rest/api/task/1), I'm getting a 404 Not Found message. Am I doing something wrong or missing something? Thanks

@Henrique.GonçalvesDias, do you have a record with ID 1? If not, you can populate some data with the following (since you extend %Populate):

```objectscript
Do ##class(NPM.Model.Task).Populate(10)
```

Yes, I already populated the class.

Give CSPSystem user access to the database with a REST broker.

@Eduard.Lebedyuk is probably right on this.
If you add auditing for <PROTECT> events you'll probably see one before the 404.

I added auditing on everything, and the <PROTECT> error never showed up. So I started everything from scratch and found a typo in Postman. Thanks, @Eduard.Lebedyuk @Timothy.Leavitt

PS: Sorry, guys. I think not sleeping enough hours isn't good for your health and causes this kind of mistake.

This is really cool, and we will be using this in a big way. But I have encountered an issue I can't fix. I took one of my data classes (Data.DocHead) and had it inherit from AppS.REST.Model.Adaptor and %JSON.Adaptor, set the RESOURCENAME and other things, and tested using Postman, and it worked perfectly! Excellent!

Due to the need to have multiple endpoints for that class for different use cases, I figured I would set it up using AppS.REST.Model.Proxy, so I created a new class for the proxy, removed the inheritance in the data class (left %JSON.Adaptor), and deleted the RESOURCENAME and other stuff in the data class. I used the same RESOURCENAME in the proxy that I had used in the data class originally. When I compile the proxy class, I get the message:

ERROR #5001: Resource 'dochead', media type 'application/json' is already in use by class Data.DocHead
> ERROR #5090: An error has occurred while creating projection RestProxies.Data.DocHead:ResourceMap.

I've recompiled the entire application with no luck. So there must be a resource defined somewhere that is holding dochead as if it were still attached to Data.DocHead via a RESOURCENAME, but that parameter is not in that class anymore. How do I clear that resource so I can use it in the proxy?

@Richard.Schilke, I'm glad to hear that you're planning on using this, and we're grateful for your feedback. Quick fix should just be:

```objectscript
Do ##class(AppS.REST.ResourceMap).ModelClassDelete("Data.DocHead")
```

Background: metadata on REST resources and actions is kept in the AppS.REST.ResourceMap and AppS.REST.ActionMap classes.
These are maintained by projections and it seems there's an edge case where data isn't getting cleaned up properly. I've created a GitHub issue as a reminder to find and address the root cause: https://github.com/intersystems/apps-rest/issues/5 That did the trick - thank you so much! Best practice check: When I have a data class (like Data.DocHead) that will need multiple Mappings (Base, Expanded, Reports), then the recommended way is to use the proxy class and have a different proxy class for Data.DocHead for each mapping? For example, RESTProxies.Data.DocHead.Base.cls would be the proxy for the Base mapping in Data.DocHead, while RESTProxies.Data.DocHead.Expanded.cls would be the proxy for the Expanded mapping in Data.DocHead, etc. (the only difference might be the values for the JSONMAPPING and RESOURCENAME prameters)? I'm fine with that, just checking that you don't have some other clever way of doing that... @Timothy.Leavitt, I've run into another issue. The proxy is setup and working great for general GET access. But since my system is a multi-tenant, wide open queries are not a thing I can use, so I decided to try to use a defined class Query in the data class Lookups.Terms: Query ContactsForClientID(cClientOID As %String) As %SQLQuery{SELECT * FROM Lookups.TermsWHERE ClientID = :cClientOIDORDER BY TermsCode} Then I setup the Action Mapping in my proxy class RESTProxies.Lookups.Terms.Base: XData ActionMap [ XMLNamespace = "http://www.intersystems.com/apps/rest/action" ]{<actions xmlns="http://www.intersystems.com/apps/rest/action"><action name="byClientID" target="class" method="GET" modelClass="Lookups.Terms" query="Lookups.Terms:ContactsForClientID"><argument name="clientid" target="cClientOID" source="url"/></action></actions>} And I invoked this using this URL in a GET call using Postman (last part only): terms_base/$byClientID?clientid=290 And the result: 406 - Client browser does not accept the MIME type of the requested page. 
In the request, I verified that both Content-Type and Accept are set to application/json (snip from Postman). So what have I missed?

@Richard.Schilke, yes, having a separate proxy for each mapping would be best practice. You could also have Data.DocHead extend Adaptor for the primary use case and have proxies for the more niche cases (if one case is more significant - typically this would be the most complete representation).

What's the MEDIATYPE parameter in Lookups.Terms (the model class)? The Accept header should be set to that. Also, you shouldn't need to set Content-Type on a GET, because you're not supplying any content in the request. (It's possible that it's throwing things off.) If you can reproduce a simple case independent of your code (that you'd be comfortable to share), feel free to file a GitHub issue and I'll try to knock it out soon.

I'll also note - the only thing that really matters from the class query is the ID. If nothing else is using the query, you could just change it to SELECT ID FROM ... - it'll constitute the model instances based on that. (This is handy because it allows reuse of class queries with different representations.)

Good to know and, yes, very handy! Thank you! I posted an issue with my source to GitHub.

Surfaced another issue this weekend. (I remember when I used to take weekends off, but no whining!) So I have a multiple linked series of classes in Parent/Child relationships:

DocHead->DocItems->DocItemsBOM->DocItemsBOMSerial

So if I wanted to express all of this in a JSON object, I would need to make the "Default" mapping the one that exposes all the child properties, because it looks like I can't control the mapping of the child classes from the parent class. This doesn't bother me, as I had already written a shell that does this, and your Proxy/Adaptor makes it work even better, but I just wanted to check that the parent can't tell the child what proxy the child should use to display its JSON.
It's even more complicated than that, as sometimes I want to show DocHead->DocItems (and stop), while in other use cases I have to show DocHead, DocItems, and DocItemsBOM (and stop), while in other use cases I need the entire stack.

Thanks for posting - I'm taking a look now. This issue is starting to ring a bell; I think this looks like a bug we fixed in another branch internally to my team. (I've had reconciling the GitHub branch and our internal branch on my list for some time - I'll try to at least get this fix in soon.)

Re: customizing mappings of relationship/object properties, see https://docs.intersystems.com/healthconnectlatest/csp/docbook/Doc.View.cls?KEY=GJSON_adaptor#GJSON_adaptor_xdata_define - this is doable in %JSON.Adaptor mapping XData blocks via the Mapping attribute for an object-valued property included in the mapping.

Wow - I think that means I can handle all my use cases with that capability. Nice! Thanks again!

@Timothy.Leavitt, have you had a chance to see if this error I'm getting on Actions was resolved?

@Richard.Schilke, I'm planning to address it tomorrow or Friday. Keep an eye out for the next AppS.REST release on the Open Exchange - I'll reply again here too. (This will also include a fix for the other issue you reported; I've already merged a PR for that.)

@Timothy.Leavitt, I will be looking for it. I'm trying to do something with a custom header that I want to provide for the REST calls. Do I have access to the REST header somewhere in the service that I can pull the values from, like a %request?

And in something of an edge case, we're calling these REST services from an existing Zen application (for now, as we start a slow pull away from Zen), so the Zen app gets a %Session created for it, and then calls the REST service.
It seems that InterSystems is managing the license by recognizing that the browser has a session cookie, and it doesn't burn a license for the REST call - that's very nice (but I do have a request in to the WRC about whether that is expected behavior or not, so I don't get surprised if it gets "fixed"!). Does that mean your REST service can see that %Session? That would be very helpful, since we store the user/multi-tenant ID and other important things in there (the %Session, not the cookie).

@Richard.Schilke - on further review, it's an issue with the Action map. See my response in https://github.com/intersystems/apps-rest/issues/7 (and thank you for filing the issue!). I'll still create a new release soon to pick up the projection bug you found.

Regarding headers - you can reference %request anywhere in the REST model classes; it just breaks abstraction a bit. (And for the sake of unit testing, it would be good to behave reasonably if %request happens not to be defined, unless you're planning on using Forgery or equivalent.)

Regarding sessions - yes, you can share a session with a Zen application via a common session cookie path or using GroupById. You can reference this as needed as well, though I'd recommend wrapping any %session (or even %request) dependencies in the user context object that gets passed to CheckPermissions().

@Timothy.Leavitt - thanks so much for the response. The Action worked perfectly with your corrections! I will take your advice and work with the %session/headers in the context object, since that makes the most sense.

What are the plans (if any) to enable features in a resultset such as pagination, filters, and sorting? Users are horrible, aren't they? No matter what good work you do, they always want more! I appreciate what you have done here, and it will save my company probably hundreds of hours of work, plus it is very elegant...

@Richard.Schilke - great!
We have support for filtering/sorting on the collection endpoints already, though perhaps not fully documented. Pagination is a challenge from a REST standpoint, but I'd love to add support for it (perhaps in conjunction with "advanced search") at some point. I'm certainly open to ideas on the implementation there. :) Users are the best, because if you don't have them, it's all just pointlessly academic. ;)

@Timothy.Leavitt - stuck again. I'm in ClassMethod UserInfo, and found out some interesting things. First off, I was wrong about the REST service using the session cookie from the Zen application when it is called from the Zen application. Displaying the %session.SessionId parameters for each call shows that they are all different, and not the same as the SessionId of the Zen application calling the REST service. So the idea that it holds a license for 10 seconds can't be correct, as it seems almost immediate. I ran 20 REST calls to different endpoints in a loop, and I saw a single license increase.

You said I should be able to expose the session cookie of the Zen application, but I don't see a way to do that either. I can't even find a way to see the header data in the UserInfo ClassMethod of the current REST call. Sorry to be a pest... but since you're giving answers, I'll keep asking questions! Have a nice evening...

@Richard.Schilke, you should be able to share a session by specifying the same CSP session cookie path for your REST web application and the web application(s) through which your Zen pages are accessed. Alternatively, you could assign the web applications the same GroupById in their web application configuration. You likely also need to configure your REST handler class (your subclass of AppS.REST.Handler) to use CSP sessions (from your earlier description, I assumed you had). This is done by overriding the UseSession class parameter and setting it to 1 (instead of the default 0).
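A sketch of that UseSession override, reusing the handler class name from the earlier tutorial example (the class body shown is an assumption of what the override would look like, not code from this thread):

```objectscript
/// REST handler configured to participate in CSP sessions, so a session
/// can be shared with an existing Zen application. The web applications
/// involved must also share a session cookie path or GroupById in the
/// web application configuration.
Class NPM.REST.Handler Extends AppS.REST.Handler
{

/// 1 = use CSP sessions (overriding the default of 0)
Parameter UseSession = 1;

ClassMethod AuthenticationStrategy() As %Dictionary.CacheClassname
{
    Quit ##class(AppS.REST.Authentication.PlatformBased).%ClassName(1)
}

}
```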
To reference header data in the UserInfo classmethod, you should just be able to use %request (an instance of %CSP.Request) and %response (an instance of %CSP.Response) as appropriate for request/response headers.

💡 This article is considered as InterSystems Data Platform Best Practice.

How would the AppS.REST handler co-exist with a 'spec-first' approach, where the dispatch class should not be modified manually - only by re-importing the API spec? The AppS.REST user guide states: 'To augment an existing REST API with AppS.REST features, forward a URL from your existing REST handler to this subclass of AppS.REST.Handler.' How would this work in practice with the above? Thanks in advance.
Announcement
Anastasia Dyubaylo · Feb 24, 2021

New Video: Building X12 Interfaces in InterSystems IRIS

Hi Community, See how X12 SNIP validation can be used in the InterSystems IRIS data platform and how to create a fully functional X12 mapping in a single data transformation language (DTL): ⏯ Building X12 Interfaces in InterSystems IRIS Subscribe to InterSystems Developers YouTube and stay tuned!
Announcement
Anastasia Dyubaylo · Mar 1, 2021

InterSystems Grand Prix Contest: Vote for the Best Apps!

Hey Developers,

This week is a voting week for the InterSystems Grand Prix Contest! So, it's time to give your vote to the best solutions built with InterSystems IRIS.

🔥 You decide: VOTING IS HERE 🔥

How to vote and what's new?

- Any developer can vote for their application - votes will be counted in both Expert and Community nominations automatically (in accordance with the Global Masters level).
- All InterSystems employees can vote in both Expert and Community nominations.
- With the voting engine and algorithm for the Experts and Community nominations, you can select 3 projects: the 1st, the 2nd, and the 3rd place upon your decision.

This is how it works for the Community leaderboard:

Community Leaderboard:

| Place | Points |
| ----- | ------ |
| 1st   | 3      |
| 2nd   | 2      |
| 3rd   | 1      |

And there will be more complex math for the Experts leaderboard, where different levels of experts have more "points" power:

Experts Leaderboard:

| Level | 1st | 2nd | 3rd |
| ----- | --- | --- | --- |
| VIP level in GM, Moderators, Product Managers | 9 | 6 | 3 |
| Expert level in Global Masters | 6 | 4 | 2 |
| Specialist level in Global Masters | 3 | 2 | 1 |

Experts' votes will also contribute 3-2-1 points to the Community leaderboard.

Voting

1. Sign in to Open Exchange – DC credentials will work.
2. Make any valid contribution to the Developer Community – answer or ask questions, write an article, contribute applications on Open Exchange - and after 24 hours you'll be able to vote. Check this post on the options to make helpful contributions to the Developer Community.

If you changed your mind, cancel the choice and give your vote to another application – you have 7 days to choose!

Contest participants are allowed to fix bugs and make improvements to their applications during the voting week, so don't miss out - subscribe to application releases!

➡️ Also, please check out the full voting rules for InterSystems online contests here.

Hi Community, Don't forget to support the application you like!
➡️ Voting here ⬅️

Hi Developers, just a reminder that you can help each other by making pull requests and reporting bugs on GitHub to make the applications even greater! On Global Masters we award points:

- 500 points for a submitted pull request + 500 bonus points if the PR is merged
- 500 points for a bug report + 2,500 points more if the bug is fixed (closed)

More info about these challenges in this article.

Voting for the InterSystems Grand Prix Contest goes ahead! And here are the results at the moment:

Expert Nomination, Top 3:

1. vscode-intersystems-iris – 83
2. iris-rad-studio – 78
3. HealthInfoQueryLayer – 60

➡️ The leaderboard.

Community Nomination, Top 3:

1. vscode-intersystems-iris – 79
2. iris-rad-studio – 72
3. HealthInfoQueryLayer – 71

➡️ The leaderboard.

There were a lot of questions on how to vote: active contributing members of the Developer Community are eligible to vote - 24 hours after their first contribution. The contribution should be helpful to the community, to let DC moderators decide faster on the approval. Examples of helpful contributions are listed here.

Hello everyone, there are a lot of good applications here - good luck to every team, they deserve it!

Developers! Only 1 day left before the end of voting. Please check out the Contest Board and vote for the solutions you like! 👍🏼
Announcement
Anastasia Dyubaylo · Mar 9, 2021

InterSystems Grand Prix Contest: Give Us Your Feedback!

Hey developers, We want to hear from you! Give us your feedback on the InterSystems Grand Prix Contest! Please answer some questions to help us improve our contests. 👉🏼 Quick survey: InterSystems Grand Prix Contest Survey Or please share your thoughts in the comments to this post!
Announcement
Anastasia Dyubaylo · Apr 6, 2021

New Video: Working with FHIR Profiles in InterSystems IRIS for Health

Hi Community, Find out how to work with FHIR profiles and conformance resources when building FHIR applications on InterSystems IRIS for Health: ⏯ Working with FHIR Profiles in InterSystems IRIS for Health 👉🏼 Subscribe to InterSystems Developers YouTube. Enjoy and stay tuned!