Announcement
Janine Perkins · Apr 13, 2016

Featured InterSystems Online Course: Troubleshooting Productions

Learn how to start troubleshooting productions, with a focus on locating and understanding some of the key Ensemble Management Portal pages used when troubleshooting. Learn More.
Announcement
Evgeny Shvarov · Dec 24, 2017

The 2nd Year of InterSystems Developer Community!

Hi, Community!

I'm pleased to announce that as of December 2017 the InterSystems Developer Community has been up and running for 2 years! Together we did a lot this year, and a lot more is planned for the next one!

Our Community is growing: in November we had 3,700 registered members (2,200 last November), and 13,000 web users visited the site in November 2017 (7,000 last year).

Thank you for using it, thanks for making it useful, thanks for your knowledge, experience, and your passion!

And may I ask you to share in the comments the article or question that was most helpful for you this year?

Happy Birthday, Developer Community!

To start: for me the most helpful articles this year were REST FORMS Queries (yes, I'm using REST FORMS a lot, thanks [@Eduard.Lebedyuk]!) and Search InterSystems documentation using iKnow and iFind technologies.

A few helpful questions were mine (of course ;)):

- How to find duplicates in a large text field
- Storage Schema in VCS: to Store Or Not to Store?
- How to get the measure for the last day in a month in DeepSee

There were many interesting articles and discussions this year. I'd like to thank all of you who participated and helped our community grow. @Murray.Oldfield's series on InterSystems Data Platforms Capacity Planning and Performance was a highly informative read.
Article
Mikhail Khomenko · May 15, 2017

Making Prometheus Monitoring for InterSystems IRIS and Caché

Prometheus is one of the monitoring systems adapted for collecting time series data. Its installation and initial configuration are relatively easy. The system has a built-in graphic subsystem called PromDash for visualizing data, but the developers recommend using a free third-party product called Grafana. Prometheus can monitor a lot of things (hardware, containers, various DBMSs), but in this article I would like to take a look at the monitoring of a Caché instance (to be exact, it will be an Ensemble instance, but the metrics will be from Caché). If you are interested, read along.

In our extremely simple case, Prometheus and Caché will live on a single machine (Fedora Workstation 24 x86_64). Caché version:

```
%SYS>write $zv
Cache for UNIX (Red Hat Enterprise Linux for x86-64) 2016.1 (Build 656U) Fri Mar 11 2016 17:58:47 EST
```

Installation and configuration

Let's download a suitable Prometheus distribution package from the official site and save it to the /opt/prometheus folder. Unpack the archive, modify the template config file according to our needs, and launch Prometheus. By default, Prometheus displays its logs right in the console, which is why we will be saving its activity records to a log file.

Launching Prometheus:

```
# pwd
/opt/prometheus
# ls
prometheus-1.4.1.linux-amd64.tar.gz
# tar -xzf prometheus-1.4.1.linux-amd64.tar.gz
# ls
prometheus-1.4.1.linux-amd64  prometheus-1.4.1.linux-amd64.tar.gz
# cd prometheus-1.4.1.linux-amd64/
# ls
console_libraries  consoles  LICENSE  NOTICE  prometheus  prometheus.yml  promtool
# cat prometheus.yml
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
scrape_configs:
  - job_name: 'isc_cache'
    metrics_path: '/metrics/cache'
    static_configs:
      - targets: ['localhost:57772']
# ./prometheus > /var/log/prometheus.log 2>&1 &
[1] 7117
# head /var/log/prometheus.log
time="2017-01-01T09:01:11+02:00" level=info msg="Starting prometheus (version=1.4.1, branch=master, revision=2a89e8733f240d3cd57a6520b52c36ac4744ce12)" source="main.go:77"
time="2017-01-01T09:01:11+02:00" level=info msg="Build context (go=go1.7.3, user=root@e685d23d8809, date=20161128-09:59:22)" source="main.go:78"
time="2017-01-01T09:01:11+02:00" level=info msg="Loading configuration file prometheus.yml" source="main.go:250"
time="2017-01-01T09:01:11+02:00" level=info msg="Loading series map and head chunks..." source="storage.go:354"
time="2017-01-01T09:01:11+02:00" level=info msg="23 series loaded." source="storage.go:359"
time="2017-01-01T09:01:11+02:00" level=info msg="Listening on :9090" source="web.go:248"
```

The prometheus.yml configuration is written in YAML, which doesn't allow tabulation characters, so you should use spaces only. We have already mentioned that metrics will be downloaded from http://localhost:57772 and that we'll be sending requests to /metrics/cache (the name of the application is arbitrary), i.e. the destination address for collecting metrics will be http://localhost:57772/metrics/cache. A "job=isc_cache" tag will be added to each metric. A tag is, very roughly, the equivalent of WHERE in SQL. In our case it won't be used, but it will do just fine for more than one server. For example, names of servers (and/or instances) can be saved in tags, and you can then use tags to parameterize requests for drawing graphs.

Let's make sure that Prometheus is working (we can see the port it's listening on in the output above: 9090). A web interface opens, which means that Prometheus is working.
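As a sketch of the tag idea just mentioned: in a multi-server setup, the scrape config could attach a per-target label that later parameterizes graphs. The hostnames and label values below are hypothetical, not part of this article's setup:

```
scrape_configs:
  - job_name: 'isc_cache'
    metrics_path: '/metrics/cache'
    static_configs:
      - targets: ['cache-prod-1:57772']
        labels:
          server: 'prod-1'
      - targets: ['cache-prod-2:57772']
        labels:
          server: 'prod-2'
```

A query such as isc_cache_dashboard_csp_sessions{server="prod-1"} would then select a single instance, much like a WHERE clause.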
However, it doesn't see Caché metrics yet (let's check by clicking Status → Targets):

Preparing metrics

Our task is to make metrics available to Prometheus in a suitable format at http://localhost:57772/metrics/cache. We'll be using the REST capabilities of Caché because of their simplicity. It should be noted that Prometheus only "understands" numeric metrics, so we will not export string metrics. To obtain the metrics, we will use the API of the SYS.Stats.Dashboard class. These metrics are used by Caché itself for displaying the System toolbar. An example of the same in the Terminal:

```
%SYS>set dashboard = ##class(SYS.Stats.Dashboard).Sample()

%SYS>zwrite dashboard
dashboard=<OBJECT REFERENCE>[2@SYS.Stats.Dashboard]
+----------------- general information ---------------
| oref value: 2
| class name: SYS.Stats.Dashboard
| reference count: 2
+----------------- attribute values ------------------
| ApplicationErrors = 0
| CSPSessions = 2
| CacheEfficiency = 2385.33
| DatabaseSpace = "Normal"
| DiskReads = 14942
| DiskWrites = 99278
| ECPAppServer = "OK"
| ECPAppSrvRate = 0
| ECPDataServer = "OK"
| ECPDataSrvRate = 0
| GloRefs = 272452605
| GloRefsPerSec = "70.00"
| GloSets = 42330792
| JournalEntries = 16399816
| JournalSpace = "Normal"
| JournalStatus = "Normal"
| LastBackup = "Mar 26 2017 09:58AM"
| LicenseCurrent = 3
| LicenseCurrentPct = 2
. . .
```

The USER namespace will be our sandbox. To begin with, let's create a REST application /metrics. To add some very basic security, let's protect our log-in with a password and associate the web application with a resource; let's call it PromResource. We need to disable public access to the resource, so let's do the following:

```
%SYS>write ##class(Security.Resources).Create("PromResource", "Resource for Metrics web page", "")
1
```

Our web app settings:

We will also need a user with access to this resource. The user should also be able to read from our database (USER in our case) and save data to it.
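As noted above, Prometheus consumes a simple line-oriented text format: an optional `# HELP` comment followed by a `metric_name value` line, with every line terminated by `\n`. Here is a minimal, hypothetical Python sketch of that formatting rule (the function name and sample metric are illustrative, not part of any Caché API):

```python
def format_metric(name: str, value: float, help_text: str = "") -> str:
    """Render one metric in the Prometheus text exposition format."""
    lines = []
    if help_text:
        # HELP lines must be single-line, so fold any newlines into spaces.
        lines.append("# HELP %s %s" % (name, " ".join(help_text.split())))
    lines.append("%s %s" % (name, value))
    # Every line, including the last, must end with a line feed.
    return "\n".join(lines) + "\n"

print(format_metric("isc_cache_dashboard_csp_sessions", 2,
                    "Most recent number of CSP sessions."), end="")
```

The REST class shown further below does the same thing in ObjectScript, folding multi-line property descriptions into single HELP lines.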
Apart from this, the user will need read rights for the CACHESYS system database, since we will be switching to the %SYS namespace later in the code. We'll follow the standard scheme: create a PromRole role with these rights, then create a PromUser user assigned to this role. For the password, let's use "Secret":

```
%SYS>write ##class(Security.Roles).Create("PromRole","Role for PromResource","PromResource:U,%DB_USER:RW,%DB_CACHESYS:R")
1
%SYS>write ##class(Security.Users).Create("PromUser","PromRole","Secret")
1
```

It is this user, PromUser, that we will use for authentication in the Prometheus config. Once done, we'll re-read the config by sending a SIGHUP signal to the server process.

A safer config:

```
# cat /opt/prometheus/prometheus-1.4.1.linux-amd64/prometheus.yml
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
scrape_configs:
  - job_name: 'isc_cache'
    metrics_path: '/metrics/cache'
    static_configs:
      - targets: ['localhost:57772']
    basic_auth:
      username: 'PromUser'
      password: 'Secret'
#
# kill -SIGHUP $(pgrep prometheus) # or kill -1 $(pgrep prometheus)
```

Prometheus can now successfully pass authentication for using the web application with metrics. Metrics will be provided by the my.Metrics request processing class. Here is the implementation:

```
Class my.Metrics Extends %CSP.REST
{

Parameter ISCPREFIX = "isc_cache";

Parameter DASHPREFIX = {..#ISCPREFIX_"_dashboard"};

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/cache" Method="GET" Call="getMetrics"/>
</Routes>
}

/// Output should obey the Prometheus exposition formats. Docs:
/// https://prometheus.io/docs/instrumenting/exposition_formats/
///
/// The protocol is line-oriented. A line-feed character (\n) separates lines.
/// The last line must end with a line-feed character. Empty lines are ignored.
ClassMethod getMetrics() As %Status
{
    set nl = $c(10)
    do ..getDashboardSample(.dashboard)
    do ..getClassProperties(dashboard.%ClassName(1), .propList, .descrList)
    for i=1:1:$ll(propList) {
        set descr = $lg(descrList,i)
        set propertyName = $lg(propList,i)
        set propertyValue = $property(dashboard, propertyName)
        // Prometheus supports time series database
        // so if we get empty (for example, backup metrics) or non-digital metrics
        // we just omit them.
        if ((propertyValue '= "") && ('$match(propertyValue, ".*[-A-Za-z ]+.*"))) {
            set metricsName = ..#DASHPREFIX_..camelCase2Underscore(propertyName)
            set metricsValue = propertyValue
            // Write description (help) for each metrics.
            // Format is that the Prometheus requires.
            // Multiline descriptions we have to join in one string.
            write "# HELP "_metricsName_" "_$replace(descr,nl," ")_nl
            write metricsName_" "_metricsValue_nl
        }
    }
    write nl
    quit $$$OK
}

ClassMethod getDashboardSample(Output dashboard)
{
    new $namespace
    set $namespace = "%SYS"
    set dashboard = ##class(SYS.Stats.Dashboard).Sample()
}

ClassMethod getClassProperties(className As %String, Output propList As %List, Output descrList As %List)
{
    new $namespace
    set $namespace = "%SYS"
    set propList = "", descrList = ""
    set properties = ##class(%Dictionary.ClassDefinition).%OpenId(className).Properties
    for i=1:1:properties.Count() {
        set property = properties.GetAt(i)
        set propList = propList_$lb(property.Name)
        set descrList = descrList_$lb(property.Description)
    }
}

/// Converts metrics name in camel case to underscore name with lower case
/// Sample: input = WriteDaemon, output = _write_daemon
ClassMethod camelCase2Underscore(metrics As %String) As %String
{
    set result = metrics
    set regexp = "([A-Z])"
    set matcher = ##class(%Regex.Matcher).%New(regexp, metrics)
    while (matcher.Locate()) {
        set result = matcher.ReplaceAll("_"_"$1")
    }
    // To lower case
    set result = $zcvt(result, "l")
    // _e_c_p (_c_s_p) to _ecp (_csp)
    set result = $replace(result, "_e_c_p", "_ecp")
    set result = $replace(result, "_c_s_p", "_csp")
    quit result
}

}
```

Let's use the console to check that our efforts have not been in vain (we added the --silent flag so that curl doesn't clutter the output with its progress bar):

```
# curl --user PromUser:Secret --silent -XGET 'http://localhost:57772/metrics/cache' | head -20
# HELP isc_cache_dashboard_application_errors Number of application errors that have been logged.
isc_cache_dashboard_application_errors 0
# HELP isc_cache_dashboard_csp_sessions Most recent number of CSP sessions.
isc_cache_dashboard_csp_sessions 2
# HELP isc_cache_dashboard_cache_efficiency Most recently measured cache efficiency (Global references / (physical reads + writes))
isc_cache_dashboard_cache_efficiency 2378.11
# HELP isc_cache_dashboard_disk_reads Number of physical block read operations since system startup.
isc_cache_dashboard_disk_reads 15101
# HELP isc_cache_dashboard_disk_writes Number of physical block write operations since system startup
isc_cache_dashboard_disk_writes 106233
# HELP isc_cache_dashboard_ecp_app_srv_rate Most recently measured ECP application server traffic in bytes/second.
isc_cache_dashboard_ecp_app_srv_rate 0
# HELP isc_cache_dashboard_ecp_data_srv_rate Most recently measured ECP data server traffic in bytes/second.
isc_cache_dashboard_ecp_data_srv_rate 0
# HELP isc_cache_dashboard_glo_refs Number of Global references since system startup.
isc_cache_dashboard_glo_refs 288545263
# HELP isc_cache_dashboard_glo_refs_per_sec Most recently measured number of Global references per second.
isc_cache_dashboard_glo_refs_per_sec 273.00
# HELP isc_cache_dashboard_glo_sets Number of Global Sets and Kills since system startup.
isc_cache_dashboard_glo_sets 44584646
```

We can now check the same in the Prometheus interface. And here is the list of our metrics. We won't focus on viewing them in Prometheus; you can select the necessary metric and click the "Execute" button.
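The camelCase-to-underscore conversion used for metric names above is easy to prototype outside of Caché. Here is a rough Python equivalent of camelCase2Underscore, offered only as a sketch of the same renaming rule (including the _e_c_p/_c_s_p fix-ups):

```python
import re

def camel_to_underscore(name: str) -> str:
    """Mimic camelCase2Underscore: WriteDaemon -> _write_daemon."""
    # Prefix every capital letter with an underscore, then lowercase everything.
    result = re.sub(r"([A-Z])", r"_\1", name).lower()
    # Collapse letter-by-letter acronyms the same way the article's class does.
    result = result.replace("_e_c_p", "_ecp")
    result = result.replace("_c_s_p", "_csp")
    return result

print(camel_to_underscore("WriteDaemon"))    # _write_daemon
print(camel_to_underscore("ECPAppSrvRate"))  # _ecp_app_srv_rate
```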
Select the "Graph" tab to see the graph (it shows the cache efficiency):

Visualization of metrics

For visualization purposes, let's install Grafana. For this article, I chose installation from a tarball; however, there are other installation options, from packages to a container. Let's perform the following steps (after creating the /opt/grafana folder and switching to it). Let's leave the config unchanged for now. As the last step, we launch Grafana in background mode. We'll be saving Grafana's log to a file, just like we did with Prometheus:

```
# ./bin/grafana-server > /var/log/grafana.log 2>&1 &
```

By default, Grafana's web interface is accessible on port 3000. Login/password: admin/admin. A detailed instruction on making Prometheus work with Grafana is available here. In short, we need to add a new Data Source of the Prometheus type; select your option for direct/proxy access.

Once done, we need to add a dashboard with the necessary panels. The test sample of a dashboard is publicly available, along with the code of the metrics collection class. A dashboard can simply be imported into Grafana (Dashboards → Import). We'll get the following after import:

Save the dashboard. The time range and update period can be selected in the top right corner.

Examples of monitoring types

Let's test the monitoring of calls to globals:

```
USER>for i=1:1:1000000 {set ^prometheus(i) = i}

USER>kill ^prometheus
```

We can see that the number of references to globals per second has increased, while cache efficiency dropped (the ^prometheus global hadn't been cached yet). Let's check our license usage.
To do this, let's create a primitive CSP page called PromTest.csp in the USER namespace:

```
<html>
<head><title>Prometheus Test Page</title></head>
<body>Monitoring works fine!</body>
</html>
```

And visit it a number of times (we assume that the /csp/user application is not password-protected):

```
# ab -n77 http://localhost:57772/csp/user/PromTest.csp
```

We'll see the following picture for license usage:

Conclusions

As we can see, implementing the monitoring functionality is not hard at all. Even after a few initial steps, we can get important information about the work of the system, such as license usage, the efficiency of globals caching, and application errors. We used SYS.Stats.Dashboard for this tutorial, but other classes of the SYS, %SYSTEM and %SYS packages also deserve attention. You can also write your own class that supplies custom metrics for your own application, for instance the number of documents of a particular type. Some useful metrics will eventually be compiled into a separate template for Grafana.

To be continued

If you are interested in learning more about this, I will write more on the subject. Here are my plans:

- Preparing a Grafana template with metrics for the logging daemon. It would be nice to make some sort of graphical equivalent of the ^mgstat tool, at least for some of its metrics.
- Password protection for web applications is good, but it would be nice to verify the possibility of using certificates.
- Use of Prometheus, Grafana and some exporters for Prometheus as Docker containers.
- Use of discovery services for automatically adding new Caché instances to the Prometheus monitoring list. It's also where I'd like to demonstrate (in practice) how convenient Grafana and its templates are. This is something like dynamic panels, where metrics for a particular selected server are shown, all on the same dashboard.
- Prometheus Alert Manager.
- Prometheus configuration settings related to the duration of storing data, as well as possible optimizations for systems with a large number of metrics and a short statistics collection interval.
- Various subtleties and nuances that will transpire along the way.

Links

During the preparation of this article, I visited a number of useful sites and watched a great deal of videos:

- Prometheus project website
- Grafana project website
- Blog of one of the Prometheus developers, Brian Brazil
- Tutorial on DigitalOcean
- Some videos from Robust Perception
- Many videos from a conference devoted to Prometheus

Thank you for reading all the way down to this line!

Hi Mikhail, very nice! And very useful. Have you done any work on templating, e.g. so one or more systems may be selected to be displayed on one dashboard?

Hi, Murray! Thanks for your comment! I'll try to describe templating soon; I should add it to my plan. For now, an article about ^mgstat is almost ready (in my head :)). So stay tuned for more.

Hi, Murray! You can read the continuation of the article here. Templates are used there.

Hi Mikhail, very nice! I am configuring the process here. Where are the metric values stored? What kind of metrics can be added via SetSensor? Only counters, or also gauges and histograms?

Metrics in this approach are stored in Prometheus. As Prometheus is a time-series database, you can store any numeric metric there, either counter or gauge.

Hmm... OK, I understand that part, but if I extend %SYS.Monitor.SAM.Abstract (https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_rest) and the SetTimer method, do you know where it is stored?

I'm not sure, but I think the SAM implementation is based on System Monitor (https://docs.intersystems.com/iris20194/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_healthmon) in some fashion. Could you clarify your task? Do you want to know the exact name of the global where samples are stored?
For what? The link you've shared describes how to expose your custom numeric metric, which could then be read by Prometheus, for instance, and stored there.

💡 This article is considered an InterSystems Data Platform Best Practice.
Question
Mike Kadow · Jan 15, 2018

Access to the InterSystems Class Database with the Atelier IDE?

How can I access the InterSystems Class Database with the Atelier IDE? Say I want access to the Samples database and namespace?

I'm not clear what you mean by "access the Class Database". Connect Atelier to the SAMPLES namespace and you'll be able to view/edit classes from the SAMPLES database. Please expand on what else you're trying to do.

John, maybe I was asking two separate things. When I bring up the InterSystems documentation, there is an option to go to the Class Reference. When I select that option I can see all classes in the InterSystems database; that is what I am really wanting to do. I put the comment in about Samples as an afterthought, not really thinking that it does not apply to the other question.

OK, I realize I am not being clear; often I say things before I think them through. When I bring up the InterSystems documentation, I can select the Class Reference option. From Studio I can look up the classes that are in the Class Reference option. I tried to do the same thing in Atelier and was unable to find the command to browse through all the classes I see in the Class Reference option. That is what I am trying to do. I hope that is clear.

You can also see this information in the Atelier Documentation view as you are moving focus within a class. If you do not see this view you can launch it by selecting Window > Show View > Other > Atelier > Atelier Documentation > Open. For example, I opened Sample.Person on my local Atelier client, selected the tab at the bottom for Atelier Documentation, then clicked on "%Populate" in the list of superclasses. Now I can see this in the Atelier Documentation view:

Hey, Nicole, that's excellent!!! Whatever I click on shows up in the "Atelier Documentation" tab. Thanks for the hint!
Easier: in the Server Explorer tab, click ADD NEW Server Connection (green cross).

OK, you are looking for something different than I understood. The CLASS REFERENCE to DocBook seems not to be directly available as it is in Studio, only by external access to the documentation. Part of it is available if you have a class in your editor and move your cursor over a class name: then you get a volatile class description that you can nail down by clicking or pressing <F2>. It's pretty similar to the DocBook version EXCEPT that you have no further references (e.g. data types or %Status or ...). So it's not multi-level navigation like in the browser! For illustration I have done this for %Persistent. For %Populate, %XML.Adaptor, etc. you have to do it again and again.
Announcement
Andreas Dieckow · Jul 11, 2018

InterSystems products running on AIX on Power 9

InterSystems products (InterSystems IRIS, Caché and Ensemble) support AIX on Power 9 chips starting with:

- Caché/Ensemble 2017.1.3
- InterSystems IRIS 2018.1.1
Question
Tony Beukes · Jul 8, 2018

Terminal access to the InterSystems IRIS Experience sandbox

Is Terminal access to the InterSystems IRIS Experience sandbox available?

Are you looking for full ssh/bash access into the container, or would an interactive InterSystems Terminal session do? To run the SQL shell or other database-specific commands? Full ssh is difficult, as it opens up potential security issues. The InterSystems Terminal would be possible; we have a web-accessible version that is in testing right now that we could add. If you clarify what you are looking to do, we can see what meets the needs best. Doug

Thanks Doug, it would be great if we could have access via the InterSystems Terminal. Any idea when we could expect it to be available?

Good news: we have updated the Direct Access to InterSystems IRIS container to include a terminal link in the management portal. When you launch your InterSystems IRIS instance, you will get a set of links to that instance. Use the Management Portal link and log in with the username/password provided. Then on the home page of the management portal, you will see a "Terminal" link in the "Links" section. When you click on that link, you will need to enter the username/password again, but then you will be in an interactive terminal session that defaults to the USER namespace. This is the same as an `iris session iris -U USER` at the shell, or the "Terminal" menu option in the launcher. Please let us know if you have any other suggestions or requests, as we want to make it easy to test out and learn InterSystems IRIS functionality.
Announcement
Anastasia Dyubaylo · Jul 16, 2018

Group: Prague Meetup for InterSystems Data Platform

Hi Community!

Are you a user or developer working with Caché, Ensemble or other InterSystems products? A healthcare or banking IT professional? Or just a developer seeking new challenges? Come and join us to discuss what's up once you are in Prague, Czech Republic, or nearby! We'll share news and experience on how to develop modern big-data, multi-model oriented applications. Please feel free to ask your questions about the InterSystems Meetup group in Prague. @Daniel.Kutac and @Ondrej.Hoferek will provide details.
Announcement
Jeff Fried · Sep 29, 2018

InterSystems Caché and Ensemble 2018.1 Release

InterSystems is pleased to announce that InterSystems Caché and Ensemble 2018.1 are now released!

New in these releases are features that improve security and operations, including:

- Key Management Interoperability Protocol (KMIP) support
- Microsoft Volume Shadow Copy (VSS) integration
- Integrated Windows Authentication support for HTTP
- SSH enhancements

The releases also include many important updates and fixes. See the documentation for release notes and upgrade guides for both Caché and Ensemble. The platforms for these releases are the same as for 2017.2; full details can be found in this supported platforms document.
Announcement
Mike Morrissey · Mar 7, 2018

InterSystems Launches FHIR® Sandbox at HIMSS18

The InterSystems FHIR® Sandbox is a virtual testing environment that combines HealthShare technology with synthetic patient data and open source and commercial SMART on FHIR apps, to allow users to play with FHIR functionality.The sandbox is designed to enable developers and innovators to connect and test their own DSTU2 apps to multi-source health records hosted by the latest version of HealthShare. Share your experience with others or ask questions here in the FHIR Implementers Group. Click here to access the InterSystems FHIR® Sandbox.
Announcement
Stefan Cronje · Oct 6, 2018

Debug Stack for InterSystems Cache, Ensemble and IRIS

Hey folks,

I've shared a debug stack we created on the Open Exchange. I want to post the link here, but need the link to this article for the Open Exchange. Which came first, the chicken or the egg? The GitHub link: https://github.com/stefanc82/Cache-Debug-Stack

Please show a sample output.

The repo contains an example. Here is an example of exporting the stack to a string in the terminal:

```
DEV>set sc = ##class(Examples.DebugStack).TestDebugStack()
Examples.DebugStack TestDebugStack Calling Method InnerStackTest with value: 5
| |- Examples.DebugStack TestInnterStack pVal argument: 5
| |- Examples.DebugStack TestInnterStack tMyVal: 15
| |- Examples.DebugStack TestInnterStack Calling TestThirdLevelStack with tMyVal: 15
| | |- Examples.DebugStack TestThirdLevelStack pVal argument: 15
| | |- Examples.DebugStack TestThirdLevelStack tFinalVal: 35
| |- Examples.DebugStack TestInnterStack TestThirdLevelStack completed OK
Examples.DebugStack TestDebugStack TestInnerStack completed OK
DEV>
```

It will be more readable if placed in a text file or a CSV; the "columns" are tab-delimited. It has the option of providing output to a string or a global character stream.

The "egg" is available now on Open Exchange )
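Since the stack columns are tab-delimited, converting a captured stack to CSV takes only a few lines in most languages. A hedged Python sketch (the sample line is illustrative, not the tool's exact output):

```python
import csv
import io

def tabs_to_csv(text: str) -> str:
    """Convert tab-delimited debug-stack lines to CSV."""
    out = io.StringIO()
    writer = csv.writer(out)
    for line in text.splitlines():
        writer.writerow(line.split("\t"))
    return out.getvalue()

sample = "Examples.DebugStack\tTestDebugStack\tCalling method with value: 5"
print(tabs_to_csv(sample), end="")
```

The csv module also takes care of quoting any field that happens to contain a comma.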
Article
Eduard Lebedyuk · Oct 8, 2018

Configuring Apache to work with InterSystems products on Linux

InterSystems products (IRIS, Caché, Ensemble) already include a built-in Apache web server. But the built-in server is designed for development and administration tasks and thus has certain limitations. Though you may find some useful workarounds for these limitations, the more common approach is to deploy a full-scale web server for your production environment. This article describes how to set up Apache to work with InterSystems products and how to provide HTTPS access. We will be using Ubuntu, but the configuration process is almost the same for all Linux distributions.

Installing Apache

I assume that you have already installed, let's say, Caché to /InterSystems/Cache. By default, Caché is supplied with a plugin for Apache, so you can simply go to /InterSystems/Cache/csp/bin and select the corresponding file:

- CSPa24.so (Apache Version 2.4.x)
- CSPa22.so (Apache Version 2.2.x)
- CSPa20.so (Apache Version 2.0.x)
- CSPa.so (Apache Version 1.3.x)

If several are available, it's better to choose the latest one. Now it's time to install Apache. For Ubuntu, execute:

```
apt-get update
apt-get install apache2 zlib1g-dev
```

In some OSes there's no apache2 package (you'd get an "unable to locate package" error) and it's named httpd instead. If there's no Apache build for your OS in the default repositories, I recommend searching for a repository rather than building Apache yourself. While that's possible, it's fairly challenging, and binaries can generally be found. Once the installation is complete, make sure that the installed Apache version meets the requirements:

```
apache2 -v
```

Then open the full list of modules to check whether 'mod_so' is on the list:

```
apache2 -l
```

So, Apache is now up and running.
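If you are scripting the setup, the version check can be automated by parsing the `apache2 -v` banner and deriving the matching CSP plugin name (CSPa24.so for 2.4.x, and so on). A small Python sketch; the banner string is a typical sample, not captured live:

```python
import re

# Typical first line of `apache2 -v` output (sample value).
banner = "Server version: Apache/2.4.18 (Ubuntu)"

match = re.search(r"Apache/(\d+)\.(\d+)\.(\d+)", banner)
major, minor, patch = (int(g) for g in match.groups())

# The CSP plugin file must match the major.minor series.
plugin = "CSPa%d%d.so" % (major, minor)
print(plugin)  # CSPa24.so
```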
To test your Apache installation, type the IP address of your server in a web browser and see if the default page appears.

Apache config structure

Again, this can change depending on your OS and Apache version, but usually Apache has the following configuration structure:

- apache2.conf: the main config file; it loads first and loads everything else
- envvars: default environment variables for apache2ctl
- ports.conf: the ports that Apache listens on (usually 80 and 443)
- conf-available/conf-enabled: configuration snippets that are available or enabled
- mods-available/mods-enabled: modules (for example, we need to add the CSP module to work with InterSystems products)
- sites-available/sites-enabled: sites (i.e. on port 80, on port 443, on host test.domain.com and so on)

There are also several utilities to run and configure Apache:

- apache2: the main program (can be used to run apache2 in debug mode)
- apache2ctl: the Apache control interface
- a2enconf, a2disconf: enable or disable an apache2 configuration file
- a2enmod, a2dismod: enable or disable an apache2 module
- a2ensite, a2dissite: enable or disable an apache2 site / virtual host
- service apache2: manages apache2 in service mode

Linking CSP to Apache

Now that Apache is up and running, let's connect it to Caché. To do this, make some changes to the Apache configuration. Edit the following files:

1. /etc/apache2/envvars contains the environment variables. Set the variables APACHE_RUN_USER and APACHE_RUN_GROUP to cacheusr (or irisusr for InterSystems IRIS).

2. In the mods directory /etc/apache2/mods-available, create a new mod: a file named CSP.load:

```
CSPModulePath /InterSystems/Cache/csp/bin/
LoadModule csp_module_sa /InterSystems/Cache/csp/bin/CSPa24.so
AddHandler csp-handler-sa csp cls cxw zen
```

3. Enable the CSP module using the a2enmod utility.

4. Create a new site in /etc/apache2/sites-available named Cache80.conf:

```
<VirtualHost _default_:80>
    ServerName test.domain.com
    DocumentRoot "/InterSystems/Cache/csp"
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    LogLevel info
    <Location />
        CSP On
        SetHandler csp-handler-sa
    </Location>
    <Location "/csp/bin/Systems/">
        SetHandler csp-handler-sa
    </Location>
    <Location "/csp/bin/RunTime/">
        SetHandler csp-handler-sa
    </Location>
    DirectoryIndex index.csp index.html index.htm
</VirtualHost>
```

5. Using a2dissite/a2ensite, disable the old site on port 80 and enable the Cache site.

6. Restart Apache:

```
service apache2 restart
```

7. Open http://<ip>/csp/sys/UtilHome.csp and see if the system management portal appears.

If Apache didn't start, check the logs; they usually point to the problem line in the configuration. Run `apachectl configtest` to check your configuration. If there's no pointer to the error, or configtest says that everything is fine, try starting Apache in the foreground:

```
apache2 -DFOREGROUND -e debug
```

If you're getting an "application path" error, it means that Apache is unable to access the csp/bin directory and the CSP.ini file inside.

SSL

Now let's set up HTTPS. To do this, you need a domain name; they start at $2/year. After you have bought your domain name and configured a DNS A record to point to your IP address, go to any certificate authority for certificates. I'll show how it can be done with Let's Encrypt for free.

1. Using a2dissite/a2ensite, disable the Cache site on port 80 and enable the default site.

2. Restart Apache:

```
service apache2 restart
```

3. Follow these instructions (I recommend forcing the http->https redirect).

4.
Create a new site combining the SSL configuration and the Cache site:

```
<IfModule mod_ssl.c>
<VirtualHost _default_:443>
    ServerName test.domain.com
    DocumentRoot "/InterSystems/Cache/csp"
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    LogLevel info
    <Location />
        CSP On
        SetHandler csp-handler-sa
    </Location>
    <Location "/csp/bin/Systems/">
        SetHandler csp-handler-sa
    </Location>
    <Location "/csp/bin/RunTime/">
        SetHandler csp-handler-sa
    </Location>
    #DirectoryIndex index.csp index.php index.html index.htm
    DirectoryIndex index.html
    SSLCertificateFile /etc/letsencrypt/live/test.domain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/test.domain.com/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
    ServerAlias test.domain.com
</VirtualHost>
</IfModule>
```

Make sure that 'ServerName' matches the 'commonName' parameter in the server certificate, and that the paths to all keys (the server key 'SSLCertificateKeyFile' and the server certificate 'SSLCertificateFile' issued by the certificate authority) are valid.

5. Enable it: disable the default SSL site and enable the Cache SSL site.

6. Restart Apache:

```
service apache2 restart
```

7. Check whether the system management portal is now available at https://<ip>/csp/sys/UtilHome.csp.

8. Enable auto-renewal of the certificate. Edit the crontab with `crontab -e` and add this line:

```
0 0 * * 0 certbot renew
```

Summary

It's easy to start running an Apache server with InterSystems products. Increase security by implementing HTTPS on your sites.

Links

- Documentation
- Apache documentation

The configuration structure for Apache is actually much more dependent on which distribution you are using than on the OS. There are pretty huge differences in the way Ubuntu/SuSE/RH/CentOS set things up, even down to some of them calling the system service apache vs apache2 vs httpd. There are also differences, with some of them using systemd vs still the old sysvinit. The good news is, you can always re-arrange your Apache config to your liking.
Please also note that setting

~~~
CSP On
SetHandler csp-handler-sa
~~~

is redundant. CSP On on a path already maps all file types to be handled by the gateway, so adding the SetHandler doesn't add anything. If you omit CSP On but use csp-handler-sa as detailed above, the gateway will not serve static files. Either you will need to add them (OK for low-traffic sites), or you will need to configure Apache to serve the static files directly (if your Apache is not on the same machine as your instance, you will need to copy the static files or mount a network drive).

Thanks for the info, Fabian. I myself am a fan of the Ubuntu structure; do you know how to get that automatically on other OSes, primarily CentOS?
Announcement
Evgeny Shvarov · Oct 9, 2018

InterSystems Open Exchange - New Service for Developers!

Hi, Community!

I'm pleased to announce that InterSystems Open Exchange is now live and available to everyone!

InterSystems Open Exchange is a gallery of software solutions, technology examples, and frameworks developed with InterSystems Data Platforms: Caché, Ensemble, HealthShare, InterSystems IRIS, or InterSystems IRIS for Health. It is also a place where you can find tools and solutions that help you develop, deploy, and support solutions built with InterSystems Data Platforms.

Open Exchange is the new online service for developers on InterSystems Data Platforms, joining Online Learning, Documentation, and Community! Find the solutions and examples you need, upload the tools and code snippets you developed, and let InterSystems Data Platforms technology be more helpful for your business and software development!

Please feel free to share your feedback here on the Developer Community with the Open Exchange tag, and you can report bugs and enhancement requests in this public repository of Open Exchange.

Stay tuned!
Article
Evgeny Shvarov · Oct 18, 2018

How to Publish an Application on InterSystems Open Exchange?

Hi, Community!

As you know, we launched InterSystems Open Exchange, the marketplace for solutions and tools on InterSystems Data Platforms! But how do you publish your application on OE? Before we start, let me answer a few basic questions.

Who can publish?

Basically, everyone. You can sign in to Open Exchange with your InterSystems Developer Community or WRC account.

What is an application?

An Open Exchange application is a solution, tool, interoperability adapter, or interface developed using any InterSystems Data Platforms product: Caché, Ensemble, HealthShare, InterSystems IRIS, or InterSystems IRIS for Health. Alternatively, it is a tool or solution that helps in the development, testing, deployment, or management of solutions on InterSystems Data Platforms.

What does an application look like on Open Exchange? Essentially, it is the name, the description, and a set of links to the application's entries: download page, documentation, code repository (if any), license, etc.

Let me illustrate the process with a personal example.

Submitting the Application to Open Exchange

In order to illustrate the procedure, I developed an application in ObjectScript for InterSystems IRIS and want to share it with the Developer Community: Ideal ObjectScript. It demonstrates the ideal usage of ObjectScript coding guidelines for various ObjectScript use cases.

There are mandatory fields which need to be present for every Open Exchange application:

1. Name: a name for the application, unique within Open Exchange.
2. Description: a description of the application. The field supports markdown.
3. Product URL: the link to the download page of your application.
4. License: the link to the page which shows the license for your application.
5. InterSystems Data Platforms: the set of InterSystems Data Platforms your application is intended for.

All the rest of the fields are optional.

So, let's submit my app. I have:

Name: Ideal ObjectScript
Description: Ideal ObjectScript demonstrates the ideal usage of InterSystems ObjectScript coding guidelines for various ObjectScript use cases.
Product URL: https://github.com/evshvarov/ideal_objectscript/releases/tag/1.0 (the link to version 1.0 in the GitHub releases section of the app)
License URL: https://github.com/evshvarov/ideal_objectscript/blob/1.0/LICENSE (the link to the LICENSE file of the app)
InterSystems Data Platforms: the application supports InterSystems IRIS, Caché, and Ensemble; this is the list of InterSystems products I tested the application with myself.

With that, we are ready to submit the app.

Application version

Once you click Send For Approval, you need to provide the version of the app and release notes. We use Semver for versioning. The release notes will be published in the Open Exchange News, DC social media, and the version history section of the app. After that, the application enters the approval workflow, which results either in approval and automatic publishing on OpEx, or in some recommendations on how to correct the application's description and links.

Enter additional parameters

Image URL: place the URL of an image icon for your app to have it displayed on a tile. You can omit this, and the standard OpEx icon will be shown.

GitHub URL: place the link to a GitHub repository if you have one for your application. Open Exchange integrates with GitHub, so if you enter the link to your app's GitHub repository, Open Exchange will show the description from GitHub automatically (everything listed in README.md). E.g. see how the Ideal ObjectScript page is displayed on Open Exchange.

Community Article URL: of course, you can tell about your application on the Developer Community with a nice article, so place the URL to it here!

As you can see, the procedure is very simple! Looking forward to seeing your InterSystems Data Platforms applications on Open Exchange! Stay tuned!

That's great! Does it have to be a GitHub repo, or can I use Bitbucket? Also, if we find an error (e.g. WebTerminal on IRIS), can we leave a comment generally or for the developer? Steve

Hi, Steve! Thanks for your feedback!

Does it have to be a GitHub repo, or can I use Bitbucket?

It can be any public repository: GitHub, Bitbucket, or GitLab. But today we have embedded support only for GitHub. E.g. if you submit a GitHub repo in the application, OE will use README.md as the description and LICENSE.md as the license, and we plan to introduce more support in the near future. You can add the request for Bitbucket support in Open Exchange issues.

Also, if we find an error (e.g. WebTerminal on IRIS), can we leave a comment generally or for the developer?

Every application has either a repo with issues or a support link, which are intended to receive feedback, bug reports, and feature requests. If you find an error on Open Exchange itself, please submit it here! )

@Evgeny, can you tell me: how do I edit an application (title, image, description, etc.) after it is published? I can't find the option. And also, how do you publish new versions? Many thanks!

Hi, David! Thanks for the question! Today the procedure is the following: Unpublish, Edit, Send for Approval again. I've previously logged this as issue #1 at https://github.com/intersystems-community/openexchange/issues/1

Good to know. I'll stay tuned to the issue. Thanks

And we have a video instruction for this now.

Hi @Evgeny.Shvarov! Does the "Image URL" have a default dimension? I would like to change my application icon.

Hi @Henrique!
We updated the interface, and now it's a matter of clicking the Upload Image menu in the App Editor. See the article. Let me know if this works!

Hi @Evgeny.Shvarov! Thanks for the link to the updated article. I found the error using the Google Chrome DevTools. I was trying to upload an image in a published app. In the DevTools console I got the error:

~~~
POST https://openexchange.intersystems.com/mpapi/packages/image/446 500 (Internal Server Error)
~~~

Using the Network tab, I could see the details in the response:

~~~
{
  "errors": [
    {
      "code": 5001,
      "domain": "%ObjectErrors",
      "error": "ERROR #5001: You can't to change title image of the published package. Please, unpublish package or create draft.",
      "id": "GeneralError",
      "params": ["You can't to change title image of the published package. Please, unpublish package or create draft."]
    }
  ],
  "summary": "ERROR #5001: You can't to change title image of the published package. Please, unpublish package or create draft."
}
~~~

As a suggestion, could this response be shown in a toast message?

Thanks, @Henrique! Filed here
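The error payload quoted above has a simple shape, so a client could pull out the summary for exactly such a toast message. A hedged Python sketch (the field names come from the response shown above, not from any documented API):

```python
import json

# Sample payload modeled on the DevTools response quoted above.
RESPONSE = """
{
  "errors": [
    {"code": 5001, "domain": "%ObjectErrors", "id": "GeneralError"}
  ],
  "summary": "ERROR #5001: You can't to change title image of the published package. Please, unpublish package or create draft."
}
"""

def toast_text(body):
    """Extract a short, user-facing message from an error response body."""
    data = json.loads(body)
    # Prefer the ready-made summary; fall back to the first error code.
    if data.get("summary"):
        return data["summary"]
    errors = data.get("errors", [])
    return "Error %s" % errors[0]["code"] if errors else "Unknown error"

print(toast_text(RESPONSE))
# ERROR #5001: You can't to change title image of the published package. Please, unpublish package or create draft.
```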
Article
Benjamin De Boe · Jan 31, 2018

Introducing the InterSystems IRIS Connector for Apache Spark

With the release of InterSystems IRIS, we're also making available a nifty bit of software that allows you to get the best out of your InterSystems IRIS cluster when working with Apache Spark for data processing, machine learning and other data-heavy fun. Let's take a closer look at how we're making your life as a Data Scientist easier, as you're probably facing tough big data challenges already, just from the influx of job offers in your inbox!

What is Apache Spark?

Together with the technology itself, we're also launching an exciting new learning package called the InterSystems IRIS Experience, an immersive combination of fast-paced courses, crisp videos and hands-on labs in our hosted learning environment. The first of those focuses exclusively on our connector for Apache Spark, so let's not reinvent the introduction wheel here and refer you to the course for a broader introduction. It's 100% free, after all!

In (very!) short, Apache Spark offers you an abstract object representation of a potentially massive dataset. As a developer, you just call methods on this Dataset interface like filter(), map() and orderBy(), and Spark will make sure they are executed efficiently, leveraging the servers in your Spark cluster by parallelizing the work as much as possible. It also comes with a growing set of libraries for machine learning, streaming and analyzing graph data that leverage this dataset paradigm.

Why combine Spark with InterSystems IRIS?

Spark isn't just good at data processing and attracting large crowds at open source conferences; it's also good at allowing smart database vendors to participate in this drive to efficiency through its Data Source API. While the Dataset object abstracts the user from any complexities of the underlying data store (which could be pretty crude in the case of a file system), it does offer the database vendors on the other side of the object a chance to expose how those complexities (which we call features!) may be leveraged by Spark to improve overall efficiency. For example, a filter() call on the Dataset object can be forwarded to an underlying SQL database in the form of a WHERE clause. This predicate pushdown mechanism means the compute work is pushed closer to the data, allowing greater overall efficiency and throughput, and part of building a connector for Apache Spark is registering all core functions that can be pushed down into the data source.

Besides this predicate pushdown, which many other databases support for Spark, our InterSystems IRIS Connector offers a few other unique advantages:

- The connector is designed to work well with sharding, our new option for horizontal scalability. More specifically, if you're building a Dataset based on a sharded table, we'll make sure Spark slaves connect directly to the shard servers to read the data in parallel, rather than stand in line to pipe the data through the shard master.
- As part of our new container deployment options, we're also offering a container that has both InterSystems IRIS and Apache Spark included. This means that when setting up a (sharded) cluster of these using ICM, you'll automatically have a Spark slave running alongside each data shard, allowing them to exploit data locality when reading or writing sharded tables, avoiding any network overhead. Note that if you set up your Spark and InterSystems IRIS clusters manually, with a Spark slave running on each server that has an InterSystems IRIS instance, you'll also benefit from this.
- When reading data, the connector can implicitly partition the data being read by exploiting the same mechanism our SQL query optimizer uses in %PARALLEL mode. Hereby, multiple connections to the same InterSystems IRIS instance are opened to read the data in parallel, increasing throughput. With the basics in place already, you'll see further speedups coming up in InterSystems IRIS 2018.2.
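The predicate pushdown idea described above can be illustrated outside of Spark. A toy Python sketch (not the actual connector code) of a data source that turns filter() calls into a SQL WHERE clause instead of filtering rows on the client:

```python
class PushdownDataset:
    """Toy dataset that accumulates predicates and compiles them to SQL,
    mimicking how a Spark data source can push filters to the database."""

    def __init__(self, table):
        self.table = table
        self.predicates = []

    def filter(self, predicate):
        # Instead of pulling rows and filtering locally, remember the
        # predicate so the database can evaluate it close to the data.
        new = PushdownDataset(self.table)
        new.predicates = self.predicates + [predicate]
        return new

    def to_sql(self):
        sql = "SELECT * FROM " + self.table
        if self.predicates:
            sql += " WHERE " + " AND ".join(self.predicates)
        return sql

query = (PushdownDataset("BigData.MassiveSales")
         .filter("Region = 'EMEA'")
         .filter("Amount > 1000"))
print(query.to_sql())
# SELECT * FROM BigData.MassiveSales WHERE Region = 'EMEA' AND Amount > 1000
```

Each filter() returns a new dataset rather than mutating the old one, mirroring the lazy, immutable Dataset chaining that makes pushdown possible in Spark.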
Also starting with InterSystems IRIS 2018.2, you'll be able to export predictive models built with SparkML to InterSystems IRIS with a single iscSave() method call. This will automatically generate a PMML class on the database side with native code to run the model in InterSystems IRIS in real-time or batch scenarios.

Getting started with the InterSystems IRIS Connector is easy, as it's plug-compatible with the default JDBC connector that ships with Spark. So any Spark program that started with

~~~
var dataset = spark.read.format("jdbc")
    .option("dbtable", "BigData.MassiveSales")
~~~

can now become

~~~
import com.intersys.spark._
var dataset = spark.read.format("iris")
    .option("dbtable", "BigData.MassiveSales")
~~~

Now that's just the bare essentials to get you started with InterSystems IRIS as the data store behind your Apache Spark cluster. The rest will only be constrained by your data science imagination, coffee supply and the 24 hours in a typical day. Come and try it yourself in the InterSystems IRIS Experience for Big Data Analytics!

Can someone please explain to me how to work with InterSystems IRIS: how to download it and how to get started?

Contact your sales rep.
Announcement
Evgeny Shvarov · Feb 12, 2018

ESG And InterSystems Webinar on 14th of February

Hi, Community!

In two days there will be a webinar by analysts Steve Duplessie and Mike Leone from Enterprise Strategy Group and Joe Lichtenberg, director of marketing for Data Platforms at InterSystems. They will present their recent research on operational and analytics workloads on a unified data platform and discuss the top database deployment and infrastructure challenges that organizations struggle with, including managing data growth and database size and meeting database performance requirements. Joe Lichtenberg will also introduce attendees to the company's latest product, InterSystems IRIS Data Platform.

Join!

Building Smarter, Faster, and Scalable Data-Rich Applications for Businesses that Operate in Real-Time