
Article
· Oct 31, 2022 5m read

InterSystems IRIS with Laravel (via ODBC)

It has been asked a few times recently how to make the Laravel framework work with the InterSystems IRIS Data Platform. It's been a while since this post about Laravel and InterSystems Caché was published. To bring things up to date, this article gives a brief set of instructions on how to set up and configure a Laravel project for use with InterSystems IRIS through ODBC.

What is Laravel?

Laravel is a PHP framework based on the MVC architecture. Using Laravel simplifies and speeds up backend development when building modern, secure web applications. It is very beginner-friendly, widely used in the PHP world, and tends to rank as the most popular backend framework by github.com star count, as measured in this video. Combining all that with the flexibility and performance delivered by InterSystems IRIS as a database is beneficial for both worlds.

This post is organised into 4 steps that represent the sequence of actions needed to make the connection work. The specific way of completing each step may vary depending on the platform. The commands here are shown for Ubuntu 22.04 (x64).

 

Setup Driver Manager (unixODBC)

In order to make the connection work we need to install a driver manager. The most commonly used driver managers are 'unixODBC' and 'iODBC'. This guide uses 'unixODBC', which can be downloaded from http://www.unixodbc.org/ ; please refer to the 'Download' section of the website to download and build it. Alternatively, build instructions can also be found here. We'll use packages from the 'apt' package manager for Ubuntu.

Install packages

Install the unixodbc package together with libodbccr2, which provides the unixODBC cursor library.

sudo apt update
sudo apt -y install unixodbc libodbccr2 odbcinst

Create a link for the Cursor Library
In certain cases there might be issues with shared object dependencies after the unixODBC installation. This shows up as a 'Can't open cursor lib' error. There are a few workarounds described on the internet; here we resolve the issue by creating a symbolic link to the desired library. First, we locate the library:

sudo find / -type f -name "libodbccr*"

And then we create the link:

sudo ln -s /usr/lib/x86_64-linux-gnu/libodbccr.so.2.0.0 /etc/libodbccr.so
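
To confirm that the driver manager is installed and to see which configuration files it reads, you can run the odbcinst utility that ships with unixODBC (an optional check):

odbcinst -j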

 

Setup ODBC Driver for InterSystems IRIS

The ODBC driver for InterSystems IRIS can be obtained in various ways. For example, the ODBC driver is included in all InterSystems IRIS kits. Another option is the Distributions section of the WRC portal at wrc.intersystems.com.

Alternatively, drivers for all supported platforms can be found here: https://intersystems-community.github.io/iris-driver-distribution/

Download, unpack and install ODBC Driver:

sudo mkdir -p /usr/lib/intersystems/odbc
cd /usr/lib/intersystems/odbc/

sudo wget -q https://raw.githubusercontent.com/intersystems-community/iris-driver-distribution/main/ODBC/lnxubuntu2004/ODBC-2022.1.0.209.0-lnxubuntu2004x64.tar.gz
sudo tar -xzvf /usr/lib/intersystems/odbc/ODBC-2022.1.0.209.0-lnxubuntu2004x64.tar.gz
sudo ./ODBCinstall

sudo rm -f ODBC-2022.1.0.209.0-lnxubuntu2004x64.tar.gz

After that, the driver will be located in the following folder: /usr/lib/intersystems/odbc/bin/

Additional information on drivers and their usage may be found in the docs. This guide uses libirisodbcur6435.so as a driver library.
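
As a quick sanity check, you can verify that the driver library and its shared-object dependencies resolve; the path below assumes the install location used above:

ldd /usr/lib/intersystems/odbc/bin/libirisodbcur6435.so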

 

Setup Laravel project

The traditional and convenient way to interact with a database from Laravel is its Eloquent package. Eloquent is an object-relational mapper (ORM) included by default with the Laravel framework. Only a few DBMS vendors are supported out of the box, so some additional PHP code is needed to implement the connection and SQL query builder specifics for InterSystems IRIS (via ODBC). Thanks to @Jean Dormehl, this gap was covered for InterSystems Caché, and the same package can be used for InterSystems IRIS.
So in this article we describe the steps to set up the connection for an existing Laravel project using the jeandormehl/laracache package, assuming that PHP, Composer and Laravel have already been installed and configured.

Install php-odbc
Make sure that the php-odbc module is installed. You can check the list of installed modules with the following command:

php -m | grep odbc

To install the php-odbc extension, use the following command with the PHP version installed in your environment:

sudo apt -y install php8.1-odbc

Setup the 'jeandormehl/laracache' package
Go to your Laravel project directory, install the package and publish its config file.

composer require jeandormehl/laracache
php artisan vendor:publish --tag=isc

 

Configure IRIS connection

Edit your .env file to contain the settings needed to connect to a database. For Unix users it should look similar to this:

DB_CONNECTION=isc
DB_WIN_DSN=
DB_UNIX_DRIVER=/usr/lib/intersystems/odbc/bin/libirisodbcur6435.so
DB_HOST=127.0.0.1
DB_PORT=1972
DB_DATABASE=USER
DB_USERNAME=_SYSTEM
DB_PASSWORD=sys

After editing the .env file, you may find it useful to rebuild the application config cache:

php artisan config:cache

 

Usage

Let's try to retrieve some data using our new package. As an example, we'll create a Model that inherits from Laracache\Cache\Eloquent\Model. Just for testing purposes, we will count the number of sent messages in an interoperability-enabled namespace.

nano app/Models/EnsMessageHeader.php
<?php
namespace App\Models;
use Laracache\Cache\Eloquent\Model;
class EnsMessageHeader extends Model
{
    protected $table = 'Ens.MessageHeader';
    protected $fillable = [
        'MessageBodyClassName'
    ];
}

To execute a query we may create an artisan console command like this:

nano routes/console.php
<?php
use Illuminate\Foundation\Inspiring;
use Illuminate\Support\Facades\Artisan;
use App\Models\EnsMessageHeader;
Artisan::command('iris:test', function () {
    echo EnsMessageHeader::count() . PHP_EOL;
});

Then executing the following command should return the number of records:

php artisan iris:test
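
Beyond simple counts, the usual Eloquent query builder methods should work over the same connection. As a rough sketch, you could add another command to routes/console.php (it reuses the use statements from the snippet above; the column names come from Ens.MessageHeader, and the exact set of supported clauses depends on the laracache grammar shipped with your version):

// Hypothetical command: show the five most recently created message headers
Artisan::command('iris:recent', function () {
    $rows = EnsMessageHeader::query()
        ->orderBy('TimeCreated', 'desc')
        ->limit(5)
        ->get(['MessageBodyClassName', 'TimeCreated']);
    foreach ($rows as $row) {
        echo $row->MessageBodyClassName . ' @ ' . $row->TimeCreated . PHP_EOL;
    }
});

php artisan iris:recent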

This scenario should work with a wide range of InterSystems IRIS based products.

Article
· Oct 26, 2022 2m read

Ports required by IRIS at startup

This article is from the InterSystems FAQ site.

The ports IRIS requires at startup are listed below.
(Port numbers are shown for the default configuration.)

1. 1972 : IRIS superserver port (can be changed in the Management Portal)
     This port is required for IRIS to start.
     If it is not available, IRIS will not start correctly.
     Note: in IRIS 2019.2 and earlier versions, the superserver port was 51773.

2. 4002 : IRIS license server port (can be changed in the Management Portal)
     Required when the IRIS instance acts as a license server.
     If this port is not available, correct license information cannot be obtained.
     It is not required for license clients.
     Note: if a remote license server is protected by a firewall, the license server port must be open for UDP traffic.
     About configuring the license server

3. 23 : IRIS Telnet port (can be changed in the Management Portal)
     Required only when the IRIS server runs on Windows and clients connect through the IRIS Terminal.

4. 52773 : management web server (PWS) port (can be changed in the Management Portal)
     The web server port that starts automatically for the IRIS Management Portal.
     Note: for user applications, the port used is that of a web server such as Apache or IIS, which is 80 by default.
 In 2023.2 (CD) / 2024.1 (EM) and later versions, the private web server (PWS) has been discontinued.
 80 (default): web server port
 The web server port used by web applications, including the Management Portal.

5. 2188 : ISCAgent port (can be changed in the Management Portal)
     Required only when mirroring is used.


The IRIS superserver port is explicitly listed in the IANA port number registry (the registration is under Cache).

intersys-cache 1972/tcp Cache
intersys-cache 1972/udp Cache


Please refer to the following page:
 the iana.org page
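
On Linux, a quick way to confirm that the relevant ports are actually listening on a running instance is to check with ss (a minimal example; adjust the port list to your configuration):

sudo ss -ltn | grep -E '1972|52773|2188'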
 

Article
· Sep 30, 2022 1m read

Some FHIR interoperability adapter examples

Hi developers!

Maybe you have to implement scenarios that don't require a FHIR repository, but do require forwarding FHIR requests, managing the responses, and maybe running transformations or extracting some values in between. Here you will find some examples that can be implemented using InterSystems IRIS For Health or HealthShare Health Connect.

Announcement
· Sep 16, 2022

InterSystems extension for Docker Desktop is now published

Great news: the Docker Desktop extension for the InterSystems Container Registry is now publicly available to everyone.

It is already available in the Docker Desktop marketplace. It was published today, and you may need to restart Docker Desktop to see it.

Feel free to post any feedback in the GitHub repository, here

Article
· Sep 13, 2022 8m read

CI/CD with IRIS SQL

In the vast and varied SQL database market, InterSystems IRIS stands out as a platform that goes way beyond just SQL, offering a seamless multimodel experience and supporting a rich set of development paradigms. The advanced Object-Relational engine in particular has helped organizations use the best-fit development approach for each facet of their data-intensive workloads, for example ingesting data through Objects and simultaneously querying it through SQL. Persistent Classes correspond to SQL tables, their properties to table columns, and business logic is easily accessed using User-Defined Functions or Stored Procedures. In this article, we'll zoom in on a little bit of the magic just below the surface and discuss how it may affect your development and deployment practices. This is an area of the product where we have plans to evolve and improve, so please don't hesitate to share your views and experiences using the comments section below.

⚠ This part of the product is changing significantly with IRIS 2025.2, please see this article for more detail. The information below applies to 2025.1 and earlier releases.

 

Saving the Storage Definition

Writing brand new business logic is easy, and assuming you have well-defined APIs and specifications, adapting or extending it usually is too. But when it's not just business logic, but also involves persistent data, anything you change from the initial version will need to be able to cope with data that was ingested through that earlier version.

On InterSystems IRIS, data and code coexist in a single high-performance engine, without the half dozen abstraction layers you might see in other 3GL or 4GL programming frameworks. This means there's only a very thin and transparent mapping to translate your class' properties to $list positions in a global node per row of data when using default storage. If you add or remove properties, you don't want the data from a removed property to show up under a new property. This mapping of your class' properties is what the Storage Definition takes care of, a somewhat cryptic XML block you may have noticed at the bottom of your class definition. The first time you compile a class, a new Storage Definition gets generated based on the class' properties and parameters. When you make changes to your class definition, at recompile time those changes are reconciled with the existing Storage Definition and it gets amended such that it remains compatible with existing data. So while you are going out on a limb refactoring your classes, the Storage Definition carefully considers your earlier creativity and ensures both old and new data remain accessible. We call this schema evolution.
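
As a concrete illustration, here is roughly what a freshly generated Storage Definition looks like for a simple persistent class; the class, property and global names are purely illustrative, and the block your compiler produces will differ in detail:

Class MyApp.Person Extends %Persistent
{
Property Name As %String;

Storage Default
{
<Data name="PersonDefaultData">
<Value name="1">
<Value>%%CLASSNAME</Value>
</Value>
<Value name="2">
<Value>Name</Value>
</Value>
</Data>
<DataLocation>^MyApp.PersonD</DataLocation>
<DefaultData>PersonDefaultData</DefaultData>
<IdLocation>^MyApp.PersonD</IdLocation>
<IndexLocation>^MyApp.PersonI</IndexLocation>
<StreamLocation>^MyApp.PersonS</StreamLocation>
<Type>%Storage.Persistent</Type>
}
}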

In most other SQL databases the physical storage of your tables is much more opaque if visible at all, and changes can only be made through ALTER TABLE  statements. Those are standard DDL (Data Definition Language) commands, but typically much less expressive than what you can achieve modifying a class definition and procedural code on IRIS directly. 

At InterSystems, we strive to offer IRIS developers the ability to cleanly separate their code and data, as that is crucial to ensure smooth packaging and deployment of your applications. The Storage Definition plays a unique role in this, as it captures how the one maps to the other. That's why it's worth a closer look in the context of general development practices and CI/CD pipelines in particular.

Exporting to UDL

In the current century, source code management is file-based, so let's first take a look at IRIS' main file export format. The Universal Description Language is, as its name suggests, meant to be a universal file format for any and all code you write on InterSystems IRIS. It is the default export format when working with the VS Code ObjectScript plug-in and leads to easy-to-read files that resemble almost exactly what you'd see in an IDE, with an individual .cls file for each class (table) in your application. You can use $SYSTEM.OBJ.Export() to create UDL files explicitly, or just leverage the VS Code integration.

From the days of Studio, you may also remember an XML format that captured the same information as UDL and allowed grouping multiple classes into a single export. While that last part is convenient in some scenarios, it's a lot less practical to read and track differences across versions, so we'll ignore it for now.

Because UDL is meant to capture everything IRIS can express about a class, it will include all elements of a class definition, including the full Storage Definition. When importing a class definition that already includes a Storage Definition, IRIS will verify whether that Storage Definition covers all properties and indices of the class and if that is the case, just take it as-is and overwrite the previous Storage Definition for that class. This makes UDL a practical format for version management of classes and their Storage Definition, as it preserves that backwards compatibility for data ingested through prior versions of the class, wherever you deploy it to. 

If you are a hardcore developer, you may wonder whether these Storage Definitions keep growing and whether this "baggage" needs to be carried around indefinitely. The purpose of Storage Definitions is to preserve compatibility with pre-existing data, so if you know there isn't any of that and you want to get rid of lengthy genealogy, you can "reset" your Storage Definition by removing it from the class definition and have the class compiler generate it anew. For example, you may use this to take advantage of newish best practices such as the use of Extent Sets, which implement hashed global names and separate each index into their own global, improving low-level efficiencies. For backwards compatibility within customer applications, we cannot universally change such defaults in the %Persistent superclass (though we will apply them when creating a table from scratch using the CREATE TABLE  DDL command), so a periodic review of your classes and their storage is worthwhile. It is also possible to edit the Storage Definition XML directly, but users should exercise extreme caution as this may render existing data inaccessible.
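
For example, a class that opts into extent sets only needs the corresponding class parameter set before its storage is first generated (a minimal sketch; the class name is illustrative):

Class MyApp.Order Extends %Persistent
{
/// Use extent sets: hashed global names, with each index stored in its own global
Parameter USEEXTENTSET = 1;

Property Amount As %Numeric;

Index AmountIdx On Amount;
}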

So far so good. Storage Definitions offer a smart mapping between your classes and automatically adapt as the schema evolves. What else is in there?

Static vs Stats?

As you probably know, the InterSystems IRIS SQL engine makes advanced use of table statistics to identify the optimal query plan for any given statement the user executes. Table statistics include metrics on the size of a table, how values are distributed within a column, and much more. This information helps the IRIS SQL optimizer decide which index is most beneficial, in what order to join tables, etcetera, so intuitively the more up-to-date your statistics are, the better your chances of an optimal query plan. Unfortunately, until we introduced fast block sampling in IRIS 2021.2, collecting accurate table statistics used to be a computationally expensive operation. Therefore, when customers deployed the same application to many environments in which the data patterns were largely the same, it made sense to consider the table statistics part of the application code and include them with the table definitions.

This is why on IRIS today you'll find the table statistics embedded inside the Storage Definition. When collecting table statistics through a manual call to TUNE TABLE or implicitly by querying it (see below), the new statistics are written to the Storage Definition and existing query plans for this table are invalidated so they can take advantage of the new statistics upon the next execution. Because they are part of the Storage Definition, these statistics will be part of UDL class exports and therefore may end up in your source code repository. In the case of carefully vetted statistics for a packaged application, this is desirable, as you'll want these specific statistics to drive query plan generation for all the application deployments.
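
For example, collecting statistics manually for a single table is a one-line SQL statement (the table name is illustrative):

TUNE TABLE MyApp.Person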

Starting with 2021.2, IRIS will automatically collect table statistics at the start of query planning when querying a table that doesn't have any statistics at all and is eligible for fast block sampling. In our testing, the benefits of working with up-to-date statistics rather than no statistics at all clearly outweighed the cost of on-the-fly statistics gathering. For some customers however, this has had the unfortunate side effect that statistics gathered automatically on the developer's instance end up in the Storage Definition in the source control system and eventually in the packaged application. Obviously, the data in that developer environment and therefore the statistics on it may not be representative for a real customer deployment and lead to suboptimal query plans.

This scenario is easy to avoid. Table statistics can be excluded from the class definition export by using the /exportselectivity=0 qualifier when calling $SYSTEM.OBJ.Export(), or use the /importselectivity flag value of your choice to avoid importing them along with the code. The system defaults for these flags can be found here, and changed using $SYSTEM.OBJ.SetQualifiers(...). You can then leave it up to the automatic collection in the eventual deployment to pick up representative stats, make explicit statistics collection part of your deployment process (which will overwrite anything that might have been packaged with the application), or manage your table stats separately through their own import/export functions: $SYSTEM.SQL.Stats.Table.Export() and Import().
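
For instance, an export that leaves the statistics out of the class definition could look like this in the Terminal (the class and file names are illustrative):

Do $SYSTEM.OBJ.Export("MyApp.Person.cls", "/tmp/MyApp.Person.xml", "/exportselectivity=0")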

We're currently working on a project to move table statistics to live with the data rather than be part of the code, and differentiate more cleanly between any fixed statistics configured explicitly by a developer and those collected from the actual data. Also, we're planning more automation with respect to periodically refreshing those statistics, based on how much the table data changes over time. 

Wrapping up

In this article, we've outlined the role of the Storage Definition in the IRIS Object-Relational engine, how it supports schema evolution, and what it means to include it in your source control system. We've also described why table statistics are currently stored in that Storage Definition and suggested development practices for making sure your application deployments end up with statistics that are representative of the actual customer data. As mentioned earlier, we're planning to further enhance these capabilities, so we look forward to your feedback on the current and planned functionality and will refine our design as appropriate.
