Clear filter
Question
Scott Roth · Dec 15, 2023
Outside of the learning module for IAM, I would like to give it a try with Community Edition on my own; however, the Community Edition license does not include it.
Has there been any discussion about allowing companies to demo IAM with Community Edition before they get the license?
Thanks
Scott, IAM is not available with IRIS Community Edition due to licensing constraints. However, Kong Community Edition works great with IRIS Community Edition. No licenses are required for either one.
Announcement
Fabiano Sanches · Aug 4, 2023
The 2023.2 releases of InterSystems IRIS Data Platform, InterSystems IRIS for Health, and InterSystems IRIS Studio are now Generally Available (GA).
RELEASE HIGHLIGHTS
2023.2 is a Continuous Delivery (CD) release. Many updates and enhancements have been added in 2023.2:
Private Web Server
Starting with this release:
Local InterSystems IRIS installations will no longer install the private web server. Access to the Management Portal and other built-in web applications will require configuring a connection to an external web server. Installation will include the option to automatically configure the Apache web server (on all platforms except Microsoft Windows) or the IIS web server (on Microsoft Windows), if already installed.
InterSystems IRIS containers will no longer provide the private web server. Access to the Management Portal of containerized instances will require deployment of an associated Web Gateway container.
Enhancing Analytics and AI
Time Series Forecasting in IntegratedML: IntegratedML can now be used for predicting future values for one or more time-dependent variables (see the sketch at the end of this list). Check out this video: Predicting Time Series Data with IntegratedML (4m)
InterSystems Reports 2023.2: InterSystems Reports 2023.2 incorporates Logi Reports v23.2. New capabilities include:
Increased functionality in Page Report Studio to enable customizations from Report Server.
Additional Bookmark management.
InterSystems Adaptive Analytics 2023.2: InterSystems Adaptive Analytics 2023.2 incorporates AtScale version 2023.2. New capabilities include:
Token-based authentication to support Microsoft PowerBI in Azure environment (as requested by InterSystems)
Scheduling of aggregate builds on a per-cube basis at convenient times for better performance management
Usage Metrics Dashboard.
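Returning to the time series feature above, here is a hedged sketch of issuing the IntegratedML DDL from ObjectScript. The model, table, and column names are hypothetical, and the exact CREATE TIME SERIES MODEL syntax and USING parameters should be confirmed against the IntegratedML documentation:
// Hypothetical example: forecast a time-dependent column with IntegratedML
set ddl = "CREATE TIME SERIES MODEL SalesForecast PREDICTING (Amount) BY (SaleDate) FROM Sales_Data.Daily USING {""Forward"": 7}"
set rs = ##class(%SQL.Statement).%ExecDirect(, ddl)
if rs.%SQLCODE < 0 { write "DDL failed: ", rs.%Message, ! }
set rs = ##class(%SQL.Statement).%ExecDirect(, "TRAIN MODEL SalesForecast")
if rs.%SQLCODE < 0 { write "Training failed: ", rs.%Message, ! }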
Enhancing the Developer Experience
InterSystems SQL - Safe Query Cancellation: This release introduces a new command for InterSystems SQL to cancel running statements safely and reclaim any resources used by the statement, including memory and temporary space: CANCEL QUERY.
Enhancing Speed, Scale and Security
SQL Performance Improvements: This release includes various performance enhancements across InterSystems SQL processing that are 100% transparent to users:
The Global Iterator is an internal module that shifts much of the work traversing indices and data maps into the kernel, as opposed to leveraging ObjectScript.
The Adaptive Parallel Execution framework (APE) was first introduced for queries involving columnar storage: rather than preparing and executing separate parallel subqueries, APE generates parallel worker code into the same cached query class as the main query.
Kernel-Level Performance Improvements: This release includes several kernel-level performance improvements that transparently benefit customer applications:
Where possible, the global module now reads a set of physically contiguous big string blocks using a single, large IO operation. Big string blocks are used for storing long strings and columnar data.
Mirroring now uses large asynchronous IO operations to read and write journal files through the agent channel, leading to a significant increase in mirror agent throughput.
Enhancing Interoperability and FHIR
Multi-Threaded FHIR DataLoader: The FHIR DataLoader, HS.FHIRServer.Tools.DataLoader, has been enhanced to run as a multi-threaded process. This allows it to take advantage of parallel processing, which can increase performance significantly, depending on available CPU resources.
Import NDJSON Using FHIR DataLoader: InterSystems IRIS for Health 2023.2 adds new functionality to import NDJSON bundles using the DataLoader. This is important for systems outputting data as Bulk FHIR.
Support for $everything input parameter _since: The $everything input parameter _since is now supported for Patient and Encounter resource types.
Improved FHIR Indexing Performance: InterSystems IRIS for Health 2023.2 introduces enhancements to FHIR indexing performance.
DOCUMENTATION
Details on all the highlighted features are available through the links below:
InterSystems IRIS 2023.2 documentation, release notes and deprecated & discontinued technologies and features.
InterSystems IRIS for Health 2023.2 documentation, release notes and deprecated & discontinued technologies and features.
In addition, check out this link for upgrade information related to this release.
HOW TO GET THE SOFTWARE
As usual, Continuous Delivery (CD) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms page.
Classic installation packages
Installation packages are available from the WRC's Continuous Delivery Releases page. Additionally, kits can also be found in the Evaluation Services website. InterSystems IRIS Studio is still available in the release, and you can get it from the WRC's Components distribution page.
Containers
Container images for both Enterprise and Community Editions of InterSystems IRIS and IRIS for Health and all corresponding components are available from the InterSystems Container Registry web interface. The build number for this release is 2023.2.0.227.0.
Announcement
Fabiano Sanches · Nov 15, 2023
The 2023.3 releases of InterSystems IRIS Data Platform, InterSystems IRIS for Health, and InterSystems IRIS Studio are now Generally Available (GA).
RELEASE HIGHLIGHTS
2023.3 is a Continuous Delivery (CD) release. Many updates and enhancements have been added in this release:
Enhancing Cloud and Operations
Journal Archiving: Starting with this release, system administrators can now configure an archive location for completed journal files. When configured, after a journal file switch, the completed journal file will first be compressed (using the Journal Compression feature) and then automatically moved to this archive location, which can be on a lower-cost storage tier such as HDD drives or cloud storage such as Amazon S3. Archived journal files can then be automatically deleted from the local journal directory, reducing the overall footprint on the high-performance storage tier used for writing journal files and lowering the Total Cost of Ownership for InterSystems IRIS deployments.
Enhancing Analytics and AI
Time Series Forecasting support in IntegratedML: Support for Time Series models in IntegratedML, introduced with InterSystems IRIS 2023.2 as an experimental feature, is now fully supported for production use.
Enhancing the Developer Experience
Business Rule Editor Enhancements: The Business Rule Editor is continuously being improved. Starting with this release, it supports searching for arbitrary text, with matches highlighted across the rule sets. If a rule calls multiple transformations, their order is now configurable. In addition, the user experience has been improved based on user feedback; for example, all editing modals have been widened to make it less likely that text is cut off.
Enhancing Speed, Scale and Security
Faster globals size estimates: This release introduces a new algorithm for fast estimation of the total size of a global that uses stochastic sampling rather than a comprehensive block scan. The new algorithm can provide accurate estimates (within 5% of actual size) in seconds for globals measuring hundreds of GBs. It is available as a new option in the existing ^%GSIZE, %GlobalEdit and %GlobalQuery interfaces.
Faster ObjectScript runtime: When using public variables in an ObjectScript routine, the InterSystems IRIS kernel needs them to be registered in an in-memory symbol table that is specific to the routine. Starting with InterSystems IRIS 2023.3, this symbol table is built lazily, only adding public variables upon their first reference. This means only variables actually used in the routine are added to the symbol table, which may significantly reduce the cost of building it. This is an entirely transparent change to internal memory management, but it may yield noticeable performance gains for ObjectScript code that makes significant use of public variables.
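As an illustration, here is a minimal, hypothetical sketch of the kind of code that benefits: a class method that declares a public variable via the PublicList keyword. Under the new behavior, such a variable is only registered in the routine's symbol table when it is first referenced:
/// Minimal sketch with hypothetical names; gPageCount is public, counter is private.
ClassMethod RecordHit() [ PublicList = (gPageCount) ]
{
    set gPageCount = $get(gPageCount) + 1   // public variable: added to the symbol table lazily as of 2023.3
    set counter = gPageCount                // private (procedure-block) variable
    write "hits so far: ", counter, !
}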
Enhancing Interoperability and FHIR
Support for base FHIR R5: This release supports the base implementation of FHIR R5 (Fast Healthcare Interoperability Resources Release 5). FHIR R5 represents a significant leap forward in healthcare data interoperability and standardization, reflecting our commitment to delivering the latest advancements to our users. It marks a crucial milestone in our ongoing efforts to support the most up-to-date healthcare data standards.
What is in R5?
Fifty-five new resources; however, most are not immediately relevant to many use cases.
Several resources promoted to maturity level 5, essentially the highest level before being locked down for backwards compatibility, bringing FHIR closer to becoming very stable.
New property changes and data types that help builders solve problems.
FHIR Profile Validation: FHIR Profile Validation is designed to enhance the accuracy and reliability of your healthcare data by ensuring compliance with Fast Healthcare Interoperability Resources (FHIR) profiles. This feature will empower healthcare professionals, developers, and organizations to streamline data exchange, improve interoperability, and maintain data quality within the FHIR ecosystem.
DOCUMENTATION
Details on all the highlighted features are available through the links below:
InterSystems IRIS 2023.3 documentation, release notes and deprecated & discontinued technologies and features.
InterSystems IRIS for Health 2023.3 documentation, release notes and deprecated & discontinued technologies and features.
In addition, check out this link for upgrade information related to this release.
HOW TO GET THE SOFTWARE
As usual, Continuous Delivery (CD) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms page.
Classic installation packages
Installation packages are available from the WRC's Continuous Delivery Releases page. Additionally, kits can also be found in the Evaluation Services website. InterSystems IRIS Studio is still available in the release, and you can get it from the WRC's Components distribution page.
Containers
Container images for both Enterprise and Community Editions of InterSystems IRIS and IRIS for Health and all corresponding components are available from the InterSystems Container Registry web interface.
✅ Build number for this release is: 2023.3.0.254.0. If you're in the ICR, containers are tagged as "latest-cd".
Article
David Hockenbroch · Sep 11, 2024
Do not let the title of this article confuse you; we are not planning to take the InterSystems staff out to a fine Italian restaurant. Instead, this article will cover the principles of working with date and time data types in IRIS. When we use these data types, we should be aware of three different conversion issues:
Converting between internal and ODBC formats.
Converting between local time, UTC, and Posix time.
Converting to and from various date display formats.
We will start by clarifying the definitions of internal, UTC, and Posix time. If you define a property within a class as a %Date or %Time data type, it will not take long to realize that these data types are not stored in the ODBC standard way. They look like integers: a %Time value is the number of seconds after midnight, and a %Date value is the number of days since December 31, 1840. That epoch comes from MUMPS. When MUMPS was created, one of its inventors had heard about a Civil War veteran who was 121 years old. Since it was the 1960s, it was decided that the first date should be in the early 1840s, which would guarantee that all birthdays could be represented as positive integers. Additionally, certain algorithms worked best when every fourth year was a leap year. That is why day one is January 1, 1841, and you can sometimes see null dates displayed as December 31, 1840.
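To make these internal values concrete, here is a small sketch using the $ZDATE and $ZDATEH functions (dformat 3 is the ODBC yyyy-mm-dd format):
write $ZDATEH("1841-01-01",3)   // 1 : day one of the internal calendar
write $ZDATE(1,3)               // 1841-01-01 : converting the internal value back
write $ZDATEH("1970-01-01",3)   // 47117 : the Unix epoch expressed as an internal day count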
Coordinated Universal Time, or UTC, is effectively a successor to Greenwich Mean Time (GMT). It is based on a weighted average of hundreds of highly accurate atomic clocks worldwide, making it more precise than GMT. The abbreviation UTC is a compromise between English and French speakers. In English, the correct abbreviation would be CUT, whereas in French, it would be TUC. UTC was chosen as the middle ground to enable the usage of the same term in all languages. Besides, it matches the abbreviations utilized for variants of universal time, UT1, UT2, etc.
Posix time is also known as Epoch time or Unix time. It counts the number of seconds that have passed since January 1, 1970. Posix time does not account for leap seconds, so when a positive leap second is declared, one second of Posix time is repeated. A negative leap second has never been declared, but if that ever happened, there would be a small range of Unix time slots that did not refer to any specific moment in real time.
Because both time models mentioned above count time from a fixed point, and, perhaps, because it is not a useful concept in computing, neither UTC nor Posix time is adjusted for daylight saving time.
We will begin with converting between internal and external formats. Two system variables are useful in obtaining these pieces of information: $HOROLOG, which can be shortened to $H, and $ZTIMESTAMP, sometimes abbreviated as $ZTS. $H contains two numbers separated by a comma. The first one is the current local date, and the second one is the current local time. $ZTS is formatted similarly, except that the second number has seven decimal places, and it holds the UTC date and time instead of the local ones. We could define a class with the following properties:
/// Will contain the local date.
Property LocalDate As %Date [ InitialExpression = {$P($H,",",1)} ];
/// Will contain the UTC date.
Property UTCDate As %Date [ InitialExpression = {$P($ZTS,",",1)} ];
/// Will contain the local time.
Property LocalTime As %Time [ InitialExpression = {$P($H,",",2)} ];
/// Will contain the UTC time.
Property UTCTime As %Time [ InitialExpression = {$P($ZTS,",",2)} ];
/// Will contain Posix timestamp
Property PosixStamp As %PosixTime [ InitialExpression = {##class(%PosixTime).CurrentTimeStamp()} ];
If we create an instance of this class and save it, it will contain the dates and times given when that instance was made. If we query this table in logical mode, we will see the underlying numbers.
There are a few ways to make this data human-readable. First, we could switch the query mode from logical to ODBC. In the System Management Portal, this action is controlled by a drop-down box. When we are writing ObjectScript code and querying with a %SQL.Statement, we can set the %SelectMode property of that object to 1 for ODBC mode as follows:
set stmt = ##class(%SQL.Statement).%New()
set stmt.%SelectMode = 1
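For context, here is a hedged sketch of how those two lines fit into a complete query; it assumes the SQLUser.DateArticle table defined in this article:
set stmt = ##class(%SQL.Statement).%New()
set stmt.%SelectMode = 1                 // 0 = Logical, 1 = ODBC, 2 = Display
set sc = stmt.%Prepare("SELECT LocalDate, LocalTime FROM SQLUser.DateArticle")
if $SYSTEM.Status.IsOK(sc) {
    set rs = stmt.%Execute()
    while rs.%Next() {
        // values come back in ODBC format because of %SelectMode = 1
        write rs.%Get("LocalDate"), " ", rs.%Get("LocalTime"), !
    }
}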
We can also use the %ODBCOUT SQL function. The following query returns the data in ODBC standard format:
SELECT ID, %ODBCOUT(LocalDate) AS LocalDate, %ODBCOUT(UTCDate) AS UTCDate, %ODBCOUT(LocalTime) AS LocalTime, %ODBCOUT(UTCTime) AS UTCTime, %ODBCOUT(PosixStamp) As PosixStamp from SQLUser.DateArticle
There is also a %ODBCIN function for inserting ODBC format data into your table. The following SQL will produce a record with the same values as above:
INSERT INTO SQLUser.DateArticle(LocalDate,UTCDate,LocalTime,UTCTime,PosixStamp) VALUES(%ODBCIN('2024-08-20'),%ODBCIN('2024-08-20'),%ODBCIN('13:50:37'),%ODBCIN('18:50:37.3049621'),%ODBCIN('2024-08-20 13:50:37.304954'))
Unlike the data types mentioned above, %TimeStamp follows the ODBC standard. This means that if we want to create a timestamp, we must either use different functions or convert the data from HOROLOG format to ODBC format. Fortunately for ObjectScript programmers, the %Library.UTC class already provides many useful methods to choose from. For example, the following line of code will convert the current local date and time, or whatever other HOROLOG-formatted data you give it, to a timestamp:
set mytimestamp = ##class(%UTC).ConvertHorologToTimeStamp($H)
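The %UTC class also offers the reverse conversion; as a hedged sketch (the ConvertTimeStampToHorolog method name is my assumption here, so check %Library.UTC in the class reference):
// Convert an ODBC timestamp back to $HOROLOG format (method name assumed)
set myhorolog = ##class(%UTC).ConvertTimeStampToHorolog(mytimestamp)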
Of course, if we are trying to obtain the current timestamp, we can also add the following properties to our class for the local and UTC timestamps, respectively:
/// Will contain the local timestamp.
Property LocalTimeStamp As %TimeStamp [ InitialExpression = {##class(%UTC).NowLocal()} ];
/// Will contain the UTC timestamp.
Property UTCTimeStamp As %TimeStamp [ InitialExpression = {##class(%UTC).NowUTC()} ];
There are even functions for converting between UTC and local timestamps. They can be a major headache saver when dealing with various time zones and daylight saving time:
set utctimestamp = ##class(%UTC).ConvertLocaltoUTC(##class(%UTC).NowLocal())
set localtimestamp = ##class(%UTC).ConvertUTCtoLocal(##class(%UTC).NowUTC())
For demonstration purposes, I have used the current time stamps. However, you can give these functions any timestamp you wish, and they will convert accordingly.
Those of us who do a lot of integration projects sometimes need to convert a timestamp to XSD format. For this purpose, the %TimeStamp class has a LogicalToXSD function built into it. If we use the following line:
set xsdTimeStamp = ##class(%TimeStamp).LogicalToXSD(..LocalTimeStamp)
We will get a string formatted as YYYY-MM-DDTHH:MM:SS.SSZ, e.g., 2024-07-18T15:14:20.45Z.
Unfortunately for SQL users, these functions do not exist in SQL, but we can fix that by adding a couple of class methods to our class using the SqlProc keyword:
ClassMethod ConvertLocaltoUTC(mystamp As %TimeStamp) As %TimeStamp [ SqlProc ]
{
return ##class(%UTC).ConvertLocaltoUTC(mystamp)
}
ClassMethod ConvertUTCtoLocal(mystamp As %TimeStamp) As %TimeStamp [ SqlProc ]
{
return ##class(%UTC).ConvertUTCtoLocal(mystamp)
}
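The query below also references a DateArticle_ConverttoXSD procedure. A minimal sketch of that third method, simply wrapping the LogicalToXSD conversion shown earlier, could look like this:
ClassMethod ConverttoXSD(mystamp As %TimeStamp) As %String [ SqlProc ]
{
    // Expose the XSD conversion to SQL as a stored procedure
    return ##class(%TimeStamp).LogicalToXSD(mystamp)
}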
Then we can use SQL similar to the following to make the conversion:
SELECT DateArticle_ConvertLocaltoUTC(LocalTimeStamp), DateArticle_ConvertUtctoLocal(UTCTimeStamp), DateArticle_ConverttoXSD(LocalTimeStamp) FROM DateArticle
Please note that I have named my class User.DateArticle. If your class name is different, use it in place of "DateArticle" in the query above. Also remember that the XSD conversion function does not care whether you give it a UTC or local timestamp, since it only converts the output format of the timestamp, not its value.
DATEADD and DATEDIFF are two other important functions to know when dealing with dates. They are used to add or subtract increments to and from dates and to find the difference between two dates. Both take a first argument specifying the unit to work with; DATEADD then takes a number and a date, while DATEDIFF takes two dates. For instance, DATEADD(minute,30,GETUTCDATE()) gives you the UTC timestamp 30 minutes from now, whereas DATEADD(hour,30,GETUTCDATE()) gives the timestamp 30 hours from now. To go backward, we still use DATEADD, but with a negative number: DATEADD(minute,-30,GETUTCDATE()) gives the time 30 minutes ago. DATEDIFF finds the difference between two dates. You can calculate your current offset from UTC with the following line:
SELECT DATEDIFF(minute,GETUTCDATE(),GETDATE())
It will give you your offset in minutes. In most time zones, hours would suffice; however, a few time zones are offset from their neighbors by thirty or forty-five minutes. Also be careful when using this value, since your offset might change with daylight saving time.
In ObjectScript, we can also use the $ZDATETIME function, which can be abbreviated to $ZDT, to show our timestamps in various formats, e.g., some international formats and common data formats. $ZDT converts HOROLOG-formatted timestamps to those formats. The reverse of this function is $ZDATETIMEH, which can be abbreviated to $ZDTH. It accepts many different timestamp formats and converts them to IRIS's internal HOROLOG format. $ZDT alone is a powerful function for converting internal data to external formats. If we use both functions together, we can convert from any of the available formats into any of the others.
Both $ZDT and $ZDTH take a dformat argument specifying the date format and a tformat argument specifying the time format. It is worth taking the time to read the lists of available values in the documentation to see what formats are available. For example, dformat 4 is a European numeric format, and tformat 3 is a twelve-hour clock format. Therefore, I can output the current local time in that format with the following line:
write $ZDT($H,4,3)
It will show me 24/07/2024 02:55:46PM as of right now. The separator character for the date is taken from your locale settings, so if your locale defines that separator as a . instead of a /, you will see that in the output. If I were to start with that as my input and wanted to convert that time to the internal HOROLOG format, I would use the following line:
write $ZDTH("24/07/2024 02:55:46PM",4,3)
It will write out 67045,53746. Now, let's say I have the timestamp in the European twelve-hour clock format, but I want to display it in an American format with a 24-hour clock. I can use both functions at once to achieve that result, as demonstrated below:
write $ZDT($ZDTH("24/07/2024 02:55:46PM",4,3),1,1)
It will write out 07/24/2024 14:55:46. It is also worth pointing out that if you do not specify a dformat or a tformat, the conversion will use the date or time format specified by your locale.
Speaking of locale, those of us who do business internationally may occasionally want to work with different time zones. That is where the special $ZTIMEZONE (or $ZTZ) variable comes into play. This variable contains the current offset from GMT in minutes. You can also change it to set the time zone for a process. For instance, I live in the US Central time zone, so my offset is 360. If I want times to be expressed in Eastern time for one of my customers, I can set $ZTIMEZONE = 300 to make the process express dates in that time zone.
As previously mentioned, UTC and Posix both count from a specific point in time. Just as they are not affected by Daylight Saving Time, they are also not affected by time zones. This means that things that rely on them, such as $ZTIMESTAMP, are not influenced by $ZTIMEZONE. Functions that refer to your local time, however, are. For example, the following sequence of commands will write out the current date and time in my local time zone (Central time), and then in Eastern time:
write $ZDATETIME($H,3)
set $ZTIMEZONE = 300
write $ZDATETIME($H,3)
However, replacing $H above with $ZTS would just show the same time twice.
If you wish to convert between UTC and local time while taking into account any changes you have made to $ZTIMEZONE in your current process, there are additional methods to help you with that. You can use them as follows:
WRITE $SYSTEM.Util.UTCtoLocalWithZTIMEZONE($ZTIMESTAMP)
WRITE $SYSTEM.Util.LocalWithZTIMEZONEtoUTC($H)
You could also utilize the methods from the %UTC class that we covered above.
With these tools, you should be able to do any kind of time conversion you need except converting the current local time to five o’clock!
Question
Dmitry Maslennikov · Nov 24, 2016
Just curious how many companies use Docker containers in their work, and I don't mean only with InterSystems products. If such companies exist, which of them use Docker but, for some reason, don't use it for InterSystems products? What are those reasons? And for companies that already run InterSystems in containers: how do you use it? Development environment, testing, or even production?
And if you don't use it but have thought about it, what are the reasons stopping you?
As for me, I've been using InterSystems Caché inside a Docker container in a few different cases:
To test a new Field Test version without installation. Just build and run, quite easy.
In development, for open source projects, and in my current company. I have not used it in production; our projects are not ready for production yet. But I'm already using Docker for testing and building our application with GitLab CI.
Mostly disagree with your opinion. Docker is a good fit for microservices, and Docker is not just one container; in most cases you should use a bunch of them, each as a different service. In that case, a good way to control all these services is docker-compose. You can split your application into several parts: database, frontend, and some other services. And I don't see any problem, in such a setup, with InterSystems inside one container along with the application's database. The power of Docker is that you can run multiple copies of a container at once, whenever needed. I see only one problem here: the global buffer is separated, which means it is not used efficiently. Can you give an example of a different way with a different database server? I've tried several databases in containers, and they work the same way: each container with InterSystems inside, and one of our services inside. I don't see any trouble here, even in security. Your way is a bad way; it is like adding a new layer with Docker, like a container (the application) inside another container (Caché), which is too complicated.
Docker is a cool way to deploy and use a versioned application, but, IMVHO, it's only applicable when you have a single executable or a single script which sets up the environment and invokes some particular functionality, like running a particular version of a compiler for a build scenario, running a continuous integration scenario, or running a web front-end environment. 1 function - 1 container with 1 interactive executable is easy to convert to Docker, but not Caché, which is inherently multi-process. Luca has done a great thing in his https://hub.docker.com/r/zrml/intersystems-cachedb/ Docker container, where he has wrapped the whole environment (including the control daemon, write daemon, journal daemon, etc.) in one handy Docker container with a single entry point implemented in Go as ccontainermain, but this is, hmm, not a very efficient way to use Docker. Containers are all about density of CPU/disk resources, and all the beauty of Docker is based upon the simplicity of running multiple users on a single host. Given this way of packing a Caché configuration into a single container (each user runs the whole set of Caché control processes), you will get the worst scalability. It would be much, much better if there were two kinds of Caché Docker containers (run via Swarm, for example): a single control container and multiple user containers (each connecting to their own port and their own namespace). But today, with the current security implementation, there would be a big, big problem: each user would see the whole configuration, which is kind of unexpected in the case of Docker container hosting. Once these security issues are resolved, there will be an efficient way to host Caché under Docker, but not before.
Question
wang wang · Mar 4, 2021
Can InterSystems IRIS support horizontal scaling? How many nodes can it support? Please check the links below. Thanks!
https://community.intersystems.com/post/horizontal-scalability-intersystems-iris
https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=PAGE_scalability
Article
sween · May 21, 2024
I have written several worthless Conkys in my day, and this one is no exception, but it was fun.
I can't imagine anybody is going to use this Conky, but here it is anyway. It is a simple implementation that scrapes the REST API for metrics and uses execi to display them on your desktop.
conky.conf
-- Conky, a system monitor https://github.com/brndnmtthws/conky
-- InterSystems IRIS https://www.intersystems.com/products/intersystems-iris/
conky.config = {
use_xft = true,
xftalpha = 0.8,
update_interval = .5,
total_run_times = 0,
own_window = true,
own_window_transparent = true,
own_window_argb_visual = true,
own_window_type = 'normal',
own_window_class = 'conky-semi',
own_window_hints = 'undecorated,below,sticky,skip_taskbar,skip_pager',
background = false,
double_buffer = true,
imlib_cache_size = 0,
no_buffers = true,
uppercase = false,
cpu_avg_samples = 2,
override_utf8_locale = true,
-- placement
alignment = 'top_right',
gap_x = 25,
gap_y = 50,
-- default drawing
draw_shades = true,
draw_outline = false,
draw_borders = false,
draw_graph_borders = true,
default_bar_width = 150, default_bar_height = 20,
default_graph_width = 150, default_graph_height = 12,
default_gauge_width = 20, default_gauge_height = 20,
-- colors
font = 'Liberation Mono:size=14',
default_color = 'EEEEEE',
color1 = 'AABBFF',
color2 = 'FF993D',
color3 = 'AAAAAA',
color4 = '3ab4ad',
-- layouting
template0 = [[${font Liberation Sans:bold:size=11}${color2}\1 ${color3}${hr 2}${font}]],
template1 = [[${color1}\1]],
template2 = [[${goto 100}${color}]],
template3 = [[${goto 180}${color}${alignr}]],
};
conky.text = [[
${image /etc/conky/isc.png -p 325,400 -s 100x250}
${color1}InterSystems IRIS Conky:$color ${scroll 16 $conky_version - $sysname $nodename $kernel $machine}
${color grey}Metrics Endpoint:$color ${scroll 32 http://iris.deezwatts.com:52773/api/monitor/metrics}
${color grey}license_days_remaining: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep license_days_remaining | cut -d " " -f 2 } $alignr ${color grey} system_alerts_new: $color2 ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep iris_system_alerts_new | cut -d " " -f 2 }
${color4}$hr
${color grey}$color CPU and Memory
${color grey}cpu_pct:$color $alignr ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep cpu_usage | cut -d " " -f 2 }% ${execibar 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep cpu_usage | cut -d " " -f 2 }
${color grey}page_space_percent_used:$color $alignr ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep page_space | cut -d " " -f 2 }% ${execibar 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep page_space | cut -d " " -f 2 }
${color grey}phys_mem_percent_used:$color $alignr ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep phys_mem | cut -d " " -f 2 }% ${execibar 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep phys_mem | cut -d " " -f 2 }
${color grey}phy_reads_per_sec: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep phys_reads | cut -d " " -f 2 } $alignr ${color grey} phy_writes_per_sec: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep phys_writes | cut -d " " -f 2 }
${color grey}cache_efficiency : $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep cache_efficiency | cut -d " " -f 2 } $alignr ${color grey} process_count: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep process_count | cut -d " " -f 2 }
${color4}$hr
${color grey}$color Database
${color grey}db_disk_percent_full :$color $alignr ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep disk_percent_full | grep USER | cut -d " " -f 2 }% ${execibar 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep disk_percent_full | grep USER | cut -d " " -f 2 }
${color grey}db_free_space : $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep db_free_space | grep USER | cut -d " " -f 2 } GB $alignr ${color grey} db_latency: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep db_latency | grep USER | cut -d " " -f 2 }
${color grey}wij_writes_per_second: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep wij_writes | cut -d " " -f 2 } $alignr ${color grey} trans_open_count: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep trans_open_count | cut -d " " -f 2 }
${color4}$hr
${color grey}$color Journaling
${color grey}jrn_free_space_primary: $alignr $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep iris_jrn_free_space | grep primary | cut -d " " -f 2 } GB
${color grey}jrn_free_space_secondary: $alignr $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep iris_jrn_free_space | grep secondary | cut -d " " -f 2 } GB
${color4}$hr
${color grey}$color Shared Memory Heap
${color grey}smh_percent_full_classes :$color $alignr ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep smh_percent_full | grep Classes | cut -d " " -f 2 }% ${execibar 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep smh_percent_full | grep Classes | cut -d " " -f 2 }
${color grey}smh_percent_full_globalmapping :$color $alignr ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep smh_percent_full | grep Global | cut -d " " -f 2 }% ${execibar 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep smh_percent_full | grep Global | cut -d " " -f 2 }
${color grey}smh_percent_full_locktable :$color $alignr ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep smh_percent_full | grep Lock | cut -d " " -f 2 }% ${execibar 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep smh_percent_full | grep Lock | cut -d " " -f 2 }
${color grey}smh_percent_full_routinebuffer :$color $alignr ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep smh_percent_full | grep Routine | cut -d " " -f 2 }% ${execibar 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep smh_percent_full | grep Routine | cut -d " " -f 2 }
${color4}$hr
${color grey}$color CSP
${color grey}csp_activity : $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep csp_activity | cut -d " " -f 2 } $alignr ${color grey} csp_actual_connections: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep csp_actual_connections | cut -d " " -f 2 }
${color grey}csp_gateway_latency : $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep csp_gateway_latency | cut -d " " -f 2 } $alignr ${color grey} csp_in_use_connections: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep csp_in_use_connections | cut -d " " -f 2 }
${color grey}csp_private_connections: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep csp_private_connections | cut -d " " -f 2 } $alignr ${color grey} csp_sessions: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep csp_sessions | cut -d " " -f 2 }
${color4}$hr
${color grey}$color SQL
${color grey}sql_commands_per_second: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep sql_commands_per_second | grep all | cut -d " " -f 2 | printf "%.2f" $(cat - | bc -l)} $alignr ${color grey} sql_queries_avg_runtime: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep sql_queries_avg_runtime | grep -v std | grep all | cut -d " " -f 2 | printf "%.5f" $(cat - |bc -l)}
${color grey}sql_queries_per_second : $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep sql_queries_per_second | grep all | cut -d " " -f 2 | printf "%.2f" $(cat - |bc -l) } $alignr ${color grey} sql_row_count_per_second: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep sql_row_count_per_second | grep all | cut -d " " -f 2 | printf "%.2f" $(cat - |bc -l)}
${color4}$hr
${color grey}$color Write Daemon
${color grey}wd_buffer_redirty: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep wd_buffer_redirty | cut -d " " -f 2 } $alignr ${color grey} wd_buffer_write: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep wd_buffer_write | cut -d " " -f 2 }
${color grey}wd_cycle_time : $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep wd_cycle_time | cut -d " " -f 2 } $alignr ${color grey} wd_proc_in_global: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep wd_proc_in_global | cut -d " " -f 2 }
${color grey}wd_wij_time : $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep wdwij_time | cut -d " " -f 2 } $alignr ${color grey} wd_write_time: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep wd_write_time | cut -d " " -f 2 }
${color4}$hr
${color grey}$color Routines
${color grey}rtn_call_local_per_sec: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep rtn_call_local_per_sec | cut -d " " -f 2 } $alignr ${color grey} rtn_call_miss_per_sec: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep rtn_call_miss_per_sec | cut -d " " -f 2 }
${color grey}rtn_seize_per_sec : $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep rtn_seize_per_sec | cut -d " " -f 2 } $alignr ${color grey} rtn_load_per_sec: $color ${execi 5 curl -s http://iris.deezwatts.com:52773/api/monitor/metrics | grep rtn_load_per_sec | cut -d " " -f 2 }
]];
I ran the code below in the pod to drive up the resource consumption shown in the gif above, as an example.
CPU
fulload() { dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null & }; fulload; read; killall dd
Disk Space
fallocate -l 1390143672 /data/IRIS/OUTPUT.dat
Get your Conky and slap IRIS Monitoring on your X11 display today!
Announcement
Preston Brown · Sep 8, 2023
SCOPE OF SERVICES:
The Database Developer's responsibilities will include, but are not limited to:
• Understanding the existing application built on InterSystems technology, such as Caché object code, Ensemble, and older-style tags and routines.
• Performing support and maintenance of the existing Caché code.
• Analyzing and migrating the existing Caché code to Microsoft SQL and web services (SOA).
REQUIRED SKILLS/EXPERIENCE:
A minimum of 5 years of proven experience in computer applications development planning, design, troubleshooting, integration, maintenance, and enhancement of Caché database applications.
In the Queens, NY office 4 days a week, 1 day remote.
Must be a US citizen; the position requires this.
This is a long-term contract (1 to 4+ years).
This is a 1099 position only.
Rate is $50-80/hour.
The position will begin 1.5 to 2 months after a successful interview.
Serious inquiries only. Interested parties, email pbrown@cybercodemasters.com
#Job Opportunity
Announcement
Anastasia Dyubaylo · Oct 27, 2023
Hey Developers,
We are super excited to invite you all to the new InterSystems online programming contest focused on Java and its derivatives!
🏆 InterSystems Java Programming Contest 🏆
Duration: November 13 - December 3, 2023
Prize pool: $14,000
The topic
We invite you to use Java in a new programming contest! Applications that use Kotlin, Clojure and Scala are also very welcome.
Submit an open-source application that uses Java, Kotlin, Clojure or Scala with InterSystems IRIS or InterSystems IRIS for Health.
General Requirements:
An application or library must be fully functional. It should not be an import of, or a direct interface to, an already existing library in another language (except for C++, where creating an interface for IRIS really does require a lot of work). It should not be a copy-paste of an existing application or library.
Accepted applications: apps new to Open Exchange, or existing ones with a significant improvement. Our team will review all applications before approving them for the contest.
The application should work on either IRIS Community Edition or IRIS for Health Community Edition. Both can be downloaded as host (Mac, Windows) versions from the Evaluation site, or used as containers pulled from the InterSystems Container Registry or Community Containers: intersystemsdc/iris-community:latest or intersystemsdc/irishealth-community:latest.
The application should be Open Source and published on GitHub.
The README file of the application should be in English, contain the installation steps, and include a video demo and/or a description of how the application works.
Only 3 submissions from one developer are allowed.
NB. Our experts will have the final say in whether the application is approved for the contest or not based on the criteria of complexity and usefulness. Their decision is final and not subject to appeal.
Prizes
1. Experts Nomination - a specially selected jury will determine winners:
🥇 1st place - $5,000
🥈 2nd place - $3,000
🥉 3rd place - $1,500
🏅 4th place - $750
🏅 5th place - $500
🌟 6-10th places - $100
2. Community winners - applications that will receive the most votes in total:
🥇 1st place - $1,000
🥈 2nd place - $750
🥉 3rd place - $500
🏅 4th place - $300
🏅 5th place - $200
If several participants score the same number of votes, they are all considered winners, and the prize money is shared among them.
Who can participate?
Any Developer Community member, except for InterSystems employees (ISC contractors allowed). Create an account!
Developers can team up to create a collaborative application. Teams of 2 to 5 developers are allowed.
Do not forget to list your team members (their DC user profiles) in the README of your application.
Important Deadlines:
🛠 Application development and registration phase:
November 13, 2023 (00:00 EST): Contest begins.
November 26, 2023 (23:59 EST): Deadline for submissions.
✅ Voting period:
November 27, 2023 (00:00 EST): Voting begins.
December 3, 2023 (23:59 EST): Voting ends.
Note: Developers can improve their apps throughout the entire registration and voting period.
Helpful resources
1. Developing Java Applications with InterSystems IRIS:
InterSystems Java Connectivity Options
Learning Path Connecting Java Applications to InterSystems Products
JDBC Driver Documentation
XEP Java Documentation
Native API for Java Documentation
iris JDBC driver distribution
2. For beginners with ObjectScript Package Manager (IPM):
How to Build, Test and Publish ZPM Package with REST Application for InterSystems IRIS
Package First Development Approach with InterSystems IRIS and ZPM
3. How to submit your app to the contest:
How to publish an application on Open Exchange
How to submit an application for the contest
4. Example applications:
native-api template
workshop-pex
fhir-client-java
pex-demo
iris-hibernate
iris-liquibase
5. Videos:
Using Java to Connect to InterSystems
Connecting to InterSystems Cloud Services with Java
What is PEX?
InterSystems Connectivity with Java and other languages
Deploying Java project + InterSystems IRIS in Docker
Need Help?
Join the contest channel on InterSystems' Discord server or talk with us in the comments to this post.
We're waiting for YOUR project – join our coding marathon to win!
By participating in this contest, you agree to the competition terms laid out here. Please read them carefully before proceeding.
Hey developers, don't miss the demo on Deploying Java project + InterSystems IRIS in Docker on Tuesday, October 31. Register for this roundtable here.
Is Closure a spelling mistake? Do you mean Clojure (https://clojure.org/) instead?
Hi Jani! Clojure, of course, thanks!
Article
Sylvain Guilbaud · Sep 25, 2023
Hi Community,
To learn quickly and in complete autonomy on IRIS, I offer you some links that can help you on this beautiful bicycle ride rich in discoveries:
InterSystems Developer Hub
Full Stack Tutorial: Build the IT infrastructure for a company that roasts and sells coffee. See how InterSystems IRIS can serve as your IT architecture backbone.
REST + Angular Application: Build a simple URL bookmarking app using InterSystems IRIS, the REST service engine, and the Angular web framework.
Apply Machine Learning: Create, train, validate and use prediction models for hospital readmissions based on a publicly available historical database.
InterSystems Interoperability: Test drive our integration framework for connecting systems easily and developing interoperable applications.
Getting Started with InterSystems ObjectScript
Developing in ObjectScript with Visual Studio Code
Building a Server-Side Application with InterSystems
Getting Started with InterSystems IRIS for Coders
Managing InterSystems IRIS for Developers
InterSystems IRIS Management Basics
Predicting Outcomes with IntegratedML in InterSystems IRIS
Writing Applications Using Angular and InterSystems IRIS
Writing Python Applications with InterSystems
Configuring InterSystems IRIS Applications for Client Access
Connecting Java Applications to InterSystems Products
Connecting .NET Applications to InterSystems Products
Connecting Node.js Applications to InterSystems Products
Analyzing Data with InterSystems IRIS BI
Building Business Integrations with InterSystems IRIS
Building Custom Integrations
If you want to get started with your own local IRIS instance, I recommend using our templates available at OpenExchange :
intersystems-iris-dev-template
iris-interoperability-template
iris-embedded-python-template
iris-fullstack-template
iris-analytics-template
Thanks @Sylvain.Guilbaud, this is pretty useful.
This is a great list of resources for beginners! I think it would be great to expand the first link, which contains 4 interactive in-browser tutorials.
Thanks for the advice @Dmitry.Maslennikov, I added direct links to the 4 subsections.
Wow! Amazing guide for beginners 🤩
Thank you, @Sylvain.Guilbaud!!