Question
Scott Roth · Dec 15, 2023

InterSystems IAM and InterSystems Community Edition

Outside of the learning module for IAM, I would like to give it a try with Community Edition on my own; however, the Community Edition license does not include it. Has there been any discussion of letting companies demo IAM with Community Edition before they get the license? Thanks, Scott

IAM is not available with IRIS Community Edition due to licensing constraints. However, Kong Community Edition works great with IRIS Community Edition. No licenses are required for either one.
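To experiment with that combination, here is a minimal sketch. The IRIS image path is the public one from the InterSystems Container Registry; the Kong image and its DB-less environment variables are the standard ones from Kong's own documentation; the tags and the contents of kong.yml are assumptions to adapt:

# InterSystems IRIS Community Edition (superserver on 1972, web apps on 52773)
docker run -d --name iris -p 1972:1972 -p 52773:52773 containers.intersystems.com/intersystems/iris-community:latest-em

# Kong CE in DB-less mode; routes to the IRIS container go in kong.yml
docker run -d --name kong -p 8000:8000 \
  -v $(pwd)/kong.yml:/kong/kong.yml \
  -e KONG_DATABASE=off \
  -e KONG_DECLARATIVE_CONFIG=/kong/kong.yml \
  kong:latest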
Announcement
Fabiano Sanches · Nov 15, 2023

InterSystems announces General Availability of InterSystems IRIS, InterSystems IRIS for Health, & InterSystems Studio 2023.3

The 2023.3 releases of InterSystems IRIS Data Platform, InterSystems IRIS for Health, and InterSystems IRIS Studio are now Generally Available (GA).

RELEASE HIGHLIGHTS

2023.3 is a Continuous Delivery (CD) release. Many updates and enhancements have been added in this release:

Enhancing Cloud and Operations

Journal Archiving: Starting with this release, system administrators can configure an archive location for completed journal files. When configured, after a journal file switch, the completed journal file is first compressed (using the Journal Compression feature) and then automatically moved to the archive location, which can be on a lower-cost storage tier such as HDD drives or cloud storage such as Amazon S3. Archived journal files can then be automatically deleted from the local journal directory, reducing the overall footprint on the high-performance storage tier used for writing journal files and lowering the Total Cost of Ownership for InterSystems IRIS deployments.

Enhancing Analytics and AI

Time Series Forecasting support in IntegratedML: Support for Time Series models in IntegratedML, introduced with InterSystems IRIS 2023.2 as an experimental feature, is now fully supported for production use.

Enhancing the Developer Experience

Business Rule Editor Enhancements: The Business Rule Editor is continuously being improved. Starting with this release, it supports searching for arbitrary text, with matches highlighted across the rule sets. When a rule calls multiple transformations, their order is now configurable. In addition, the user experience has been improved based on user feedback; e.g., all editing modals have been widened to make it less likely that text is cut off.

Enhancing Speed, Scale and Security

Faster globals size estimates: This release introduces a new algorithm for fast estimation of the total size of a global that uses stochastic sampling rather than a comprehensive block scan. The new algorithm provides accurate estimates (within 5% of actual size) in seconds for globals measuring hundreds of GBs. It is available as a new option in the existing ^%GSIZE, %GlobalEdit and %GlobalQuery interfaces.

Faster ObjectScript runtime: When using public variables in an ObjectScript routine, the InterSystems IRIS kernel needs them to be registered in an in-memory symbol table that is specific to the routine. Starting with InterSystems IRIS 2023.3, this symbol table is built lazily, adding public variables only upon their first reference. This means only variables actually used in the routine are added to the symbol table, which may significantly reduce the cost of building it. This is an entirely transparent change to internal memory management, but it may yield noticeable performance gains for ObjectScript code that makes significant use of public variables.

Enhancing Interoperability and FHIR

Support for base FHIR R5: This release supports the base implementation of FHIR R5 (Fast Healthcare Interoperability Resources Release 5). FHIR R5 represents a significant leap forward in healthcare data interoperability and standardization, reflecting our commitment to delivering the latest advancements to our users. FHIR R5 marks a crucial milestone in our ongoing efforts to support the most up-to-date healthcare data standards. What is in R5? Fifty-five new resources; however, most are not immediately relevant to many use cases. Several resources were promoted to maturity level 5 – essentially the highest level before being locked down for backwards compatibility. FHIR is closer to becoming very stable, and there are new property changes and data types, which help builders solve problems.

FHIR Profile Validation: FHIR Profile Validation is designed to enhance the accuracy and reliability of your healthcare data by ensuring compliance with Fast Healthcare Interoperability Resources (FHIR) profiles. This feature empowers healthcare professionals, developers, and organizations to streamline data exchange, improve interoperability, and maintain data quality within the FHIR ecosystem.

DOCUMENTATION

Details on all the highlighted features are available through the links below:

InterSystems IRIS 2023.3 documentation, release notes, and deprecated & discontinued technologies and features.
InterSystems IRIS for Health 2023.3 documentation, release notes, and deprecated & discontinued technologies and features.

In addition, check out this link for upgrade information related to this release.

HOW TO GET THE SOFTWARE

As usual, Continuous Delivery (CD) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms page.

Classic installation packages: Installation packages are available from the WRC's Continuous Delivery Releases page. Additionally, kits can be found on the Evaluation Services website. InterSystems IRIS Studio is still available in this release, and you can get it from the WRC's Components distribution page.

Containers: Container images for both Enterprise and Community Editions of InterSystems IRIS and IRIS for Health and all corresponding components are available from the InterSystems Container Registry web interface.

✅ The build number for this release is 2023.3.0.254.0. In the ICR, containers are tagged as "latest-cd".
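As a taste of the newly production-supported forecasting, here is a rough sketch of the IntegratedML flow. CREATE MODEL and TRAIN MODEL are the documented IntegratedML statements; the TIME SERIES keyword follows this release's feature description, and the table and column names (Sales, SaleDate, Amount) are hypothetical, so check the 2023.3 SQL reference for the exact forecasting syntax:

-- hypothetical table: Sales(SaleDate DATE, Amount NUMERIC)
CREATE TIME SERIES MODEL SalesForecast PREDICTING (Amount) BY (SaleDate) FROM Sales
-- train on the rows currently in the table
TRAIN MODEL SalesForecast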
Announcement
Fabiano Sanches · Aug 4, 2023

InterSystems announces General Availability of InterSystems IRIS, InterSystems IRIS for Health, & InterSystems Studio 2023.2

The 2023.2 releases of InterSystems IRIS Data Platform, InterSystems IRIS for Health, and InterSystems IRIS Studio are now Generally Available (GA).

RELEASE HIGHLIGHTS

2023.2 is a Continuous Delivery (CD) release. Many updates and enhancements have been added in 2023.2:

Private Web Server

Starting with this release: Local InterSystems IRIS installations will no longer install the private web server. Access to the Management Portal and other built-in web applications will require configuration of a connection to an external web server. Installation will include the option to automatically configure the Apache web server (on all platforms except Microsoft Windows) or the IIS web server (on Microsoft Windows), if already installed. InterSystems IRIS containers will no longer provide the private web server. Access to the Management Portal of containerized instances will require deployment of an associated Web Gateway container.

Enhancing Analytics and AI

Time Series Forecasting in IntegratedML: IntegratedML can now be used for predicting future values for one or more time-dependent variables. Check out this video: Predicting Time Series Data with IntegratedML (4m).

InterSystems Reports 2023.2: InterSystems Reports 2023.2 incorporates Logi Reports v23.2. New capabilities include increased functionality in Page Report Studio to enable customizations from Report Server, and additional Bookmark management.

InterSystems Adaptive Analytics 2023.2: InterSystems Adaptive Analytics 2023.2 incorporates AtScale version 2023.2. New capabilities include token-based authentication to support Microsoft Power BI in an Azure environment (as requested by InterSystems), scheduling of aggregate builds on a per-cube basis at convenient times for better performance management, and a Usage Metrics Dashboard.

Enhancing the Developer Experience

InterSystems SQL - Safe Query Cancellation: This release introduces a new command for InterSystems SQL, CANCEL QUERY, to cancel running statements safely and reclaim any resources used by the statement, including memory and temporary space.

Enhancing Speed, Scale and Security

SQL Performance Improvements: This release includes various performance enhancements across InterSystems SQL processing, all of which are 100% transparent to users: The Global Iterator is an internal module that shifts much of the work of traversing indices and data maps into the kernel, as opposed to leveraging ObjectScript. The Adaptive Parallel Execution framework (APE) was first introduced for queries involving columnar storage: rather than preparing and executing separate parallel subqueries, APE generates parallel worker code into the same cached query class as the main query.

Kernel-Level Performance Improvements: This release includes several kernel-level performance improvements that transparently benefit customer applications: Where possible, the global module now reads a set of physically contiguous big string blocks using a single, large IO operation. Big string blocks are used for storing long strings and columnar data. Mirroring now uses large asynchronous IO operations to read and write journal files through the agent channel, leading to a significant increase in mirror agent throughput.

Enhancing Interoperability and FHIR

Multi-Threaded FHIR DataLoader: The FHIR DataLoader, HS.FHIRServer.Tools.DataLoader, has been enhanced to run as a multi-threaded process. This allows it to take advantage of parallel processing, which can increase performance significantly, depending on available CPU resources.

Import NDJSON Using FHIR DataLoader: InterSystems IRIS for Health 2023.2 adds new functionality to import NDJSON bundles using the DataLoader. This is important for systems outputting data as Bulk FHIR.

Support for the $everything input parameter _since: The $everything input parameter _since is now supported for the Patient and Encounter resource types.

Improved FHIR Indexing Performance: InterSystems IRIS for Health 2023.2 introduces enhancements to FHIR indexing performance.

DOCUMENTATION

Details on all the highlighted features are available through the links below:

InterSystems IRIS 2023.2 documentation, release notes, and deprecated & discontinued technologies and features.
InterSystems IRIS for Health 2023.2 documentation, release notes, and deprecated & discontinued technologies and features.

In addition, check out this link for upgrade information related to this release.

HOW TO GET THE SOFTWARE

As usual, Continuous Delivery (CD) releases come with classic installation packages for all supported platforms, as well as container images in Docker container format. For a complete list, refer to the Supported Platforms page.

Classic installation packages: Installation packages are available from the WRC's Continuous Delivery Releases page. Additionally, kits can be found on the Evaluation Services website. InterSystems IRIS Studio is still available in this release, and you can get it from the WRC's Components distribution page.

Containers: Container images for both Enterprise and Community Editions of InterSystems IRIS and IRIS for Health and all corresponding components are available from the InterSystems Container Registry web interface.

The build number for this release is 2023.2.0.227.0.
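Since containerized instances now rely on an external gateway, a deployment takes roughly the following shape. The image paths are from the InterSystems Container Registry, but the tags, ports, and the gateway-to-IRIS wiring (normally supplied through the Web Gateway's configuration) are assumptions to adapt from the Web Gateway container documentation:

# shared network so the gateway can reach IRIS by container name
docker network create irisnet

# 2023.2+ IRIS image: no private web server inside
docker run -d --name iris --network irisnet containers.intersystems.com/intersystems/iris-community:2023.2

# Web Gateway container serving the Management Portal; its configuration
# (server address iris, superserver port 1972) is set per the Web Gateway docs
docker run -d --name webgateway --network irisnet -p 8080:80 containers.intersystems.com/intersystems/webgateway:2023.2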

Question
Dmitry Maslennikov · Nov 24, 2016

InterSystems and containers

Just curious how many companies use Docker containers in their work, and not only with InterSystems products. If such companies exist, which of them use Docker but, for some reason, don't use it for InterSystems products? What are the reasons? For companies that already use InterSystems in containers, how do you use them: development environment, testing, or even production? And if you don't use containers but have thought about it, what are the reasons stopping you?

As for me, I've been using InterSystems Caché inside a Docker container in a few different cases: to test a new Field Test version without installation (just build and run, quite easy), and in development, for open-source projects and in my current company. I have not used it in production; our projects are not ready for production yet. But I'm already using Docker for testing and building our application with GitLab CI.

Mostly disagree with your opinion. Docker is a good way to do microservices, and Docker is not just one container; in most cases you should use a bunch of them, each as a different service. In that case, a good way to control all these services is docker-compose. You can split your application into several different parts: database, frontend, and some other services. And I don't see any problems, in such a setup, with having InterSystems inside one container together with the application's database. The power of Docker is that you can run multiple copies of a container at once when needed. I see only one problem here: the global buffer is separated, which means it is not used efficiently. Can you give an example of a different approach with a different database server? I've tried some of those databases in containers, and they work the same way: each container with InterSystems inside, and one of our services inside. And I don't see any trouble here, even in security. Your way is a bad way; it is like adding a new layer with Docker, a container (application) inside another container (Caché): too complicated.

Docker is a cool way to deploy and use versioned applications, but, IMVHO, it's only applicable to the case where you have a single executable or a single script which sets up the environment and invokes some particular functionality. Like running a particular version of a compiler for a build scenario, or running a continuous integration scenario, or running a web front-end environment. 1 function - 1 container with 1 interactive executable is easy to convert to Docker. But not Caché, which is inherently multi-process. Luca has done a great thing in his https://hub.docker.com/r/zrml/intersystems-cachedb/ Docker container, where he has wrapped the whole environment (including the control daemon, write daemon, journal daemon, etc.) in one handy Docker container with a single entry point implemented in Go as ccontainermain, but this is, hmm, not a very efficient way to use Docker. Containers are all about density of CPU/disk resources, and all the beauty of Docker is based upon the simplicity of running multiple users at a single host. Given this way of packing a Caché configuration into a single container (each user runs the whole set of Caché control processes), you will get the worst scalability. It would be much, much better if there were two kinds of Caché Docker containers (run via Swarm, for example), where there would be a single control container and multiple user containers (each connecting to their separate port and separate namespace).

But, today, with the current security implementation, there would be a big, big problem: each user would see the whole configuration, which is kind of unexpected in the case of Docker container hosting. Once these security issues are resolved, there will be an efficient way to host Caché under Docker, but not before.
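For reference, the container linked above can be tried with something like the following; the published ports are the usual Caché defaults (57772 for the private web server, 1972 for the superserver), but treat them as assumptions for this particular image:

docker run -d --name cachedb -p 57772:57772 -p 1972:1972 zrml/intersystems-cachedb
# Management Portal would then be at http://localhost:57772/csp/sys/UtilHome.csp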
Question
wang wang · Mar 4, 2021

InterSystems IRIS

Can InterSystems IRIS support horizontal scaling? How many nodes can it support?

Please check below, thanks!
https://community.intersystems.com/post/horizontal-scalability-intersystems-iris
https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=PAGE_scalability
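For the sharding approach specifically, setup is driven through the %SYSTEM.Sharding API. A rough sketch only; the method names and argument order here are from memory and should be verified against the class reference for your version:

// enable the sharding service on a node (assumed signature)
do $SYSTEM.Sharding.EnableSharding()
// from the shard master namespace, attach a data shard (assumed signature)
do $SYSTEM.Sharding.AssignShard("MASTER","shardhost2",1972,"SHARDNS")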
Article
David Hockenbroch · Sep 11, 2024

Dates with InterSystems

Do not let the title of this article confuse you; we are not planning to take the InterSystems staff out to a fine Italian restaurant. Instead, this article will cover the principles of working with date and time data types in IRIS. When we use these data types, we should be aware of three different conversion issues:

Converting between internal and ODBC formats.
Converting between local time, UTC, and Posix time.
Converting to and from various date display formats.

We will start by clarifying the definitions of internal, UTC, and Posix time.

If you define a property within a class as a %Date or %Time data type, it will not take long to realize that these data types are not stored in the ODBC standard way. They look like integers, where the %Time value is the number of seconds after midnight, and the date is the number of days since December 31, 1840. This convention comes from MUMPS. When the language was created, one of its inventors heard about a Civil War veteran who was 121 years old. Since it was the 1960s, it was decided that the first date should be in the early 1840s. That would guarantee that all birthdays could be represented as positive integers. Additionally, certain algorithms worked best when every fourth year was a leap year. That is why day one is January 1, 1841, and you can sometimes see null dates displayed as December 31, 1840.

Coordinated Universal Time, or UTC, is effectively a successor to Greenwich Mean Time (GMT). It is based on a weighted average of hundreds of highly accurate atomic clocks worldwide, making it more precise than GMT. The abbreviation UTC is a compromise between English and French speakers: in English, the correct abbreviation would be CUT, whereas in French it would be TUC. UTC was chosen as the middle ground to enable the usage of the same term in all languages. Besides, it matches the abbreviations utilized for the variants of universal time: UT1, UT2, etc.

Posix time is also known as Epoch time or Unix time. It shows the number of seconds that have passed since January 1, 1970. Posix time does not account for leap seconds; that is why, when a positive leap second is declared, one second of Posix time is repeated. A negative leap second has never been declared, but if that ever happened, there would be a small range of Unix time slots that did not refer to any specific moment in real time. Because both time models mentioned above are defined as a number of seconds since a fixed point in time, and, perhaps, because it is not a useful concept in computing, neither UTC nor Posix time is adjusted for daylight saving time.

We will begin with converting between internal and external formats. Two system variables are useful in obtaining these pieces of information: $HOROLOG, which can be shortened to $H, and $ZTIMESTAMP, sometimes abbreviated as $ZTS. $H contains two numbers separated by a comma: the first one is the current local date, and the second one is the current local time. $ZTS is formatted similarly, except that the second number has seven decimal places, and it holds the UTC date and time instead of the local ones. We could define a class with the following properties:

/// Will contain the local date.
Property LocalDate As %Date [ InitialExpression = {$P($H,",",1)} ];

/// Will contain the UTC date.
Property UTCDate As %Date [ InitialExpression = {$P($ZTS,",",1)} ];

/// Will contain the local time.
Property LocalTime As %Time [ InitialExpression = {$P($H,",",2)} ];

/// Will contain the UTC time.
Property UTCTime As %Time [ InitialExpression = {$P($ZTS,",",2)} ];

/// Will contain the Posix timestamp.
Property PosixStamp As %PosixTime [ InitialExpression = {##class(%PosixTime).CurrentTimeStamp()} ];

/// Will contain the local timestamp.
Property LocalTimeStamp As %TimeStamp [ InitialExpression = {##class(%UTC).NowLocal()} ];

/// Will contain the UTC timestamp.
Property UTCTimeStamp As %TimeStamp [ InitialExpression = {##class(%UTC).NowUTC()} ];
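As a small aside (a sketch, not from the original article): instantiating such a class uses the standard %New/%Save pattern, and the class name User.DateArticle is the one the author mentions later in the article:

set article = ##class(User.DateArticle).%New()   // InitialExpression values are captured at %New()
set status = article.%Save()                     // persists the row; status is a %Status to check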
If we create an instance of this class and save it, it will contain the dates and times captured when that instance was made. If we query this table in logical mode, we will see the underlying numbers. There are a few ways to make this data human-readable. First, we could switch the query mode from logical to ODBC. In the System Management Portal, this action is controlled by a drop-down box. When we are writing ObjectScript code and querying with a %SQL.Statement, we can set the %SelectMode property of that object to 1 for ODBC mode as follows:

set stmt = ##class(%SQL.Statement).%New()
set stmt.%SelectMode = 1

We can also use the %ODBCOUT SQL function. You can apply the following query to get the data in ODBC standard format:

SELECT ID, %ODBCOUT(LocalDate) AS LocalDate, %ODBCOUT(UTCDate) AS UTCDate, %ODBCOUT(LocalTime) AS LocalTime, %ODBCOUT(UTCTime) AS UTCTime, %ODBCOUT(PosixStamp) AS PosixStamp FROM SQLUser.DateArticle

There is also a %ODBCIN function for inserting ODBC format data into your table. The following SQL will produce a record with the same values as above:

INSERT INTO SQLUser.DateArticle(LocalDate,UTCDate,LocalTime,UTCTime,PosixStamp) VALUES(%ODBCIN('2024-08-20'),%ODBCIN('2024-08-20'),%ODBCIN('13:50:37'),%ODBCIN('18:50:37.3049621'),%ODBCIN('2024-08-20 13:50:37.304954'))
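On the subject of the PosixStamp column: %Library.PosixTime also provides class methods for converting to and from raw Unix epoch seconds. A small sketch; the method names LogicalToUnixTime and UnixTimeToLogical are stated from memory, so verify them in the class reference for your version:

set posix = ##class(%PosixTime).CurrentTimeStamp()
// assumed method names; check %Library.PosixTime in your class reference
set unixSecs = ##class(%PosixTime).LogicalToUnixTime(posix)
write ##class(%PosixTime).UnixTimeToLogical(unixSecs)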
If we use the following line: set xsdTimeStamp = ##class(%TimeStamp).LogicalToXSD(..LocalTimeStamp) We will get a string formatted as YYYY-MM-DDTHH:MM:SS.SSZ, e.g., 2024-07-18T15:14:20.45Z. Unfortunately for SQL users, these functions do not exist in SQL, but we can fix that by adding a couple of class methods to our class using the SqlProc keyword: ClassMethod ConvertLocaltoUTC(mystamp As %TimeStamp) As %TimeStamp [ SqlProc ] { return ##class(%UTC).ConvertLocaltoUTC(mystamp) } ClassMethod ConvertUTCtoLocal(mystamp As %TimeStamp) As %TimeStamp [ SqlProc ] { return ##class(%UTC).ConvertUTCtoLocal(mystamp) } Then we can utilize SQL similar to the following to make the conversion: SELECT DateArticle_ConvertLocaltoUTC(LocalTimeStamp), DateArticle_ConvertUtctoLocal(UTCTimeStamp), DateArticle_ConverttoXSD(LocalTimeStamp) FROM DateArticle Please note that I have named my class User.DateArticle. If your class name is different, it will replace "DateArticle" in the query mentioned above. Also remember that the XSD conversion function does not care whether you give it a UTC or local timestamp since it is just converting the output format of the timestamp, not its value. DATEADD and DATEDIFF are two other important functions to know when dealing with dates. They are typically used to add and subtract increments to and from dates or find a difference between two dates. Both of them take an argument specifying what units to work with first, and then they receive two dates. For instance, DATEADD(minute,30,GETUTCDATE()) would give you the UTC timestamp 30 minutes from now whereas DATEADD(hour,30,GETUTCDATE()) will deliver the timestamp 30 hours from now. If we wanted to go backward, we would still employ DATEADD, but with a negative number. DATEADD(minute,-30,GETUTCDATE()) would give you the time 30 minutes ago. DATEDIFF is used to find the difference between two dates. You can calculate your current offset from UTC by utilizing the following line: SELECT DATEDIFF(minute,GETUTCDATE(),GETDATE()) It will give you your offset in minutes. In most time zones, hours would suffice. However, a few time zones have a thirty or forty-five-minute increment from the neighboring time zones. Therefore, you should be careful when using it since your offset might change with daylight saving time. In ObjectScript, we can also operate the $ZDATETIME function, which can be abbreviated to $ZDT, to show our timestamps in various formats, e.g., some international formats and common data formats. $ZDT converts HOROLOG formatted timestamps to those formats. The reverse of this function is $ZDATETIMEH, which can be abbreviated $ZDTH. It takes a lot of different formats of timestamps and converts them to IRIS’s internal HOROLOG format. $ZDT alone is a powerful function for converting internal data to external formats. If we use both these functions together, we can convert from any of the available formats into any of the other available formats. Both $ZDT and $ZDTH take a dformat argument specifying the date format and a tformat argument specifying the time format. It is worth taking the time to read those lists and see what formats are available. For example, dformat 4 is a European numeric format, and tformat 3 is a twelve-hour clock format. Therefore I can output the current local time in that format by applying the following line: write $ZDT($H,4,2) It will show me 24/07/2024 02:55:46PM as of right now. The separator character for the date is taken from your locale settings, so if your locale defines that separator as a . 
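Incidentally, dformat 3 is the ODBC format used earlier in this article, so the same pair of functions will round-trip ODBC strings as well; a quick sketch using the values from the examples above:

write $ZDTH("2024-07-24 14:55:46",3,1)            // 67045,53746
write $ZDT($ZDTH("2024-07-24 14:55:46",3,1),1,1)  // 07/24/2024 14:55:46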
If I were to start with that European-format timestamp as my input and wanted to convert it to the internal HOROLOG format, I would apply the next line:

write $ZDTH("24/07/2024 02:55:46PM",4,3)

It will write out 67045,53746. Now, let's say I have the timestamp in the European twelve-hour clock format, but I want to display it in an American format with a 24-hour clock. I can operate both functions at once to achieve that result, as demonstrated below:

write $ZDT($ZDTH("24/07/2024 02:55:46PM",4,3),1,1)

It will write out 07/24/2024 14:55:46. It is also worth pointing out that if you do not specify a dformat or a tformat, the conversion will use the date or time format specified by your locale.

Speaking of locale, those of us who do business internationally may occasionally want to work with different time zones. That is when the special $ZTIMEZONE (or $ZTZ) variable comes into play. This variable contains the current offset from GMT in minutes. You can also change it to set the time zone for a process. For instance, I live in US Central time, so my offset is 360. If I want times to be expressed in Eastern time for one of my customers, I can set $ZTIMEZONE = 300 to make the process express dates in that time zone.

As previously mentioned, UTC and Posix both count from a specific point in time. Just as they are not affected by daylight saving time, they are also not affected by time zones. This means that things that rely on them, such as $ZTIMESTAMP, are not influenced by $ZTIMEZONE. Functions that refer to your local time, however, are. For example, the following sequence of commands will write out the current date and time in my local time zone (Central time), and then in Eastern time:

write $ZDATETIME($H,3)
set $ZTIMEZONE = 300
write $ZDATETIME($H,3)

However, replacing $H above with $ZTS would just show the same time twice. If you wish to convert between UTC and local time while taking into account any changes you have made to $ZTIMEZONE in your current process, there are additional methods to help you with that. You can use them as follows:

WRITE $SYSTEM.Util.UTCtoLocalWithZTIMEZONE($ZTIMESTAMP)
WRITE $SYSTEM.Util.LocalWithZTIMEZONEtoUTC($H)

You could also utilize the methods from the %UTC class that we covered above. With these tools, you should be able to do any kind of time conversion you need, except converting the current local time to five o'clock!

A very informative article. Here are a few things I can add: By setting the TZ environment variable, one can run an InterSystems process in a time zone different from the system's local time zone. I am in Boston (currently Eastern Daylight Time):

$ iris session iris
USER>WRITE $ZDATETIME($HOROLOG,3,1)," Boston = ",$ZDATETIME($ZTS,3,1)," UT"
2024-09-18 12:55:55 Boston = 2024-09-18 16:55:55 UT

$ TZ=America/Chicago iris session iris
USER>WRITE $ZDATETIME($HOROLOG,3,1)," Chicago = ",$ZDATETIME($ZTS,3,1)," UT"
2024-09-18 11:57:22 Chicago = 2024-09-18 16:57:22 UT

You can even dynamically change a process time zone by using the "C" callout feature to call tzset() with the new time zone. The documentation mentions $ZTIMEZONE for changing time zones, but this only works if both time zones change between summer and winter time at the same time, which occurs in Europe, but nowhere else. A common problem seen by InterSystems support, approximately twice per year, is a system that becomes an hour off because it doesn't transition between summer and winter at the correct time.
This is almost always the fault of a missing operating system update. We can supply a small "C" program to dump the current time zone rules for Unix, Unix-like, and OpenVMS systems, and a PowerShell script for Windows systems.

excellent article! thank you for taking the time to compose and share!

A very useful summary of dates, thank you very much!

This is really useful, thank you. I often have to look in the Community forum when it comes to handling dates and times.

Article
Evgeny Shvarov · Jan 19, 2016

InterSystems Community Manager

Hi everyone! I’m the new InterSystems Community Manager, responsible for everything within the InterSystems Developer Community itself, including Community events, content, and projects. Please feel free to direct all criticism to me, along with suggestions, of course! Our team is now working on fixing some evident and annoying UI problems to make this place comfortable for all InterSystems customers and new developers to discuss InterSystems-related problems, questions, technology, and projects. My duty is to make you happy with the InterSystems Community Portal and to help you gain maximum value from it!

Thanks for the introduction. I'm looking forward to annoying you by providing lots of feedback!

Bravo - thank you Evgeny :)

Article
Eduard Lebedyuk · Oct 18, 2016

Macros in InterSystems Caché

In this article I would like to tell you about macros in InterSystems Caché. A macro is a symbolic name that is replaced with a set of instructions during compilation. A macro can “unfold” into various instruction sets each time it is called, depending on the parameters passed to it and the activated scenarios. This can be both static code and the result of ObjectScript execution. Let's take a look at how you can use them in your application.

Compilation

To begin with, let's see how ObjectScript code is compiled:

The class compiler uses class definitions to generate MAC code. In some cases, the compiler uses classes as a basis for generating additional classes; you can see these classes in the Studio, but you should not change them. This happens, for example, while generating classes that define web services and clients.
The class compiler also generates a class descriptor used by Caché at runtime.
The preprocessor (also referred to as the macro preprocessor, MPP) uses INC files and replaces macros. Besides, it also processes embedded SQL in ObjectScript routines.
All of these changes take place in memory; the user's code remains unchanged.
After that, the compiler creates INT code for ObjectScript routines. This layer is known as intermediate code; all access to data on this level is provided via globals.
INT code is compact and can be read by a human. To view it in the Studio, press Ctrl+Shift+V.
INT code is used for generating OBJ code.
OBJ code is used by the Caché virtual machine. Once it's generated, CLS/MAC/INT code is no longer needed and can be deleted (for example, if we want to ship a product without the source code).
If the class is persistent, the SQL compiler will create corresponding SQL tables.

Macros

As I mentioned before, a macro is a symbolic name that is replaced by the preprocessor with a set of instructions. A macro is defined with the help of the #Define command, followed by the name of the macro (perhaps with a list of arguments) and its value:

#Define Macro[(Args)] [Value]

Where can macros be defined? Either in the code or in standalone INC files containing only macros. The necessary files are included into classes at the very beginning of class definitions using the Include MacroFileName command; this is the main and preferred method of including macros into classes. Macros included this way can be used in any part of a class. You can use the #Include MacroFileName command to include an INC file with macros into MAC routines or the code of particular class methods. Note that method generators require #Include inside their own body if you want to use macros at compile time, or the use of the IncludeGenerator keyword in the class. To make a macro available in Studio autocomplete, add /// on the previous line:

/// #Define Macro[(Args)] [Value]

Examples

Example 1

Let's jump to some examples now, and why don't we start with the standard “Hello World” message? COS code:

Write "Hello, World!"

We'll create a macro called HW that will write this line:

#define HW Write "Hello, World!"

All we need to do now is to write $$$HW ($$$ for calling the macro, then its name):

ClassMethod Test()
{
    #define HW Write "Hello, World!"
    $$$HW
}

It will be converted into the following INT code during compilation:

zTest1() public {
    Write "Hello, World!"
}

The following text will be shown in the terminal when this method is called:

Hello, World!

Example 2

Let's use variables in the following example:

ClassMethod Test2()
{
    #define WriteLn(%str,%cnt) For ##Unique(new)=1:1:%cnt { ##Continue
    Write %str,! ##Continue
    }
    $$$WriteLn("Hello, World!",5)
}

Here the %str string is written %cnt times. The names of variables must start with %. The ##Unique(new) command creates a new unique variable in the generated code, while the ##Continue command allows us to continue defining the macro on the next line. This code converts into the following INT code:

zTest2() public {
    For %mmmu1=1:1:5 {
    Write "Hello, World!",!
    }
}

The terminal will show the following:

Hello, World!
Hello, World!
Hello, World!
Hello, World!
Hello, World!

Example 3

Let's proceed to more complex examples. A ForEach operator can be very useful for iterating through globals, so let's create one:

ClassMethod Test3()
{
    #define ForEach(%key,%gn) Set ##Unique(new)=$name(%gn) ##Continue
    Set %key="" ##Continue
    For { ##Continue
    Set %key=$o(@##Unique(old)@(%key)) ##Continue
    Quit:%key=""
    #define EndFor }

    Set ^test(1)=111
    Set ^test(2)=222
    Set ^test(3)=333
    $$$ForEach(key,^test)
    Write "key: ",key,!
    Write "value: ",^test(key),!
    $$$EndFor
}

Here is how it looks in INT code:

zTest3() public {
    Set ^test(1)=111
    Set ^test(2)=222
    Set ^test(3)=333
    Set %mmmu1=$name(^test)
    Set key=""
    For {
    Set key=$o(@%mmmu1@(key))
    Quit:key=""
    Write "key: ",key,!
    Write "value: ",^test(key),!
    }
}

What is going on in these macros?

The name of the global is written to a new variable %mmmu1 (the $name function).
The key assumes the initial empty string value.
The iteration cycle starts.
The next value of the key is assigned using indirection and the $order function.
A post-condition is used to check whether the key has assumed a "" value; if it has, the iteration is completed and the cycle ends.
Arbitrary user code is executed; in this case, key and value output.
The cycle closes.

The terminal shows the following when this method is called:

key: 1
value: 111
key: 2
value: 222
key: 3
value: 333

If you are using lists and arrays inherited from the %Collection.AbstractIterator class, you can write a similar iterator for them.

Example 4

Yet another capability of macros is the execution of arbitrary ObjectScript code at compilation time and substitution of its results in place of the macro. Let's create a macro for showing the compilation time:

ClassMethod Test4()
{
    #Define CompTS ##Expression("""Compiled: " _ $ZDATETIME($HOROLOG) _ """,!")
    Write $$$CompTS
}

Which transforms into the following INT code:

zTest4() public {
    Write "Compiled: 18.10.2016 15:28:45",!
}

The terminal will display the following line when this method is called:

Compiled: 18.10.2016 15:28:45

##Expression executes the code and substitutes the result. The following elements of the ObjectScript language can be used for input:

Strings: "abc"
Routines: $$Label^Routine
Class methods: ##class(App.Test).GetString()
COS functions: $name(var)
Any combination of these elements

Example 5

The preprocessor directives #If, #ElseIf, #Else, #EndIf are used for selecting the source code during compilation, depending on the value of the expression following a directive. For example, this method:

ClassMethod Test5()
{
    #If $SYSTEM.Version.GetNumber()="2016.2.0" && $SYSTEM.Version.GetBuildNumber()="736"
        Write "You are using the latest released version of Caché"
    #ElseIf $SYSTEM.Version.GetNumber()="2017.1.0"
        Write "You are using the latest beta version of Caché"
    #Else
        Write "Please consider an upgrade"
    #EndIf
}

Will be compiled into the following INT code in Caché version 2016.2.0.736:

zTest5() public {
    Write "You are using the latest released version of Caché"
}

And the following will be shown in the terminal:

You are using the latest released version of Caché

If we use a Caché version downloaded from the beta portal, the compiled INT code will look different:

zTest5() public {
    Write "You are using the latest beta version of Caché"
}

The following will be shown in the terminal:

You are using the latest beta version of Caché

Older versions of Caché will compile the following INT code with a suggestion to update the program:

zTest5() public {
    Write "Please consider an upgrade"
}

The terminal will show the following text:

Please consider an upgrade

This capability may come in handy, for example, in situations where you want to ensure compatibility of the client application with older and newer versions, where new Caché features may be used. The preprocessor directives #IfDef and #IfNDef serve the same purpose by verifying the existence or absence of a macro, respectively (see the short sketch at the end of this post).

Conclusions

Macros can make your code more readable by simplifying frequently used constructions, and they help you implement some of your application's business logic at compilation time, thus reducing the load at runtime.

What's next?

In my next article, I will tell you about a more practical example of using macros in an application: a logging system.

Links

About compilation
List of preprocessor directives
List of system macros
Class with examples from this article
Part 2: Logging

On the whole, a nice job. Three points, however:
Be careful in saying the .CLS / .MAC code can be deleted. If they ever wish to change their code, they will need the classes and possibly the .MAC code.
There continues to be a lot of confusion around writing .INT code as a main source. The VA does it all the time. The dangers of this should be pointed out.
The differences between .MAC (macro) routines and macros should be explained. Using the term "macro" for code saved as .INC is quite a challenge to beginners working in .MAC and even more so in .CLS.
Always a nice check for a trainer to verify the attention of his victims.

Be careful when using ##Unique. The problems with the scope of the generated "unique" variable can introduce weird bugs (in some cases it is not as unique as you would expect; WRC problems 879820 and 879901). Citing ISC support from WRC 879820: The problem is that the variable name created by ##Unique is only unique among the set of variables for that method. However, the variable itself is globally scoped. Since the variable name is globally scoped, I think the variable names should be unique for the entire class, not just for the method. I will report this to development. Please note, however, since the variable is globally scoped, using the preprocessor directive as you are here will leak information to any code that calls this class. To prevent this, you should NEW the variable prior to using it:

#define A(%x) NEW ##Unique(new) SET ##Unique(old)=%x

Also, ##Unique directives cannot be used in nested macros (a macro calling another macro; WRC 879901).
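Picking up the #IfDef / #IfNDef directives mentioned at the end of the article, here is the promised minimal sketch (the Debug macro and the Log method are hypothetical):

ClassMethod Log(msg As %String)
{
    #define Debug
    #IfDef Debug
        Write "DEBUG: ",msg,!
    #Else
        // compiled to nothing unless the Debug macro is defined
    #EndIf
}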
