Clear filter
Announcement
Anastasia Dyubaylo · Aug 26, 2019
Hi Developers!
InterSystems Developers Community today unites more than 7,000 developers from all over the world. Since 2016, our community has been growing and improving for you, our dear developers!
Together we've done a lot over these years, and much more is planned for the future!
So, who makes our community better every day? Who works hard for all of us and improves the space for developers?
Let's warmly greet our team:
@Evgeny.Shvarov – founder of InterSystems Developers community, Startups and Community manager at InterSystems.
@David.Reche – founder & manager of Spanish Developers Community & Senior Sales Engineer at InterSystems.
@Anastasia.Dyubaylo – our Community & Social Media Manager at InterSystems. She leads Global Masters Advocacy Hub and all InterSystems Developers social networks. Anastasia also reviews many of InterSystems' events on the community and on social media.
@Olga.Zavrazhnova2637 – our Global Masters Advocacy Hub Manager at InterSystems. She has managed Global Masters since its launch in 2016. Now Olga creates engagement campaigns and explores new reward ideas for Global Masters.
@Julia.Fedoseeva – Educational and Logistics manager at InterSystems and also our Global Masters Advocacy Hub Manager. She organizes the delivery of GM Rewards around the whole world.
Gamification and Community Management – that's what these guys are about. They support you on your way with the InterSystems Global Masters Advocacy Hub!
And...
Of course, our remarkable Moderators in Developers Community team.
Please welcome:
@Eduard.Lebedyuk – Sales Engineer at InterSystems in Moscow, Russia.
@Dmitry.Maslennikov – Developer Advocate, co-founder of CaretDev corp.
@Sean.Connelly – Managing Director / Software Engineer at MemCog LTD.
@John.Murray – Senior Product Engineer at George James Software.
@Scott.Roth – Senior Applications Development Analyst at the Ohio State University Wexner Medical Center.
@Jeffrey.Drumm – Vice President and Chief Operating Officer at Healthcare Integration Consulting Group (HICG).
@Henrique – System Management Specialist, Database Administrator at Sao Paulo Federal Court.
And our Moderators of the Spanish Community Team:
@Francisco.López1549 – Project Manager & Head of Interoperability at Salutic Soluciones, S.L.
@Nancy.Martinez – Solution Consultant at Ready Computing.
So!
Now you know all InterSystems Developer Community heroes!
Stay tuned with us! 🚀
Awesome!!! I've always got superb guidance and direction to all my questions. Thank you guys!!
Nice to put faces to the people :) Great job guys!! Thanks, @Eric.David!
Thank you! Happy to work with such people! Great team, perfect Community!
A nice overview -- I like those "cheat sheets"! Applause for you all! Keep up the good work!
Like the community very much! Thanks to all of you, guys, for your effort and help!! Great job!! (And remember, with great power comes great responsibility.) Thanks, Udo!
Another version of KYC - Know Your Community ) Thanks, Marco! See you on GS2019!
Discussion
Nikita Savchenko · Dec 12, 2019
Hello, InterSystems community!
Lately, you have probably heard of the new InterSystems Package Manager - ZPM. If you're familiar with it or with such package managers as NPM, Dep, pip/PyPI, etc., or just know what it is all about -- this question is for you! The question I want to raise is actually a system design question, or, in other words, "how should ZPM implement it".
In short, ZPM (the new package manager) allows you to install packages/software into your InterSystems product in a very convenient, manageable way. Just open up the terminal, run the ZPM routine, and type install samples-objectscript: you will have a new package/software installed and ready to use! In the same way, you can easily delete and update packages.
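For illustration, a typical session looks roughly like this (the exact prompt and output vary between ZPM builds, so treat this as a sketch):
USER>do ^zpm
zpm: USER>install samples-objectscript
[samples-objectscript] Install SUCCESS
zpm: USER>uninstall samples-objectscript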
From the developer's point of view, much the same as in other package managers, ZPM requires the package/software to have a package description, represented as a module.xml file. Here's an example of it. This file describes what to install, which CSP applications to create, which routines to run once installed, and so on.
Now, straight to the point. You've also probably heard of InterSystems WebTerminal - one of my projects, which is quite widely used (over 500 installs over the last couple of months). We are trying to bring WebTerminal to ZPM.
So far, anyone could install WebTerminal just by importing an XML file with its code - no further actions were needed. During class compilation, WebTerminal runs its projection and does all the required setup on its own (web application, globals, etc. - see here). In addition, WebTerminal has its own self-update mechanism, which allows it to update itself when a new version comes out, implemented with the use of projections as well. Apart from that, I have 2 more projects (Class Explorer, Visual Editor) that use the same convenient import-and-install mechanism.
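For readers who haven't used projections: a projection-enabled class overrides compiler hooks, so its installation logic runs right when the class is compiled and its cleanup logic runs when the class is removed. A minimal sketch of the pattern (names and method bodies here are illustrative, not WebTerminal's actual code):
Class My.SelfInstaller Extends %Projection.AbstractProjection
{

/// Registering the projection makes the compiler call the hooks below
Projection Installer As My.SelfInstaller;

/// Called by the compiler right after this class is compiled:
/// a good place to create a web application, write defaults to globals, etc.
ClassMethod CreateProjection(cls As %String, ByRef params) As %Status
{
    // ... set up the web application, globals, and so on ...
    Quit $$$OK
}

/// Called when the class is removed: undo whatever CreateProjection did
ClassMethod RemoveProjection(cls As %String, ByRef params, recompile As %Boolean) As %Status
{
    Quit $$$OK
}

}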
But it was decided that ZPM won't accept projections as a paradigm, and everything should be described in the module.xml file. Hence, to publish WebTerminal for ZPM, the team tried to remove the Installer.cls class (the WebTerminal class which did all the install-update magic with the use of projections) and manually replaced it with module.xml metadata. This turned out to be enough for WebTerminal to work, but staying 100% compatible with ZPM this way potentially leads to unexpected incompatibilities (see below). Thus, source code changes are needed.
So the question is: should ZPM really reject all projection-enabled classes in its packages? The decision to avoid projections might be changed via the open discussion here. It's not a question of "why can't I rewrite WebTerminal's code", but rather "why not just accept the original software code even if it uses projections?"
My opinion was quite strong against avoiding projection-enabled classes in ZPM modules. For multiple reasons. But first of all, because projections are the way how the programming language works, and I see no constructive reasoning against using them for whatever the software/package is designed for. Avoiding them and cutting Installer.cls class from the release is absolutely the same as patching a working module. I agree that the packages which ship specifically for ZPM should try to use all installation features which module.xml provides, however, WebTerminal is also shipped outside of ZPM, and maintaining 2 versions of WebTerminal (at least, because of the self-update feature) makes me think that something is wrong here.
I see the following pros of keeping all projection-enabled classes in ZPM:
The package/software will still be compatible with both ZPM and a regular installation done for years (via XML/classes import)
No original package/software source code changes needed to bring it to ZPM
All designed functions work as expected and don't cause problems (for instance, WebTerminal's self-update: upon update, it loads the XML file with the new version and imports it, including the projection-enabled Installer.cls file anyway)
Cons of keeping all projection-enabled classes in ZPM:
Side effects performed during installation/uninstallation by projection-enabled classes won't be statically described in the module.xml file, hence they are "less auditable". There is an opinion that any side effect must be described in the module.xml file.
Please indicate any other pros/cons if this isn't the full list. What do you think?
Thank you! Exactly - not for installing purposes; you're right, I agree. But what do you think about the WebTerminal case in particular?
1. It's already developed and bound to projections: installation, its own update mechanism, etc.
2. It's also shipped outside of ZPM.
3. It would work as usual if only ZPM supported projections.
I see you're pointing out that "It might need to support Projections eventually because, as you said, it's a part of the language" - that's mostly what my point is about. Why not just allow them? Thanks! Exactly, I completely agree about simplicity, transparency, and an installation standard. But see my reply to Sergey's answer - what to do with WebTerminal in particular?
1. Why would I need to rewrite the update mechanism I developed years ago (for example)?
2. Why would I need to maintain 2 code bases for ZPM & regular installations (or automate it in a quite crazy way, or just drop the self-update feature when ZPM is detected)?
3. Why are all these source code changes needed, after all, if it "just works" normally without ZPM complications (which is how ObjectScript works)?
I think this leads to either a "make a package ZPM-compatible" or a "make ZPM ObjectScript-compatible" discussion, doesn't it? The answer to all this could be "To make the world a better place" ).
Because if you do all 3 you get:
the same wonderful WebTerminal, but with a simple, transparent, and standard installation mechanism, and yet another channel for distribution, because ZPM seems to be a very handy and popular way to install/try the stuff.
Maybe yet another channel of clear and handy app distribution is a good enough argument to change something in the application too?
True points. For sure, developers can customize it. I can do another version of WebTerminal specifically for ZPM, but it will involve additional coding and support:
1. A need to change how the self-update mechanism works or shut it down completely. Right now, the user gets a message in the UI suggesting to update WebTerminal to the latest version. Quite a lot happens under the hood.
2. Thus, a need to create an additional pipeline (or split the codebase) for 2 WebTerminal versions: ZPM's one and a regular one, with all the tests and so on.
I am wondering whether it is worth doing from WebTerminal's perspective, or whether it is better to make WebTerminal a kind of exception for ZPM. Because, still, inserting a couple of if (isZPMInstalled) { ... } else { ... } conditions into WebTerminal (even on the front-end side) looks like an anti-pattern to me. Thanks! Considering the points others mention, I agree that projections should not be the way to install things, but rather the acceptable exception, as for WebTerminal and other complex packages. Another option, rather than having two versions of the whole codebase, could be a wrapper module around webterminal (i.e., another module that depends on webterminal) with hooks in webterminal that allow the wrapper to turn off projection-based installation features. I completely agree, and to get to
standard installing mechanism
for USERS, we need to zpm-enable as many existing projects as possible. To enable these projects, we need to simplify zpm-enabling, leveraging existing code if possible (or at least not preventing developers from leveraging existing code). I think allowing developers to use already existing installers (whatever form they may take) would help with this goal. This is very wise, thanks Ed!
For zpm-enabling we plan to add a boilerplate module.xml generator for the repo - stay tuned.
Hi Nikita,
> A need to change how the self-update mechanism works or shut it down completely.
If a package is distributed via a package manager, its self-update should be completely removed. It should be the responsibility of the package manager to alert the user that a new version of the package is available and to install it.
> Thus, create an additional pipeline (or split the codebase) for 2 WebTerminal versions: ZPM's one and a regular one with all the tests and so on.
Some package managers allow applying patches to software before packaging it, but I don't think that's the case for ZPM at the moment. I believe you will need to do a separate build for the ZPM/non-ZPM versions of your software. You can either apply some patches during the build, or refactor the software so that it can run without the auto-updater if it's not installed.
Hi Nikita!
Do you want the ZPM exception for WebTerminal only or for all your InterSystems solutions? )
The whole purpose of a package manager is to get rid of individual installer/updater scripts written by individual developers and replace them with a package management utility, so that you have a standard way of installing, removing and updating your packages. So I don't quite understand why this question is raised in this context -- of course a package manager shouldn't support custom installers and updaters. It might need to support Projections eventually because, as you said, they're a part of the language, but definitely not for installing purposes.
I completely support the inclusion of projections.
ObjectScript Language allows execution of arbitrary code at compile time through three different mechanisms:
Projections
Code generators
Macros
All these instruments are entirely unlimited in their scope, so I don't see why we need to prohibit one way of executing code at compilation.
Furthermore, ZPM itself uses Projections to install itself, so closing this avenue to other projects seems strange.
Hi Nikita!
Thanks for the good question!
The answer to why module.xml is preferable to Installer.cls with projections is quite obvious, IMHO.
Compare a module.xml and an Installer.cls which do the same thing.
Examining module.xml, you can clearly say what the installation does and easily maintain/support it.
In this case, the package installs:
1. classes from WebTerminal package:
<Resource Name="WebTerminal.PKG" />
2. creates one REST Web app:
<CSPApplication
Url="/terminal"
Path="/build/client"
Directory="{$cspdir}/terminal"
DispatchClass="WebTerminal.Router"
ServeFiles="1"
Recurse="1"
PasswordAuthEnabled="1"
UnauthenticatedEnabled="0"
CookiePath="/"
/>
3. creates another REST Web app:
<CSPApplication
Url="/terminalsocket"
Path="/terminal"
Directory="{$cspdir}/terminalsocket"
ServeFiles="0"
UnauthenticatedEnabled="1"
MatchRoles=":%DB_CACHESYS:%DB_IRISSYS:{$dbrole}"
Recurse="1"
CookiePath="/"
/>
I cannot say the same for Installer.cls on projections - what does it do to my system?
Simplicity, transparency, and an installation standard with the zpm module.xml approach - vs what?
From the pros/cons, it seems the objectives are:
Maintain compatibility with normal installation (without ZPM)
Make side effects from installation/uninstallation auditable by putting them in module.xml
I'd suggest as one approach to accomplish both objectives:
Suppress the projection side effects when running in a package manager installation/uninstallation context - either by checking $STACK or using some trickier under-the-hood things with singletons from the package manager; regardless, be sure to unit test this behavior! (See the sketch after this list.)
Add "Resource Processor" classes (specified in module.xml with Preload="true" and not included in normal WebTerminal XML exports used for non-ZPM installation) - that is, classes extending %ZPM.PackageManager.Developer.Processor.Abstract and overriding the appropriate methods - to handle your custom installation things. You can then use these in your module manifest, provided that such inversion of control still works without bootstrapping issues following changes made in https://github.com/intersystems-community/zpm.
Generally-useful things like creating a %All namespace should probably be pushed back to zpm itself.
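A rough sketch of the first suggestion; the detection predicate is the assumption here, since whether to inspect $STACK frames or ask a package-manager singleton depends on ZPM internals:
ClassMethod CreateProjection(cls As %String, ByRef params) As %Status
{
    // Let module.xml describe everything when the package manager drives the install
    If ..IsPackageManagerContext() Quit $$$OK
    // ... regular projection-based setup for XML-import installs ...
    Quit $$$OK
}

ClassMethod IsPackageManagerContext() As %Boolean
{
    // Walk the call stack looking for a package-manager frame (illustrative check)
    For i=1:1:$STACK(-1) {
        If $STACK(i, "PLACE") [ "%ZPM.PackageManager" Return 1
    }
    Return 0
}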
Question
Alex Van Schoyck · Jan 30, 2019
Problem
I'm working on exporting data from an InterSystems Caché database through the Caché ODBC driver. There is a particular table that is giving me an error message. The ODBC driver crashes and reports an error from the Caché system. I think I was able to trace down where the error is coming from, but I do not know how to debug or fix the error. The table I am trying to extract is called SEDMIHP.
Here's the error:
[Cache Error: <<UNDEFINED>%0AmBd16^%sqlcq.PRD.3284 ^SEDMIHP(4,77)>]
[Location: <ServerLoop - Query Fetch>]
Research/Trial & Error
I was able to open up Caché Studio and find the class that matched up with the table name. I should mention that this is my very first time working with InterSystems Caché, so I apologize if I sound dumb or inexperienced here.
Within the SQLMap, I found this code:
<Data name="DESCRIP_2">
<RetrievalCode> S {DESCRIP_2}=$P($G(^PHPROP({L1},"DESC_CODES")),"\",2) S {DESCRIP_2}=$S($L({DESCRIP_2}):^SEDMIHP($P({DESCRIP_2},","),$P({DESCRIP_2},",",2)),1:{DESCRIP_2})
S {DESCRIP_2}=$E({DESCRIP_2},1,80)
</RetrievalCode>
</Data>
I'm thinking that the code in here is causing the issue. With my very limited understanding of ObjectScript, I think this code is manipulating the text/string, and maybe if there's an undefined or bad value in the data, it's causing those functions to throw an error?
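That guess is consistent with the error text: <UNDEFINED> is exactly what ObjectScript throws when an expression reads a global node that was never set, which is what the ^SEDMIHP(...) lookup in the $S(...) branch above can do. A quick terminal illustration (the global node here is made up):
USER>write ^SEDMIHP(4,77)

WRITE ^SEDMIHP(4,77)
^
<UNDEFINED> ^SEDMIHP(4,77)
USER>write $GET(^SEDMIHP(4,77),"(missing)")
(missing)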
I have limited access to the Cache Management Portal, but I am able to find the table in the SQL schema and run a query on it. About 300 rows of data load before the same error as above shows up, and it stops loading any more rows. This is why I'm thinking there is bad data.
I tried using ISNULL() and IFNULL() in the SELECT statement to try and skip any bad data, but had the same error in the same spot every time.
Questions
Is there an easy solution from the SQL side that can avoid this error?
Is there anything I can do with the class code in Studio to debug or get more info about this error?
Any and all help is greatly appreciated!
Additional Info
Cache Version: Cache for OpenVMS/IA64 V8.4 (Itanium) 2012.1.5 (Build 956 + Adhoc 12486) 17-APR-2013 19:49:58.07
Dmitry, thank you so much! That worked perfectly and solved the issue! I'd been coming up empty-handed for hours trying to figure it out. I really appreciate your help!
If you can edit this code, you can try changing it to this:
<Data name="DESCRIP_2">
<RetrievalCode> S {DESCRIP_2}=$P($G(^PHPROP({L1},"DESC_CODES")),"\",2) S {DESCRIP_2}=$S($L({DESCRIP_2}):$Get(^SEDMIHP($P({DESCRIP_2},","),$P({DESCRIP_2},",",2))),1:{DESCRIP_2})
S {DESCRIP_2}=$E({DESCRIP_2},1,80)
</RetrievalCode>
</Data>
But I'm not sure if this is correct.
What I did there is wrap the retrieval of data from the global ^SEDMIHP in the function $Get().
Or this way, with the default value
<Data name="DESCRIP_2">
<RetrievalCode> S {DESCRIP_2}=$P($G(^PHPROP({L1},"DESC_CODES")),"\",2) S {DESCRIP_2}=$S($L({DESCRIP_2}):$Get(^SEDMIHP($P({DESCRIP_2},","),$P({DESCRIP_2},",",2)),{DESCRIP_2}),1:{DESCRIP_2})
S {DESCRIP_2}=$E({DESCRIP_2},1,80)
</RetrievalCode>
</Data>
Announcement
Andreas Dieckow · Jan 9, 2019
InterSystems has completed the verification process for OpenJDK 8. Customers now have the option to use either the Oracle JDK or OpenJDK with all InterSystems products and versions that support Java 8. Future versions will continue to be supported on both of these Java Development Kits.
Nice, another step to much more visibility.
Announcement
Anastasia Dyubaylo · May 22, 2019
Hi Community!
Please welcome a new video on InterSystems Developers YouTube Channel:
InterSystems IRIS from Spark to Finish
This video demonstrates spinning up a cluster that combines InterSystems IRIS' powerful data management with Apache Spark's unique approach to parallelizing complex data engineering and analytics. Together, they let you make the best use of your distributed environment.
Takeaway: The combination of InterSystems IRIS and Apache Spark enables me to build powerful analytical solutions.
Presenter: @Amir.Samary
And...
You can find additional materials for the video in this InterSystems Online Learning Course.
Don't forget to subscribe to our InterSystems Developers YouTube Channel.
Enjoy and stay tuned!
Announcement
Anastasia Dyubaylo · May 13, 2019
Hey Community!
It's time again for good tidings for you!
For the first time, InterSystems will be part of the WeAreDevelopers World Congress in Berlin, Germany, which brings together developers, IT experts and digital innovators to discuss and shape the future of application development.
From 6 to 7 June, we’re ready to welcome you at our booth #A5 and show you how InterSystems technologies enable intelligent interoperability and accelerate the creation of powerful, data-driven applications.
Schedule your individual meeting with InterSystems @ WeAreDevelopers in Berlin by quickly filling out the form on our event page [https://dach.intersystems.de/WeAreDevelopers2019] – the first three applicants will receive a FREE ticket!
To make it more challenging for you, we’ve decided to provide the webpage & form in German only.
So...Do not miss your chance!
Join the World’s Largest Developers Congress with InterSystems!
Article
Nikita Savchenko · Feb 12, 2019
This is one of my articles which was never published in English. Let's fix it!
Hello! This article is about quite a practical way of developing InterSystems solutions without using the integrated tools like Studio or Atelier. All the code of the project can be stored in the form of "traditional" source code files, edited in your favorite development environment (for example, Visual Studio Code), indexed by any version control system and arbitrarily combined with many external tools for code analysis, preprocessing, packaging and so on.
The approach described in this article is suitable for any type of project on top of InterSystems products. In my case, I developed a couple of my applications (WebTerminal, Visual Editor, Class Explorer) using this approach. This article demonstrates a development cycle which is not traditional for InterSystems, but rather a practical one, which you may prefer to use for some of your developments.
TL;DR Here are some examples of projects that utilize the approach described in this article: WebTerminal, Class Explorer, Visual Editor, Entity Browser (possibly, some other projects have picked up this idea - comment below!). If you want to check the file structure of these projects, click "Open" located right after "Repository" label, and you'll be redirected to GitHub. I've been developing these projects completely without Studio/Atelier!
Below is a description of several of the simplest ways to organize this project development technique. Each method can be modified and expanded into a full-fledged tool for importing, assembling, or even debugging projects in InterSystems Caché; however, the purpose of this article is to provide the basics only and to show that it can work.
The described approach to development features the following:
The entire project (its source code) is located in the file system, with any arbitrary directory structure.
The project directory is indexed by the Git version control system, has readme file, configs and scripts required for importing/compiling the project.
The source code of the classes is in CLS format (as it appears in Studio/Atelier).
Work on the project is carried out entirely in the file system, code writing - in any external text editor or IDE.
The main feature of this approach is that you can connect any additional tools, for example, code preprocessing (like stub replacements at the compilation stage), front end assembly and so on.
This article will not cover ObjectScript routines, CSP and other files, but only ObjectScript class files. Work with routines can be organized in the same way as with ObjectScript classes. When necessary, by analogy with the presented example, you can implement support for importing ObjectScript routines yourself. Regarding CSP files, these are just files on disk, so you don't need to import them at all. To make CSP files work with your InterSystems application, just copy them to the directory of your application.
The method described in the article does not require any additional tools or platforms, except for an installed version 2016.2+ of InterSystems Caché (Ensemble, HealthShare) or InterSystems IRIS. The additional assembly and preprocessing of the client code in this article is built with Node.js; however, you can use any other technology you like. Node.js is an open-source and easy-to-use platform, chosen here because there are many ready-to-go packages already built for the tasks we are about to perform.
Motivation Behind Development in Non-InterSystems IDEs
The question arises: why not just continue to develop in Studio, or switch to the "new Studio", Atelier? What is the point of not using these IDEs at all?
The ObjectScript programming language is very different from other common languages such as C#, Java, JavaScript, Python, Golang and others. The key difference is that the language is "closed" in itself. Out of the box, many tools come directly from InterSystems, which is slowly changing with the introduction of InterSystems Open Exchange, a collection of community-created applications and tools for InterSystems products, and the company's policy of making InterSystems more open. In my opinion, these changes are necessary to make ObjectScript a world-class player in the list of programming languages.
Moreover, historically, ObjectScript programs, as well as their source code, are stored directly in the DBMS itself. Before UDL support was introduced in InterSystems Caché 2016.2 (or CDL in version 2013.2 - read below), in order to extract the source code from the database, it was necessary to write a considerable program to export plain-text sources to files, and to put even more effort into getting the code back into the DBMS. Now, exporting and importing plain-text source code is possible with just a single command, so you can easily organize a "traditional" model of solution development: editing source code files — compiling — getting results.
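For reference, on 2016.2+ that round trip is roughly one call in each direction; the paths here are placeholders, and the "ck" qualifiers match the import example later in this article. The first command exports a class as a plain-text UDL file; the second loads and compiles it back:
USER>do $system.OBJ.ExportUDL("DevProject.Robot.cls","/path/to/project/source/cls/DevProject/Robot.cls")

USER>do $system.OBJ.Load("/path/to/project/source/cls/DevProject/Robot.cls","ck")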
Before Atelier, it simply wasn't possible to develop InterSystems applications on Linux/macOS without a VM, since Caché Studio was supported only on Windows. Now that Atelier is based on the Eclipse IDE, you can develop on any platform supported by Eclipse. However, the method described in the article is completely cross-platform.
Some projects have many other sources and files besides ObjectScript classes. The question here is how to properly organize the source code of the entire project. Today, the following development cycle is used for projects built with InterSystems tech: you work on sources in Studio/Atelier, and then you export XML/CLS files to a VCS-indexed file system with the help of embedded tools. These exported files are not intended for modification. In the case of Atelier, the development cycle is designed around Atelier only, and each and every extension has to be supported by the IDE. There is little support for external tools, build tools, code analyzers and preprocessors, and there is no support for an arbitrary project structure, and so on. To sum up, mostly only what was designed initially is supported.
Finally, the most important motivation, taking into account all of the above, is to open the ObjectScript programming language to the whole world. This has already begun: InterSystems introduced InterSystems Open Exchange, ObjectScript support was developed for the Sublime Text editor, Atom and Visual Studio Code, and so on. See? That's what it is about!
Syntax highlighting in Visual Studio Code - an "external" IDE for ObjectScript
Introduction
Exporting ObjectScript program sources to UDL (Universal Definition Language) format landed completely in InterSystems Caché 2016.2. In previous versions, starting with InterSystems Caché 2013.2, class code export was also possible using the methods of the class %Compiler.UDL.TextServices. It is also worth mentioning that, starting from version 2016.2, the Atelier REST API is also available for importing/exporting class definitions for InterSystems products.
Before UDL, it was only possible to export and import the XML representation of classes, which was just a big mess for version control systems. Though the XML class definition also contained plaintext code you could edit, you weren't able to see clean commit diffs (example: one of my projects which still has some XMLs in it), do merge requests and so on. UDL cleans this up and opens new possibilities for developing projects on top of InterSystems products.
The result of this article is the simplest possible project organized entirely in the file system, and several scripts that create a single command to build and import the whole thing into DBMS.
Prerequisites
Let's assume that we have an ObjectScript project, which consists of class definitions (as well as, possibly, some routine code and a static front end). This is a necessary and sufficient condition to start applying the development method described in this article to an existing or a new project.
It is assumed that the machine you work on has a locally installed DBMS IRIS/Caché/Ensemble/HealthShare version 2016.2+. To implement this method of development in earlier versions of InterSystems Caché (starting with 2013.2), you will need to adapt the suggested examples using the %Compiler.UDL.TextServices methods. If you don't have any InterSystems' products installed, you can try one out here. During the installation, specify the Unicode encoding instead of 8-bit, and leave all the other items suggested by the installation wizard unchanged.
The article uses the Git version control system. If you do not have Git installed, you have to install it.
Creating a Project
The directory structure of the demonstration project is as follows:
Wherein:
The source code of the project is located in the "source" directory and, in the corresponding "cls" subdirectory, there is a tree of packages and classes. In the screenshot of the project structure above, as an example, you can see the DevProject package, along with the Robot class (DevProject.Robot) and the REST subpackage.
The import.* script imports the project into the DBMS.
The project code shown above is available on GitHub. It is suggested to clone the project to the local machine by following the instructions below:
git clone https://github.com/ZitRos/cache-dev-project
The project contains the source/cls directory, which holds the usual package hierarchy. For the demonstration, a simple class was created containing the Message class method, which displays the message "Welcome, Anonymous!":
Class DevProject.Robot
{

ClassMethod Message(name As %String = "Anonymous")
{
    write "Welcome, ", name, "!"
}

}
To import this and other classes into DBMS, you can use one of the following ways:
1. Use Atelier:
It doesn't make sense to perform all these steps each time we would like to test our project. Hence, we are going to automate this.
2. Execute the following command in the terminal window:
do $system.OBJ.ImportDir("D:/Path/To/the/Project/source/cls",,"ck /checkuptodate=all",,1)
This command recursively loads all files from the D:/Path/To/the/Project/source/cls directory into the current namespace, and also compiles those classes that have changed since the last import. Thus, reloaded classes without changes will not take time to compile.
The second option also isn't the most convenient solution - every time the project starts, you need to open a Caché terminal, enter a login-password pair (on instances with a normal security level enabled), switch to the desired namespace, and finally execute the command saved somewhere in a notepad. Finally, it is possible to automate this using the third option.
3. Create a script to automate all the routine stuff and use just this:
import
Calling the latter command in the case of development in almost any external IDE can be simplified even more, to the click of a single button or running a program which will watch files and re-import each time something changes.
Thus, the entire project is in the file system, work is being done with the plaintext files, and if necessary, just a single command imports and compiles the whole project without a hassle.
The Import Script
Let's take a closer look at the script that imports a project into the DBMS. In order to do this, it needs some additional information about your InterSystems instance, namely its install location, the import namespace, and the username and password to log in to the system. This data is coded directly into the script; however, it could be separated into a config file.
The source code of the script is available on GitHub for Windows and *nix systems. All that needs to be done is several variables change in the script once before starting work on the project.
The script executes the cache.exe executable file, which is located in the /bin/ directory of the installed DBMS, and passes two arguments to it: the database directory and the namespace. Then, the script sends a username, a password, and a few simple ObjectScript commands via the terminal interface, importing classes and reporting a successful import or an error.
Thus, the user gets all the necessary information about the import and compilation of the classes, as well as any errors that may have occurred during the compilation process. Here's the example of the output of the import.bat script:
Importing project...
Node: DESKTOP-ILGFMGK, Instance: ENSEMBLE20162

USER>
Load of directory started on 06/29/2016 22:59:10

Loading file C:\Users\ZitRo\Desktop\cache-dev-project\source\cls\DevProject\Robot.cls as udl
Loading file C:\Users\ZitRo\Desktop\cache-dev-project\source\cls\DevProject\REST\Index.cls as udl

Compilation started on 06/29/2016 22:59:10 with qualifiers 'ck /checkuptodate=all'
Class DevProject.REST.Index is up-to-date.
Compiling class DevProject.Robot
Compiling routine DevProject.Robot.1
Compilation finished successfully in 0.003s.

Load finished successfully.
IMPORT STATUS: OK
Now we can ensure that the project was indeed imported:
USER>do ##class(DevProject.Robot).Message()
Welcome, Anonymous!
More Complex Example
To maximize the benefits of developing a project using InterSystems technologies, we will try to do something more attractive by adding a graphical interface and building the project with the use of the Node.js platform and the Gulp task runner. The result is a web page shown in the image below.
Emphasis will be placed on how one can organize the development of such a project. First, let's look at the architecture of the suggested solution.
The project consists of static client code (HTML, CSS, JavaScript), a class on the server that describes the REST interface, and one persistent class.
The client fetches the list of robots located on the server with a GET request. Also, when you click on the "Spawn a new robot!" button, the client sends another GET request to the server, as a result of which a new instance of the Robot class is created and added to the displayed list. (Note: the robot creation request should actually be a POST request, but we won't complicate things much in this example.)
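On the server, that pair of endpoints maps naturally onto a %CSP.REST dispatch class. A hedged sketch follows; the route URLs and method bodies are illustrative, so see DevProject.REST.Index in the repository for the real thing:
Class DevProject.REST.Index Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/robots" Method="GET" Call="ListRobots"/>
<Route Url="/spawn" Method="GET" Call="SpawnRobot"/>
</Routes>
}

/// Return the list of robots as JSON
ClassMethod ListRobots() As %Status
{
    // ... iterate over saved DevProject.Robot objects and write them out ...
    Quit $$$OK
}

/// Create and save one more Robot (should really be a POST, as noted above)
ClassMethod SpawnRobot() As %Status
{
    Set robot = ##class(DevProject.Robot).%New()
    Quit robot.%Save()
}

}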
Technical implementation of the project can be viewed on GitHub (in “extended” branch). In the article, further attention will be paid to the method of developing such projects.
Here, unlike the previous example, the client part of the application is added, which is located in the source/static directory, and the project is built using Node.JS and Gulp.
For example, among the other code in the project you can find special comments like this one:
<div class="version">v<!-- @echo package.version --></div>
When building the project, this comment will be replaced with the project version, which is listed in the package.json file. The build script also minifies CSS and JavaScript code, and copies all the processed code into the `build` directory.
In the import script, unlike the previous example, the following changes were added:
Before importing, the project is assembled (bundled).
Files are now imported from the build directory, since they pass through the preprocessor.
The files from the build/static directory are copied to Caché's csp/user directory with the other CSP files. Thus, the application becomes available immediately after import.
Detailed instructions for installing and running this project are available in the description of the repository.
The result is a project that needs to be set up only once, by modifying several variables in the import.* file.
The considered development cycle is used in several of my own projects: WebTerminal, Class Explorer, Visual Editor, Entity Browser. Soon it may be used in other projects, including your own ones :)
IDE and Debugging
This development method does not provide any debugging utilities; it's as simple as that. If you use more comprehensive debugging tools for your ObjectScript code than simply logging something to globals, you still have to use the integrated debugging tools in InterSystems products.
However, besides that, the described development method has a big advantage: the ability to use your favorite development environment for writing ObjectScript code - whether it's vim or a simple notepad, macOS or *nix or anything else - you get the same workflow. On the other hand, ObjectScript does not have such comprehensive support outside of Studio/Atelier. This means that even syntax highlighting is currently not handled quite well by external editors, let alone autocompletion. But all this is about to change in the near future, as more and more effort is being put into open-source initiatives.
In the meantime, you can use elementary syntax highlighting by keywords that some IDEs offer, such as IntelliJ or Visual Studio Code:
In case IntelliJ IDEA is your favorite IDE, you can try it right now - here is the settings file you need to import using the File -> Import Settings menu. The highlighting is quite simple and incomplete; any additions are welcome.
Conclusion
The purpose of this article is to introduce something new into the world of development of InterSystems applications, present another version of development to the public and contribute to the spread of ObjectScript as a programming language as a whole. Any feedback, ideas and discussions are very welcome!
Thank you! Nice writing, Nikita! Just want to mention that there is a new community option to code ObjectScript you've probably never tried - the VSCode plugin for ObjectScript by @Dmitry.Maslennikov. A lot of developers can name VSCode as their "favorite" IDE, and the plugin can do really a lot for InterSystems IRIS developers today.
This is an interesting approach, and I do like it. One question that comes to mind is handling the concept of refactoring packages or class names/files. For example, if your cache-dev-project gets restructured in a branch used for testing, and the DevProject package turns into something more descriptive like RobotProject, then when the import script runs the code, the server will have both a DevProject package and a RobotProject package when switching between the branches. If one of the packages or classes becomes obsolete, it would be nice to have a way to delete the class from the server code. I don't think VSCode has a way to handle this. Just a little food for thought.
I would not be so sure in your doubts about VSCode. VSCode itself supports refactoring stuff, we just do not have it in the ObjectScript extension yet. Deleting obsolete classes is for sure a very interesting and quite difficult task. But it's better to solve it another way, with just a clean rebuild. Or, for example, I can add a delete action to the context menu in the server explorer, so a developer will be able to manually delete any class/routine on the server from VSCode.
Hello Matthew! Thank you for your feedback. Indeed, a good point. One idea that comes to my mind for this case is to improve the import script to file the list of classes which were ever imported and those which are used now. By using this list, the import script can resolve which classes to delete and which to keep. However, deleting classes can always introduce unwanted side effects, but within a project this should be consistent.
Dmitry, having a way to do refactoring sounds great. I would love to get some updates on coming features to VSCode, as I use it often. As for adding the delete action to the context menu, that would be a very helpful feature. There are times a class is created for new functionality and then gets deleted if a better solution is found. I would much rather manually delete 1 class than perform a clean rebuild. I will admit sometimes a clean rebuild is needed but, depending on the circumstances, it can take longer.
File the issue, please, so I will keep it saved.
Article
Evgeny Shvarov · Mar 14, 2019
Hi Community!
I think everyone keeps the source code of their projects in a repository nowadays: GitHub, GitLab, Bitbucket, etc. The same goes for InterSystems IRIS projects - check any on Open Exchange.
What do we do every time we start or continue working with a certain repository with InterSystems Data Platform?
We need a local InterSystems IRIS machine, have the environment for the project set up and the source code imported.
So every developer performs the following:
Check out the code from repo
Install/Run local IRIS installation
Create a new namespace/database for a project
Import the code into this new namespace
Set up the rest of the environment
Start/continue coding the project
If you dockerize your repository, this list could be shortened to these 3 steps:
Check out the code from repo
Run docker-compose build
Start/continue coding the project
Profit: no manual work for steps 3-5, which could take minutes and sometimes bring headaches.
You can dockerize (almost) any of your InterSystems repos with the few following steps. Let's go!
How to dockerize the repo and what does this mean?
Basically, the idea is to have Docker installed on your machine, which will build the code and environment into a container that will then run in Docker and work the way the developer intended in the first place. No more "What is the OS version?" or "What else did you have on this IRIS installation?".
Every time, it's a clean page (or a clean IRIS container) which we use to set up the environment (namespaces, databases, web apps, users/roles) and import code into a clean, just-created database.
Will this "dockerize" procedure greatly harm your current repo?
No. You will need to add 2-3 new files to the root of the repo and follow a few rules which you can set up on your own.
Prerequisites
Download and install Docker.
Download and install the IRIS Docker image. In this example, I will use the full InterSystems IRIS preview, iris:2019.1.0S.111.0, which you can download from WRC-preview; see the details.
If you work with an instance which needs a key, place the iris.key in a location you will use all the time. I put it into the Home directory on my Mac.
Dockerizing the repo
To dockerize your repo, you need to add three files to the root folder of your repo.
Here is an example of a dockerized repo - the ISC-DEV project, which helps to import/export source code from an IRIS database. This repo has the additional Dockerfile, docker-compose.yml and Installer.cls files which I will describe below.
The first is the Dockerfile, which will be used by the docker-compose build command:
Dockerfile
FROM intersystems/iris:2019.1.0S.111.0 # need be the same image as installed
WORKDIR /opt/app
COPY ./Installer.cls ./
COPY ./cls/ ./src/
RUN iris start $ISC_PACKAGE_INSTANCENAME quietly EmergencyId=sys,sys && \
/bin/echo -e "sys\nsys\n" \
# giving %ALL to the user admin
" Do ##class(Security.Users).UnExpireUserPasswords(\"*\")\n" \
" Do ##class(Security.Users).AddRoles(\"admin\", \"%ALL\")\n" \
# importing and running the installer
" Do \$system.OBJ.Load(\"/opt/app/Installer.cls\",\"ck\")\n" \
" Set sc = ##class(App.Installer).setup(, 3)\n" \
" If 'sc do \$zu(4, \$JOB, 1)\n" \
# introducing OS Level authorization (to remove login/pass prompt in container)
" Do ##class(Security.System).Get(,.p)\n" \
" Set p(\"AutheEnabled\")=p(\"AutheEnabled\")+16\n" \
" Do ##class(Security.System).Modify(,.p)\n" \
" halt" \
| iris session $ISC_PACKAGE_INSTANCENAME && \
/bin/echo -e "sys\nsys\n" \
| iris stop $ISC_PACKAGE_INSTANCENAME quietly
CMD [ "-l", "/usr/irissys/mgr/messages.log" ]
This Dockerfile copies Installer.cls and the source code from the /cls folder of the repo into the /src folder in the container.
It also runs some config settings, which give the admin user the %All role, unexpire user passwords (login/password is SYS/SYS), introduce OS-level authorization, and run the %Installer.
What’s in %Installer?
Class App.Installer
{
XData MyInstall [ XMLNamespace = INSTALLER ]
{
<Manifest>
<Default Name="NAMESPACE" Value="ISCDEV"/>
<Default Name="DBNAME" Value="ISCDEV"/>
<Default Name="APPPATH" Dir="/opt/app/" />
<Default Name="SOURCESPATH" Dir="${APPPATH}src" />
<Default Name="RESOURCE" Value="%DB_${DBNAME}" />
<Namespace Name="${NAMESPACE}" Code="${DBNAME}-CODE" Data="${DBNAME}-DATA" Create="yes" Ensemble="0">
<Configuration>
<Database Name="${DBNAME}-CODE" Dir="${APPPATH}${DBNAME}-CODE" Create="yes" Resource="${RESOURCE}"/>
<Database Name="${DBNAME}-DATA" Dir="${APPPATH}${DBNAME}-DATA" Create="yes" Resource="${RESOURCE}"/>
</Configuration>
<Import File="${SOURCESPATH}" Recurse="1"/>
</Namespace>
</Manifest>
}
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
Return ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "MyInstall")
}
}
It creates the namespace/database ISCDEV and imports the code from the source folder, /src.
Next is the docker-compose.yml file, which will be used when we run the container with the docker-compose up command.
version: '2.4'
services:
iris:
build: .
restart: always
ports:
- 52773:52773
volumes:
- ~/iris.key:/usr/irissys/mgr/iris.key
This config tells Docker which port we expect IRIS to be available on, on our host. The first port (52773) is the host's; the second is the container's internal port (52773).
In the volumes section, docker-compose.yml provides access to the iris key on your machine inside the container, in the place where IRIS looks for it:
- ~/iris.key:/usr/irissys/mgr/iris.key
To start coding with this repo you do the following:
1. Clone/git pull the repo into any local directory.
2. Open the terminal in this directory and run
user# docker-compose build
this will build the container.
3. Run the IRIS container with your project
user# docker-compose up -d
Open your favorite IDE, connect to the server on localhost:52773 and develop your success with InterSystems IRIS Data Platform ;)
You can use these 3 files to dockerize your repository. Just put the right name for the source code in the Dockerfile, the right namespace(s) in Installer.cls and the place for iris.key in docker-compose.yml, and use the benefits of Docker containers in your day-to-day development with InterSystems IRIS.
Nice one Evgeny! I like it! I'm sure it'll help all those that want to leverage the agility of containers and our quarterly container releases.
Thank you, Luca! Besides agility, I like saving the developer's time on environment setup. Docker, IMHO, is the fastest way for a developer to start compiling the code. And what's even better - it's a standard way from project to project:
docker-compose build #(when needed)
docker-compose up -d #(always)
And I forgot to add that to open an IRIS terminal, just call the following:
user$ docker-compose exec iris iris session iris
Very nice tip - it worked fine, and with this I can use the IRIS terminal from the VSCode terminal.
VSCode, when configured to work with Docker, has a short action to open a terminal through the menu on the connection status.
Hello,
And a great tutorial, thanks! One question though. Do you know how to pass environment variables to the %Installer? In my scenario, I would like to configure a CI/CD pipeline and define some environment variables when building the Docker container, and then reference those variables within the %Installer to create, for example, a specific user account (username and password as env variables). How can I achieve this?
I've tried setting the env variables within the Dockerfile with ENV someEnv="1234" and getting the variable with %System.Util.GetEnviron("someEnv") within the %Installer, but it just returns an empty string.
If you have any insight or tips, it would be appreciated.
Cheers! Kari Vatjus-Anttila
Not sure why this doesn't work. Calling for experts @Dmitry.Maslennikov @Eduard.Lebedyuk
Yeah, working with environment variables is quite tricky; they may not be in the place where you would expect them. I would not recommend it for %Installer. You should focus on the Variables feature there, and pass the variable to the setup method when you call it. It should work. Here's an example:
Installer
Dockerfile
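The essence of that suggestion, as a hedged sketch (variable and manifest names here are illustrative): read the environment once at build time, put the value into the variables array passed to setup(), and reference it from the manifest.
// At container build time (e.g. inside the Dockerfile's iris session):
Set vars("USERPASS") = $system.Util.GetEnviron("APP_PASSWORD")
Do ##class(App.Installer).setup(.vars, 3)

// ...and inside the manifest, reference the variable, e.g.:
// <User Username="ci-user" PasswordVar="USERPASS" Roles="%ALL"/>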
Please consider providing sample code.
💡 This article is considered an InterSystems Data Platform Best Practice.
Announcement
Anastasia Dyubaylo · Jul 17, 2019
Hi Community!
You're very welcome to watch a new video on InterSystems Developers YouTube, recorded by @Benjamin.DeBoe, InterSystems Product Manager:
Natural Language Processing with InterSystems IRIS
This video provides a quick overview of what kinds of problems InterSystems IRIS' NLP capabilities can solve. The technology is available in the Community Edition, so there's no reason not to spend the first five minutes of your lunch break watching the video, and the remaining time on kicking the tires!
And...
You can find additional materials for the video on the InterSystems Video Portal.
Enjoy and stay tuned!
@Benjamin.DeBoe! Transferring a question from YouTube: where can IRIS NLP be found in IRIS Community Edition? Could you please help?
IRIS NLP, previously known as iKnow, is an embedded technology, meaning it's there in the form of APIs. These articles on building a domain and using the Knowledge Portal should be a helpful start, as is this series of step-by-step videos (which are a little older, I'll admit; start with the "fundamentals" one), and of course other articles on the developer community tagged iKnow.
Announcement
Anastasia Dyubaylo · Jul 10, 2019
Hi Community!
New video is already on the InterSystems Developers YouTube Channel:
InterSystems Platforms and FHIR STU3
This video provides an overview of our FHIR STU3 support, with a demonstration to showcase key features, including data transformation APIs.
Takeaway: InterSystems enables me to use the FHIR STU3 standard.
Presenters: @Craig.Lee, @Marc.Mundt
You can find additional materials for the video in this InterSystems Online Learning Course.
Enjoy watching the video!
Announcement
Anastasia Dyubaylo · Jul 12, 2019
Hi Everyone!
You're very welcome to watch the new video on InterSystems Developers YouTube, recorded by @Sourabh.Sethi6829 in a new format called "Coding Talks":
Locking in InterSystems ObjectScript
In this video, we are going to do a deep dive into a very common yet complex topic: locks in InterSystems ObjectScript. We will start from the very basics and work up to the most complex and interesting aspects of locks and data integrity.
Recommended Audience: All developers who are aware of MUMPS or Caché Objects.
You can find the codeset here.
For any questions, please write to @Sourabh.Sethi6829 at sethisourabh.hit@gmail.com.
Enjoy watching this video!
Thank you for another interesting video. However, if the sound quality were better, it would make the viewing much more enjoyable. Simply use a lavalier microphone and adjust the compression for much better results.
Hi Pasi, thanks for your feedback! We will take your preferences into account.
@Sourabh.Sethi6829, very helpful video and good work! But sharing the code in Google Docs? Could you please share it on GitHub and in ObjectScript? E.g., like in this project?
Question
Kishan Ravindran · Jul 1, 2017
In my Caché Studio I couldn't find an iKnow namespace, so how can I check whether my Studio version is compatible with the one I am using now? If I don't have one, can I create a new namespace in Studio? I checked how it can be done using the InterSystems docs, but I think the license does not apply for my user. Is there any other way I can work on this? Can anyone help me?
Thank you, John Murray and Benjamin DeBoe, for your answers. Both your documents were helpful.
Here's a way of discovering if your license includes the iKnow feature:
USER>w $system.License.GetFeature(11)
1
USER>
A return value of 1 indicates that you are licensed for iKnow. If the result is 0, then your license does not include iKnow. See here for documentation about this method, which tells you that 11 is the feature number for iKnow.
Regarding namespaces, these are created in the Portal, not in Studio. See this documentation.
Thanks John. Indeed, you'd need a proper license in order to work with iKnow. If the method referred to above returns 0, please contact your sales representative to request a temporary trial license and appropriate assistance for implementing your use case. Also, iKnow doesn't come as a separate namespace. You can create (regular) namespaces as you prefer and use them to store iKnow domain data. You may need to enable your web application for iKnow, which is disabled by default for security reasons in the same way DeepSee is. See this paragraph here for more details.
Announcement
Derek Robinson · Nov 22, 2017
Hi all! We have just released a new online course, Getting Started with ICM, that provides an introduction to InterSystems Cloud Manager (ICM) -- one of the new technologies coming with the release of InterSystems IRIS!
After taking this one-hour course, you will be able to:
Explain what ICM is and the business benefits that come with it
Identify the major cloud computing providers and the benefits of cloud computing
Provision a multi-node infrastructure on your selected cloud platform
Deploy your InterSystems IRIS applications to your provisioned infrastructure
Unprovision your infrastructure to avoid costly charges
Run additional commands to further manage and modify your cloud deployments with ICM
We hope you enjoy the course!
Question
Mike Kadow · Nov 8, 2017
In trying to understand Atelier, I am directed to go through its hierarchy type of documentation. Is the Atelier documentation going to continue as a hierarchy, or at some point is it going to be integrated into the InterSystems type of documentation? When looking for an answer, it would be nice to use only one method. On a side note, the attached Relevant Articles seem to have nothing to do with the subject of my query.
There are currently no plans to merge the Atelier documentation with the docs for other InterSystems technologies (Caché, Ensemble, HealthShare, InterSystems IRIS Data Platform). Atelier is a separate product and will continue to have its own documentation that follows industry standards for Eclipse plug-ins.
Question
Ponnumani Gurusamy · Oct 6, 2016
What is the difference between a function, a routine and a procedure in ObjectScript?
I think this is what you are looking for: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GORIENT_ch_cos
A function is something that takes a bunch of inputs and returns one or more values. If the returned values are entirely determined by the inputs, and the function doesn't have any side effects (logging, perhaps, or causing state changes outside itself), then it's called a pure function.
A procedure is a function that doesn't return a value. In particular, this means that a procedure can only cause side effects. (That might include mutating an input parameter!)
A routine is either a procedure or a function, or the bridge between the two; it should also include instructions for accessing the function arguments and returning the result.
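To make the distinction concrete in ObjectScript terms, here is a tiny sketch (the routine and global names are made up):
; MyDemo.mac - one routine containing a function and a procedure
MyDemo ; routine entry point
    quit

Add(a, b) ; a function: returns a value, call it as write $$Add^MyDemo(2,3)
    quit a + b

Log(msg) ; a procedure: side effect only, call it as do Log^MyDemo("started")
    set ^MyDemoLog($increment(^MyDemoLog)) = msg
    quit
Calling write $$Add^MyDemo(2,3) prints 5, while do Log^MyDemo("started") only appends a line to the ^MyDemoLog global and returns nothing.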