Announcement
Evgeny Shvarov · Jan 2, 2020
Hi Developers!
Happy New Year!
And I'm pleased to share the notes on the new DC release. What's new?
New design for Articles
Voice over for Articles
Format painter feature for editor
New private message informer, GitHub profile, and other minor enhancements
See the details below, here we go!
New Articles Design
We made articles special. Articles deserve more focus and attention because they often (we hope) contain a lot of technical wisdom that merits it. So we tried to make reading articles comfortable: we changed the font and removed all the ads and announcements from the sidebar. We hope it makes a difference: Example 1, Example 2. Read them all! And in Spanish ;)
Voice Over for Articles
Sometimes we don't have the option to read an article but can listen to it. Sometimes we don't have time to read but can listen during a ride or a morning run. Sometimes listening is simply better than reading - you decide!
So with this release we introduced the option of listening to articles! Each article now has a small audio player on top (or a link to generate the audio):
You can download the audio or use the embedded player.
Listen to them all!
And it works for Spanish too!
Format Painter
With this release, we introduced a format painter, which gives you the option to copy formatting in one place and apply it to another section of text - a very useful option for a WYSIWYG editor.
Minor Enhancements
We introduced a "New Direct Message" informer which appears on top near your name when you get a new Direct Message.
Member page updated: GitHub, Open Exchange, and LinkedIn profiles introduced:
See all the tasks solved last month.
Please check the new kanban and submit your requests and bug reports!
Happy new year!
Thanks for all the improvements! Very useful!
Thanks, Esther!
Discussion
Raj Singh · Jan 6, 2020
Happy new year! I’m Raj Singh, InterSystems’ product manager for Developer Experience and I’d love your feedback on how you use IDEs today and your thoughts for the future.
We understand you depend on a solid, intuitive and flexible IDE from InterSystems - whether you are an ObjectScript expert or new to it; whether ObjectScript is at the core of your applications, or you develop more in Java, Python, C# or Node.js. ObjectScript has proven to be a powerful and productive paradigm that gives you an edge over other platforms, and with that comes our responsibility to provide developer tools to support your work.
Our long-standing IDE, Studio, will continue to be the principal IDE for shops with the most sophisticated ObjectScript requirements, but many users will be working in an increasingly heterogeneous language environment in the future.
With the introduction of InterSystems IRIS and its deeper commitment to language “freedom of choice”, it’s time once again to take stock of InterSystems’ IDEs. Last year we paused and took a step back to study the IDE landscape and formulate a plan that would embrace our ever-expanding language options – you can now do even more with Java, C#, Python and Node.js as well as ObjectScript – without forgetting that ObjectScript is as powerful as ever. In doing so we’ve become extremely excited about Visual Studio Code. Increasingly, the highest quality and most innovative developer productivity tools are being built on that platform, and according to the 2019 Stack Overflow survey, over half of developers use Visual Studio Code already.
As we develop plans to support InterSystems development on Visual Studio Code, we’d like your input. What do you like or dislike about using Studio or Atelier today? What external tools (source control, testing, etc.) do you wish you could integrate more tightly with InterSystems IRIS? Have you tried ObjectScript development with 3rd party VS Code extensions (Serenji or vscode-objectscript)? Do you expect to be writing code in a web-based editor five years from now?
Feel free to comment below or direct message me. As always, the voice of our customers plays a major role in this important process.
Hi Raj,
I use different IDEs and combinations:
- Caché Studio without any extensions for development where I am self-employed and no other developers are involved
- Caché Studio with Serenji source control from George James for a particular customer where I am one of the developers
- Visual Studio Code with the vscode-objectscript extension from Dmitry Maslennikov for another customer where I am one of the developers.
I have liked Caché Studio since day 1 because it is built-in, so there is no need to configure anything. I use the 'projects' feature a lot and export my projects regularly as a sort of 'light' source control. (I even edit my HTML and JS files in Studio, even when they contain no COS code at all.)
But when developing in teams, good built-in source control is essential, so that's why I also use the other options. I am still learning Visual Studio Code, but it seems to be the way of the future.
Thanks for sharing, Danny!
I'm not using IRIS yet. Having options is good, especially for those coming from other development languages and/or not primarily focused on ObjectScript. Hopefully Studio continues to be supported and, even better, receives future updates.
I have no issues working in Studio. I'm primarily developing with ObjectScript and XSLT. I've also used Studio for JavaScript, CSP, and web development (mixed with ObjectScript).
I'm not using Atelier now, but used it at a few previous jobs in team environments. Linking and using it with Git was very easy. After primarily using Studio for many years, adjusting to Atelier wasn't a steep learning curve. Initially there were things I switched back to Studio for, like "Find in files". But after a few updates and getting used to it, it was comfortable and the flow felt logical.
I haven't used VS Code for ObjectScript. I tried it recently as a potential replacement for UltraEdit/Studio. I found it cumbersome - no toolbars, not intuitive... for me. I suspect if I were to dive in with it, as I had to with Atelier, it would feel better.
Scott, could you give some more details about your experience with VSCode, either privately to me or publicly here? What do you expect, and what would help you decide to use VSCode instead of Studio?
Thanks Scott. In case I was vague on the point, let me say that Studio will definitely be supported.
P.S. I'm an old UltraEdit user too!
Hello Raj,
I'm fine with Studio, but... what are the chances of:
* Native Studio support for Linux and macOS.
* Default Studio editor support for up-to-date language syntaxes like recent ECMAScript revisions.
* An API allowing access to the ObjectScript AST, parser, and lexer. (This might be out of context for this discussion, but it could still help the community create transpilable languages.)
* An integrated and full-featured terminal with support for special characters, including colors.
* Improved debug support for jobs or simply multiple job debugging (this would make it easier to debug HTTP requests).
* Studio editor API that allows users to add new language syntaxes, like JSX, TypeScript, Flow, etc.
?
Excellent list, Rubens. In fact, all of these are possible with VS Code.
I'm curious: what would you do with API access to the ObjectScript AST, parser, and lexer?
For example, some months ago there was a discussion about how to bring functional programming to ObjectScript. I think this could be possible if we had a way to transpile functional-oriented code to pure ObjectScript.
Dmitriy, I have very limited experience with VSCode, and none using it for Caché coding. I looked at using it to replace UltraEdit/UEStudio and Notepad++. I use those daily for XML and HTML file formatting, base64 decoding, file comparisons, CSS, and JavaScript.
Using VSCode instead of Studio isn't an option for me at this time; my team uses Studio with source hooks. If that changed, I'd be open to using it side by side with Atelier to compare. I've read many of your posts and others' on using VSCode for Caché development. I assumed that path was being taken because the future of Atelier was unsure; it was suggested at times that it had no future other than support for what was already there.
When your team uses Studio with source hooks, it's even easier to move to VSCode. VSCode supports source control class hooks, and even some types of actions from the menu. You can use the latest beta release to get more features.
VSCode can also be used to edit files directly on a server, almost the same way as in Studio. At the moment, though, it does not check for any changes on the server and uses your opened files as the source of truth.
Hi @Raj.Singh5479!
We recently introduced a new post type - Discussion - and I changed this announcement to a Discussion; I think it is more relevant.
Discussion is the type of post for when you want a conversation, input, and discussion on a topic.
What do you like or dislike about using Studio or Atelier today?
Like: Studio is fast. My REPL cycle is very short.
Dislike:
Studio is synchronous, so after every connection issue I need to restart it. REST-based IDEs don't have this problem, and it's great.
Version pinning (need 2019.4 Studio to connect to 2019.4 server).
What external tools (source control, testing, etc.) do you wish you could integrate more tightly with InterSystems IRIS?
We should offer APIs, so any tool can be integrated.
Have you tried ObjectScript development with 3rd party VS Code extensions (Serenji or vscode-objectscript)?
I use the VSCode extension. With direct server editing it's good. Subjectively slower than Studio, but now it's comparable.
Like: The interface is way more modern. Cross-platform.
Dislike: XDatas are unformatted. Only one server connection. No BPL editor.
Do you expect to be writing code in a web-based editor five years from now?
Not really. Our IDEs don't take that much space, and REPL timings would go up.
Studio:
Like:
It's fast
Find in files directly on the server side
Intuitive
Debugging is not perfect but manageable
Dislike:
Only on Windows
Not easy to use with other third-party files or software
Atelier:
Like:
Easy to use with source control
Dislike:
File synchronization with the server, especially BPL, DT, RecordMap
Complicated to configure
Slow
VS Code with Dmitriy's plugin
Like:
Modern UI
Cross-platform
Lightweight
Intuitive with no file synchronization concern
Fast
Dislike:
Can't edit or manage CSP files
No find-in-files on the server side
Debugging is not easy to use
Serenji:
I haven't tried it yet because you need to install some classes on the server side, but it sounds very promising for debugging.
Conclusion:
My main IDE is VSCode with Dmitriy's plugin, and sometimes Studio for find-in-files, CSP management, and debugging.
Do you expect to be writing code in a web-based editor five years from now?
Yes, why not? A lot of people already do it in Jupyter Notebooks.
Hi Raj,
Just like others in the community, I use different IDEs.
I like Caché Studio; it's fast, reliable, and useful for debugging.
My favorite IDE right now is VSCode with the vscode-objectscript extension from Dmitry Maslennikov, together with Docker, GitHub, and other extensions that make my workflow faster.
I tried Atelier, but compared with the others I mentioned above, it's the last IDE for me.
Do you expect to be writing code in a web-based editor five years from now?
No, I agree with @Eduard.Lebedyuk's answer.
Regards,
Henrique
Only one server connection.
At the moment it is possible to have multiple folders, each configured for its own server, tied together with a .code-workspace file. I see you already use server-side editing, so you can just extend your file. And here is the link for the info on how to configure it.
Curiously, what do you expect from XData? Could you file an issue - and for any other ideas as well? Could you explain a bit how you would like work with CSP files to look? For me, when you're working with your own local instance, you can just open the csp folder in VSCode and edit files as usual; InterSystems will compile them automatically on the next request. I think it would be possible to add a CSP editor only as part of the server-side editing feature.
Server-side search is not yet available, because the search engine has not been publicly released yet. I've already implemented server-side search for when server-side editing is enabled, and you can test it with the latest beta version of vscode-objectscript - but only with the Code Insiders version and with the flag --enable-proposed-api daimor.vscode-objectscript.
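For example, a minimal sketch of launching it that way, assuming the extension ID from above and a standard Code Insiders install:

```sh
# Allow the vscode-objectscript extension to use proposed (pre-release) APIs,
# which the server-side search feature currently requires
code-insiders --enable-proposed-api daimor.vscode-objectscript
```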
Could you add your expectations for the debugging feature as an issue here?
So workspace per server and multiple namespaces per workspace?
Curiously, what do you expect from XData?
Filed.
It is interesting that you are asking this, as we have tried for well over a year, via various channels and at various levels within InterSystems, to get an answer on the future of the IDEs, given that previous announcements and responses from WRC state that both Atelier and Studio are end-of-life.
We have never received a response, even after Evgeny pinged such a request to John Fried and others after some posts.
Your statements seem to indicate that Studio is not end-of-life, so it would be nice to have some clarification, as even the WRC is saying that non-critical fixes or changes will not be done. You also indicate work being done on VSCode integration; again, any details would be appreciated.
The reason for wanting to know is that it is difficult to commit to a particular toolset or plan changes to DevOps, etc., if the future of the tools to be used is unknown. Whether the actual answer matches an individual's or a company's preference is immaterial; an answer is simply required, but everyone has been reluctant to commit to anything for a long time now.
Coming back to your questions, we use Studio with our own version control based on StudioHooks and some use of Serenji for debugging.
We have tried Atelier, but no one liked using it. We have worked in VSCode via Dmitry's plugin and have also tried the Serenji plugin, both of which were generally a good experience, but migration would require changes to development processes and version control.
We are likely to move to a Git-based version control, and whether this is integrated with Studio or VSCode will likely depend on InterSystems' commitment to the IDE, though I suspect there would be a preference to stick with Studio as it is familiar and has integration with Cube, etc. Though as we continue to do more web-based development in VSCode, this may change.
Something like this: you can have many serverN folders, each with its own settings.json, configured for any server.
{
  "folders": [
    {
      "name": "root",
      "path": "."
    },
    {
      "name": "server1",
      "path": "server1"
    },
    {
      "uri": "isfs://server1",
      "name": "server1"
    },
    {
      "uri": "isfs://server1?ns=%25SYS",
      "name": "server1 sys"
    },
    {
      "name": "server2",
      "path": "server2"
    },
    {
      "uri": "isfs://server2",
      "name": "server2"
    },
    {
      "uri": "isfs://server2?ns=%25SYS",
      "name": "server2 sys"
    },
    {
      "name": "server3",
      "path": "server3"
    },
    {
      "uri": "isfs://server3",
      "name": "server3"
    },
    {
      "uri": "isfs://server3?ns=%25SYS",
      "name": "server3 sys"
    }
  ],
  "settings": {
    "objectscript.serverSideEditing": true
  }
}
I've already published a beta version with support for CSP editing. It would be cool if you could test it and give some feedback.
Not sure if it's already included, but I just wanted to know if we're going to be able to view and develop business processes using BPL, or data transformations, etc., in their graphical way. If this is meant to be a real modern alternative to Studio, supporting all the IRIS components is a must... Otherwise we'll have to keep switching from one IDE to the other... which is not practical and prone to errors.
Announcement
Evgeny Shvarov · Mar 4, 2020
Hi Developers!
Here are the release notes of changes we made to InterSystems Open Exchange since the previous release in December 2019. What's new?
Email notifications and subscriptions;
Better UI/UX for application publishing process;
Mobile UI for tablets and phones.
See the details below.
Email notifications and subscriptions
Starting with this release you'll be able to receive email notifications about the releases of applications in Open Exchange. You can manage these settings via the Profile->Subscription menu:
There are several levels of subscription:
1. Everything
If this is set to "ON", you will receive all notifications from the site.
But you can alter it and subscribe only to the companies, members, or applications you are interested in.
2. Application subscription
You can subscribe to the application's releases on the application page - click Follow in the top right corner to subscribe or cancel the subscription:
On your Profile->Subscription page you can check which application subscriptions you currently have and alter them. Change the category filter to App to see all the current subscriptions:
3. Member subscription
If you want to get notifications for all the applications a particular Open Exchange developer releases, you can subscribe to this Open Exchange member. Open the author's page and choose 'Follow' to subscribe to all the releases of the member's applications:
And with a similar filter on your Profile->Subscription page, you can see the developers you are subscribed to and alter the list:
4. Company subscription
As with Open Exchange members, you can subscribe to all the releases of the apps of a particular company. Open the company's profile and click Follow:
And you can alter the list of companies you are subscribed to in your Subscription settings:
If you don't want to receive any emails, turn everything off with the Everything filter or click "Unsubscribe from everything" in any email notification from Open Exchange.
Better UI/UX for the application publishing process
We received feedback from you that in the new UI it was not obvious that you need to send the application for approval after saving all the changes.
We made this button more obvious in the new release of Open Exchange.
When you save the draft of the application you will see the "Send for approval" button in the top-right corner.
Click it to fill in the application version and release notes and send it into the approval chain.
Please share your feedback to make the publishing process easier and more robust.
Mobile UI for tablets and phones
It should be just great! Check it on your mobile!
We also made a few small enhancements and bug fixes - check the kanban here. And here is the new kanban for the next release - don't hesitate to share your feedback, bug reports, and enhancement requests!
Stay tuned!
Announcement
Evgeny Shvarov · Aug 22, 2019
Hi Developers!
Here is a new release of InterSystems Open Exchange - the InterSystems applications gallery! What's new?
GitHub integration is improved;
Better application update mechanism;
UI/UX enhancements.
See the details below.
Better GitHub Integration
With this release, if your InterSystems application source code is hosted on GitHub, a lot of the mandatory Open Exchange application fields can be imported from the GitHub repo. Enter the URL of the GitHub repo, and the name, download URL, short description, license, and full description will be imported from GitHub. The only mandatory fields left to fill in will be the InterSystems Data Platform and the tag list. Check the video which illustrates the process.
Improved Application Update Procedure
All the parameters of an Open Exchange application need to pass the approval procedure. Starting from this release, you don't need to unpublish the repository if you want to make changes and publish the next release. If you want to publish the next release of the application, follow these steps:
1. Open the published app and click Edit.
2. Make the necessary changes and click Save - this will create a draft version of your application. You may see both published and draft entries in the My Apps section.
3. Once you are happy with the new changes, send the draft application for approval review.
4. When and if the new version of the app is approved, the previously published version will be replaced by the newly approved version.
Minor UI/UX Improvements
With this release, we've improved the performance of Open Exchange and introduced a set of minor UI enhancements, such as a sorting selector for apps, better image icon updates for apps and companies, a clickable category on the application page, and other things that could improve your experience with InterSystems Open Exchange.
Also, don't hesitate to use the Google AdWords voucher option we are introducing with this release to promote your Open Exchange application - you can redeem the vouchers through the Global Masters program. Learn more.
Check all the tasks we solved in this version and welcome the new kanban for the next release, where we plan email subscriptions, better integration with the Developers Community and Partner Hub, better analytics, and other nice features!
Submit your bug reports and feature requests and stay tuned!
Announcement
Evgeny Shvarov · Aug 2, 2019
Hi Developers!
Happy to share with you the release notes for the InterSystems Developers Community site in August 2019. What's new?
Direct messages between registered members;
Account merging;
Better navigation for serial posts;
Spanish DC enhancements: translated tags, analytics.
See the details below.
Direct Messages
You asked about this a lot - so we've added it. Now members can communicate with each other and respond directly to job offers or job-seeking announcements. We allow the feature only for those members who contribute to the site. Learn more about direct messages.
Account Merging
An InterSystems Developers account is tied to an email address. Register with another email and you are another DC member. This sometimes results in numerous alter egos of one person on the site when a developer changes emails or changes jobs but stays with the Community. We appreciate that you stay with the community for years, and we can make this convenient: we can move your content from one account to another.
If you want to move your assets from different accounts into one and/or block the unused accounts, please contact our content manager @Irina.Podmazko and Irina will help you.
Serial Posts Navigation
We love it when you introduce posts with several parts. There are a lot of examples of these: check this one or another one. But the site's support for navigating such posts was poor. Until this release! Today you can connect multi-part postings with Next and Prev parts and let your readers easily navigate between episodes of the story. How do you do that?
Edit the post and add the link to the Next or Prev part (or both), and this will add navigation arrows to the title of the article.
Check how it works with these posts: part1, part2.
And we're looking for more multi-episode stories about InterSystems IRIS!
Spanish Tags and Spanish Analytics
We keep making enhancements to the Spanish DC. This release brings fully translated tags and public analytics for the members, postings, and views of the Spanish DC.
As always, we've fixed tons of bugs, made some plans for the next release, and are looking forward to your feature requests and bug reports.
Stay tuned!
Announcement
Anastasia Dyubaylo · Aug 9, 2019
Hi Community!
You're very welcome to watch a new video on InterSystems Developers YouTube, recorded by @Stefan.Wittmann, InterSystems Product Manager:
InterSystems API Manager Introduction
InterSystems API Manager is a tool that allows you to manage your APIs within InterSystems IRIS™ and InterSystems IRIS for Health™. In this video, @Stefan.Wittmann describes what API Manager is and describes several use cases in which the tool would be useful.
And...
You can find additional info about InterSystems API Manager in this post.
Enjoy and stay tuned!
Announcement
Anastasia Dyubaylo · Aug 29, 2019
Hi Everyone!
New video, recorded by @Benjamin.DeBoe, is already on InterSystems Developers YouTube:
Scaling Fluidly with InterSystems IRIS
In this video, @Benjamin.DeBoe, InterSystems Product Manager, explains how sharding capabilities in InterSystems IRIS provide a more economical approach to fluidly scaling your systems than traditional vertical scaling.
And...
To learn more about InterSystems IRIS and its scalability features, you can browse more content at http://www.learning.intersystems.com.
Enjoy watching the video!
Article
Evgeny Shvarov · Aug 29, 2019
Hi Developers!
Often when we develop a library, tool, package, or whatever else in InterSystems ObjectScript, we face the question: how do we deploy this package on the target machine?
Also, we often expect that certain other libraries are already installed, so our package depends on them, and often on a particular version.
When you code in JavaScript, Python, etc., the role of package deployment with dependency management is taken by a package manager.
So, I'm pleased to announce that the InterSystems ObjectScript Package Manager is available!
CAUTION!
Official Disclaimer.
The InterSystems ObjectScript Package Manager server, situated at pm.community.intersystems.com, and the InterSystems ObjectScript Package Manager client, installable from pm.community.intersystems.com or from GitHub, are not supported by InterSystems Corporation and are presented as-is under the MIT License. Use it, develop it, and contribute to it at your own risk.
How does it work?
InterSystems ObjectScript Package Manager consists of two parts. There is a Package Manager server which hosts ObjectScript packages and exposes an API for ZPM clients to deploy and list packages. Today we have a Developers Community Package Manager server available at pm.community.intersystems.com.
You can install any package into InterSystems IRIS via the ZPM client, which must first be installed into your IRIS system.
How to use InterSystems Package Manager?
1. Check the list of available packages
Open https://pm.community.intersystems.com/packages/-/all to see the list of currently available packages.
[
  {"name":"analyzethis","versions":["1.1.1"]},
  {"name":"deepseebuttons","versions":["0.1.7"]},
  {"name":"dsw","versions":["2.1.35"]},
  {"name":"holefoods","versions":["0.1.0"]},
  {"name":"isc-dev","versions":["1.2.0"]},
  {"name":"mdx2json","versions":["2.2.0"]},
  {"name":"objectscript","versions":["1.0.0"]},
  {"name":"pivotsubscriptions","versions":["0.0.3"]},
  {"name":"restforms","versions":["1.6.1"]},
  {"name":"thirdpartychartportlets","versions":["0.0.1"]},
  {"name":"webterminal","versions":["4.8.3"]},
  {"name":"zpm","versions":["0.0.6"]}
]
Every package has a name and a version.
If you want to install one on InterSystems IRIS, you need to have the InterSystems ObjectScript Package Manager client (aka ZPM client) installed first.
2. Install Package Manager client
Get the release of the ZPM client from ZPM server: https://pm.community.intersystems.com/packages/zpm/latest/installer
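For example, one way to download it from the command line (a sketch; the local path is a placeholder matching the install command below):

```sh
# Download the ZPM client installer XML from the community registry
curl -o /yourpath/zpm.xml https://pm.community.intersystems.com/packages/zpm/latest/installer
```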
It is an ObjectScript package in XML form, so it can be installed by importing it into classes via the Management Portal, or from the terminal:
USER>Do $System.OBJ.Load("/yourpath/zpm.xml","ck")
Once installed, it can be called from any namespace because it installs itself into %SYS as a Z-package.
3. Working with ZPM client
The ZPM client has a CLI. Call zpm in any namespace like this:
USER>zpm
zpm: USER>
Call help to see the list of all the available commands.
Check the list of currently available packages on the ZPM server (pm.community.intersystems.com):
zpm: USER>repo -list-modules -n registry
deepseebuttons 0.1.7
dsw 2.1.35
holefoods 0.1.0
isc-dev 1.2.0
mdx2json 2.2.0
objectscript 1.0.0
pivotsubscriptions 0.0.3
restforms 1.6.1
thirdpartychartportlets 0.0.1
webterminal 4.8.3
zpm 0.0.6
Installing a Package
To install a package, call:
install package-name version
This will install the package with all its dependencies. You can omit the version to get the latest package. Here is how to install the latest version of Web Terminal:
zpm: USER> install webterminal
How do you know what is already installed?
Call the list command:
zpm:USER> list
zpm 0.0.6
webterminal 4.8.3
Uninstalling a Package
zpm: USER> uninstall webterminal
Supported InterSystems Data Platforms
Currently, ZPM supports InterSystems IRIS and InterSystems IRIS for Health.
I want my package to be listed in the Package Manager
It's possible. The requirements are:
Code should work in InterSystems IRIS
You need to have module.xml in the root.
Module.xml is the file that describes the structure of the package and what needs to be set up during the deployment phase. Examples of module.xml can be very simple, e.g.:
ObjectScript Example
Or relatively simple:
Samples BI (previously known as HoleFoods),
Web Terminal
Module with dependencies:
DeepSee Web expects MDX2JSON to be installed, and this is how it is described in module.xml: DeepSeeWeb
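For illustration, here is a minimal sketch of what a module.xml with a dependency might look like. The package name my-package, the resource, and the exact element set are assumptions on my part - check a real example such as Web Terminal's module.xml for the authoritative structure:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Export generator="Cache" version="25">
  <Document name="my-package.ZPM">
    <Module>
      <!-- Package name and version as they would appear in the registry -->
      <Name>my-package</Name>
      <Version>0.1.0</Version>
      <Packaging>module</Packaging>
      <!-- The classes that make up the package -->
      <Resource Name="MyPackage.PKG"/>
      <!-- A dependency, as in the DeepSee Web / MDX2JSON example above -->
      <Dependencies>
        <ModuleReference>
          <Name>mdx2json</Name>
          <Version>2.2.0</Version>
        </ModuleReference>
      </Dependencies>
    </Module>
  </Document>
</Export>
```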
If you want your application to be listed in the Community Package Manager, comment on this post or DM me.
Collaboration and Support
The ZPM server source code is not available at the moment but will be available soon.
The ZPM client source code is available here; it is currently supported by the InterSystems Developers Community and is not supported by InterSystems Corporation. You are welcome to submit issues and pull requests.
Roadmap
The current roadmap is:
introduce Open Exchange support;
introduce automation for package updating and uploading;
open-source the ZPM server.
Stay tuned and develop your InterSystems ObjectScript packages on InterSystems IRIS!
Interesting... however, I have some questions:
1 - Are there any plans to automate `module.xml` generation using something like a wizard?
2 - Are there any plans to support non-specific dependency versions like NPM does?
3 - Is it possible to run pre/post-install scripts as well? Kind of like what installer classes do.
4 - Is it also possible to use the `module.xml` to provide a contextual root? This way it could be used to run unit tests without the need to define (or overwrite) the UnitTestRoot global. I already did it with Port, so it's not hard to implement, as you basically have to override the `Root` method:
```objectscript
Class Port.UnitTest.Manager Extends %UnitTest.Manager
{
ClassMethod Root() As %String
{
// This provides us the capability to search for tests unrelated to ^UnitTestRoot.
return ##class(%File).NormalizeFilename(##class(Port.Configuration).GetWorkspace())
}
ClassMethod RunTestsFromWorkspace(projectName As %String, targetClass As %String = "", targetMethod As %String = "", targetSpec As %String = "") As %Status
{
set recursive = "recursive"
set activeProject = $get(^||Port.Project)
set ^||Port.Project = projectName
if targetClass '= "" set target = ##class(Port.UnitTest.Util).GetClassTestSpec(targetClass)
else set target = ##class(Port.Configuration).GetTestPath()
if targetMethod '= "" {
set target = target_":"_targetMethod
set recursive = "norecursive"
}
set sc = ..RunTest(target_$case(targetSpec, "": "", : ":"_targetSpec), "/"_recursive_"/run/noload/nodelete")
set ^||Port.Project = activeProject
return sc
}
}
```

> 1 - Are there any plans to automate module.xml generation using something like a wizard?

Submit an issue? Moreover, craft a module which supports that, and send a PR - it's a Community package manager!

> 3 - Is it possible to run pre/post-install scripts as well? Kind of like what installer classes do.

I think this is already in place. @Dmitry.Maslennikov, who contributed a lot, will comment.

> 4 - Is it also possible to use the module.xml to provide a contextual root?

Maybe we can use the code! Thanks! @Dmitry.Maslennikov?

And it's worth mentioning that credit for the development goes to @Timur.Safin6844, @Timothy.Leavitt, and @Dmitry.Maslennikov. Thank you very much, guys! I hope the list of contributors will be much longer soon!

> 1 - Are there any plans to automate module.xml generation using something like a wizard?

Any reason for it? Are you so lazy that you can't write this simple XML by hand? Just kidding - not yet. I think the best and fastest thing I can do is add IntelliSense in VSCode for such files, to help you do it more easily. Any UI, at the moment, is just a waste of time; it is not so important. And anyway, is there any wizard for NPM?

> 2 - Are there any plans to support non-specific dependency versions like NPM does?

It is already there; it should work the same as semver in npm.

> 3 - Is it possible to run pre/post-install scripts as well? Kind of like what installer classes do.

There is already something like this, but I would like to change the way it works.

> 4 - Is it also possible to use the module.xml to provide a contextual root?

Not sure about a contextual root. But if we're talking about unit tests: yes, there are actually many things which should be changed in the original %UnitTest engine. But in this case, there is a way to run tests without caring about the UnitTestRoot global. ZPM itself has its own module.xml - look there and you will find lines about unit tests. With this definition, you run these commands, and it will run the tests in different phases:
zpm: USER>packagename test
zpm: USER>packagename verify
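For reference, the unit-test mapping referred to inside ZPM's own module.xml looks roughly like this sketch (the package and folder names here are hypothetical; check ZPM's module.xml for the authoritative attributes):

```xml
<!-- Inside <Module>: run classes from the MyPackage.Tests package during the "test" phase -->
<UnitTest Name="tests" Package="MyPackage.Tests" Phase="test"/>
```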
> Any reason for it? Are you so lazy that you can't write this simple XML by hand? Just kidding - not yet. I think the best and fastest thing I can do is add IntelliSense in VSCode for such files, to help you do it more easily. Any UI, at the moment, is just a waste of time; it is not so important. And anyway, is there any wizard for NPM?
Haha, I'll overlook that first line. I really meant something like a CLI wizard that asks you for the steps, but maybe this can be something separate - you know Yeoman, don't you?
NPM does have the command `npm init`, which asks you for basic information about your package and generates a package.json.
> It is already there, should work the same as semver in npm
Nice! Does it follow the same format as the one from NPM? (symbolically speaking ^ and *)
> Not sure about a contextual root. But if we're talking about unit tests: yes, there are actually many things which should be changed in the original %UnitTest engine. But in this case, there is a way to run tests without caring about the UnitTestRoot global. ZPM itself has its own module.xml - look there and you will find lines about unit tests. With this definition, you run these commands, and it will run the tests in different phases.
Yeah, that's exactly what I meant about the contextual root. My wording looked wrong because I intended for this feature to be used outside unit tests, but now I see that there isn't much use for it outside of unit testing.

> NPM does have the command npm init which asks you for basic information about your package and generates a package.json.

Yes, some kind of init command sounds useful. You know, we have many differences from npm anyway. For instance, zpm works inside the database with nothing on disk, while npm works in the OS close to the source files - but I think we can find a way to achieve the best of both.

> It is already there; it should work the same as semver in npm.
> Nice! Does it follow the same format as the one from NPM?

Yes, the same way.

This brought me to another question. How does ZPM handle two modules that use the same dependency but require different versions while being used as peer dependencies?
Example:
Module A, using dependency C, requires major version 1.
Module B also uses dependency C but requires the newer major version 2.
The requisite for this case is: you must have both, because you're making a module D that uses both modules A and B.
In our case, it should actually fail the installation, as we don't have any way to install two versions at the same time. But I'm not sure if we already have such a check.
Hi @Evgeny.Shvarov!
In Step "2. Install Package Manager client", it says to run the following code:
Do $System.OBJ.Load("/yourpath/zpm.xml")
It also needs to be compiled and should probably be:
Do $System.OBJ.Load("/yourpath/zpm.xml","ck")
Can ZPM run in Caché?
Tried to install ZPM on Caché but failed:
%SYS>D $system.OBJ.Load("/tmp/deps/zpm.xml","ck")

Load started on 04/21/2021 11:30:27
Loading file /tmp/deps/zpm.xml as xml
Imported class: %ZPM.Installer
Compiling class %ZPM.Installer
Compiling routine : %ZPM.Installer.G1.MAC
ERROR #6353: Unexpected attributes for element Import (ending at line 9 character 45): Recurse
  > ERROR #5490: Error reported while running generator for method '%ZPM.Installer:setup'
  > ERROR #5030: An error occurred while compiling class %ZPM.Installer
Compiling routine %ZPM.Installer.1
ERROR: Compiling method/s: ExtractPackage
ERROR: %ZPM.Installer.1(3) : MPP5610 : Referenced macro not defined: 'FileTempDir'
  TEXT: Set pFolder = ##class(%File).NormalizeDirectory($$$FileTempDir)
ERROR: Compiling method/s: Make
ERROR: %ZPM.Installer.1(7) : MPP5610 : Referenced macro not defined: 'FileTempDir'
  TEXT: Set tmpDir = ##class(%File).NormalizeDirectory($$$FileTempDir)
%ZPM.Installer.1.INT(79) ERROR #1002: Invalid character in tag : '##class(%Library.File).NormalizeDirectory($$$FileTempDir)' : Offset:61 [zExtractPackage+1^%ZPM.Installer.1]
  TEXT: Set pFolder = ##class(%Library.File).NormalizeDirectory($$$FileTempDir)
%ZPM.Installer.1.INT(117) ERROR #1002: Invalid character in tag : '##class(%Library.File).NormalizeDirectory($$$FileTempDir)' : Offset:62 [zMake+4^%ZPM.Installer.1]
  TEXT: Set tmpDir = ##class(%Library.File).NormalizeDirectory($$$FileTempDir)
Detected 5 errors during load.
ZPM works in IRIS only, but you can search around on DC and OEX - there were attempts to provide alternative support for ZPM in Caché. I hope ZPM will be yet another reason to upgrade to IRIS ;)
When I try to install webterminal, I am getting the following...
IRIS for Windows (x86-64) 2022.1 (Build 209U) Tue May 31 2022 12:16:40 EDT
zpm:USER>install webterminal

[USER|webterminal] Reload START (C:\InterSystems\HealthConnect_2022_1\mgr\.modules\USER\webterminal\4.9.2\)
[USER|webterminal] Reload SUCCESS
[webterminal] Module object refreshed.
[USER|webterminal] Validate START
[USER|webterminal] Validate SUCCESS
[USER|webterminal] Compile START
Installing WebTerminal application to USER
Creating WEB application "/terminal"...
WEB application "/terminal" is created.
Assigning role %DB_CACHESYS to a web application; resulting roles: :%DB_CACHESYS:%DB_USER
Creating WEB application "/terminalsocket"...
WEB application "/terminalsocket" is created.
ERROR #879: Target role %DB_CACHESYS does not exist.
[webterminal] Compile FAILURE
ERROR! Target role %DB_CACHESYS does not exist.
  > ERROR #5090: An error has occurred while creating projection WebTerminal.Installer:Reference.
Hi Scott! Which version of IRIS is it? And what is the version of the ZPM client?
I have seen this on both... HealthShare Health Connect
IRIS for Windows (x86-64) 2022.1 (Build 209U) Tue May 31 2022 12:16:40 EDT
IRIS for UNIX (Red Hat Enterprise Linux 8 for x86-64) 2022.1 (Build 209U) Tue May 31 2022 12:13:24 EDT
using the link from https://pm.community.intersystems.com/packages/zpm/latest/installer
zpm:USER>version
%SYS> zpm 0.5.0
https://pm.community.intersystems.com - 1.0.6
This was fixed by me in https://github.com/intersystems-community/webterminal/pull/149 in July 2022.
The ZPM package is outdated.
I recommend you use the instructions at https://intersystems-community.github.io/webterminal/#docs to install the latest version.
@Nikita.Savchenko7047 made a ZPM release - now it can be installed via ZPM too.
Thanks @John.Murray! The ZPM package is updated.
When a config (for instance, a CSPApplication) already exists and the module.xml has a CSPApplication section, can you opt not to override the config on the server? If, for instance, the client made changes to the application roles, then after an upgrade those changes are missing.
Announcement
Evgeny Shvarov · Sep 4, 2019
Hi InterSystems Developers!
Here are the release notes for the Developers Community features added in August 2019. What's new?
Site performance fix;
New features for job opportunity announcements;
A feature to embed DC posts into other sites;
Minor enhancements.
See the details below.
DC Performance Fix
In August we fixed a major performance problem and think that no more tweaks are needed. If you feel that some DC functionality needs performance fixes, please submit an issue.
New Features for Job Opportunities on the InterSystems Developers Community
Developers! We want to let you know about all the job opportunities for InterSystems developers, so with this release we've introduced a few new features to make job opportunity announcements on DC more visible and handier to use.
Every post with a job opportunity now has an "I'm interested" button which, when you click it, sends a Direct Message to the member who submitted the vacancy with the following text: "Hi! I'm interested in your job opportunity "name of job opportunity" (link). Send me more information please".
Job opportunities now have a new icon - a suitcase (for your laptop?).
Job posters! If the job opportunity is closed, you can now click a Close button, which will hide it and let developers know that the opportunity is no longer available. This action moves the opportunity into drafts.
DC Articles Embedded into Other Sites
With this release, we introduced a feature to embed a DC post (an article, event, or question) into another site. Just add
"?iframe"
parameter to an article's URL, and this gives you a link ready to be embedded into another site as an iframe.
Example:
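For instance, with a hypothetical post URL, the resulting embed might look like this sketch:

```html
<!-- Embed a DC post in another site via the ?iframe parameter -->
<iframe src="https://community.intersystems.com/post/example-article?iframe"
        width="100%" height="800" frameborder="0"></iframe>
```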
Minor changes
Again, we've fixed a lot of bugs and introduced some nice features, e.g. better support for tables in Markdown and new options for sorting members - by post rating, comment rating, and answer rating.
Also, we added an extra link to the app at the bottom of an article if a link to an Open Exchange app is provided.
We also made a few enhancements to translated article management - see the new articles in the Spanish Community.
See the full list of changes with this release and submit your issues and feature requests in a new kanban.
Stay tuned!
Article
Sean Connelly · Sep 10, 2019
In this article, we will explore the development of an IRIS client for consuming RESTful API services that have been developed to the OData API standard.
We will be exploring a number of built-in IRIS libraries for making HTTP requests, reading and writing to JSON payloads, and seeing how we can use them in combination to build a generic client adaptor for OData. We will also explore the new JSON adapter for deserializing JSON into persistent objects.

Working with RESTful APIs
REST is a set of engineering principles that were forged from the work on standardizing the world wide web. These principles can be applied to any client-server communication and are often used to describe an HTTP API as being RESTful.
REST covers a number of broad principles that include stateless requests, caching, and uniform API design. It does not cover implementation details and there are no general API specifications to fill in these gaps.
The side effect of this ambiguity is that RESTful APIs can lack some of the understanding, tools, and libraries that often build up around stricter ecosystems. In particular, developers must construct their own solutions for the discovery and documentation of RESTful APIs.

OData
OData is an OASIS specification for building consistent RESTful APIs. The OASIS community is formed from a range of well-known software companies that include Microsoft, Citrix, IBM, Red Hat, and SAP. OData 1.0 was first introduced back in 2007, and the most recent version, 4.1, was released this year.
The OData specification covers things like metadata, consistent implementations of operations, queries, and exception handling. It also includes additional features such as actions and functions.

Exploring the TripPinWS OData API
For this article we’ll be using the TripPinWS API, which is provided as an example by OData.org.
As with any RESTful API, we would typically expect a base URL for the service. Visiting this base URL in OData will also return a list of API entities.
https://services.odata.org:443/V4/(S(jndgbgy2tbu1vjtzyoei2w3e))/TripPinServiceRW
We can see that the API includes entities for Photos, People, Airlines, Airports, Me, and a function called GetNearestAirport.
The response also includes a link to the TripPinWS metadata document.
https://services.odata.org/V4/(S(djd3m5kuh00oyluof2chahw0))/TripPinServiceRW/$metadata
The metadata is implemented as an XML document and includes its own XSD document. This opens up the possibility of consuming metadata documents using code generated from the IRIS XML schema wizard.
The metadata document might look fairly involved at first glance, but it's just describing the properties of types that are used to construct entity schema definitions.
We can get back a list of People from the API by using the following URL.
https://services.odata.org/V4/(S(4hkhufsw5kohujphemn45ahu))/TripPinServiceRW/People
This returns a list of 8 people, 8 being a hard limit on the number of entities per result. In the real world, we would probably use a much larger limit.
It does, however, provide an example of how OData includes additional hypertext links such as @odata.nextLink, which we can use to fetch the next page of People in the search results.
We can also use query string values to narrow down the results list, such as selecting only the top 1 result.
https://services.odata.org/V4/(S(4hkhufsw5kohujphemn45ahu))/TripPinServiceRW/People?$top=1
We can also try filtering requests by FirstName.
https://services.odata.org/V4/(S(4hkhufsw5kohujphemn45ahu))/TripPinServiceRW/People?$filter=FirstName eq 'Russell'
In this instance, we used the eq operator to filter on all FirstNames that equal 'Russell'. Note the importance of wrapping strings in single quotes. OData provides a variety of different operators that can be used in combination to build up highly expressive search queries.

IRIS %Net Package
IRIS includes a comprehensive standard library. We’ll be using the %Net package, which includes support for protocols such as FTP, Email, LDAP, and HTTP.
To use the TripPinWS service we will need to use HTTPS, which requires us to register an HTTPS configuration in the IRIS management portal. There are no complicated certificates to install, so it’s just a few steps:
1. Open the IRIS management portal.
2. Click on System Administration > Security > SSL/TLS Configurations.
3. Click the "Create New Configuration" button.
4. Enter the name "odata_org" and hit save.
You can choose any name you’d like, but we’ll be using odata_org for the rest of the article.
We can now use the HttpRequest class to get a list of all people. If the Get() worked, then it will return 1 for OK. We can then access the response object and output the result to the terminal:

DC>set req=##class(%Net.HttpRequest).%New()
DC>set req.SSLConfiguration="odata_org"
DC>set sc=req.Get("https://services.odata.org:443/V4/(S(jndgbgy2tbu1vjtzyoei2w3e))/TripPinServiceRW/People")
DC>w sc
1
DC>do req.HttpResponse.OutputToDevice()

Feel free to experiment with the base HttpRequest before moving on. You could try fetching Airlines and Airports or investigate how errors are reported if you enter an incorrect URL.

Developing a generic OData Client
Let's create a generic OData client that will abstract the HttpRequest class and make it easier to implement various OData query options.
We’ll call it DcLib.OData.Client and it will extend %RegisteredObject. We’ll define several subclasses that we can use to define the names of a specific OData service, as well as several properties that encapsulate runtime objects and values such as the HttpRequest object.
To make it easy to instantiate an OData client, we will also override the %OnNew() method (the class's constructor method) and use it to set up the runtime properties.

Class DcLib.OData.Client Extends %RegisteredObject
{

Parameter BaseURL;

Parameter SSLConfiguration;

Parameter EntityName;

Property HttpRequest As %Net.HttpRequest;

Property BaseURL As %String;

Property EntityName As %String;

Property Debug As %Boolean [ InitialExpression = 0 ];

Method %OnNew(pBaseURL As %String = "", pSSLConfiguration As %String = "") As %Status [ Private, ServerOnly = 1 ]
{
    set ..HttpRequest=##class(%Net.HttpRequest).%New()
    set ..BaseURL=$select(pBaseURL'="":pBaseURL,1:..#BaseURL)
    set ..EntityName=..#EntityName
    set sslConfiguration=$select(pSSLConfiguration'="":pSSLConfiguration,1:..#SSLConfiguration)
    if sslConfiguration'="" set ..HttpRequest.SSLConfiguration=sslConfiguration
    quit $$$OK
}

}

We can now define a client class that is specific to the TripPinWS service by extending DcLib.OData.Client and setting the BaseURL and SSLConfiguration parameters in one single place.

Class TripPinWS.Client Extends DcLib.OData.Client
{

Parameter BaseURL = "https://services.odata.org:443/V4/(S(jndgbgy2tbu1vjtzyoei2w3e))/TripPinServiceRW";

Parameter SSLConfiguration = "odata_org";

}

With this base client in place, we can now create a class for each entity type that we want to use in the service. By extending the new client class, all we need to do is define the entity name in the EntityName parameter.

Class TripPinWS.People Extends TripPinWS.Client
{

Parameter EntityName = "People";

}

Next, we need to provide some more methods on the base DcLib.OData.Client class that will make it easy to query the entities.

Method Select(pSelect As %String) As DcLib.OData.Client
{
    do ..HttpRequest.SetParam("$select",pSelect)
    return $this
}

Method Filter(pFilter As %String) As DcLib.OData.Client
{
    do ..HttpRequest.SetParam("$filter",pFilter)
    return $this
}

Method Search(pSearch As %String) As DcLib.OData.Client
{
    do ..HttpRequest.SetParam("$search",pSearch)
    return $this
}

Method OrderBy(pOrderBy As %String) As DcLib.OData.Client
{
    do ..HttpRequest.SetParam("$orderby",pOrderBy)
    return $this
}

Method Top(pTop As %String) As DcLib.OData.Client
{
    do ..HttpRequest.SetParam("$top",pTop)
    return $this
}

Method Skip(pSkip As %String) As DcLib.OData.Client
{
    do ..HttpRequest.SetParam("$skip",pSkip)
    return $this
}

Method Fetch(pEntityId As %String = "") As DcLib.OData.ClientResponse
{
    if pEntityId="" return ##class(DcLib.OData.ClientResponse).%New($$$ERROR($$$GeneralError,"Entity ID must be provided"),"")
    set pEntityId="('"_pEntityId_"')"
    if $extract(..BaseURL,*)'="/" set ..BaseURL=..BaseURL_"/"
    set sc=..HttpRequest.Get(..BaseURL_..EntityName_pEntityId,..Debug)
    set response=##class(DcLib.OData.ClientResponse).%New(sc,..HttpRequest.HttpResponse,"one")
    quit response
}

Method FetchCount() As DcLib.OData.ClientResponse
{
    if $extract(..BaseURL,*)'="/" set ..BaseURL=..BaseURL_"/"
    set sc=..HttpRequest.Get(..BaseURL_..EntityName_"/$count")
    set response=##class(DcLib.OData.ClientResponse).%New(sc,..HttpRequest.HttpResponse,"count")
    quit response
}

Method FetchAll() As DcLib.OData.ClientResponse
{
    #dim response As DcLib.OData.ClientResponse
    if $extract(..BaseURL,*)'="/" set ..BaseURL=..BaseURL_"/"
    set sc=..HttpRequest.Get(..BaseURL_..EntityName,..Debug)
    set response=##class(DcLib.OData.ClientResponse).%New(sc,..HttpRequest.HttpResponse,"many")
    if response.IsError() return response
    //if the response has a nextLink then we need to keep going back to fetch more data
    while response.Payload.%IsDefined("@odata.nextLink") {
        //stash the previous value array, push the new values on to it and then
        //set it back to the new response and create a new value iterator
        set previousValueArray=response.Payload.value
        set sc=..HttpRequest.Get(response.Payload."@odata.nextLink",..Debug)
        set response=##class(DcLib.OData.ClientResponse).%New(sc,..HttpRequest.HttpResponse)
        if response.IsError() return response
        while response.Value.%GetNext(.key,.value) {
            do previousValueArray.%Push(value)
        }
        set response.Payload.value=previousValueArray
        set response.Value=response.Payload.value.%GetIterator()
    }
    return response
}

We've added nine new methods. The first six are instance methods for defining query options, and the last three are methods for fetching one, all, or a count of all entities.
Notice that the first six methods are essentially wrappers for setting parameters on the HTTP request object. To make implementation coding easier, each of these methods returns an instance of this object so that we can chain the methods together. Before we explain the main Fetch() method, let's see the Filter() method in action.

set people=##class(TripPinWS.People).%New().Filter("UserName eq 'ronaldmundy'").FetchAll()
while people.Value.%GetNext(.key,.person) { write !,person.FirstName," ",person.LastName }

If we use this method, it returns:

Ronald Mundy

The example code creates an instance of the TripPinWS People object. This sets the base URL and certificate configuration in its base class. We can then call its Filter() method to define a filter query and then FetchAll() to trigger the HTTP request.
Note that we can directly access the people results as a dynamic object, not as raw JSON data. This is because we are also going to implement a ClientResponse object that makes exception handling simpler. We also generate dynamic objects depending on the type of result that we get back.
First, let's discuss the FetchAll() method. At this stage, our implementation classes have defined the OData URL in their base class configuration, the helper methods are setting additional parameters, and the FetchAll() method needs to build the URL and make a GET request. Just as in our original command-line example, we call the Get() method on the HttpRequest class and create a ClientResponse from its results.
The method is complicated because the API only returns eight results at a time. We must handle this in our code and use the previous result's nextLink value to keep fetching the next page of results until there are no more pages. As we fetch each additional page, we store the previous results array and then push each new result onto it.
The Fetch(), FetchAll() and FetchCount() methods return an instance of a class called DcLib.OData.ClientResponse.
Let's create that now to handle both exceptions and auto-deserializing valid JSON responses.

Class DcLib.OData.ClientResponse Extends %RegisteredObject
{

Property InternalStatus As %Status [ Private ];

Property HttpResponse As %Net.HttpResponse;

Property Payload As %Library.DynamicObject;

Property Value;

Method %OnNew(pRequestStatus As %Status, pHttpResponse As %Net.HttpResponse, pValueMode As %String = "") As %Status [ Private, ServerOnly = 1 ]
{
    //check for immediate HTTP error
    set ..InternalStatus = pRequestStatus
    set ..HttpResponse = pHttpResponse
    if $$$ISERR(pRequestStatus) {
        if $SYSTEM.Status.GetOneErrorText(pRequestStatus)["<READ>" set ..InternalStatus=$$$ERROR($$$GeneralError,"Could not get a response from HTTP server, server could be uncontactable or server details are incorrect")
        return $$$OK
    }
    //if mode is count, then the response is not JSON, its just a numeric value
    //validate that it is a number and return all ok if true, else let it fall through
    //to pick up any errors that are presented as JSON
    if pValueMode="count" {
        set value=pHttpResponse.Data.Read(32000)
        if value?1.N {
            set ..Value=value
            return $$$OK
        }
    }
    //serialise JSON payload, catch any serialisation errors
    try {
        set ..Payload={}.%FromJSON(pHttpResponse.Data)
    } catch err {
        //check for HTTP status code error first
        if $e(pHttpResponse.StatusCode,1)'="2" {
            set ..InternalStatus = $$$ERROR($$$GeneralError,"Unexpected HTTP Status Code "_pHttpResponse.StatusCode)
            if pHttpResponse.Data.Size>0 return $$$OK
        }
        set ..InternalStatus=err.AsStatus()
        return $$$OK
    }
    //check payload for an OData error
    if ..Payload.%IsDefined("error") {
        do ..HttpResponse.Data.Rewind()
        set error=..HttpResponse.Data.Read(32000)
        set ..InternalStatus=$$$ERROR($$$GeneralError,..Payload.error.message)
        return $$$OK
    }
    //all ok, set the response value to match the required modes (many, one, count)
    if pValueMode="one" {
        set ..Value=..Payload
    } else {
        set iterator=..Payload.value.%GetIterator()
        set ..Value=iterator
    }
    return $$$OK
}

Method IsOK()
{
    return $$$ISOK(..InternalStatus)
}

Method IsError()
{
    return $$$ISERR(..InternalStatus)
}

Method GetStatus()
{
    return ..InternalStatus
}

Method GetStatusText()
{
    return $SYSTEM.Status.GetOneStatusText(..InternalStatus)
}

Method ThrowException()
{
    Throw ##class(%Exception.General).%New("OData Fetch Exception","999",,$SYSTEM.Status.GetOneStatusText(..InternalStatus))
}

Method OutputToDevice()
{
    do ..HttpResponse.OutputToDevice()
}

}

Given an instance of the ClientResponse object, we can first test to see if there was an error. Errors can happen on several levels, so we want to return them in a single, easy-to-use solution.

set response=##class(TripPinWS.People).%New().Filter("UserName eq 'ronaldmundy'").FetchAll()
if response.IsError() write !,response.GetStatusText() quit

The IsOK() and IsError() methods check the object for errors. If an error occurred, we can call GetStatus() or GetStatusText() to access the error, or use ThrowException() to pass the error to an exception handler.
If there is no error, then the ClientResponse will assign the raw payload object to the response Payload property:

set ..Payload={}.%FromJSON(pHttpResponse.Data)

It will then set the response Value property to the main data array within the payload, either as a single instance or as an array iterator to traverse many results.
I've put all of this code together in a single project on GitHub: https://github.com/SeanConnelly/IrisOData/blob/master/README.md which will make more sense when reviewed as a whole.
All of the following examples are included in the source GitHub project.

Using the OData Client

There is just one more method we should understand on the base Client class: the With() method. If you don't want to create an instance of every entity, you can instead use the With() method with just one single client class. The With() method will establish a new client with the provided entity name:

ClassMethod With(pEntityName As %String) As DcLib.OData.Client
{
    set client=..%New()
    set client.EntityName=pEntityName
    return client
}

We can now use it to fetch all people using the base Client class:

/// Fetch all "People" using the base client class and .With("People")
ClassMethod TestGenericFetchAllUsingWithPeople()
{
    #dim response As DcLib.OData.ClientResponse
    set response=##class(TripPinWS.Client).With("People").FetchAll()
    if response.IsError() write !,response.GetStatusText() quit
    while response.Value.%GetNext(.key,.person) {
        write !,person.FirstName," ",person.LastName
    }
}

Or, using an entity-per-class approach:

/// Fetch all "People" using the People class
ClassMethod TestFetchAllPeople()
{
    #dim people As DcLib.OData.ClientResponse
    set people=##class(TripPinWS.People).%New().FetchAll()
    if people.IsError() write !,people.GetStatusText() quit
    while people.Value.%GetNext(.key,.person) {
        write !,person.FirstName," ",person.LastName
    }
}

As you can see, they're very similar. The correct choice depends on how important autocomplete is to you with concrete entities, and whether you want a concrete entity class to add more entity-specific methods.

DC>do ##class(TripPinWS.Tests).TestFetchAllPeople()
Russell Whyte
Scott Ketchum
Ronald Mundy
… more people

Next, let's implement the same for Airlines:

/// Fetch all "Airlines"
ClassMethod TestFetchAllAirlines()
{
    #dim airlines As DcLib.OData.ClientResponse
    set airlines=##class(TripPinWS.Airlines).%New().FetchAll()
    if airlines.IsError() write !,airlines.GetStatusText() quit
    while airlines.Value.%GetNext(.key,.airline) {
        write !,airline.AirlineCode," ",airline.Name
    }
}

And from the command line ...

DC>do ##class(TripPinWS.Tests).TestFetchAllAirlines()
AA American Airlines
FM Shanghai Airline
… more airlines

And now airports:

/// Fetch all "Airports"
ClassMethod TestFetchAllAirports()
{
    #dim airports As DcLib.OData.ClientResponse
    set airports=##class(TripPinWS.Airports).%New().FetchAll()
    if airports.IsError() write !,airports.GetStatusText() quit
    while airports.Value.%GetNext(.key,.airport) {
        write !,airport.IataCode," ",airport.Name
    }
}

And from the command line...

DC>do ##class(TripPinWS.Tests).TestFetchAllAirports()
SFO San Francisco International Airport
LAX Los Angeles International Airport
SHA Shanghai Hongqiao International Airport
… more airports

So far we've been using the FetchAll() method.
We can also use the Fetch() method to fetch a single entity using the entity's primary key:

/// Fetch a single "People" entity using the person's ID
ClassMethod TestFetchPersonWithID()
{
    #dim response As DcLib.OData.ClientResponse
    set response=##class(TripPinWS.People).%New().Fetch("russellwhyte")
    if response.IsError() write !,response.GetStatusText() quit
    //let's use the new formatter to pretty print to the output (latest version of IRIS only)
    set jsonFormatter = ##class(%JSON.Formatter).%New()
    do jsonFormatter.Format(response.Value)
}

In this instance, we are using the new JSON formatter class, which can take a dynamic array or object and output it as formatted JSON.

DC>do ##class(TripPinWS.Tests).TestFetchPersonWithID()
{
  "@odata.context":"http://services.odata.org/V4/(S(jndgbgy2tbu1vjtzyoei2w3e))/TripPinServiceRW/$metadata#People/$entity",
  "@odata.id":"http://services.odata.org/V4/(S(jndgbgy2tbu1vjtzyoei2w3e))/TripPinServiceRW/People('russellwhyte')",
  "@odata.etag":"W/\"08D720E1BB3333CF\"",
  "@odata.editLink":"http://services.odata.org/V4/(S(jndgbgy2tbu1vjtzyoei2w3e))/TripPinServiceRW/People('russellwhyte')",
  "UserName":"russellwhyte",
  "FirstName":"Russell",
  "LastName":"Whyte",
  "Emails":[
    "Russell@example.com",
    "Russell@contoso.com"
  ],
  "AddressInfo":[
    {
      "Address":"187 Suffolk Ln.",
      "City":{
        "CountryRegion":"United States",
        "Name":"Boise",
        "Region":"ID"
      }
    }
  ],
  "Gender":"Male",
  "Concurrency":637014026176639951
}

Persisting OData

In the final few examples, we will demonstrate how the OData JSON can be deserialized into persistent objects using the new JSON adaptor class. We will create three classes (Person, Address, and City) that reflect the Person data structure in the OData metadata. We set %JSONIGNOREINVALIDFIELD to 1 so that the additional OData properties such as @odata.context do not throw a deserialization error.

Class TripPinWS.Model.Person Extends (%Persistent, %JSON.Adaptor)
{
Parameter %JSONIGNOREINVALIDFIELD = 1;
Property UserName As %String;
Property FirstName As %String;
Property LastName As %String;
Property Emails As list Of %String;
Property Gender As %String;
Property Concurrency As %Integer;
Relationship AddressInfo As Address [ Cardinality = many, Inverse = Person ];
Index UserNameIndex On UserName [ IdKey, PrimaryKey, Unique ];
}

Class TripPinWS.Model.Address Extends (%Persistent, %JSON.Adaptor)
{
Property Address As %String;
Property City As TripPinWS.Model.City;
Relationship Person As Person [ Cardinality = one, Inverse = AddressInfo ];
}

Class TripPinWS.Model.City Extends (%Persistent, %JSON.Adaptor)
{
Property CountryRegion As %String;
Property Name As %String;
Property Region As %String;
}

Next, we will fetch Russell Whyte from the OData service, create a new instance of the Person model, then call the %JSONImport() method using the response value.
This will populate the Person object, along with the Address and City details.

ClassMethod TestPersonModel()
{
    #dim response As DcLib.OData.ClientResponse
    set response=##class(TripPinWS.People).%New().Fetch("russellwhyte")
    if response.IsError() write !,response.GetStatusText() quit
    set person=##class(TripPinWS.Model.Person).%New()
    set sc=person.%JSONImport(response.Value)
    if $$$ISERR(sc) write !!,$SYSTEM.Status.GetOneErrorText(sc) return
    set sc=person.%Save()
    if $$$ISERR(sc) write !!,$SYSTEM.Status.GetOneErrorText(sc) return
}

We can then run a SQL query to confirm the data has been persisted:

SELECT ID, Concurrency, Emails, FirstName, Gender, LastName, UserName
FROM TripPinWS_Model.Person

ID            Concurrency         Emails                                   FirstName  Gender  LastName  UserName
russellwhyte  637012191599722031  Russell@example.com Russell@contoso.com  Russell    Male    Whyte     russellwhyte

Final Thoughts

As we've seen, it's easy to consume RESTful OData services using the built-in %Net classes. With a small amount of additional helper code, we can simplify the construction of OData queries, unify error reporting, and automatically deserialize JSON into dynamic objects.

We can then create a new OData client just by providing its base URL and, if required, an HTTPS configuration. We then have the option to use this one class and the .With('entity') method to consume any entity on the service, or create named subclasses for the entities that we are interested in.

We have also demonstrated that it's possible to deserialize JSON responses directly into persistent classes using the new JSON adaptor. In the real world, we might consider denormalizing this data first and ensuring that the JSON adaptor class works with custom mappings.

Finally, working with OData has been a real breeze. The consistency of the service implementation has required much less code than I often experience with bespoke implementations. Whilst I enjoy the freedom of RESTful design, I would certainly consider implementing a standard in my next server-side solution.

Awesome! Thanks Sean,
This helped me understand the OData specification quickly.
Paul

Excellent post. My app lets you expose IRIS as an OData server; see: https://openexchange.intersystems.com/package/OData-Server-for-IRIS
Announcement
Olga Zavrazhnova · Sep 22, 2022
Hi Community! InterSystems will be a technical sponsor at the CalHacks hackathon by UC Berkeley, October 14-16, 2022. We can't reveal our challenge at the moment, but as a little spoiler: it is related to healthcare ;)

The team from our side: @Evgeny.Shvarov, @Dean.Andrews2971, @Regilo.Souza, @Akshat.Vora, and more!

Join InterSystems in this fun and inspirational event - apply to be a mentor, volunteer, or judge here
Announcement
Anastasia Dyubaylo · Feb 14, 2023
Hi Developers,
Enjoy watching the new video on InterSystems Developers YouTube:
⏯ InterSystems Startup Accelerator Pitching Sessions @ Global Summit 2022
Check out the innovative solutions introduced by cutting-edge healthcare startups. Members of InterSystems 2022 FHIR Startup Accelerator - Caelestinus - will present their solutions, all built on InterSystems FHIR Cloud and Health Connect services.
Presenters:
🗣 @Evgeny.Shvarov, Startups and Community Manager, InterSystems
🗣 @Dean.Andrews2971, Head of Developer Relations, InterSystems
🗣 Martin Zubek, Business Development Manager
🗣 Tomas Studenik, Caelestinus Incubator cofounder
Enjoy watching and stay tuned! 👍
Article
Chad Severtson · Apr 12, 2023
Spoilers: Daily Integrity Checks are not only a best practice, but they also provide a snapshot of global sizes and density.
Tracking the size of the data is one of the most important activities for understanding system health, tracking user activity, and for capacity planning ahead of your procurement process. InterSystems products store data in a tree-structure called globals. This article discusses how to determine global sizes – and therefore the size of your data. The focus is on balancing impact versus precision.
SQL tables are simply a projection of underlying globals, so determining a table's size currently means looking at the corresponding global sizes. A more efficient sampling-based mechanism is currently being developed. Understanding the relationship between tables and globals may require some additional steps, discussed below.
Data
The specific data that needs to be collected varies depending on the specific question you're trying to answer. There is a fundamental difference, worth keeping in mind, between the space "allocated" for a global and the space "used" by it. In general, the allocated space is usually sufficient, as it corresponds to the space used on disk. However, there are situations where the used space and packing data are essential -- e.g. when determining if a global is being stored efficiently following a large purge of data.
Allocated Space - Space is allocated in units of 8KB blocks. Generally, a block holds data for only one global, so even the smallest global occupies at least 8KB. This is also, functionally, the global's size on disk. Determining allocated space only requires examining bottom-pointer blocks (and data blocks that contain big strings). Except in rare or contrived scenarios, there are multiple orders of magnitude fewer pointer blocks than data blocks. This metric is usually sufficient to understand growth trends if collected on a regular basis.
Used Space – “Used” is the sum of the data stored within the global and the necessary overhead. Globals often allocate more space on disk than is actually “used” as a function of usage patterns and our block structure.
Packing: Calculating the actual space used will also provide information about the global "packing" – how densely the data is stored. It can sometimes be necessary or desirable to store globals more efficiently, especially if they are not frequently updated. For systems with random updates, inserts, or deletes, a packing of 70% is generally considered optimal for performance. This value fluctuates based on activity. Sparseness most often correlates with deletions.
IO Cost: Unfortunately, with great precision comes great IO requirements. Iterating 8KB block by 8KB block through a large database will not only take a long time, but it may also negatively impact performance on systems that are already close to their provisioned limits. This is much more expensive than simply determining whether a block is allocated. The operation will take on the order of (database size - free space) × read latency / (# of parallel processes) to return an answer.
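To make that concrete with illustrative numbers (not a benchmark): a database with 1 TB of used space contains roughly 134 million 8KB blocks. At an average read latency of 0.5 ms, reading every block one at a time would take about 67,000 seconds, or roughly 18.5 hours; spread across 8 parallel processes, that drops to a bit over 2 hours.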
InterSystems provides several tools for determining the size of globals within a particular database. Generally, both the global name and the full path of the underlying database directory need to be known in order to determine the size. For more complex deployments, math is required to determine the total size of a global spread across multiple databases via subscript level mapping.
Determining Global Names:
Use the Extent Manager to list the globals associated with a table:
SQL: Call %ExtentMgr.GlobalsUsed('Package.Class.cls')
Review the storage definition within the Management Portal, within VS Code (or Studio), or by querying %Dictionary.StorageDefinition.
SQL: SELECT DataLocation FROM %Dictionary.StorageDefinition WHERE parent = 'Package.ClassName'
ObjectScript: write ##class(%Dictionary.ClassDefinition).%OpenId("Package.ClassName").Storages.GetAt(1).DataLocation
Hashed global names are common when tables are defined using DDL, i.e. CREATE TABLE. This behavior can be modified by specifying USEEXTENTSET and DEFAULTGLOBAL. Using hashed global names and storing only one index per global have shown performance benefits. I use the following query to list the non-obvious globals in a namespace; a short ObjectScript sketch for running it programmatically follows the SQL below.
SQL for All Classes:
SELECT Parent, DataLocation, IndexLocation, StreamLocation
FROM %Dictionary.StorageDefinition
WHERE Parent->System = 0 AND DataLocation IS NOT NULL
SQL for Specific Classes:
CALL %ExtentMgr.GlobalsUsed('Package.Class.cls')
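If you'd rather collect these names programmatically than in the SQL shell, the same query can be run through %SQL.Statement. A minimal sketch (error handling omitted; the result columns match the SELECT list above):

// list data globals for all non-system classes in this namespace
set sql = "SELECT Parent, DataLocation, IndexLocation, StreamLocation FROM %Dictionary.StorageDefinition"
set sql = sql_" WHERE Parent->System = 0 AND DataLocation IS NOT NULL"
set rs = ##class(%SQL.Statement).%ExecDirect(, sql)
while rs.%Next() {
    write !, rs.%Get("Parent"), ": ", rs.%Get("DataLocation")
}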
Determining Database Path:
For the simplest deployments where a namespace does not have additional global mappings for application data, it is often possible to substitute "." for the directory. That syntactic sugar will tell the API to look at the current directory for the current namespace.
For SQL oriented deployments, CREATE DATABASE follows our best practices and creates TWO databases -- one for code and one for data. It’s best to verify the default globals database for the given Namespace in the Management Portal or in the CPF.
It is possible to programmatically determine the destination directory for a particular global (or subscript) in the current namespace:
ObjectScript:
set global = "globalNameHere"
set directory = $E(##class(%SYS.Namespace).GetGlobalDest($NAMESPACE, global),2,*)
For more complex deployments with many mappings, it may be necessary to iterate through Config.MapGlobals in the %SYS Namespace and sum the global sizes:
SQL: SELECT Namespace, Database, Name FROM Config.MapGlobals
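Here is a sketch of what that summation might look like. It makes several assumptions: it runs in the %SYS namespace, the returned sizes are in MB, Config.Databases.Get() resolves a database name to its directory, and no subscript-level mappings are involved. The global name is a hypothetical placeholder.

// sum one global's allocated size across every database it is mapped to
set globalName = "MyAppData"  // hypothetical global name
set totalMB = 0
set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT DISTINCT Database FROM Config.MapGlobals WHERE Name = ?", globalName)
while rs.%Next() {
    // resolve the database name to its directory (assumption: Config.Databases projection)
    if $$$ISERR(##class(Config.Databases).Get(rs.%Get("Database"), .props)) continue
    set sc = ##class(%Library.GlobalEdit).GetGlobalSize(props("Directory"), globalName, .alloc, .used, 1)
    if $$$ISOK(sc) set totalMB = totalMB + alloc
}
write totalMB, " MB allocated across all mapped databases"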
Determining Global Sizes:
Once the name of the global and the destination database path are determined, it is possible to collect information on the global size. Here are a few options:
Integrity Check – Nightly Integrity Checks are a good practice. An even better practice is to perform them against a restored backup to also verify the backup and restore process while offloading the IO to another system. This process verifies the physical integrity of the database blocks by reading each allocated block. It also tabulates both the allocated space of all the globals AND tracks the average packing of the blocks along the way.
See Ray’s great post on Integrity Check performance.
In IRIS 2022.1+, Integrity Checks can now even multi-process a single global.
Example Integrity Check Output:
Global: Ens.MessageHeaderD 0 errors found
Top Pointer Level: # of blocks=1 8kb (2% full)
Pointer Level: # of blocks=25 200kb (19% full)
Bottom Pointer Level: # of blocks=3,257 25MB (79% full)
Data Level: # of blocks=2,630,922 20,554MB (88% full)
Total: # of blocks=2,634,205 20,579MB (88% full)
Elapsed Time = 238.4 seconds, Completed 01/17/2023 23:41:12
%Library.GlobalEdit.GetGlobalSize – The following APIs can be used to quickly determine the allocated size of a single global. This may still take some time for multi-TB globals.
ObjectScript: w ##class(%Library.GlobalEdit).GetGlobalSize(directory, globalName, .allocated, .used, 1)
Embedded Python:
import iris
allocated = iris.ref("")
used =iris.ref("")
fast=1
directory = "/path/to/database"
global = "globalName"
iris.cls('%Library.GlobalEdit').GetGlobalSize(directory, global, allocated, used, fast)
allocated.value
used.value
%Library.GlobalEdit.GetGlobalSizeBySubscript – This is helpful for determining the size of a subscript or subscript range, e.g. determining the size of one index. It will include all descendants within the specified range. Warning: as of IRIS 2023.1 there is no "fast" flag to only return the allocated size. It will read all of the data blocks within the range.
ObjectScript: ##class(%Library.GlobalEdit).GetGlobalSizeBySubscript(directory, startingNode, EndingNode, .size)
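For example, to size a single index stored under its own subscript, you might pass the same node as both the start and end of the range, so the measurement covers that subscript and its descendants. A sketch with hypothetical global and subscript names, which (per the warning above) reads every data block in the range:

// size of one index subscript; "MyApp.DataI" and "NameIndex" are hypothetical names
set directory = "/path/to/database"
set node = "MyApp.DataI(""NameIndex"")"
set sc = ##class(%Library.GlobalEdit).GetGlobalSizeBySubscript(directory, node, node, .size)
if $$$ISOK(sc) write "Index size: ", size, " MB"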
%SYS.GlobalQuery.Size – This API is helpful for surveying multiple globals within a database, with or without filters. A SQL stored procedure is available for customers that primarily interact with IRIS via SQL.
SQL: CALL %SYS.GlobalQuery_Size('database directory here', '','*',,,1)
^%GSIZE – Executing this legacy utility and choosing to “show details” will read each data block to determine the size of the data. Without filtering the list of globals, it may read through almost the entire database block by block with a single thread.
Running ^%GSIZE with details is the slowest option for determining global sizes. It is much slower than our heavily optimized Integrity Checks!
There is an additional entry point that will return the allocated size for a particular global – including when scoped to a subscript. Unfortunately, it does not work on subscript ranges.
ObjectScript: write $$AllocatedSize^%GSIZE("global(""subscript"")")
Database Size – The easiest case for determining global size is when there is only a single global within a single database. Simply subtract the total free space within the database from the total size of the database. The database size is available from the OS or via SYS.Database. I often use a variation of this approach to determine the size of a disproportionately large global by subtracting the sum of all the other globals in the database.
ObjectScript: ##class(%SYS.DatabaseQuery).GetDatabaseFreeSpace(directory, .FreeSpace)
SQL: call %SYS.DatabaseQuery_FreeSpace()
Embedded Python:
import iris
freeSpace = iris.ref("")
directory = "/path/to/database"
iris.cls('%SYS.DatabaseQuery').GetDatabaseFreeSpace(directory, freeSpace)
freeSpace.value
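Putting the subtraction approach together might look like the sketch below. It assumes the database is dominated by a single global, that it runs in the %SYS namespace (required to open SYS.Database), and that both figures are reported in MB:

// approximate a dominant global's size as (database size - free space)
set directory = "/path/to/database"
set db = ##class(SYS.Database).%OpenId(directory)
if db="" write "database not found" quit
do ##class(%SYS.DatabaseQuery).GetDatabaseFreeSpace(directory, .freeMB)
write "Approximate size: ", db.Size - freeMB, " MB"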
Process Private Globals - PPGs are special process-scoped globals stored within IRISTEMP. They are often not enumerated by the other tools. When IRISTEMP is expanding rapidly or reporting low free space, PPGs are frequently the explanation. Consider examining per-process usage of PPGs via %SYS.ProcessQuery.
SQL: SELECT PID, PrivateGlobalBlockCount FROM %SYS.ProcessQuery ORDER BY PrivateGlobalBlockCount DESC
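Since PPG usage is reported in 8KB blocks, a quick way to see the total space they hold is to sum the block counts and convert to MB. A small sketch using the same query, run from ObjectScript:

// total IRISTEMP space held by process-private globals, converted from 8KB blocks to MB
set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT SUM(PrivateGlobalBlockCount) AS Blocks FROM %SYS.ProcessQuery")
if rs.%Next() write rs.%Get("Blocks") * 8 / 1024, " MB in process-private globals"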
Questions for the readers:
How often do you track your global sizes?
What do you do with global size information?
For SQL focused users, do you track the size of individual indices?
Chad, thank you for the complete explanation of the available options. As to your questions:
1. We have a TASKMGR task that calculates the size of each global in all databases. Our customers usually schedule it for a daily run.
2. The main purpose of collecting such info is the ability to quickly answer questions like: "Why is my database growing so fast?" Integrity Check is not used for this purpose because it can't be scheduled for a daily run due to its relative slowness in our versions of Caché and IRIS.

Wow, what a useful thread! Thank you.
I especially like the fact that you offer a solution in each IRIS language: SQL, Python, ObjectScript.

Great article, Chad!
FWIW, we're working on a faster version of ^%GSIZE (with an API more fitting the current century ;-) ) that uses stochastic sampling similar to the faster table stats gathering introduced in 2021.2. I'll also take the opportunity for a shameless plug of my SQL utilities package, which incorporates much of what's described in this article and will take advantage of that faster global size estimator as soon as it's released.

I plan to delete this article the second that the version containing stochastic sampling is installed everywhere. In the meantime, I can think of a few customers that need alternatives.
Your SQL utilities are great and also available via Open Exchange.
Article
Shanshan Yu · Apr 19, 2023
As living standards improve, people pay more and more attention to physical health, and children's healthy development has become a growing concern for parents. A child's physical development is reflected in their height and weight, so predicting both in a timely manner is of great significance: scientific prediction and comparison help parents keep track of a child's developmental state.
The project uses InterSystems IRIS Cloud SQL to store a large volume of height- and weight-related data, and builds an AutoML model based on IntegratedML for predictive analysis. Given the parents' heights as input, it can quickly predict a child's future height, and, based on the child's current height and weight, judge whether the child's body mass index is in the normal range.
Key Applications: InterSystems IRIS Cloud SQL, IntegratedML
Function:
By applying this program, parents can quickly predict the height a child should reach under normal development. From the results, they can judge whether the child's development is normal and whether clinical intervention is required, which helps them understand the child's future height; and from the current weight status, the program determines whether the child's BMI is normal, giving insight into the child's present health.
Application Scenario
1. Children's height prediction
2. Monitoring of child development
Application Principles
The client and server of the application were built using Vue and Java respectively, while the database uses InterSystems IRIS Cloud SQL, the cloud database platform from InterSystems.
The main prediction function of the application uses the IntegratedML feature of InterSystems IRIS Cloud SQL. It helped me quickly create and train data models and successfully implement the prediction functions.
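For readers who haven't used it, IntegratedML models are created, trained, and queried entirely in SQL. A minimal sketch of that pattern; the table and column names (ChildGrowthData, AdultHeight, NewMeasurements) are hypothetical stand-ins, not the project's actual schema:

-- create a model that predicts AdultHeight from the other columns (hypothetical names)
CREATE MODEL HeightModel PREDICTING (AdultHeight) FROM ChildGrowthData;
-- train it on the data currently in the table
TRAIN MODEL HeightModel;
-- apply the trained model to new rows
SELECT PREDICT(HeightModel) AS PredictedHeight FROM NewMeasurements;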
Test Flow
① Select the module
② Fill in the relevant data. If there is adult sibling data, you can click "add" to fill in that information.
③ Click Submit and wait a moment for the prediction result to appear.
Article
Vadim Aniskin · Apr 28, 2023
Hey Community!
Here is a short article on how to create an idea on InterSystems Ideas.
0. Register on the Ideas Portal if you aren't a member yet, or log in. You can easily register using your InterSystems Developer Community ID.
1. Click on the "Add a new idea" button
and you will see the form to add the idea.
2. First, provide a one-sentence summary of the idea; this is a required field. When you start typing, you will see a list of ideas with similar words in their names or tags. If a similar idea already exists, vote or comment on that idea instead. The optimal length of an idea summary is 4-12 words.
3. Next, describe the idea in the "Please add more details" field.
In addition to text, you can attach screenshots or other files and insert tables and links. There is a full-screen mode that helps you see the whole description of your idea without scrolling.
4. Then you need to fill in the required field "Category". The correct category will help to assign your idea to the appropriate expert in the InterSystems team.
If you first sorted ideas by category and then pushed the "Add a new idea" button, the idea's category will be filled in automatically.
5. Optionally, you can add tags to your idea so other users can find it easily. The list of tags starts with those titled "InterSystems"; all other tags follow in alphabetical order.
6. Click on "Add idea" to submit.
Hope this helps you share your ideas with others! If you have any questions, please send a direct message to @Vadim.Aniskin.
---------------------
* Please take into account that ideas and comments should be in English.
* Ideas Portal admins can ask questions via Developer Community direct messages to clarify an idea and its category. Please answer these questions to make your idea visible to all users.
* When you create an idea, you automatically subscribe to e-mail notifications related to it, including:
changes in the status of your idea
comments on your idea posted by portal admins (you don't get notifications about comments from other users)