Article
Murray Oldfield · Nov 12, 2016
# Index
This is a list of all the posts in the Data Platforms capacity planning and performance series, in order, along with a general list of my other posts. I will update it as new posts in the series are added.
> You will notice that I wrote some posts before IRIS was released, so they refer to Caché. I will revisit the posts over time, but in the meantime the configuration advice is generally the same for Caché and IRIS. Some command names may have changed; the most obvious example is that anywhere you see the `^pButtons` command, you can replace it with `^SystemPerformance`.
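For example, where an older post walks through running the collection interactively from a terminal, the only change on IRIS is the routine name (a minimal sketch; both routines are typically run from the %SYS namespace and prompt for a collection profile):

do ^pButtons          ; Caché / Ensemble, as written in the older posts
do ^SystemPerformance ; InterSystems IRIS - same workflow, renamed routine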
---
> While some posts are updated to preserve links, others will be marked with strikethrough to indicate that the post is legacy. Where appropriate, I will point to a replacement, for example: "See: some other post."
---
#### Capacity Planning and Performance Series
Generally, posts build on previous ones, but you can also just dive into subjects that look interesting.
- [Part 1 - Getting started on the Journey, collecting metrics.][1]
- [Part 2 - Looking at the metrics we collected.][2]
- [Part 3 - Focus on CPU.][3]
- [Part 4 - Looking at memory.][4]
- [Part 5 - Monitoring with SNMP.][5]
- [Part 6 - Caché storage IO profile.][6]
- [Part 7 - ECP for performance, scalability and availability.][7]
- [Part 8 - Hyper-Converged Infrastructure Capacity and Performance Planning][8]
- [Part 9 - Caché VMware Best Practice Guide][9]
- [Part 10 - VM Backups and IRIS freeze/thaw scripts][10]
- [Part 11 - Virtualizing large databases - VMware cpu capacity planning][11]
#### Other Posts
This is a collection of my other posts on the Community, generally related to architecture.
- [AWS Capacity Planning Review Example.][29]
- [Using an LVM stripe to increase AWS EBS IOPS and Throughput.][28]
- [YASPE - Parse and chart InterSystems Caché pButtons and InterSystems IRIS SystemPerformance files for quick performance analysis of Operating System and IRIS metrics.][27]
- [SAM - Hacks and Tips for set up and adding metrics from non-IRIS targets][12]
- [Monitoring InterSystems IRIS Using Built-in REST API - Using Prometheus format.][13]
- [Example: Review Monitor Metrics From InterSystems IRIS Using Default REST API][14]
- [InterSystems Data Platforms and performance – how to update pButtons.][15]
- [Extracting pButtons data to a csv file for easy charting.][16]
- [Provision a Caché application using Ansible - Part 1.][17]
- [Windows, Caché and virus scanners.][18]
- [ECP Magic.][19]
- [Markdown workflow for creating Community posts.][20]
- [Yape - Yet another pButtons extractor (and automatically create charts)][21] See: [YASPE](https://community.intersystems.com/post/yaspe-yet-another-system-performance-extractor).
- [How long does it take to encrypt a database?][22]
- [Minimum Monitoring and Alerting Solution][23]
- [LVM PE Striping to maximize Hyper-Converged storage throughput][24]
- [Unpacking pButtons with Yape - update notes and quick guides][25]
- [Decoding Intel processor models reported by Windows][26]
Murray Oldfield
Principal Technology Architect
InterSystems
Follow the community or @murrayoldfield on Twitter
[1]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-1
[2]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-2
[3]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-3-focus-cpu
[4]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-part-4-looking-memory
[5]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-part-5-monitoring-snmp
[6]: https://community.intersystems.com/post/data-platforms-and-performance-part-6-cach%C3%A9-storage-io-profile
[7]: https://community.intersystems.com/post/data-platforms-and-performance-part-7-ecp-performance-scalability-and-availability
[8]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-8-hyper-converged-infrastructure-capacity
[9]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-9-cach%C3%A9-vmware-best-practice-guide
[10]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-vm-backups-and-cach%C3%A9-freezethaw-scripts
[11]: https://community.intersystems.com/post/virtualizing-large-databases-vmware-cpu-capacity-planning
[12]: https://community.intersystems.com/post/sam-hacks-and-tips-set-and-adding-metrics-non-iris-targets
[13]: https://community.intersystems.com/post/monitoring-intersystems-iris-using-built-rest-api
[14]: https://community.intersystems.com/post/example-review-monitor-metrics-intersystems-iris-using-default-rest-api
[15]: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-how-update-pbuttons
[16]: https://community.intersystems.com/post/extracting-pbuttons-data-csv-file-easy-charting
[17]: https://community.intersystems.com/post/provision-cach%C3%A9-application-using-ansible-part-1
[18]: https://community.intersystems.com/post/windows-cach%C3%A9-and-virus-scanners
[19]: https://community.intersystems.com/post/ecp-magic
[20]: https://community.intersystems.com/post/markdown-workflow-creating-community-posts
[21]: https://community.intersystems.com/post/yape-yet-another-pbuttons-extractor-and-automatically-create-charts
[22]: https://community.intersystems.com/post/how-long-does-it-take-encrypt-database
[23]: https://community.intersystems.com/post/minimum-monitoring-and-alerting-solution
[24]: https://community.intersystems.com/post/lvm-pe-striping-maximize-hyper-converged-storage-throughput
[25]: https://community.intersystems.com/post/unpacking-pbuttons-yape-update-notes-and-quick-guides
[26]: https://community.intersystems.com/post/decoding-intel-processor-models-reported-windows
[27]: https://community.intersystems.com/post/yaspe-yet-another-system-performance-extractor
[28]: https://community.intersystems.com/post/using-lvm-stripe-increase-aws-ebs-iops-and-throughput
[29]: https://community.intersystems.com/post/aws-capacity-planning-review-example
Announcement
Janine Perkins · Jan 17, 2017
Take this online course to learn the foundations of the Caché ObjectScript language, especially as it relates to creating variables and objects in Caché. You will learn about variables, commands, and operators, as well as how to find more information using the InterSystems DocBooks when needed. This course contains instructional videos and exercises. Learn More.
Announcement
Anastasia Dyubaylo · May 15, 2020
Hey Developers,
We're pleased to invite you to join the next InterSystems IRIS 2020.1 Tech Talk: DevOps on June 2nd at 10:00 AM EDT!
In this InterSystems IRIS 2020.1 Tech Talk, we focus on DevOps. We'll talk about InterSystems System Alerting and Monitoring, which offers unified cluster monitoring in a single pane for all your InterSystems IRIS instances. It is built on Prometheus and Grafana, two of the most respected open source offerings available.
Next, we'll dive into the InterSystems Kubernetes Operator, a special controller for Kubernetes that streamlines InterSystems IRIS deployments and management. It's the easiest way to deploy an InterSystems IRIS cluster on-prem or in the Cloud, and we'll show how you can configure mirroring, ECP, sharding and compute nodes, and automate it all.
Finally, we'll discuss how to speed test InterSystems IRIS using the open source Ingestion Speed Test. This tool is available on InterSystems Open Exchange for your own testing and benchmarking.
Speakers:
🗣 @Luca.Ravazzolo, InterSystems Product Manager
🗣 @Robert.Kuszewski, InterSystems Product Manager - Developer Experience
Date: Tuesday, June 2, 2020
Time: 10:00 AM EDT
➡️ JOIN THE TECH TALK!
Additional Resources:
SAM Documentation
SAM InterSystems-Community Github repo
Docker Hub SAM
Hi Everyone,
We’ve had a presenter change for this Tech Talk. Please welcome @Robert.Kuszewski as a speaker at the DevOps Webinar!
In addition, please check the updated Additional Resources above.
Don't forget to register here! Tomorrow! Don't miss this webinar! 😉
PLEASE REGISTER HERE!
I registered last week and I have gotten the reminder on Sunday. But I cannot get into the session at all - it loops back to the page to save to your calendar.
This event will start in 1.5 hours.
My apologies for my human error - I have a new work schedule that I am still getting used to.
Hi, is there a replay for this? Thx!
Article
Timothy Leavitt · Jun 4, 2020
Over the past year or so, my team (Application Services at InterSystems - tasked with building and maintaining many of our internal applications, and providing tools and best practices for other departmental applications) has embarked on a journey toward building Angular/REST-based user interfaces to existing applications originally built using CSP and/or Zen. This has presented an interesting challenge that may be familiar to many of you - building out new REST APIs to existing data models and business logic.
As part of this process, we've built a new framework for REST APIs, which has been too useful to keep to ourselves. It is now available on the Open Exchange at https://openexchange.intersystems.com/package/apps-rest. Expect to see a few more articles about this over the coming weeks/months, but in the meanwhile, there are good tutorials in the project documentation on GitHub (https://github.com/intersystems/apps-rest).
As an introduction, here are some of our design goals and intentions. Not all of these have been realized yet, but we're well on the way!
Rapid Development and Deployment
Our REST approach should provide the same quick start to application development that Zen does, solving the common problems while providing flexibility for application-specific specialized use cases.
Exposing a new resource for REST access should be just as easy as exposing it as a Zen DataModel.
Addition/modification of REST resources should involve changes at the level being accessed.
Exposure of a persistent class over REST should be accomplished by inheritance and minimal overrides, but there should also be support for hand-coding equivalent functionality. (This is similar to %ZEN.DataModel.Adaptor and %ZEN.DataModel.ObjectDataModel.)
Common patterns around error handling/reporting, serialization/deserialization, validation, etc. should not need to be reimplemented for each resource in each application.
Support for SQL querying, filtering, and ordering, as well as advanced search capabilities and pagination, should be built-in, rather than reimplemented for each application.
It should be easy to build REST APIs to existing API/library classmethods and class queries, as well as at the object level (CRUD).
Security
Security is an affirmative decision at design/implementation time rather than an afterthought.
When REST capabilities are gained by class inheritance, the default behavior should be to provide NO access to the resource until the developer actively specifies who should receive access and under what conditions.
Standardized implementations of SQL-related features minimize the surface for SQL injection attacks.
Design should take into consideration the OWASP API Security Top 10 (see: https://owasp.org/www-project-api-security).
Sustainability
Uniformity of application design is a powerful tool for an enterprise application ecosystem.
Rather than accumulating a set of diverse hand-coded REST APIs and implementations, we should have similar-looking REST APIs throughout our portfolio. This uniformity should lead to:
Common debugging techniques
Common testing techniques
Common UI techniques for connecting to REST APIs
Ease of developing composite applications accessing multiple APIs
The set of endpoints and format of object representations provided/accepted over REST should be well-defined, such that we can automatically generate API documentation (e.g., Swagger/OpenAPI) based on these endpoints.
Based on industry-standard API documentation, we should be able to generate portions of client code (e.g., typescript classes corresponding to our REST representations) using third-party/industry-standard tools.
Awesome! @Timothy.Leavitt this is amazing!
I'll be making use of it in my application :)
I was looking into the OpenExchange description, and in the Tutorial and User Guide, I think the links are broken. I got a "Not found" message when I try to access the URLs.
https://openexchange.intersystems.com/docs/sample-phonebook.md
https://openexchange.intersystems.com/docs/user-guide.md
Thank you for your interest, and for pointing out that issue. I saw it after publishing and fixed it in GitHub right away. The Open Exchange updates from GitHub at midnight, so it should be all set now.
minimum platform version of InterSystems IRIS 2018.1
Porting old apps with a framework available only on the new version of the platform (IRIS) - no contradiction here? :) Is there something fundamental preventing the framework from being used on Caché too?
Maybe I'm wrong, but I think the minimum requirement here is because you don't have %JSON.Adaptor on Caché.
%JSON.Adaptor is missing in Caché, but %JSON.Formatter was backported half a year ago; it is available on Open Exchange.
@Henrique.GonçalvesDias is right - that's the reason for the minimum requirement.
IMO, getting an old app running on the new version of the platform is a relatively small effort compared to a Zen -> Angular migration (for example).
Hi @Timothy.Leavitt, I'm testing AppS.REST to create a new application. Following the Tutorial and Sample steps on GitHub, I created a dispatch class:
Class NPM.REST.Handler Extends AppS.REST.Handler
{
ClassMethod AuthenticationStrategy() As %Dictionary.CacheClassname
{
Quit ##class(AppS.REST.Authentication.PlatformBased).%ClassName(1)
}
ClassMethod GetUserResource(pFullUserInfo As %DynamicObject) As AppS.REST.Authentication.PlatformUser
{
Quit ##class(AppS.REST.Authentication.PlatformUser).%New()
}
}
And a simple persistent class:
Class NPM.Model.Task Extends (%Persistent, %Populate, %JSON.Adaptor, AppS.REST.Model.Adaptor)
{
Parameter RESOURCENAME = "task";
Property RowID As %String(%JSONFIELDNAME = "_id", %JSONINCLUDE = "outputonly") [ Calculated, SqlComputeCode = {Set {*} = {%%ID}}, SqlComputed, Transient ];
Property TaskName As %String(%JSONFIELDNAME = "taskName");
/// Checks the user's permission for a particular operation on a particular record.
/// <var>pOperation</var> may be one of:
/// CREATE
/// READ
/// UPDATE
/// DELETE
/// QUERY
/// ACTION:<action name>
/// <var>pUserContext</var> is supplied by <method>GetUserContext</method>
ClassMethod CheckPermission(pID As %String, pOperation As %String, pUserContext As AppS.REST.Authentication.PlatformUser) As %Boolean
{
Quit ((pOperation = "QUERY") || (pOperation = "READ") || (pOperation = "CREATE") || (pOperation = "UPDATE"))
}
}
But when I try the REST API using Postman GET: http://localhost:52773/csp/npm-app-rest/api/task/1
I'm getting a 404 Not Found message.
Am I doing something wrong or missing something?
Thanks
@Henrique.GonçalvesDias , do you have a record with ID 1? If not, you can populate some data with the following (since you extend %Populate):
Do ##class(NPM.Model.Task).Populate(10)
Yes, I already populated the class.
Give the CSPSystem user access to the database with the REST broker.
@Eduard.Lebedyuk is probably right on this. If you add auditing for <PROTECT> events, you'll probably see one before the 404.
I added auditing on everything, and the <PROTECT> error never showed up. So I started everything from scratch and found a typo in Postman.
Thanks, @Eduard.Lebedyuk @Timothy.Leavitt
PS: Sorry, guys. I think not sleeping enough hours isn't good for your health and causes this kind of mistake.
This is really cool, and we will be using this in a big way.
But I have encountered an issue I can't fix.
I took one of my data classes (Data.DocHead) and had it inherit from AppS.REST.Model.Adaptor and %JSON.Adaptor, set the RESOURCENAME and other things and tested using Postman and it worked perfectly! Excellent!
Due to the need to have multiple endpoints for that class for different use cases, I figured I would set it up using the AppS.REST.Model.Proxy, so I created a new class for the Proxy, removed the inheritance in the data class (left %JSON.Adaptor), deleted the RESOURCENAME and other stuff in the data class.
I used the same RESOURCENAME in the proxy that I had used in data class originally.
I compiled the proxy class, and get the message:
ERROR #5001: Resource 'dochead', media type 'application/json' is already in use by class Data.DocHead
ERROR #5090: An error has occurred while creating projection RestProxies.Data.DocHead:ResourceMap.
I've recompiled the entire application with no luck. So there must be a resource defined somewhere that is holding dochead like it was still attached to Data.Dochead via a RESOURCENAME, but that parameter is not in that class anymore.
How do I clear that resource so I can use it in the proxy?
@Richard.Schilke, I'm glad to hear that you're planning on using this, and we're grateful for your feedback.
Quick fix should just be: Do ##class(AppS.REST.ResourceMap).ModelClassDelete("Data.DocHead")
Background: metadata on REST resources and actions is kept in the AppS.REST.ResourceMap and AppS.REST.ActionMap classes. These are maintained by projections, and it seems there's an edge case where data isn't getting cleaned up properly. I've created a GitHub issue as a reminder to find and address the root cause: https://github.com/intersystems/apps-rest/issues/5
That did the trick - thank you so much!
Best practice check: when I have a data class (like Data.DocHead) that will need multiple mappings (Base, Expanded, Reports), is the recommended way to use the proxy class and have a different proxy class for Data.DocHead for each mapping?
For example, RESTProxies.Data.DocHead.Base.cls would be the proxy for the Base mapping in Data.DocHead, while RESTProxies.Data.DocHead.Expanded.cls would be the proxy for the Expanded mapping in Data.DocHead, etc. (the only difference might be the values of the JSONMAPPING and RESOURCENAME parameters)? I'm fine with that, just checking that you don't have some other clever way of doing that...
@Timothy.Leavitt, I've run into another issue.
The proxy is set up and working great for general GET access. But since my system is multi-tenant, wide-open queries are not something I can use, so I decided to try a defined class query in the data class Lookups.Terms:
Query ContactsForClientID(cClientOID As %String) As %SQLQuery
{
SELECT * FROM Lookups.Terms
WHERE ClientID = :cClientOID
ORDER BY TermsCode
}
Then I set up the action mapping in my proxy class RESTProxies.Lookups.Terms.Base:
XData ActionMap [ XMLNamespace = "http://www.intersystems.com/apps/rest/action" ]
{
<actions xmlns="http://www.intersystems.com/apps/rest/action">
  <action name="byClientID" target="class" method="GET" modelClass="Lookups.Terms" query="Lookups.Terms:ContactsForClientID">
    <argument name="clientid" target="cClientOID" source="url"/>
  </action>
</actions>
}
And I invoked this using this URL in a GET call using Postman (last part only):
terms_base/$byClientID?clientid=290
And the result:
406 - Client browser does not accept the MIME type of the requested page.
In the request, I verified that both Content-Type and Accept are set to application/json (snip from the Postman):
So what have I missed?
@Richard.Schilke, yes, having a separate proxy for each mapping would be best practice. You could also have Data.DocHead extend Adaptor for the primary use case and have proxies for the more niche cases (if one case is more significant - typically this would be the most complete representation).
What's the MEDIATYPE parameter in Lookups.Terms (the model class)? The Accept header should be set to that.
Also, you shouldn't need to set Content-Type on a GET, because you're not supplying any content in the request. (It's possible that it's throwing things off.)
If you can reproduce a simple case independent of your code (that you'd be comfortable sharing), feel free to file a GitHub issue and I'll try to knock it out soon.
I'll also note - the only thing that really matters from the class query is the ID. If nothing else is using the query, you could just change it to SELECT ID FROM ... - it'll constitute the model instances based on that. (This is handy because it allows reuse of class queries with different representations.)
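(For illustration only - a minimal sketch of that suggestion, reusing the class and query names from the snippet above:)

Query ContactsForClientID(cClientOID As %String) As %SQLQuery
{
SELECT ID FROM Lookups.Terms
WHERE ClientID = :cClientOID
ORDER BY TermsCode
}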
Good to know and, yes, very handy!
Thank you! I posted an issue with my source to GitHub.
Surfaced another issue this weekend. (I remember when I used to take weekends off, but no whining!)
So I have a multiple linked series of classes in Parent/Child relationships:
DocHead->DocItems->DocItemsBOM->DocItemsBOMSerial
So if I wanted to express all of this in a JSON object, I would need to make the "Default" mapping the one that exposes all the Child Properties, because it looks like I can't control the Mapping of the Child classes from the Parent class.
This doesn't bother me, as I had already written a shell that does this, and your Proxy/Adaptor makes it work even better, but I just wanted to check that the Parent can't tell the Child what Proxy the Child should use to display its JSON. It's even more complicated than that: sometimes I want to show DocHead->DocItems (and stop), while in other use cases I have to show DocHead, DocItems, and DocItemsBOM (and stop), and in other use cases I need the entire stack.
Thanks for posting - I'm taking a look now. This issue is starting to ring a bell; I think this looks like a bug we fixed in another branch internal to my team. (I've had reconciling the GitHub branch and our internal branch on my list for some time - I'll try to at least get this fix in soon.)
Re: customizing mappings of relationship/object properties, see https://docs.intersystems.com/healthconnectlatest/csp/docbook/Doc.View.cls?KEY=GJSON_adaptor#GJSON_adaptor_xdata_define - this is doable in %JSON.Adaptor mapping XData blocks via the Mapping attribute for an object-valued property included in the mapping.
Wow - I think that means I can handle all my use cases with that capability. Nice! Thanks again!
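(For reference, a hedged sketch of what such a property-level mapping might look like, loosely based on the class names in this thread - the relationship, field, and mapping names below are illustrative assumptions, not the actual application code:)

Class Data.DocHead Extends (%Persistent, %JSON.Adaptor)
{

Relationship DocItems As Data.DocItems [ Cardinality = children, Inverse = DocHead ];

/// Alternate JSON mapping: serialize the child DocItems using the
/// "ItemsOnly" mapping assumed to be defined in Data.DocItems
XData HeadWithItems [ XMLNamespace = "http://www.intersystems.com/jsonmapping" ]
{
<Mapping xmlns="http://www.intersystems.com/jsonmapping">
  <Property Name="DocItems" FieldName="docItems" Mapping="ItemsOnly"/>
</Mapping>
}

}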
@Timothy.Leavitt, have you had a chance to see if this error I'm getting on Actions was resolved?
@Richard.Schilke, I'm planning to address it tomorrow or Friday. Keep an eye out for the next AppS.REST release on the Open Exchange - I'll reply again here too. (This will also include a fix for the other issue you reported; I've already merged a PR for that.)
@Timothy.Leavitt, I will be looking for it.
I'm trying to do something with a custom header that I want to provide for the REST calls. Do I have access to the REST headers somewhere in the service, so I can pull the values - something like a %request?
And in something of an edge case, we're calling these REST services from an existing Zen application (for now, as we start a slow pull away from Zen), so the Zen app gets a %Session created for it and then calls the REST service. It seems that InterSystems is managing the license by recognizing that the browser has a session cookie, and it doesn't burn a license for the REST call - that's very nice (but I do have a request in to the WRC about whether that is expected behavior or not, so I don't get surprised if it gets "fixed"!). Does that mean your REST service can see that %Session? That would be very helpful, since we store the user/multi-tenant ID and other important things in there (the %Session, not the cookie).
@Richard.Schilke - on further review, it's an issue with the Action map. See my response in https://github.com/intersystems/apps-rest/issues/7 (and thank you for filing the issue!). I'll still create a new release soon to pick up the projection bug you found.
Regarding headers - you can reference %request anywhere in the REST model classes; it just breaks abstraction a bit. (And for the sake of unit testing, it would be good to behave reasonably if %request happens not to be defined, unless you're planning on using Forgery or equivalent.)
Regarding sessions - yes, you can share a session with a Zen application via a common session cookie path or using GroupById. You can reference this as needed as well, though I'd recommend wrapping any %session (or even %request) dependencies in the user context object that gets passed to CheckPermission().
@Timothy.Leavitt - thanks so much for the response. The Action worked perfectly with your corrections!
I will take your advice and work with the %session/headers in the context object, since that makes the most sense.
What are the plans (if any) to enable features in a resultset such as pagination, filters, and sorting?
Users are horrible, aren't they? No matter what good work you do, they always want more! I appreciate what you have done here, and it will save my company probably hundreds of hours of work, plus it is very elegant...
@Richard.Schilke - great!
We have support for filtering/sorting on the collection endpoints already, though perhaps not fully documented. Pagination is a challenge from a REST standpoint but I'd love to add support for it (perhaps in conjunction with "advanced search") at some point. I'm certainly open to ideas on the implementation there. :)
Users are the best, because if you don't have them, it's all just pointlessly academic. ;)
@Timothy.Leavitt - stuck again.
I'm in ClassMethod UserInfo, and found out some interesting things.
First off, I was wrong about the REST service using the session cookie from the Zen application when it is called from the Zen application. Displaying the %session.SessionId for each call shows that they are all different, and not the same as the SessionId of the Zen application calling the REST service. So the idea that it holds a license for 10 seconds can't be correct, as it seems almost immediate. I ran 20 REST calls to different endpoints in a loop, and I saw a single license increase.
You said I should be able to expose the session cookie of the Zen application, but I don't see a way to do that either.
I can't even find a way to see the header data in the UserInfo ClassMethod of the current REST call.
Sorry to be a pest... but since you're giving answers, I'll keep asking questions!
Have a nice evening... @Richard.Schilke , you should be able to share a session by specifying the same CSP session cookie path for your REST web application and the web application(s) through which your Zen pages are accessed. Alternatively, you could assign the web applications the same GroupById in their web application configuration.
You likely also need to configure your REST handler class (your subclass of AppS.REST.Handler) to use CSP sessions (from your earlier description, I assumed you had). This is done by overriding the UseSession class parameter and setting it to 1 (instead of the default 0).
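(For illustration, a hedged sketch of that override, reusing the handler class shown earlier in this thread:)

Class NPM.REST.Handler Extends AppS.REST.Handler
{

/// Override the default of 0 so this handler uses CSP sessions,
/// allowing the REST calls to share a session with the Zen application
Parameter UseSession = 1;

ClassMethod AuthenticationStrategy() As %Dictionary.CacheClassname
{
Quit ##class(AppS.REST.Authentication.PlatformBased).%ClassName(1)
}

ClassMethod GetUserResource(pFullUserInfo As %DynamicObject) As AppS.REST.Authentication.PlatformUser
{
Quit ##class(AppS.REST.Authentication.PlatformUser).%New()
}

}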
To reference header data in the UserInfo classmethod, you should just be able to use %request (an instance of %CSP.Request) and %response (an instance of %CSP.Response) as appropriate for request/response headers.
💡 This article is considered an InterSystems Data Platform Best Practice.
How would the AppS.REST handler co-exist with a 'spec-first' approach, where the dispatch class should not be modified manually - only by re-importing the API spec?
The AppS.REST user-guide states: 'To augment an existing REST API with AppS.REST features, forward a URL from your existing REST handler to this subclass of AppS.REST.Handler.' How would this work in practice with the above?
Thanks in advance.
Announcement
Anastasia Dyubaylo · Feb 24, 2021
Hi Community,
See how X12 SNIP validation can be used in InterSystems IRIS data platform and how to create a fully functional X12 mapping in a single data transformation language (DTL):
⏯ Building X12 Interfaces in InterSystems IRIS
Subscribe to InterSystems Developers YouTube and stay tuned!
Announcement
Anastasia Dyubaylo · Mar 1, 2021
Hey Developers,
This week is a voting week for the InterSystems Grand Prix Contest! So, it's time to give your vote to the best solutions built with InterSystems IRIS.
🔥 You decide: VOTING IS HERE 🔥
How to vote and what's new?
Any developer can vote for their application - votes will be counted in both Expert and Community nominations automatically (in accordance with the Global Masters level).
All InterSystems employees can vote in both Expert and Community nominations.
With the voting engine and algorithm for the Expert and Community nominations, you can select 3 projects and rank them 1st, 2nd, and 3rd.
This is how it works for the Community leaderboard:
Community Leaderboard:

| Place | Points |
|-------|--------|
| 1st   | 3      |
| 2nd   | 2      |
| 3rd   | 1      |
And there will be more complex math for the Experts leaderboard, where different levels of experts have more "points" power:
Experts Leaderboard:

| Level | 1st Place | 2nd Place | 3rd Place |
|-------|-----------|-----------|-----------|
| VIP level in GM, Moderators, Product Managers | 9 | 6 | 3 |
| Expert level in Global Masters | 6 | 4 | 2 |
| Specialist level in Global Masters | 3 | 2 | 1 |
Experts' votes will also contribute 3-2-1 points to the Community leaderboard too.
Voting
1. Sign in to Open Exchange – DC credentials will work.
2. Make any valid contribution to Developer Community – answer or ask questions, write an article, contribute applications on Open Exchange - and after 24 hours you'll be able to vote. Check this post on the options to make helpful contributions to the Developer Community.
If you change your mind, cancel your choice and give your vote to another application – you have 7 days to choose!
Contest participants are allowed to fix the bugs and make improvements to their applications during the voting week, so don't miss and subscribe to application releases!
➡️ Also, please check out the full voting rules for InterSystems online contest here. Hi Community,
Don't forget to support the application you like!
➡️ Voting here ⬅️
Hi Developers, just a reminder that you can help each other by making pull requests and reporting bugs on GitHub to make the applications even greater!
On Global Masters we award points:
500 points for a submitted pull request + 500 bonus points if PR is merged
500 points for a bug report + 2,500 points more if the bug is fixed (closed)
More info about these challenges is in this article.
Voting for the InterSystems Grand Prix Contest is underway!
And here are the results at the moment:
Expert Nomination, Top 3
vscode-intersystems-iris – 83
iris-rad-studio – 78
HealthInfoQueryLayer – 60
➡️ The leaderboard.
Community Nomination, Top 3
vscode-intersystems-iris – 79
iris-rad-studio – 72
HealthInfoQueryLayer – 71
➡️ The leaderboard.
There were a lot of questions about how to vote: active contributing members of the Developer Community are eligible to vote 24 hours after their first contribution. The contribution should be helpful to the community so that DC moderators can decide on approval faster.
Examples of helpful contributions are listed here.
Hello everyone,
There are a lot of good applications here - good luck to every team, they deserve it!
Developers! Only 1 day left before the end of voting.
Please check out the Contest Board and vote for the solutions you like! 👍🏼
Announcement
Anastasia Dyubaylo · Mar 9, 2021
Hey developers,
We want to hear from you! Give us your feedback on the InterSystems Grand Prix Contest! Please answer some questions to help us improve our contests.
👉🏼 Quick survey: InterSystems Grand Prix Contest Survey
Or please share your thoughts in the comments to this post!
Announcement
Anastasia Dyubaylo · Apr 6, 2021
Hi Community,
Find out how to work with FHIR profiles and conformance resources when building FHIR applications on InterSystems IRIS for Health:
⏯ Working with FHIR Profiles in InterSystems IRIS for Health
👉🏼 Subscribe to InterSystems Developers YouTube.
Enjoy and stay tuned!
Announcement
Marcus Wurlitzer · Apr 21, 2021
Hi Developers, I am glad to announce Git for InterSystems IRIS, my first submission to OpenExchange and part of the current Developer Tools Contest.
Git for InterSystems IRIS is a source control package that aims to facilitate a native integration of the Git workflow with the InterSystems IRIS platform. It is designed to work as a transparent link between InterSystems IRIS and a Git-enabled code directory that, once set up, requires no user interaction. A detailed description can be found on GitHub.
I am looking forward to learning what you think about this approach. Does it make sense? Would this help you with establishing a Git-based deployment pipeline? Are there any issues that may have been overlooked?
A ready-to-run Docker demo is available on Open Exchange. The application is in a usable proof-of-concept state, with some features still to be implemented. I am happy to receive any feedback from you.
Thank you for publishing!!
I am curious... did you start with one of the existing open source Git hooks for ObjectScript, or did you start from scratch with this project?
Hi Ben, the project started as a fork of Caché Tortoize Git, which was a good starting point, and initially I intended to change only a few things. As development went on, however, most of the code was rewritten, and I think only 10-20% is left from the original code. There were just too many differences in the basic concepts, including the globals structure, handling of namespaces and projects, and interaction with Git (hooks -> REST) and Studio (none).
This is really interesting - I've been starting on a similar project with the same starting point.
Got it, Marcus - thanks for the history :)
Thank you Marcus, great initiative! Any thoughts about how to manage environment-specific variables in the pipeline, e.g. different interoperability host configurations for dev / prod?
@Janne.Korhonen - typically these are managed using System Default Settings: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ECONFIG_other#ECONFIG_other_default_settings
Hello Marcus,
Thank you for sharing.
I'm building a dockerised DEV environment.
The main issue I am encountering at the moment is with modifications made from the portal, for example to a Business Process or a Transform. I have to export my new processes manually. If I forget, I just lose them...
I am trying to install and configure Git directly on my image.
The aim is to link my local repo to the container, without having to deal with Git manually at all. This way, I will have automatic exports.
But it is quite difficult to use from the command line.
I always have to perform an Init when I start my container.
I tried to perform an Init from my Dockerfile (do ##class(SourceControl.Git.Utils).UserAction("","%SourceMenu,Init")), but it does not work.
Do you have a clean way to install and configure Git from a Dockerfile?
Thanks
Regards,
Matthieu.
My IRIS start in my Dockerfile:
RUN iris start IRIS \
&& iris session IRIS -U %SYS < /tmp/iris.script \
&& iris stop IRIS quietly
My iris.script:
//Install ZPM
set $namespace="%SYS", name="DefaultSSL" do:'##class(Security.SSLConfigs).Exists(name) ##class(Security.SSLConfigs).Create(name) set url="https://pm.community.intersystems.com/packages/zpm/latest/installer" Do ##class(%Net.URLParser).Parse(url,.comp) set ht = ##class(%Net.HttpRequest).%New(), ht.Server = comp("host"), ht.Port = 443, ht.Https=1, ht.SSLConfiguration=name, st=ht.Get(comp("path")) quit:'st $System.Status.GetErrorText(st) set xml=##class(%File).TempFilename("xml"), tFile = ##class(%Stream.FileBinary).%New(), tFile.Filename = xml do tFile.CopyFromAndSave(ht.HttpResponse.Data) do ht.%Close(), $system.OBJ.Load(xml,"ck") do ##class(%File).Delete(xml)
do ##class(%SYSTEM.Process).CurrentDirectory("/opt/irisapp")
//Load the installer classes and deploy them
do $SYSTEM.OBJ.Load("InstallerLibrary.cls", "ck")
//Install the namespaces
set sc = ##class(App.InstallerLibrary).setup()
// I don't know why, but I have to reset the working directory
do ##class(%SYSTEM.Process).CurrentDirectory("/opt/irisapp")
//Import the default settings + the Git plugin (MAKE SURE TO KEEP THE LINE BREAKS)
zn "LIBRARY"
zpm "install git-source-control"
d ##class(SourceControl.Git.API).Configure()
/irisdev/app/LIBRARY/
// To avoid having to change the SuperUser password.
zn "%SYS"
w ##class(Security.Users).UnExpireUserPasswords("*")
// For the Git plugin to work, the defined path must exist; by default it is an empty string, which makes the plugin crash. Removing it makes it work.
k ^SYS("SourceControl","Git","%gitBinPath")
zn "LIBRARY"
do ##class(SourceControl.Git.Utils).UserAction("","%SourceMenu,Init")
halt
Hi Matthieu,
so you want to use Git for IRIS for an automated export of classes and set it up from the iris.script, which will be invoked in the Dockerfile.
From the code you have pasted, it seems like you use a different Git Source Control implementation (zpm "install git-source-control"). The implementation discussed in this thread would be installed with
zpm "install git-for-iris“
There, you can use the API functions in SourceControl.Git.Utils in the iris.script:
do ##class(SourceControl.Git.Utils).AddDefaultSettings()
do ##class(SourceControl.Git.Utils).AddPackageToSourceControl("<My.Package.Name>", "<MyNamespace>")
do ##class(SourceControl.Git.Utils).SetSourceControlStatus(1)
A default package is added to source control via module.xml for demo purposes, as well as the /csp/user/sc web application for callbacks from git, both of which you may want to remove.
As a final step, you will have to activate the source control class in IRIS. The manual process is described here: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=ASC#ASC_Hooks_activating_sc_class; you might look into the corresponding CSP page to find out how to do it programmatically.
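(As a hedged illustration only - this assumes the generic %Studio.SourceControl.Interface API rather than anything specific to Git for IRIS, and the namespace and source control class names below are placeholders; check the Git for IRIS README for the class it actually ships:)

// Placeholders: use your own namespace and the source control class provided by Git for IRIS
zn "USER"
do ##class(%Studio.SourceControl.Interface).SourceControlClassSet("SourceControl.Git.Utils")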
Hope this helps.
Currently there are a select few of us in the group who use Git and local repos with VS Code, but I want to make this more widespread for our team, as most use the editors in the Management Portal to do their coding.
Does anyone have steps they have used in the past to move towards server-side source control - from creating the repos on your server, to getting the IRIS code into the new repo you created on your server, to pushing it to GitHub?
Git for IRIS has been updated to v0.3. It now provides source control for lookup tables and supports deletion of classes. Also, improvements were made to provisioning default settings, and Git hooks are now disabled by default to avoid unwanted side effects (writing hooks to the .git directory, setting a random password for the technical user).
A complete guide for deploying Git for IRIS to an existing IRIS instance has been added to README.md along with detailed descriptions of settings, Globals and behaviour.
Announcement
Anastasia Dyubaylo · May 10, 2021
Hey developers,
We want to hear from you! Give us your feedback on the past InterSystems Developer Tools Contest! Please answer some questions to help us improve our contests.
👉 Quick survey: InterSystems Developer Tools Contest Survey
Or just share your thoughts in the comments to this post!
Announcement
Anastasia Dyubaylo · Jun 16, 2021
Hey Community,
We want to hear from you! Give us your feedback on the past InterSystems FHIR Accelerator Contest! Please answer some questions to help us improve our contests.
👉 Quick survey: InterSystems FHIR Accelerator Contest Survey
Or just share your thoughts in the comments to this post!
FHIR as a service on the public cloud is not available in China. Is it possible to deploy it in China? Thx! Michael
Announcement
Anastasia Dyubaylo · May 30, 2021
Hi Developers,
Watch the execution of a speed test for a heavy-ingestion use case on InterSystems IRIS:
⏯ InterSystems IRIS Speed Test: High-Volume Ingestion
Try the full demo at https://github.com/intersystems-community/irisdemo-demo-htap
Speakers:
🗣 @Amir.Samary, InterSystems Director, Solution Architecture
🗣 @Derek.Robinson, InterSystems Senior Online Course Developer
Subscribe to InterSystems Developers YouTube and stay tuned!
Announcement
Jeff Fried · Mar 26, 2021
Three new sets of maintenance releases are now available:
Caché 2018.1.5, Ensemble 2018.1.5, and HSAP 2018.1.5
InterSystems IRIS 2019.1.2, IRIS for Health 2019.1.2, and HealthShare Health Connect 2019.1.2
InterSystems IRIS 2020.1.1, IRIS for Health 2020.1.1, and HealthShare Health Connect 2020.1.1
Installation kits and containers can be downloaded from the WRC Software Distribution site.
These are maintenance releases with many updates across a wide variety of areas. For information about the corrections in these releases, refer to the documentation for that version, which includes a Release Notes and Upgrade Checklist, and a Release Changes list, as well as the Class Reference and a full set of guides, references, tutorials, and articles. All documentation can be reached via docs.intersystems.com.
New platform support has also been added to these releases. In particular, Ubuntu 20.04 LTS support has been added to all releases, IBM AIX 7.1 and 7.2 for System p-64 support has been added to 2019.1.2 (and was already in 2020.1), and ARM64 support for Linux was added to 2020.1.1. For details, see the Supported Platforms document for each release.
Build numbers for these releases are shown in the table below:
| Version  | Product                            | Build number     |
|----------|------------------------------------|------------------|
| 2018.1.5 | Caché and Ensemble                 | 2018.1.5.659.0   |
| 2018.1.5 | Caché Evaluation                   | 2018.1.5.659.0su |
| 2018.1.5 | HealthShare Health Connect (HSAP)  | 2018.1.5HS.9056.0 |
| 2019.1.2 | InterSystems IRIS                  | 2019.1.2.718.0   |
| 2019.1.2 | IRIS for Health                    | 2019.1.2.718.0   |
| 2019.1.2 | HealthShare Health Connect         | 2019.1.2.718.0   |
| 2020.1.1 | InterSystems IRIS                  | 2020.1.1.408.0   |
| 2020.1.1 | IRIS for Health                    | 2020.1.1.408.0   |
| 2020.1.1 | HealthShare Health Connect         | 2020.1.1.408.0   |
| 2020.1.1 | InterSystems IRIS Community        | 2020.1.1.408.0   |
| 2020.1.1 | IRIS for Health Community          | 2020.1.1.408.0   |
| 2020.1.1 | IRIS Studio                        | 2020.1.1.408.0   |
Very exciting!! Congratulations to all involved in getting these out the door :)
Announcement
Anastasia Dyubaylo · Apr 13, 2021
Hi Developers,
See how a FHIR implementation can be built in InterSystems IRIS for Health, leveraging both PEX and InterSystems Reports:
⏯ FHIR Implementation Patterns in InterSystems IRIS for Health
👉🏼 Subscribe to InterSystems Developers YouTube.
Enjoy and stay tuned!
Announcement
Nikolay Solovyev · Apr 8, 2021
We released a new version of ZPM (Package Manager)
New in ZPM 0.2.14 release:
Publishing timeout
Embedded vars usage in module parameters
Package installation from Github repo
Transaction support for install, load and publish.
See the details below.
New configuration setting - publish timeout
zpm:USER>config set PublishTimeout 120
Use this setting if you are unable to publish a package due to a bad connection or other problems.
Support for embedded vars in Default values in Module.xml
<Default Name="MyDir" Value="${mgrdir}MySubDir"></Default>
Thanks to @Lorenzo.Scalese for the suggested changes.
Load packages from a repo (git)
zpm:USER>load https://github.com/intersystems-community/zpm-registry
git must be installed.
Transactions
The Load, Install, and Publish commands are now executed in a transaction, which ensures that no changes will remain in the system if problems occur during these operations.
Many bug fixes and improvements
All Docker images https://github.com/intersystems-community/zpm/wiki/04.-Docker-Images are updated and include ZPM 0.2.14
To launch IRIS do:
docker run --name my-iris -d --publish 9091:51773 --publish 9092:52773 intersystemsdc/iris-community:2020.4.0.524.0-zpm
docker run --name my-iris -d --publish 9091:51773 --publish 9092:52773 intersystemsdc/iris-community:2020.3.0.221.0-zpm
docker run --name my-iris -d --publish 9091:51773 --publish 9092:52773 intersystemsdc/iris-ml-community:2020.3.0.302.0-zpm
docker run --name my-iris -d --publish 9091:51773 --publish 9092:52773 intersystemsdc/irishealth-community:2020.4.0.524.0-zpm
docker run --name my-iris -d --publish 9091:51773 --publish 9092:52773 intersystemsdc/irishealth-community:2020.3.0.221.0-zpm
Robert
Thanks @Nikolay.Soloviev!
How to use the load "github repo" feature from a Docker container? It says there is no git inside.
In the Dockerfile:
USER root
## add git
RUN apt update && apt-get -y install git
This works, thank you, Robert!