Announcement
Anastasia Dyubaylo · Oct 18, 2021

InterSystems Interoperability Contest: Cast Your Vote!

Hey Developers,

This week is voting week for the InterSystems Interoperability contest! It's time to give your vote to the best solutions built with InterSystems IRIS.

🔥 You decide: VOTING IS HERE 🔥

How to vote? Details below.

Experts nomination: an experienced InterSystems jury will choose the best apps to nominate for prizes in the Experts Nomination. Please welcome our experts:

⭐️ @Stefan.Wittmann, Product Manager
⭐️ @Robert.Kuszewski, Product Manager
⭐️ @Nicholai.Mitchko, Manager, Solution Partner Sales Engineering
⭐️ @Renan.Lourenco, Solutions Engineer
⭐️ @Jose-Tomas.Salvador, Sales Engineer Manager
⭐️ @Eduard.Lebedyuk, Sales Engineer
⭐️ @Alberto.Fuentes, Sales Engineer
⭐️ @Evgeny.Shvarov, Developer Ecosystem Manager

Community nomination: for each user, the higher score from the two categories below is used:

| Conditions | 1st place | 2nd place | 3rd place |
|---|---|---|---|
| If you have an article posted on DC and an app uploaded to Open Exchange (OEX) | 9 | 6 | 3 |
| If you have at least 1 article posted on DC or 1 app uploaded to OEX | 6 | 4 | 2 |
| If you make any valid contribution to DC (posted a comment/question, etc.) | 3 | 2 | 1 |

| Level | 1st place | 2nd place | 3rd place |
|---|---|---|---|
| VIP Global Masters level or ISC Product Managers | 15 | 10 | 5 |
| Ambassador GM level | 12 | 8 | 4 |
| Expert GM level or DC Moderators | 9 | 6 | 3 |
| Specialist GM level | 6 | 4 | 2 |
| Advocate GM level or ISC Employees | 3 | 2 | 1 |

Blind vote! The number of votes for each app is hidden from everyone. Once a day we will publish the leaderboard in the comments to this post. The order of projects on the Contest Page is as follows: the earlier an application was submitted to the competition, the higher it is in the list.

P.S. Don't forget to subscribe to this post (click on the bell icon) to be notified of new comments.

To take part in the voting, you need to:
- Sign in to Open Exchange – DC credentials will work.
- Make any valid contribution to the Developer Community – answer or ask questions, write an article, contribute applications on Open Exchange – and you'll be able to vote. Check this post for the options to make helpful contributions to the Developer Community.

If you change your mind, cancel your choice and give your vote to another application! Support the application you like!

Note: contest participants are allowed to fix bugs and make improvements to their applications during the voting week, so don't miss out and subscribe to application releases!

Hi, Developers! Here are the results after the first day of voting:

Expert Nomination, Top 3
- IRIS Interoperability Message Viewer by @Henrique.GoncalvesDias
- appmsw-telealerts by @MikhailenkoSergey
- IRIS Big Data SQL Adapter by @YURI MARX GOMES

➡️ Voting is here.

Community Nomination, Top 3
- IRIS Interoperability Message Viewer by @Henrique.GoncalvesDias
- appmsw-telealerts by @MikhailenkoSergey
- IRIS Big Data SQL Adapter by @YURI MARX GOMES

➡️ Voting is here.

If you want to support an application with your like, please read our rules of voting first 😊

Here are the results after 2 days of voting:

Expert Nomination, Top 3
- IRIS Interoperability Message Viewer by @Henrique Dias
- Node-RED node for InterSystems IRIS by @Dmitry.Maslennikov
- appmsw-telealerts by @Sergey Mikhailenko

➡️ Voting is here.

Community Nomination, Top 3
- IRIS Interoperability Message Viewer by @Henrique Dias
- Node-RED node for InterSystems IRIS by @Dmitry.Maslennikov
- appmsw-telealerts by @Sergey Mikhailenko

➡️ Voting is here.

So, the voting continues. Please support the application you like!

Voting for the InterSystems Interoperability contest goes ahead! And here are the results at the moment:

Expert Nomination, Top 3
- Node-RED node for InterSystems IRIS by @Dmitry Maslennikov
- IRIS Big Data SQL Adapter by @Yuri.Gomes
- IRIS Interoperability Message Viewer by @Henrique Dias

➡️ Voting is here.

Community Nomination, Top 3
- IRIS Interoperability Message Viewer by @Henrique Dias
- Node-RED node for InterSystems IRIS by @Dmitry Maslennikov
- IRIS Big Data SQL Adapter by @Yuri.Gomes

➡️ Voting is here.

Hey Developers! 3 days left before the end of voting! Please check out the Contest Board and vote for the applications you like! 👍🏼
Announcement
Anastasia Dyubaylo · Nov 11, 2021

InterSystems Security Contest Kick-off Webinar

Hi Community,

We are pleased to invite all developers to the upcoming InterSystems Security Contest Kick-off Webinar! The topic of this webinar is dedicated to the Security contest. We'll discuss the aspects of Security Model implementation in InterSystems IRIS, the requirements, and what we expect from participants of the Security contest. We'll also answer all the questions related to the contest!

Date & Time: Monday, November 15 — 12:00 AM EDT

Speakers:
🗣 @Andreas.Dieckow, Principal Product Manager at InterSystems Corporation
🗣 @Evgeny.Shvarov, InterSystems Developer Ecosystem Manager

So! We will be happy to talk to you at our webinar!

✅ JOIN THE KICK-OFF WEBINAR!

The Eventbrite record looks a bit strange. Why is it not on sale until 3 weeks after today's kickoff?

Hi John, Thanks for noticing! Fixed. Please join in an hour ;)

Hey Developers, The recording of this webinar is available on InterSystems Developers YouTube! Please welcome:

⏯ InterSystems Security Contest Kick-off Webinar

Big applause to our speakers! 👏🏼
Announcement
Evgeny Shvarov · Dec 1, 2021

InterSystems Security Contest 2021 Bonuses Results

Hi contestants!

We've introduced a set of bonuses for the projects in the Security Contest 2021! Here are the projects that scored them. The bonus categories and their nominal values are: Basic Auth (2), Bearer/JWT (3), OAuth (5), Authorization (2), Auditing (2), Encryption (2), Docker (2), ZPM (2), Online Demo (3), Code Quality (1), Article on DC (2), Video on YouTube (3), for a nominal total of 29. The points awarded to each project, with the total bonus last:

- appmsw-forbid-old-passwd: 2, 2, 2, 1, 2 - Total 9
- isc-apptools-lockdown: 2, -, -, 1, 2 - Total 5
- passwords-tool: 2, 2, 1, 2 - Total 7
- API Security Mediator: 2, 2, 2, 2, 2, 3, 1, 6, 3 - Total 23
- Audit Mediator: 2, 2, 2, 1, 4, 3 - Total 14
- iris-disguise: 2, 2, 1, 4, 3 - Total 12
- iris-saml-example: 5, 2, 2, 2, 3, 1, 2 - Total 17
- Server Manager 3.0 Preview: 2, 4 - Total 6
- appmsw-dbdeploy: 2, 2, 1, 2 - Total 7
- Data_APP_Security: 2, 5, 2, 2, 2, 2, 3, 1, 4, 3 - Total 26
- IRIS Middlewares: 2, 1 - Total 3
- TimeTracking-workers: 2, 2, 1 - Total 5
- zap-api-scan-sample: 2, 1, 4, 3 - Total 10
- https-rest-api: 2 - Total 2

Please apply with your comments here in the post or in Discord. Bonuses are subject to change upon improvements or your requests if we missed something! Good luck in the contest!

Hi Mr. Evgeny Shvarov, Thanks for sharing. Please note that the Data_APP_Security app already has an article on DC. Thanks

I claim 2 points for the Audit Mediator app because of the article https://pt.community.intersystems.com/post/aproveitando-o-banco-de-dados-de-auditoria

I claim 2 points for Security Mediator because of the article https://pt.community.intersystems.com/post/o-poder-do-xdata-aplicado-%C3%A0-seguran%C3%A7a-da-api

I request the online demo bonus for the app (SecurityMediator): http://ymservices.tech:52773/crud/

I request the Auditing bonus for the use of Auditing in the SecurityMediator app. Evidence: https://github.com/yurimarx/iris-api-security-mediator/blob/a3fa1863cd8318ace38d554ef3956694e6b28ef4/src/dc/SecurityMediator/SecurityMediator.cls#L66

I request the bonus for the use of basic auth; my app is a REST API that requires basic auth. If you access my app online, you have to provide a user and password (http://ymservices.tech:52773/crud/). Evidence: https://github.com/yurimarx/iris-api-security-mediator/blob/a3fa1863cd8318ace38d554ef3956694e6b28ef4/module.xml#L21

I just posted another article about my Server Manager 3 entry: https://community.intersystems.com/post/server-manager-now-showcasing-vs-codes-new-support-pre-release-extensions

I didn't know that translated articles give points, so if it is allowed, here is my article in Portuguese: https://pt.community.intersystems.com/post/anonimiza%C3%A7%C3%A3o-de-dados-apresentando-iris-disguise

Hi, I am requesting points for the translated article in Spanish for the Data_APP_Security app: https://es.community.intersystems.com/post/c%C3%B3mo-crear-usuarios-conceder-privilegios-habilitardeshabilitar-y-autentificarinvalidar Thanks

Hi, Please add the video on YouTube: https://www.youtube.com/watch?v=Ofyf0IdakeY Thanks

I claim 2 more points (Security Mediator) for my new article https://community.intersystems.com/post/objectscript-rest-api-cookbook

Hey Mr. Evgeny Shvarov, Please add points for code quality for appmsw-forbid-old-passwd (+1) and isc-apptools-lockdown (+1). I also ask you to add +2 for appmsw-forbid-old-passwd: a program that checks the correctness of the password entered by the user is definitely an authorization component.

Please add points for the quality of the code for appmsw-dbdeploy; all errors are due to the fact that the code quality program cannot work with %ZPM. https://community.objectscriptquality.com/dashboard?id=intersystems_iris_community%2Fappmsw-dbdeploy

Hi Mr. Evgeny Shvarov, Please note that I have added OAuth2 support along with an article for the Data_APP_Security app. Thanks

Hi @Evgeny.Shvarov, We are here to request bonus points for Gryffindor ZAP API Scan Sample (zap-api-scan-sample): 1 article in English: https://community.intersystems.com/post/why-how-whats-zap-api-scan-sample and 1 article in Portuguese: https://pt.community.intersystems.com/post/por-que-como-o-que-%C3%A9-zap-api-scan-sample ![funny](https://media.giphy.com/media/Q7ozWVYCR0nyW2rvPW/giphy.gif) :)
Announcement
Evgeny Shvarov · Aug 3, 2021

Join InterSystems Discord While DC Is Not Available

Hi Developers!

Currently, we are experiencing technical issues with DC sign-in - you may not be able to sign in and contribute to the Developer Community. Our engineers are already working on the issue, and we are committed to restoring service quickly.

To stay in touch, let's continue our tech talks on the InterSystems Developers Discord Server 👈

Thank you for your patience!
Announcement
Evgeny Shvarov · Aug 26, 2021

Technology Bonuses for InterSystems Analytics Contest 2021

Hi Developers!

Here're the technology bonuses for the InterSystems Analytics contest that will give you extra points in the voting.

Adaptive Analytics (AtScale) Cubes usage - 4 points
InterSystems Adaptive Analytics provides the option to create and use AtScale cubes for analytics solutions. You can use the AtScale server we set up for the contest (URL and credentials can be collected in the Discord channel) to use cubes, or create a new one and connect to your IRIS server via JDBC. The visualization layer for your analytics solution with AtScale can be crafted with Tableau, PowerBI, Excel, or Logi. Documentation, AtScale documentation, Training.

Tableau, PowerBI, Logi usage - 3 points
Collect 3 points for the visualization you made with Tableau, PowerBI, or Logi - 3 points each. The visualization can be made against the IRIS BI server directly or via the connection with AtScale. Logi is available as part of the InterSystems Reports solution - you can download the composer on InterSystems WRC. A temporary license can be collected in the Discord channel. Documentation, Training.

InterSystems IRIS BI - 3 points
InterSystems IRIS Business Intelligence is a feature of IRIS which gives you the option to create BI cubes and pivots against persistent data in IRIS and then deliver this information to users via interactive dashboards. Learn more. The basic iris-analytics-template contains examples of an IRIS BI cube, pivot, and dashboard.
Here is a set of examples of IRIS BI solutions: Samples BI, Covid19 analytics, Analyze This, Game of Throne Analytics, Pivot Subscriptions, Error Globals Analytics.
Learning resources: Creating InterSystems IRIS BI Solutions Using Docker & VSCode (video), The Freedom of Visualization Choice: InterSystems BI (video), InterSystems BI (DeepSee) Overview (online course), InterSystems BI (DeepSee) Analyzer Basics (online course).

InterSystems IRIS NLP (iKnow) - 3 points
InterSystems NLP, a.k.a. iKnow, is an InterSystems IRIS feature: a library for Natural Language Processing that identifies entities (phrases) and their semantic context in natural language text in English, German, Dutch, French, Spanish, Portuguese, Swedish, Russian, Ukrainian, Czech and Japanese. Learn more about iKnow on Open Exchange. Examples: Covid iKnow, Text Navigator, Samples Aviation, and more. Use iKnow to manage unstructured data in your analytics solution and get 1 bonus point.

Docker container usage - 2 points
The application gets a 'Docker container' bonus if it uses InterSystems IRIS running in a docker container. Here is the simplest template to start from.

ZPM Package deployment - 2 points
You can collect the bonus if you build and publish the ZPM (ObjectScript Package Manager) package for your application so it can be deployed with the command zpm "install your-multi-model-solution" on IRIS with the ZPM client installed. ZPM client. Documentation.

Unit Testing - 2 points
Applications that have unit testing for the InterSystems IRIS code will collect the bonus. Learn more about ObjectScript unit testing in the Documentation and on the Developer Community. A minimal example is sketched at the end of this post.

Online Demo of your project - 3 points
Collect 3 more bonus points if you provision your project to the cloud as an online demo. You can use this template or any other deployment option. Example. Here is the video on how to use it.

Code quality analysis with zero bugs - 2 points
Include the code quality GitHub action for static code control and make it show 0 bugs for ObjectScript.

Article on Developer Community - 2 points
Post an article on the Developer Community that describes the features of your project. Collect 2 points for each article. Translations to different languages work too.

Video on YouTube - 3 points
Make a YouTube video that demonstrates your product in action and collect 3 bonus points per video. Example.

The list of bonuses is subject to change. Stay tuned!
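For the Unit Testing bonus above, here is a minimal, hedged sketch of an ObjectScript unit test. The class, package, and directory names are illustrative assumptions only, and it assumes the test class has been exported to a "Demo" subdirectory under the directory referenced by ^UnitTestRoot:

Class Demo.Tests.MyTest Extends %UnitTest.TestCase
{

/// Test methods must start with "Test"; they are discovered and run automatically
Method TestBasics()
{
    Do $$$AssertEquals(1 + 1, 2, "1 + 1 should equal 2")
    Do $$$AssertTrue($Length("IRIS") = 4, "Length of 'IRIS' should be 4")
}

}

Then, in a terminal session (the path is an assumed example):

Set ^UnitTestRoot = "/opt/app/tests/"
Do ##class(%UnitTest.Manager).RunTest("Demo")

The manager loads, compiles, and runs the test classes found there and reports the assertion results, which is typically enough to demonstrate the bonus.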
Announcement
Anastasia Dyubaylo · Sep 10, 2021

Video: Best Practices for InterSystems API Manager

Hey Community,

A new video is now available on InterSystems Developers YouTube:

⏯ Best Practices for InterSystems API Manager

Learn how to set up your services and routes to leverage the full potential of InterSystems API Manager (IAM) and how to use workspaces effectively. See an example of a good workflow for OpenAPI specification files and learn about High Availability setup.

🗣 Presenter: @Stefan.Wittmann, Product Manager, InterSystems

Enjoy and stay tuned!
Announcement
Evgeny Shvarov · Nov 15, 2021

Technology Bonuses for InterSystems Security Contest 2021

Hi Developers!

Here're the technology bonuses for the Security Contest 2021 that will give you extra points in the voting:

Basic Authentication usage - 2
Bearer/JWT Authentication usage - 3
OAuth 2.0 usage - 5
Authorization components usage - 2
Auditing usage - 2
Data Encryption usage - 2
Docker container usage - 2
ZPM Package deployment - 2
Online Demo - 2
Code Quality pass - 1
Article on Developer Community - 2
Video on YouTube - 3

See the details below.

Basic Authentication usage - 2 points
Implement basic authentication (user, password) in your application with InterSystems IRIS as a backend. The backend can be, e.g., a REST API web application.

Bearer/JWT Authentication usage - 3 points
Implement bearer or token authentication in your application with InterSystems IRIS as a backend. The backend can be a REST API web application.

OAuth 2.0 usage - 5 points
Implement OAuth 2.0 authentication in your application as a client and collect 5 bonus points! We expect sign-in to your app, e.g., via Google or GitHub users. Learn more in the Documentation.

Authorization components usage - 2 points
InterSystems authorization is built with the concepts of Users, Roles, and Resources (learn more in the documentation). Implement it in your app to collect 2 more expert points.

Auditing usage - 2 points
InterSystems IRIS provides the auditing capability - logging of system or user-defined events (Documentation). Your application can earn 2 more bonus points if it contains code that uses the Auditing feature of InterSystems IRIS. A minimal sketch of logging a user-defined audit event appears at the end of this post.

Data Encryption usage - 2 points
InterSystems IRIS provides the option of database data encryption. Use it in your application programmatically and collect 2 extra bonus points. Learn more in the documentation.

Docker container usage - 2 points
The application gets a 'Docker container' bonus if it uses InterSystems IRIS running in a docker container. Here is the simplest template to start from.

ZPM Package deployment - 2 points
You can collect the bonus if you build and publish the ZPM (ObjectScript Package Manager) package for your application so it can be deployed with the command zpm "install your-multi-model-solution" on IRIS with the ZPM client installed. ZPM client. Documentation.

Online Demo of your project - 2 points
Collect 3 more bonus points if you provision your project to the cloud as an online demo. You can do it on your own or you can use this template - here is an Example. Here is the video on how to use it.

Code quality pass with zero bugs - 1 point
Include the code quality GitHub action for static code control and make it show 0 bugs for ObjectScript.

Article on Developer Community - 2 points
Post an article on the Developer Community that describes the features of your project. Collect 2 points for each article. Translations to different languages work too.

Video on YouTube - 3 points
Make a YouTube video that demonstrates your product in action and collect 3 bonus points per video. Example.

The list of bonuses is subject to change. Stay tuned! Good luck in the competition!

An idea for a new bonus point criterion: at least 50% of the code written in ObjectScript according to the GitHub language calculation.
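Relating to the Auditing bonus above, here is a minimal, hedged ObjectScript sketch of logging a user-defined audit event. The event source/type/name ("MyApp"/"Security"/"LoginAttempt") are illustrative assumptions and must already be created and enabled in the Management Portal (Security > Auditing); check the %SYSTEM.Security class reference for the exact Audit() signature on your version:

// Build some event data and record a user-defined audit event; returns 1 on success, 0 otherwise
Set eventData = "user="_$Username_", ip="_$System.Process.ClientIPAddress()
Set ok = $SYSTEM.Security.Audit("MyApp", "Security", "LoginAttempt", eventData, "Login attempt via the REST API")
If 'ok { Write "Audit event was not recorded - is the event defined and enabled?", ! }

The recorded entries then show up in the audit database alongside the system events, which is an easy way to demonstrate the bonus in a demo or video.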
Announcement
Anastasia Dyubaylo · Jun 23, 2022

[Webinar] InterSystems IRIS Tech Talk: Kafka

Hey Community,

In this webinar, we'll discuss how you can now easily integrate Apache Kafka with the InterSystems IRIS data platform, both via interoperability productions and programmatically via API calls, as both a producer and a consumer of Apache Kafka messages:

⏯ InterSystems IRIS Tech Talk: Kafka

Presenters:
🗣 @Kinshuk.Rakshith, InterSystems Sales Engineer
🗣 @Robert.Kuszewski, InterSystems Product Manager

Enjoy watching the new video on InterSystems Developers YouTube and stay tuned!
Article
Timothy Leavitt · Jun 28, 2022

Unique indices and null values in InterSystems IRIS

An interesting pattern around unique indices came up recently (in internal discussion re: isc.rest) and I'd like to highlight it for the community.

As a motivating use case: suppose you have a class representing a tree, where each node also has a name, and we want nodes to be unique by name and parent node. We want each root node to have a unique name too. A natural implementation would be:

Class DC.Demo.Node Extends %Persistent
{

Property Parent As DC.Demo.Node;

Property Name As %String [ Required ];

Index ParentAndName On (Parent, Name) [ Unique ];

Storage Default
{
<Data name="NodeDefaultData">
<Value name="1">
<Value>%%CLASSNAME</Value>
</Value>
<Value name="2">
<Value>Parent</Value>
</Value>
<Value name="3">
<Value>Name</Value>
</Value>
</Data>
<DataLocation>^DC.Demo.NodeD</DataLocation>
<DefaultData>NodeDefaultData</DefaultData>
<IdLocation>^DC.Demo.NodeD</IdLocation>
<IndexLocation>^DC.Demo.NodeI</IndexLocation>
<StreamLocation>^DC.Demo.NodeS</StreamLocation>
<Type>%Storage.Persistent</Type>
}

}

And there we go! But there's a catch: as it stands, this implementation allows multiple root nodes to have the same name. Why? Because Parent is not (and shouldn't be) a required property, and IRIS does not treat null as a distinct value in unique indices. Some databases (e.g., SQL Server) do, but the SQL standard says they're wrong [citation needed; I saw this on StackOverflow somewhere but that doesn't really count - see also @Daniel.Pasco 's comment below on this and the distinction between indices and constraints].

The way to get around this is to define a calculated property that's set to a non-null value if the referenced property is null, then put the unique index on that property. For example:

Property Parent As DC.Demo.Node;

Property Name As %String [ Required ];

Property ParentOrNUL As %String [ Calculated, Required, SqlComputeCode = {Set {*} = $Case({Parent},"":$c(0),:{Parent})}, SqlComputed ];

Index ParentAndName On (ParentOrNUL, Name) [ Unique ];

This also allows you to pass $c(0) to ParentAndNameOpen/Delete/Exists to identify a root node uniquely by parent (there isn't one) and name.

As a motivating example where this behavior is very helpful, see https://github.com/intersystems/isc-rest/blob/main/cls/_pkg/isc/rest/resourceMap.cls. Many rows can have the same set of values for two fields (DispatchOrResourceClass and ResourceName), but we want at most one of them to be treated as the "default", and a unique index works perfectly to enforce this if we say the "default" flag can be set to either 1 or null, then put a unique index on it and the two other fields.

Maybe I'm missing something, but, beyond how nulls are treated, if you want Parent to be unique within this definition you must define a unique index on Parent (alone). The index you have defined only guarantees that the combination (Parent, Name) will be unique. Even if you declare the property as required, it still wouldn't solve the uniqueness requirement.

parameter INDEXNULLMARKER;
Override this parameter value to specify what value should be used as a null marker when a property of the type is used in a subscript of an index map. The default null marker used is -1E14, if none is specified for the datatype. However, the %Library.PosixTime and %Library.BigInt datatypes could have values that collate before -1E14, and this means null values would not sort before all non-NULL values.
For beauty, I would also use the value of this parameter, for example:

Class dc.test Extends %Persistent
{

Property idp As dc.test;

Property idpC As %Integer(INDEXNULLMARKER = "$c(0)") [ Calculated, Private, Required, SqlComputeCode = {s {*}=$s({idp}="":$c(0),1:{idp})}, SqlComputed ];

Property Name As %String [ Required ];

Index iUnq On (idpC, Name) [ Unique ];

ClassMethod Test()
{
  d ..%KillExtent()
  &sql(insert into dc.test(Name,idp)values('n1',1)) w SQLCODE,! ;0
  &sql(insert into dc.test(Name,idp)values('n2',1)) w SQLCODE,! ;0
  &sql(insert into dc.test(Name,idp)values('n2',1)) w SQLCODE,!! ;-119
  &sql(insert into dc.test(Name,idp)values('n1',null)) w SQLCODE,! ;0
  &sql(insert into dc.test(Name,idp)values('n2',null)) w SQLCODE,! ;0
  &sql(insert into dc.test(Name,idp)values('n2',null)) w SQLCODE,!! ;-119
  zw ^dc.testD,^dc.testI
}

}

Output:

USER>d ##class(dc.test).Test()
0
0
-119

0
0
-119

^dc.testD=4
^dc.testD(1)=$lb("",1,"n1")
^dc.testD(2)=$lb("",1,"n2")
^dc.testD(3)=$lb("","","n1")
^dc.testD(4)=$lb("","","n2")
^dc.testI("iUnq",1," N1",1)=""
^dc.testI("iUnq",1," N2",2)=""
^dc.testI("iUnq",$c(0)," N1",3)=""
^dc.testI("iUnq",$c(0)," N2",4)=""

Or:

Property idpC As %Integer [ Calculated, Private, Required, SqlComputeCode = {s {*}=$s({idp}="":$$$NULLSubscriptMarker,1:{idp})}, SqlComputed ];

Output:

USER>d ##class(dc.test).Test()
0
0
-119

0
0
-119

^dc.testD=4
^dc.testD(1)=$lb("",1,"n1")
^dc.testD(2)=$lb("",1,"n2")
^dc.testD(3)=$lb("","","n1")
^dc.testD(4)=$lb("","","n2")
^dc.testI("iUnq",-100000000000000," N1",3)=""
^dc.testI("iUnq",-100000000000000," N2",4)=""
^dc.testI("iUnq",1," N1",1)=""
^dc.testI("iUnq",1," N2",2)=""

What about defining a super parent, which is the only node with a NULL parent? In that case every other root node has this node as a parent and filtering is easy (WHERE Parent IS NULL filters only the super parent). And there's no need for an additional calculated property in that case.

Thank you for pointing this out! I saw this in the docs but believe it wouldn't work for object-valued properties.

There would still need to be some enforcement of the super parent being the only node with a NULL parent (and the point here is that the unique index wouldn't do that). Also, finding all of the top-level nodes (assuming we could have multiple independent trees) would be slightly more complicated.

I have to throw in my opinions and possibly a few facts regarding nulls and unique constraints. IRIS Unique index - this is primarily a syntactical shortcut as it defines not only an index but also a unique constraint on the index key. Most pure SQL implementations don't merge the two concepts, and the SQL standard doesn't define indexes. The SQL Standard does define unique constraints. Keep in mind that both IDKEY and PRIMARYKEY are modifiers of a unique constraint (and, in our world, the index defined as IDKEY is also special). There can be at most one index flagged as IDKEY and one that is flagged as PRIMARYKEY. An index can be both PRIMARYKEY and IDKEY. There was once an SQL implementation that defined syntax for both "unique index" and "unique constraint" with different rules. The difference between them was simple - if an index is not fully populated (not all rows in the table appear in the index - we call this a conditional index) then the unique index only checked for uniqueness in the rows represented in the index. A unique constraint applies to all rows. Also keep in mind that an index exists for a singular purpose - to improve the performance of a subset of queries.
Any SQL constraint can be expressed as a query. The SQL Standard is a bit inconsistent when it comes to null behavior. In the Framework document there is this definition: "A unique constraint specifies one or more columns of the table as unique columns. A unique constraint is satisfied if and only if no two rows in a table have the same non-null values in the unique columns." In the Foundation document, there exist two optional features, F291 and F292. These features define a unique predicate (291) and unique null treatment (292). They appear to provide syntax where the user can define the "distinct-ness" of nulls. Both are optional features, and both are relatively recent (2003? 2008?). The rule when these features are not supported is left to the implementor. IRIS is consistent with the Framework document statement - all constraints are enforced on non-null keys only. A "null" key is defined as a key in which any key column value is null.

Thank you Dan! The index/constraint distinction and SQL standard context are particularly helpful facts for this discussion. :)

The best part of this article is "IRIS does not treat null as a distinct value in unique indices".
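To make the "catch" from the article above concrete, here is a minimal, hedged sketch: a demo class method added to DC.Demo.Node as originally defined (before ParentOrNUL is introduced). The expected results in the comments are assumptions based on the behavior the article describes, not verified output:

ClassMethod DuplicateRootDemo()
{
    // Start from an empty extent for a repeatable demo
    Do ##class(DC.Demo.Node).%KillExtent()

    Set n1 = ##class(DC.Demo.Node).%New()
    Set n1.Name = "Root"
    Write $$$ISOK(n1.%Save()), !   ; 1 - the first root node saves

    Set n2 = ##class(DC.Demo.Node).%New()
    Set n2.Name = "Root"
    Write $$$ISOK(n2.%Save()), !   ; 1 with the original index: Parent is null on both objects,
                                   ; so the unique ParentAndName index does not block the duplicate name;
                                   ; expected to become 0 once ParentOrNUL is part of the unique index
}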
Announcement
Olga Zavrazhnova · Jul 11, 2022

Boston InterSystems Developer Meetup on Python Interoperability

Hi Community,

We are excited to announce that InterSystems Developers Meetups are finally back in person! The first Python-related meetup will take place on July 21 at 6:00 pm at Democracy Brewing, Boston, MA. There will be 2-3 short presentations related to Python, Q&A and networking sessions, as well as free beer with snacks and brewery tours.

AGENDA:
6:00-6:20 pm - First round of drinks, appetizers
6:20-6:40 pm - "Run Your Python Code & Your Data Together" by @Robert.Kuszewski, Product Manager - Developer Experience at InterSystems
6:40-7:00 pm - "Enterprise Application Integration in Python" by @Michael.Breen, Senior Support Specialist at InterSystems
7:00-7:20 pm - Open Mic & Project Discussion: Working on something exciting? Feeling the need to share it? Drop us a line and we can add you to the lineup!
7:20-8:30 pm - Brewery tours & Networking

Don't miss this excellent opportunity to discuss new solutions in the great company of like-minded peers. Networking is strongly recommended! :)

>> REGISTER HERE <<

⏱ Time: July 21, 6:00 - 8:30 p.m.
📍 Place: Democracy Brewing, 35 Temple Place in Downtown Crossing, Boston, MA
https://www.democracybrewing.com/
Announcement
Evgeny Shvarov · Jul 25, 2022

HealthConnect and InterSystems IRIS For Health Comparison

Hi Community! There is a new PDF Resource published on our official site depicting key features and a comparison of InterSystems healthcare interoperability products: Health Connect and IRIS For Health. I think this could be useful for the Community. The PDF is also attached to the post.
Announcement
Anastasia Dyubaylo · Aug 16, 2022

InterSystems Interoperability Contest: Building Sustainable Solutions

Hello Developers!

Want to show off your interoperability skills? Take part in our next exciting contest:

🏆 InterSystems Interoperability Contest: Building Sustainable Solutions 🏆

Duration: August 29 - September 18
More prizes: $13,500 – prize distribution has changed!

>> Submit your application here <<

The topic

💡 Interoperability solutions for InterSystems IRIS and IRIS for Health 💡

Develop any interoperability solution, or a solution that helps to develop and/or maintain interoperability solutions, using InterSystems IRIS or InterSystems IRIS for Health. In addition, we invite developers to try their hand at solving one of the global issues. This time it will be a Sustainable Development Problem. We encourage you to join this competition and build solutions aimed at solving sustainability issues:

1) You will receive a special bonus if your application can address sustainability issues, ESG, alternative energy sources, optimum utilization, etc.
2) There will also be another bonus if you prepare and submit a dataset related to sustainability, ESG, alternative energy sources, or optimum utilization.

General Requirements:
- Accepted applications: apps that are new to Open Exchange, or existing ones with a significant improvement. Our team will review all applications before approving them for the contest.
- The application should work on IRIS Community Edition, IRIS for Health Community Edition, or IRIS Advanced Analytics Community Edition.
- The application should be Open Source and published on GitHub.
- The README file of the application should be in English, contain the installation steps, and contain either a video demo and/or a description of how the application works.

🆕 Contest Prizes: You asked – we did it! This time we've increased the prizes and changed the prize distribution!

1. Experts Nomination – winners will be selected by the team of InterSystems experts:
🥇 1st place - $5,000
🥈 2nd place - $3,000
🥉 3rd place - $1,500
🏅 4th place - $750
🏅 5th place - $500
🌟 6-10th places - $100

2. Community winners – applications that receive the most votes in total:
🥇 1st place - $1,000
🥈 2nd place - $750
🥉 3rd place - $500

✨ Global Masters badges for all winners included!

Note: If several participants score the same number of votes, they are all considered winners, and the money prize is shared among them.

Important Deadlines:

🛠 Application development and registration phase:
August 29, 2022 (00:00 EST): Contest begins.
September 11, 2022 (23:59 EST): Deadline for submissions.

✅ Voting period:
September 12, 2022 (00:00 EST): Voting begins.
September 18, 2022 (23:59 EST): Voting ends.

Note: Developers can improve their apps throughout the entire registration and voting period.

Who Can Participate?
- Any Developer Community member, except for InterSystems employees (ISC contractors allowed). Create an account!
- Developers can team up to create a collaborative application. Teams of 2 to 5 developers are allowed. Do not forget to highlight your team members in the README of your application – DC user profiles.

Helpful Resources:

✓ Instruqt plays: InterSystems Interoperability: Reddit Example
✓ Example applications: interoperability-embedded-python, IRIS-Interoperability-template, ETL-Interoperability-Adapter, HL7 and SMS Interoperability Demo, UnitTest DTL HL7, Twitter Sentiment Analysis with IRIS, Healthcare HL7 XML, RabbitMQ adapter, PEX demo, iris-megazord
✓ Online courses: Interoperability for Business, Interoperability QuickStart, Interoperability Resource Guide - 2019
✓ Videos: Intelligent Interoperability, Interoperability for Health Overview
✓ For beginners with IRIS: Build a Server-Side Application with InterSystems IRIS, Learning Path for beginners
✓ For beginners with ObjectScript Package Manager (ZPM): How to Build, Test and Publish ZPM Package with REST Application for InterSystems IRIS, Package First Development Approach with InterSystems IRIS and ZPM
✓ How to submit your app to the contest: How to publish an application on Open Exchange, How to submit an application for the contest

Need Help? Join the contest channel on InterSystems' Discord server or talk with us in the comments to this post.

We can't wait to see your projects! Good luck 👍

By participating in this contest, you agree to the competition terms laid out here. Please read them carefully before proceeding.

This article from MIT Enterprise Forum might help you get some inspiration about sustainability issues and spark ideas on what to develop: https://mitefcee.org/sustainability-challenges/

Is it required to use DTL, BPL and adapters to be considered an interoperability solution?

You should have a production, receive data in services, send it to operations via messages, and transmit it further via operations. So BPL/DTL usage is not mandatory, but it will give you bonus points. (A minimal sketch of this service-message-operation pattern is shown at the end of this post.)

Added an Instruqt resource to try InterSystems Interoperability.

Developers! The technology bonuses have been announced! You can gain extra points in the voting with them! Please check here!

Also added iris-megazord by @Henrique.GonçalvesDias, @Henry.HamonPereira, and @José.Pereira. It has a very beautiful Flow Editor for interoperability productions. Not sure if the team will apply again with the solution, but it can at least inspire participants.

Please help me with the below.

Fixed with the proper link https://play.instruqt.com/embed/intersystems/tracks/interop?token=em_XtBbRAsjw_hy2-c2 Thank you!

Dear Developers, Registration for the contest ends this Sunday. Hurry up to submit your application, and you will still have a whole voting week to improve it) Happy coding! 🤩

Developers! Tomorrow is the last day to register for the contest! We are waiting for your applications!

Hello Irina, I'm happy to say that I have joined the competition and that my application has been submitted!! Here is the link to my app and the link to the DC post about it: https://openexchange.intersystems.com/package/Sustainable-Machine-Learning https://community.intersystems.com/post/sustainable-machine-learning-intersystems-interoperability-contest

Guess who's back?! Yaaay!! ;)))
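Following up on the service-message-operation answer above, here is a minimal, hedged ObjectScript sketch of that production pattern. All class names, the target name, and the sample values are illustrative assumptions, not contest requirements; a real service would normally sit behind an inbound adapter and parse its input:

Class Demo.Sustain.ReadingRequest Extends Ens.Request
{

Property SensorId As %String;

Property Value As %Numeric;

}

Class Demo.Sustain.ReadingService Extends Ens.BusinessService
{

/// Receive incoming data and forward it to an operation as a message
Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    Set req = ##class(Demo.Sustain.ReadingRequest).%New()
    Set req.SensorId = "sensor-01"  ; assumed: in a real service this would be parsed from pInput
    Set req.Value = 42
    Quit ..SendRequestAsync("Demo.Sustain.StoreOperation", req)
}

}

Class Demo.Sustain.StoreOperation Extends Ens.BusinessOperation
{

/// Handle the message sent by the service; persist or forward the reading here
Method OnMessage(pRequest As Demo.Sustain.ReadingRequest, Output pResponse As Ens.Response) As %Status
{
    $$$TRACE("Received "_pRequest.SensorId_" = "_pRequest.Value)
    Quit $$$OK
}

}

Add the service and operation to a production, and every reading flows through the message queues and is visible in the message viewer; a BPL process or DTL transformation can later be inserted between them for the extra bonus points.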
Article
Murray Oldfield · Mar 11, 2016

InterSystems Data Platforms and performance – Part 2

In the last post we scheduled 24-hour collections of performance metrics using pButtons. In this post we are going to look at a few of the key metrics that are being collected and how they relate to the underlying system hardware. We will also start to explore the relationship between Caché (or any of the InterSystems Data Platforms) metrics and system metrics, and how you can use these metrics to understand the daily beat rate of your systems and diagnose performance problems.

[A list of other posts in this series is here](https://community.intersystems.com/post/capacity-planning-and-performance-series-index)

***Edited Oct 2016...*** *[Example of script to extract pButtons data to a .csv file is here.](https://community.intersystems.com/post/extracting-pbuttons-data-csv-file-easy-charting)*

***Edited March 2018...*** Images had disappeared, added them back in.

# Hardware food groups

![Hardware Food Groups](https://community.intersystems.com/sites/default/files/inline/images/foodgroups_0.png "Hardware Food Groups")

As you will see as we progress through this series of posts, the server components affecting performance can be itemised as:

- CPU
- Memory
- Storage IO
- Network IO

If any of these components is under stress then system performance and user experience will likely suffer. These components are all related to each other as well; changes to one component can affect another, sometimes with unexpected consequences. I have seen an example where fixing an IO bottleneck in a storage array caused CPU usage to jump to 100%, resulting in even worse user experience as the system was suddenly free to do more work but did not have the CPU resources to service increased user activity and throughput.

We will also see how Caché system activity has a direct impact on server components. If there are limited storage IO resources, a positive change that can be made is increasing system memory and increasing memory for __Caché global buffers__, which in turn can lower __system storage read IO__ (but perhaps increase CPU!).

One of the most obvious system metrics to monitor regularly, or check when users report problems, is CPU usage - looking at _top_ or _nmon_ on Linux or AIX, or _Windows Performance Monitor_. Because most system administrators look at CPU data regularly, especially if it is presented graphically, a quick glance gives you a good feel for the current health of your system -- what is normal, or a sudden spike in activity that might be abnormal or indicate a problem. In this post we are going to look quickly at CPU metrics, but will concentrate on Caché metrics. We will start by looking at _mgstat_ data and how looking at the data graphically can give a feel for system health at a glance.

# Introduction to mgstat

mgstat is one of the Caché commands included and run in pButtons. mgstat is a great tool for collecting basic performance metrics to help you understand your system's health. We will look at mgstat data collected from a 24-hour pButtons run, but if you want to capture data outside pButtons, mgstat can also be run on demand interactively or as a background job from Caché terminal.

To run mgstat on demand from the %SYS namespace, the general format is:

    do ^mgstat(sample_time,number_of_samples,"/file_path/file.csv",page_length)

For example, to run a background job for a one hour run with a 5 second sample period and output to a csv file:

    job ^mgstat(5,720,"/data/mgstat_todays_date_and_time.csv")

For example, to display to the screen but dropping some columns, use the dsp132 entry.
I will leave it as homework for you to check the output to understand the difference.

    do dsp132^mgstat(5,720,"",60)

> Detailed information on the columns in mgstat can be found in the _Caché Monitoring Guide_ in the most recent Caché documentation:
> [InterSystems online documentation](https://docs.intersystems.com)

# Looking at mgstat data

pButtons has been designed to be collated into a single HTML file for easy navigation and packaging for sending to WRC support specialists to diagnose performance problems. However, when you run pButtons for yourself and want to graphically display the data, it can be separated again into a csv file for processing into graphs, for example with Excel, by command line script or simple cut and paste. In this post we will dig into just a few of the mgstat metrics to show how even a quick glance at the data can give you a feel for whether the system is performing well or there are current or potential problems that will affect the user experience.

## Glorefs and CPU

The following chart shows database server CPU usage at a site running a hospital application at a high transaction rate. Note the morning peak in activity when there are a lot of outpatient clinics, with a drop-off at lunch time, then tailing off in the afternoon and evening. In this case the data came from Windows Performance Monitor (_Total)\% Processor Time. The shape of the graph fits the working day profile - no unusual peaks or troughs, so this is normal for this site. By doing the same for your site you can start to get a baseline for "normal". A big spike, especially an extended one, can be an indicator of a problem; there is a future post that focuses on CPU.

![CPU Time](https://community.intersystems.com/sites/default/files/inline/images/cpu_time.png "CPU Time")

As a reference, this database server is a Dell R720 with two E5-2670 8-core processors; the server has 128 GB of memory and 48 GB of global buffers.

The next chart shows more data from mgstat — Glorefs (Global references), or database accesses, for the same day as the CPU graph. Glorefs indicates the amount of work that is occurring on behalf of the current workload; although global references consume CPU time, they do not always consume other system resources such as physical reads because of the way Caché uses the global memory buffer pool.

![Global References](https://community.intersystems.com/sites/default/files/inline/images/glorefs_0.png "Global References")

Typical of Caché applications, there is a very strong correlation between Glorefs and CPU usage.

> Another way of looking at this CPU and gloref data is to say that _reducing glorefs will reduce CPU utilisation_, enabling deployment on lower core count servers or scaling further on existing systems. There may be ways to reduce global references by making an application more efficient; we will revisit this concept in later posts.

## PhyRds and Rdratio

The shape of the graphed mgstat metrics _PhyRds_ (Physical Reads) and _Rdratio_ (Read ratio) can also give you an insight into what to expect of system performance and help you with capacity planning. We will dig deeper into storage IO for Caché in future posts.

_PhyRds_ are simply physical read IOPS from disk to the Caché databases; you should see the same values reflected in operating system metrics for logical and physical disks. Remember that looking at operating system IOPS may be showing IOPS coming from non-Caché applications as well.
Sizing storage without accounting for expected IOPS is a recipe for disaster; you need to know what IOPS your system is doing at peak times for proper capacity planning.

The following graph shows _PhyRds_ between midnight and 15:30.

![Physical Reads](https://community.intersystems.com/sites/default/files/inline/images/phyrds.png "Physical Reads")

Note the big jump in reads between 05:30 and 10:00, with other shorter peaks at 11:00 and just before 14:00. What do you think these are caused by? Do you see these types of peaks on your servers?

_Rdratio_ is a little more interesting — it is the ratio of logical block reads to physical block reads. In other words, it is a ratio of how many reads are served from global buffers (logical) in memory versus how many come from disk, which is orders of magnitude slower. A high _Rdratio_ is a good thing; dropping close to zero for extended periods is not good.

![Read Ratio](https://community.intersystems.com/sites/default/files/inline/images/rdratio.png "Read Ratio")

Note that at the same time as the high reads, _Rdratio_ drops close to zero. At this site I was asked to investigate when the IT department started getting phone calls from users reporting the system was slow for extended periods. This had been going on seemingly at random for several weeks when I was asked to look at the system.

> _**Because pButtons had been scheduled for daily 24-hour runs, it was relatively simple to go back through several weeks of data to start seeing a pattern of high _PhyRds_ and low _Rdratio_ which correlated with support calls.**_

After further analysis the cause was tracked to a new shift worker who was running several reports with 'bad' parameters, combined with badly written queries without appropriate indexes, causing the high database reads. This accounted for the seemingly random slowness. Because these long running reports read data into global buffers, the result is that interactive users' data is fetched from physical storage rather than memory, as well as storage being stressed to service the reads.

Monitoring _PhyRds_ and _Rdratio_ will give you an idea of the beat rate of your systems and maybe allow you to track down bad reports or queries. There may be valid reasons for high _PhyRds_ -- perhaps a report must be run at a certain time. With modern 64-bit operating systems and servers with large physical memory capacity you should be able to minimise _PhyRds_ on your production systems.

> If you do see high _PhyRds_ on your system there are a couple of strategies you can consider:
> - Improve performance by increasing the number of database (global) buffers (and system memory).
> - Long running reports or extracts can be moved out of business hours.
> - Long running read-only reports, batch jobs or data extracts can be run on a separate shadow server or asynchronous mirror to minimise the impact on interactive users and to offload system resource use such as CPU and IOPS.

Usually low _PhyRds_ is a good thing and it's what we aim for when we size systems. However, if you have low _PhyRds_ and users are complaining about performance, there are still things that can be checked to ensure storage is not a bottleneck - the reads may be low because the system cannot service any more. We will look at storage more closely in a future post.

# Summary

In this post we looked at how graphing the metrics collected in pButtons can give a health check at a glance. In upcoming posts I will dig deeper into the relationship between the system and Caché metrics and how you can use these to plan for the future.
Murray, thank you for the series of articles. A couple of questions I have:

1) The documentation (2015.1) states that Rdratio is a ratio of physical block reads to logical block reads, while one can see in the mgstat log Rdratio values >> 1 (usually 1000 and more). Don't you think that the definition should be reversed?

2) You wrote: "If you do see high PhyRds on your system there are a couple of strategies you can consider: ... Long running read-only reports, batch jobs or data extracts can be run on a separate shadow server or asynchronous mirror to minimise impact on interactive users and to offload system resource use such as CPU and IOPS." I have heard this advice many times, but how does one return report results back to the primary member? With ECP, mounting a remote database that resides on the primary member is prohibited on the backup member, and vice versa. Or do these restrictions not apply to asynchronous members (I have never played with them yet)?

Murray, thanks for your articles. But I think metrics related to the Write Daemon should be mentioned too, such as WDphase and WDQsz. Sometimes when our system seems to work too slowly, it may depend on how quickly our disks can write, and in that case these are very useful metrics. In my own experience, on an otherwise ordinary day our server started to work slowly and we saw that the write daemon was in phase 8 all the time; with PhyWrs we could count how many blocks were really written to disk, and it was not a large count at that time, so we found a problem in our storage, something related to snapshots. When the storage was reconfigured, our write daemon went back to working as quickly as before.

I believe the correct statement is that Rdratio is the ratio of logical block reads to physical block reads, but is zero if physical block reads is zero.

Thanks! Yes, I wrote it the wrong way around in the post. I have fixed this now.

The latest Caché documentation has details and examples for setting up a read-only or read/write asynchronous report mirror. The asynch reporting mirror is special because it is not used for high availability; for example, it is not a DR server. At the highest level, running reports or extracts on a shadow is possible simply because the data exists on the other server in near real time. Operational or time-critical reports should be run on the primary servers. The suggestion is that resource-heavy reports or extracts can use the shadow or reporting server. While setting up a shadow or reporting asynch mirror is part of Caché, how a report or extract is scheduled or run is an application design question, and not something I can answer - hopefully someone else can jump in here with some advice or experience. Possibilities may include web services, or if you use ODBC your application could direct queries to the shadow or a reporting asynch mirror. For batch reports or extracts, routines could be scheduled on the shadow/reporting asynch mirror via task manager. Or you may have a separate application module for this type of reporting. If you need to have results returned to the application on the primary production server, that is also application dependent. You should also consider how to handle (e.g. via global mapping) any read/write application databases such as audit or logs which may be overwritten by the primary server. If you are going to do reporting on a shadow server, search the online documentation for special considerations for "Purging Cached Queries".

There are several more articles to come before we are done with storage IO. I will focus more on IOPS and writes in coming weeks, and will show some examples and solutions to the type of problem you mentioned. Thanks for the comment. I have quite a few more articles (in my head) for this series; I will be using the comments to help me decide which topics you all are interested in.

"Rdratio is a little more interesting — it is the ratio of logical block reads to physical block reads." Don't you think that zero values of Rdratio are a special case, as David Marcus mentioned? In the mgstat (per second) logs I have at hand, I've found them always accompanied by zero values of PhyRds.

Just one thing: one good tool to use on Linux is dstat. It is not installed by default, but once you have it (apt-get install dstat on Debian and derivatives, yum install dstat on RHEL), you can observe the live behavior of your system as a whole with:

    dstat -ldcymsn

It gives quite a lot of information!

Would it be possible to fix the image links to this post?

Hi Mack, sorry about that, the images are back!
Announcement
Paul Gomez · Apr 11, 2016

InterSystems Global Summit 2016 - Session Content

Please use the following link to see all Global Summit 2016 sessions and links to additional content, including session materials: https://community.intersystems.com/global-summit-2016
Announcement
Janine Perkins · Apr 13, 2016

Featured InterSystems Online Course: Troubleshooting Productions

Learn how to start troubleshooting productions.

Troubleshooting Productions
Learn how to start troubleshooting productions, with a focus on locating and understanding some of the key Ensemble Management Portal pages when troubleshooting. Learn More.