Question
Bren Mochocki · Aug 16, 2021

Link to Download InterSystems IRIS is down

It appears the link to download IRIS is down! https://download.intersystems.com/download/login.csp
Article
Yuri Marx · Oct 19, 2021

Big Data components and InterSystems IRIS

In recent years, data architectures and platforms have focused on Big Data repositories and on how to process them to deliver business value. From this effort, many technologies were created to process terabytes and petabytes of data. The fundamental piece of the Big Data stack is HDFS (Hadoop Distributed File System), a distributed file system that stores terabytes or petabytes of data across arrays of storage, memory, and CPU working together. In addition to Hadoop, we have other components, described below.

In a Big Data project, the first step is to create the data ingestion, integration, and transformation/enrichment, with data quality as a requirement. In this area we have:

Apache Kafka: real-time, asynchronous data event flow, where each event delivers data items to be ingested into the Big Data repository by Kafka clients. It can process millions of data events per second, creating real-time data streams.

Apache NiFi: an ETL tool capable of creating integration maps that connect multiple formats, technologies, and types of data sources to deliver data to HDFS, databases, or HBase repositories. It also acts as a CDC tool.

Apache Sqoop: a data ingestion tool for SQL databases.

Apache Flume: collects data from the logs of the data sources and pushes it to HDFS or HBase.

ESB: a service bus used to connect internal and external data sources into a corporate data backbone to send, receive, and process data between enterprise data resources, including Big Data repositories. The ESB has connectors/adapters to the main data resources, such as Kafka, SQL databases, NoSQL databases, e-mail, files, FTP, JMS messages, and others. InterSystems IRIS includes an ESB, which can automate data processing and deliver complex integrations to the Big Data repositories.

HDFS: the most used resource to store big data volumes, with performance, reliability, and availability. The data storage is distributed across data nodes working in active-active HA on commodity machines, so it is cheaper than storing the data in RDBMS products.

HBase: similar to HDFS, but used to store NoSQL data, such as objects or documents.

Sharded DBs: data stores that use distributed data nodes to process Big Data SQL or NoSQL repositories in proprietary formats. The main example is MongoDB, but InterSystems IRIS is a sharded DB too, with options for SQL, NoSQL (JSON), or OLAP datastores.

YARN and ZooKeeper: tools that centralize configuration and allow the Big Data tools to share configuration, resources, resource names, and metadata.

Apache Spark: a distributed processing system used for big data workloads. It utilizes in-memory caching and optimized query execution for fast analytic queries against data of any size. It provides development APIs in Java, Scala, Python, and R, and supports code reuse across multiple workloads: batch processing, interactive queries, real-time analytics, machine learning, and graph processing (AWS definition). InterSystems IRIS has a Spark adapter that can be used to read and write HDFS data.

Apache Hive: a distributed, fault-tolerant data warehouse system that enables analytics at a massive scale. A data warehouse provides a central store of information that can easily be analyzed to make informed, data-driven decisions. Hive allows users to read, write, and manage petabytes of data using SQL. Hive is built on top of Apache Hadoop, an open-source framework used to efficiently store and process large datasets. As a result, Hive is closely integrated with Hadoop and is designed to work quickly on petabytes of data. What makes Hive unique is the ability to query large datasets, leveraging Apache Tez or MapReduce, with a SQL-like interface (AWS definition). InterSystems IRIS doesn't have a native adapter for Hive, but I created one (see: https://community.intersystems.com/post/using-sql-apache-hive-hadoop-big-data-repositories).

Apache Pig: a library that runs on top of Hadoop, providing a scripting language that you can use to transform large data sets without having to write complex code in a lower-level language like Java. The library takes SQL-like commands written in a language called Pig Latin and converts those commands into Tez jobs based on directed acyclic graphs (DAGs) or MapReduce programs. Pig works with structured and unstructured data in a variety of formats (AWS definition).

Apache Drill: a tool to run queries against the main Big Data products using SQL syntax. Drill supports a variety of NoSQL databases and file systems, including HBase, MongoDB, MapR-DB, HDFS, MapR-FS, Amazon S3, Azure Blob Storage, Google Cloud Storage, Swift, NAS, and local files. A single query can join data from multiple datastores. For example, you can join a user profile collection in MongoDB with a directory of event logs in Hadoop (Apache Drill definition). I will build an adapter to Apache Drill in the near future.

ESB adapters: an interoperability platform with ready-made connectors to query and process data in multiple formats and protocols. InterSystems IRIS has adapters for HTTP, FTP, File, SQL, MQTT (IoT), and many other data sources. These adapters can compose data flows (productions) to deliver the data to multiple targets, including Hadoop repositories.

InterSystems IRIS is a good Big Data architecture option, with:
1. An excellent integration/ETL layer, as an ESB platform.
2. A fantastic distributed data store when using IRIS database sharding.
3. An important tool to query and process Big Data information, using its adapters. InterSystems IRIS can also deliver reports, BI dashboards, and data microservices.
4. A good tool to work with Kafka, Spark, Hadoop, and Hive, delivering Big Data with additional resources such as ESB, Analytics, Machine Learning, data workflows (BPL productions), and native support for the main languages (Python, R, Java, .NET, Node.js).

See a sample of this in my new app: https://openexchange.intersystems.com/package/IRIS-Big-Data-SQL-Adapter. A minimal sketch of the Kafka event flow described above follows below. Thanks! And enjoy!
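To make the Kafka ingestion flow above concrete, here is a minimal, hedged producer sketch. It assumes the open-source kafka-python client is installed; the broker address and topic name are hypothetical.

from kafka import KafkaProducer

# Connect to a (hypothetical) local Kafka broker
producer = KafkaProducer(bootstrap_servers="localhost:9092")

# Each send() publishes one data event; downstream Big Data consumers
# (for example, a NiFi flow or a Spark job) can ingest the topic into HDFS
producer.send("sales-events", b'{"orderId": 1, "total": 99.90}')
producer.flush()

On the IRIS side, an interoperability production could consume the same topic through its connectors, as the article describes.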
Announcement
Anastasia Dyubaylo · Jan 25, 2022

[Video] InterSystems IRIS Container Updates

Hey Community, Learn about the changes we've made to InterSystems IRIS Containers, including security updates and the new web gateway container: ⏯ InterSystems IRIS Container Updates 🗣 Presenter: @Eve.Phelps, Senior Systems Developer, InterSystems Enjoy watching on InterSystems Developers YouTube channel and stay tuned!
Announcement
Anastasia Dyubaylo · Jan 26, 2022

French Stream On InterSystems Technologies #8

Hey Community, We invite you to join the next French stream on InterSystems Technologies #8, hosted by @Guillaume.Rongier7183! Date & Time: February 3rd, 12:00 Paris time. 👉 Direct link to join: https://youtu.be/2PFgnuF8bO8 On the agenda this month: 🗞 News What's new in InterSystems IRIS 2021.2 https://community.intersystems.com/post/email-templating-csp https://community.intersystems.com/post/intersystems-iris-and-iris-health-20212-published https://community.intersystems.com/post/intersystems-iris-20212-python-examples-embedded-native-apis-and-notebooks https://community.intersystems.com/post/intersystems-evaluation-service https://wrc.intersystems.com/wrc/coDistEvaluation.csp 💡 Did you know? How to create a SQL connector without code? https://docs.intersystems.com/irisforhealth20212/csp/docbook/DocBook.UI.Page.cls?KEY=HXIHRN_new#HXIHRN_new_sqladapters 🗂 Focus on A new demo on python but on the interoperability framework side Interoperability Embedded Python 👨‍💻 Let’s talk with @Benjamin.DeBoe, Product Manager, SQL et Analytics. New SQL Loader command & features in the analysis of relational tables. https://docs.intersystems.com/irisforhealth20212/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_loaddata https://community.intersystems.com/post/20212-sql-feature-spotlight-run-time-plan-choice https://community.intersystems.com/post/20212-sql-feature-spotlight-smart-sampling-automation-table-statistics https://community.intersystems.com/post/20212-sql-feature-spotlight-advanced-table-statistics Don’t miss the French stream #8 👉 https://youtu.be/2PFgnuF8bO8 Enjoy watching the prev streams on YouTube: Stream #1 Stream #2 Stream #3 Stream #4 Stream #5 Stream #6 Stream #7 Stay tuned!
Article
Yuri Marx · Jan 30, 2022

Upload into an InterSystems IRIS REST API

If you need to create an upload REST API with IRIS, it is very simple. Follow these procedures:

1. From a Postman client, send a file. P.S.: It is a multipart form with a file-type field named "file"; the request type is multipart/form-data. See the HTTP request:

POST /image-analyzer/postFile HTTP/1.1
Host: localhost:52773
Content-Length: 213
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW

----WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="file"; filename="/C:/Users/yurim/OneDrive/Imagens/salesschema.png"
Content-Type: image/png

(data)
----WebKitFormBoundary7MA4YWxkTrZu0gW

2. Create a REST API backend to get the file and save (upload) it. P.S.: pay attention to whether the destination folder has write permissions.

Class dc.upload.UploadRESTApp Extends %CSP.REST
{

Parameter CHARSET = "utf-8";

Parameter CONVERTINPUTSTREAM = 1;

Parameter CONTENTTYPE = "application/json";

Parameter Version = "1.0.0";

Parameter HandleCorsRequest = 1;

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
    <!-- post image -->
    <Route Url="/postFile" Method="POST" Call="PostFile" />
</Routes>
}

ClassMethod PostFile() As %Status
{
    // try to do the actions
    try {
        Set info = {}
        // get the uploaded file from the multipart request
        Set source = %request.GetMimeData("file")
        // copy the uploaded stream into a binary file on disk
        Set destination = ##class(%Stream.FileBinary).%New()
        Set destination.Filename = "/opt/irisbuild/output/"_source.FileName
        Set tSC = destination.CopyFrom(source)
        Set result = destination.%Save()
        Set info.return = result
        Set info.message = "File saved into /opt/irisbuild/output/"_source.FileName
        Set %response.ContentType = ..#CONTENTTYPEJSON
        Set %response.Headers("Access-Control-Allow-Origin") = "*"
        Write info.%ToJSON()
        Set tSC = $$$OK
    // return an error message to the user
    } catch e {
        Set tSC = e.AsStatus()
        Set pOutput = tSC
    }
    Quit tSC
}

}
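Besides Postman, you can exercise the endpoint from a small script. Here is a minimal sketch, assuming Python with the requests package installed; the host, port, and route are the ones from the article, and the file name is illustrative.

import requests

# The form field must be named "file", because the backend reads %request.GetMimeData("file")
with open("salesschema.png", "rb") as f:
    response = requests.post(
        "http://localhost:52773/image-analyzer/postFile",
        files={"file": f},
    )
print(response.status_code, response.text)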
Article
Brendan Bannon · Jul 15, 2021

Embedded SQL new in InterSystems IRIS

Benjamin De Boe wrote this great article about Universal Cached Queries, but what the heck is a Universal Cached Query (UCQ), and why should I care about it if I am writing good old embedded SQL? In Caché and Ensemble, Cached Queries would be generated to resolve xDBC and Dynamic SQL. Now in InterSystems IRIS, embedded SQL has been updated to use Cached Queries as well, hence the Universal added to the name. Now any SQL executed on IRIS will be done so from a UCQ class.

Why did InterSystems do this? Good question! The big win here is flexibility in a live environment. In the past, if you added an index or ran TuneTable, the SQL in Cached Queries would make use of this new information right away, while embedded SQL would remain unchanged until the class or routine was compiled manually. If your application used deployed classes or only shipped OBJ code, recompiling on the customer system was not an option. Now all SQL statements on a system will be using the latest class definition and the latest tuning data available. In the future, InterSystems IRIS will have optional tools that can monitor and tune your production systems on a nightly basis, customizing the SQL plans based on how the tables are being queried. As this toolset grows, the power of the Universal Cached Query will grow as well.

Is my embedded SQL slower now? Yes and no. Calling a tag in a different routine is a little more expensive than calling a tag in the same routine, so that is slower, but UCQ code generation was different from embedded, and getting to use those changes more than makes up for the expense of calling a different routine. Are there cases where the UCQ code is slower? Yes, but overall you should see better performance. I am an embedded SQL guy from way back, so I always like to point out that embedded SQL is faster than dynamic SQL. It still is faster, but with all the work that has been done to make objects faster, the margin between the two styles is small enough that I will not make fun of you for using dynamic SQL.

How do I check for errors now? Error handling for embedded SQL has not changed. SQLCODE will be set to a negative number if we hit an error, and %msg will be set to the description of that error. What has changed are the types of errors you can get. The default behavior now is that the SQL will not be compiled until the first time the query is run. This means if you misspell a field or table in the routine, the error will not get reported when you compile that routine; it will be reported the first time you execute the SQL, same as dynamic SQL. SQLCODE is set for every SQL command, but if you are lazy like me you only ever check SQLCODE after a FETCH. Well, now you might want to start checking on the OPEN as well.

&SQL(DECLARE cur CURSOR FOR SELECT Name,junk INTO :var1, :var2 FROM Sample.Person)
&SQL(OPEN cur)
write !,"Open Status: ",SQLCODE,?20,$G(%msg)
for {
    &SQL(FETCH cur)
    write !,"Fetch Status: ",SQLCODE,?20,$G(%msg)
    QUIT:SQLCODE'=0
    w !,var1
}
&SQL(CLOSE cur)
write !,"Close Status: ",SQLCODE,?20,$G(%msg)
QUIT

In the code above I have an invalid field in the SELECT. Because we do not compile the SQL when we compile the routine, this error is not reported. When I execute the code, the OPEN reports the compile error while the FETCH and CLOSE report a cursor-not-open error. %msg does not get changed, so if you check that at any point you will get helpful info:

USER>d ^Embedded

Open Status: -29    Field 'JUNK' not found in the applicable tables^DECLARE cur CURSOR FOR SELECT Name , junk INTO compiling embedded cached query from Embedded.mac
Fetch Status: -102  Field 'JUNK' not found in the applicable tables^DECLARE cur CURSOR FOR SELECT Name , junk INTO compiling embedded cached query from Embedded.mac
Close Status: -102  Field 'JUNK' not found in the applicable tables^DECLARE cur CURSOR FOR SELECT Name , junk INTO compiling embedded cached query from Embedded.mac

What if I don’t want my embedded SQL to change? You can still do this using Frozen Query Plans. A quick side note: every major IRIS upgrade you do will freeze all SQL Statements, so nothing will change if you don’t let it. You can read more about that here. Now back to dealing with UCQ stuff. Here are 3 ways you could freeze embedded SQL plans in your application:

If you ship an IRIS.DAT:
1. Do $SYSTEM.OBJ.GenerateEmbedded() to generate the UCQ classes for embedded SQL
2. Freeze the plans: do $SYSTEM.SQL.Statement.FreezeAll()
3. Ship the IRIS.DAT

If you use XML files:
1. Do $SYSTEM.OBJ.GenerateEmbedded() to generate the UCQ classes for embedded SQL
2. Freeze the plans: do $SYSTEM.SQL.Statement.FreezeAll()
3. Export the frozen plans: do $SYSTEM.SQL.Statement.ExportAllFrozenPlans()
4. After loading your application, load the frozen plans: do $SYSTEM.SQL.Statement.ImportFrozenPlans()

Freeze UCQ plans on the customer site:
1. Load the code with embedded SQL on the customer system
2. Do $SYSTEM.OBJ.GenerateEmbedded() to generate the UCQ classes for embedded SQL
3. Freeze all the plans that got generated: do $SYSTEM.SQL.Statement.FreezeAll()

Can I go back to the old behavior? Nope, this is the way it is now. From a developer's point of view, you can get the old behavior back by adding the flag /compileembedded=1 to your compiler options. This will tell the compiler to generate the UCQ class while compiling the class or routine. If there is an issue with the SQL, it will be reported at compile time as it was in the past.

Compiling routine : Embedded.mac
ERROR: Embedded.mac(5) : SQLCODE=-29 : Field 'JUNK' not found in the applicable tables^DECLARE cur CURSOR FOR SELECT Name , junk INTO compiling embedded cached query from Embedded.mac
Detected 1 errors during compilation in 0.034s.

If you are concerned about the overhead of generating the UCQ classes the first time embedded SQL is run, you could add this step as part of your application install to generate them all in advance: do $SYSTEM.OBJ.GenerateEmbedded()

This is a very high-level overview of Universal Cached Queries and embedded SQL. I did not get into any of the real details about what happens under the covers; I just tried to talk about stuff people would run into as they work with embedded SQL on IRIS. Overall, moving to UCQ should make SQL performance more consistent across all types of SQL, and it should make updating SQL on a production system easier. There will be some adjustments. Adding the compiler flag will be a big help for me. Now I just need to get used to looking for the generated code in a new place. If you have any questions, comments, or concerns about this, or anything else related to SQL on InterSystems IRIS, please let me know.

Very nice article @brendan.bannon - thank you for boiling it down to a set of core things that developers will care most about!
Question
Muhammad Waseem · Sep 3, 2021

What is InterSystems SSO social account?

Hi, Under the Social tab of my profile I found an "InterSystems SSO" account along with my other social accounts. What is an InterSystems SSO account, and how do I utilize it? Thanks

Just curious - where are you seeing this? Could you please include a URL?

The InterSystems SSO account is what controls your access to open as well as restricted applications provided by InterSystems to help customers and prospects get stuff done. If you go to Login.InterSystems.com you can see the various apps available to you (after you sign in). This would include things like D.C., OEX, Learning, Download, etc. for prospects, and supported customers would also get access to WRC, iService, HS Docs, CCR, etc. (depending on what products they have). Hope that helps - let me know if any additional clarification is needed.

SSO is part of the Social tab in Edit Profile in Global Masters. It allows sign-in using WRC SSO: https://globalmasters.intersystems.com/ It is along with the other social accounts.

Thanks Robert Cemper, what is the difference between a normal account and an SSO account?

The SSO account can be shared for WRC (if enabled), DC, OEX, Learning, etc. Global Masters is an external application that can make use of SSO but has its own independent accounts as well.
Article
Yuri Marx · Nov 1, 2021

Top 10 InterSystems IRIS Features

InterSystems IRIS is a great data platform, and it meets the current feature set required by the market. In this article, you will see the top 10. Note: this list was updated because many features have been added to IRIS in the last 3 years (thanks @Kristina.Lauer).

1. Democratized analytics
Why: InterSystems IRIS Adaptive Analytics delivers virtual cubes with centralized business semantics, abstracted from technical details and modeling, to allow business users to easily and quickly create their analyses in Excel or their preferred analytics product (PowerBI, Tableau, etc.). There are no consumption restrictions per user. InterSystems Reports is a low-code report designer to deliver operational data reports embedded in any application or in a web report portal.
Learn more: Overview of Adaptive Analytics, Adaptive Analytics Essentials, Introduction to InterSystems Reports, Delivering Data Visually with InterSystems Reports

2. API Manager
Why: Digital assets are consumed using REST APIs. It is necessary to govern the reuse, security, consumption, asset catalog, developer ecosystem, and other aspects at a central point, and the API Manager is the right tool to do this. So, all companies have or want to have an API Manager.
Learn more: Hands-On with API Manager for Devs

3. Scalable databases
Why: Sharded database: the total amount of data created, captured, copied, and consumed globally is forecast to increase rapidly, having reached 64.2 zettabytes in 2020. Over the next five years, up to 2025, global data creation is projected to grow to more than 180 zettabytes (source: https://www.statista.com/statistics/871513/worldwide-data-created/). In this scenario, it is critical for the business to be able to process data in a distributed way (in shards, like Hadoop or MongoDB) to increase and maintain performance. Also important: IRIS is 3 times faster than Caché, and faster than AWS databases, in the AWS cloud. Columnar storage: changes the storage of repeating data into columns instead of rows, allowing you to achieve up to 10x higher performance, especially in aggregated (analytical) data storage scenarios.
Learn more: Planning and Deploying a Sharded Cluster, Scaling for Data Volume with Sharding, Increasing Analytical Query Speed Using Columnar Storage, Using Columnar Storage

4. Python support
Why: Python is the most popular language for AI, and AI is at the center of business strategy, because it allows you to gain new insights, increase productivity, and reduce costs.
Learn more: Writing Python Applications with InterSystems, Leveraging Embedded Python in Interoperability Productions

5. Native APIs (Java, .NET, Node.js, Python) and PEX
Why: The US has nearly 1 million open IT jobs (source: https://www.cnbc.com/2019/11/06/how-switching-careers-to-tech-could-solve-the-us-talent-shortage.html). It is very hard to find an ObjectScript developer, so it is important to be able to use IRIS features, like interoperability, from the development team's official programming language (Python, Java, .NET, etc.).
Learn more: Creating Interoperability Productions Using PEX, InterSystems IRIS for Coders, Node.js QuickStart, Using the Native API for Python

6. Interoperability, FHIR, and IoT
Why: Businesses are constantly connecting and exchanging data. Departments also need to work connected to deliver business processes with more strategic value and lower cost. The best technology to do this is interoperability tooling, especially ESBs, integration adapters, business process automation engines (BPL), data transformation tools (DTL), and the adoption of market interoperability standards, like FHIR and MQTT/IoT. InterSystems interoperability supports all of this (for FHIR, use IRIS for Health).
Learn more: Receiving and Routing Data in a Production, Building Basic FHIR Integrations with InterSystems, Monitoring Remotely with MQTT, Building Business Integrations with InterSystems IRIS

7. Cloud, Docker & microservices
Why: Everyone now wants a cloud microservices architecture. They want to break the monoliths into projects that are smaller, less complex, less coupled, more scalable, reusable, and independent. IRIS allows you to deploy data, application, and analytics microservices, thanks to IRIS support for shards, Docker, Kubernetes, distributed computing, DevOps tools, and low CPU/memory consumption (IRIS even supports ARM processors!). But microservices require microservice API management, using an API Manager, to stay aligned with the business.
Learn more: Deploying InterSystems IRIS in Containers and the Cloud, Deploying and Testing InterSystems Products Using CI/CD Pipelines

8. Vector search and generative AI
Why: Vectors are mathematical representations of data and textual semantics (NLP), and they are the raw material for generative AI applications to understand questions and tasks and return correct answers. Vector repositories and searches are capable of storing vectors (AI processing) so that for each new task or question, they can retrieve what has already been produced (AI memory or knowledge base), making everything faster and cheaper.
Learn more: Developing Generative AI Applications, Using Vector Search

9. VS Code support
Why: VS Code is the most popular IDE, and InterSystems IRIS has a good set of tools for it.
Learn more: Developing on an InterSystems Server Using VS Code

10. Data science
Why: The ability to apply data science to data, integration, and transaction requests and responses, using Python, R, and IntegratedML (AutoML), enables AI intelligence at the moment it is required by the business. InterSystems IRIS delivers AI with Python, R, and IntegratedML (AutoML).
Learn more: Hands-On with IntegratedML, Developing in Python or R within InterSystems IRIS, Predicting Outcomes with IntegratedML in InterSystems IRIS

nice summary :)

💡 This article is considered as an InterSystems Data Platform Best Practice.

I updated this article, thanks @Kristina.Lauer
Announcement
Anastasia Dyubaylo · Nov 29, 2021

InterSystems Security Contest: Voting time!

Hey Developers, This week is a voting week for the InterSystems Security contest! So, it's time to give your vote to the best solutions built with InterSystems IRIS. 🔥 You decide: VOTING IS HERE 🔥 How to vote? Details below.

Experts nomination: an experienced InterSystems jury will choose the best apps to nominate for the prizes in the Experts Nomination. Please welcome our experts: ⭐️ @Andreas.Dieckow, Principal Product Manager ⭐️ @Robert.Kuszewski, Product Manager ⭐️ @Raj.Singh5479, Product Manager ⭐️ @Sean.Klingensmith, Senior Systems Developer ⭐️ @Wanqing.Chen, Systems Developer ⭐️ @Pravin.Barton, Developer ⭐️ @Timothy.Leavitt, Development Manager ⭐️ @Francois.LeFloch, Senior Solutions Engineer ⭐️ @Marc.Mundt, Senior Sales Engineer ⭐️ @Eduard.Lebedyuk, Sales Engineer ⭐️ @Alberto.Fuentes, Sales Engineer ⭐️ @Guillaume.Rongier7183, Sales Engineer ⭐️ @Evgeny.Shvarov, Developer Ecosystem Manager

Community nomination: for each user, the higher score is selected from the two categories below.

Conditions (points for 1st / 2nd / 3rd place):
- If you have an article posted on DC and an app uploaded to Open Exchange (OEX): 9 / 6 / 3
- If you have at least 1 article posted on DC or 1 app uploaded to OEX: 6 / 4 / 2
- If you make any valid contribution to DC (posted a comment/question, etc.): 3 / 2 / 1

Level (points for 1st / 2nd / 3rd place):
- VIP Global Masters level or ISC Product Managers: 15 / 10 / 5
- Ambassador GM level: 12 / 8 / 4
- Expert GM level or DC Moderators: 9 / 6 / 3
- Specialist GM level: 6 / 4 / 2
- Advocate GM level or ISC Employees: 3 / 2 / 1

Blind vote! The number of votes for each app will be hidden from everyone. Once a day we will publish the leaderboard in the comments to this post. The order of projects on the Contest Page will be as follows: the earlier an application was submitted to the competition, the higher it will be in the list. P.S. Don't forget to subscribe to this post (click on the bell icon) to be notified of new comments.

To take part in the voting, you need to:
- Sign in to Open Exchange (DC credentials will work).
- Make any valid contribution to the Developer Community (answer or ask questions, write an article, contribute applications on Open Exchange) and you'll be able to vote. Check this post on the options to make helpful contributions to the Developer Community.

If you change your mind, cancel the choice and give your vote to another application! Support the application you like!

Note: contest participants are allowed to fix bugs and make improvements to their applications during the voting week, so don't miss out; subscribe to application releases!

I liked the new expert method to choose the best apps, congrats!

So! After the first day of the voting we have:

Expert Nomination, Top 3
- iris-disguise by @henry
- IRIS Middlewares by @davimassaru.teixeiramuta
- zap-api-scan-sample by @José.Pereira
➡️ Voting is here.

Community Nomination, Top 3
- Server Manager 3.0 Preview by @John.Murray
- passwords-tool by @Dmitry.Maslennikov
- iris-disguise by @henry
➡️ Voting is here.

Experts, we are waiting for your votes! 🔥 Participants, improve & promote your solutions!

Here are the results after 2 days of voting:

Expert Nomination, Top 3
- iris-disguise by @Henry Pereira
- Server Manager 3.0 Preview by @John.Murray
- zap-api-scan-sample by @José Roberto Pereira
➡️ Voting is here.

Community Nomination, Top 3
- iris-disguise by @Henry Pereira
- Server Manager 3.0 Preview by @John Murray
- passwords-tool by @Dmitry.Maslennikov
➡️ Voting is here.

So, the voting continues. Please support the application you like!

Voting for the InterSystems Security contest goes ahead! And here are the results at the moment:

Expert Nomination, Top 3
- iris-disguise by @Henry Pereira
- Data_APP_Security by @Muhammad.Waseem
- iris-saml-example by @Dmitry.Maslennikov
➡️ Voting is here.

Community Nomination, Top 3
- iris-disguise by @Henry Pereira
- API Security Mediator by @Yuri.Gomes
- zap-api-scan-sample by @José.Pereira
➡️ Voting is here.

In the expert vote, how many points are computed for each expert's vote?

Developers! Only 3 days left till the end of the voting period. Please support our participants with your votes! At the moment we have the following results:

Expert Nomination, Top 3
- iris-disguise by @Henry Pereira
- iris-saml-example by @Dmitry.Maslennikov
- zap-api-scan-sample by @José Roberto Pereira
➡️ Voting is here.

Community Nomination, Top 3
- iris-disguise by @Henry Pereira
- zap-api-scan-sample by @José Roberto Pereira
- API Security Mediator by @Yuri.Gomes
➡️ Voting is here.

Have a good weekend)

Hey Yuri, I don't have this info. But from my point of view, it's pointless to know that, since we don't have the information on "current points". They changed the way of showing that for a reason, and this is where the fun lives. It will be a thriller until the end, and that's ok.

Last day of voting! ⌛ Please check out the Contest Board. Our contestants need your votes! 📢
Announcement
Anastasia Dyubaylo · Nov 25, 2021

French Stream On InterSystems Technologies #7

Hey Community, We invite you to join the next French stream on InterSystems Technologies #7, hosted by @Guillaume.Rongier7183! Date & Time: December 2nd, 12:00 Paris time. 👉 Direct link to join: https://bit.ly/30UV6xp On the agenda this month: 🗞 News https://community.intersystems.com/post/sql-search-index-json-objects#comments0 https://community.intersystems.com/post/how-script-download-installation-kits-wrc SQL Load statement demo: https://docs.intersystems.com/iris20212/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_loaddata Tradeshows & Conferences : https://santexpo.live/ https://www.supplychain-event.com/ https://www.aiforhealth.fr/ https://www.meetup.com/fr-FR/FHIR-France/events/278358909/ 💡 Did you know? Data type: VarString vs String Neat trick: how to change max RAM process limit on the fly (OnInit + $ZStorage) 🗂 Focus on Python & InterSystems IRIS: How to interact with InterSystems IRIS in Python Don’t miss the French Stream #7 👉 https://bit.ly/30UV6xp Enjoy watching the prev streams on YouTube: Stream #1 Stream #2 Stream #3 Stream #4 Stream #5 Stream #6 Stay tuned!
Question
Rob Rubinson · Nov 28, 2021

InterSystems ODBC driver backwards compatible?

Is the InterSystems ODBC driver backwards compatible? If I use Caché version 2018, can I upgrade the ODBC drivers to the 2021 version, or is it better to stay with the 2018 version?

Rob, the ODBC drivers for IRIS are not compatible with Caché and Ensemble. Erik

Erik, what about the ODBC drivers that are not for IRIS?

Earlier ODBC driver versions should be compatible with Caché 2018.1, but we recommend updating clients to the latest driver for your Caché version. There aren't any 2021.x ODBC drivers for Caché on the WRC download page. If you have one and you want to be sure you have a good driver, reach out to the WRC so we can understand your specific situation. Erik
Article
Yuri Marx · Jan 12, 2022

JSON Schema applied to InterSystems IRIS

JSON is a data document format free of types and validation rules. However, in some scenarios it is important for the JSON document to have type and business-rule validation, especially in interoperability scenarios. This article demonstrates how you can leverage JSON Schema, a market-defined technology that is open for everyone to use, to do advanced validations.

About JSON

According to json.org, “JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language Standard ECMA-262 3rd Edition - December 1999. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language”. (Source: https://www.json.org/json-en.html). An example of a JSON document could be:

{"firstName": "John", "lastName": "Smith"}

About JSON Schema

According to json-schema.org, “JSON Schema is a vocabulary that allows you to annotate and validate JSON documents.” If JSON is easy for humans to read, write, and understand, why do we need to apply a schema to validate JSON documents/content? The main reasons are:

- To define a clear contract to interoperate JSON-based data between partners and their applications.
- To detail, document, and describe your JSON files, making them easier to use.
- To validate JSON data for automated testing, ensuring the quality of the requests and responses.
- To generate JSON sample and/or real data from the JSON Schema.
- To apply business/validation rules to JSON content, instead of creating language-dependent validations.
- To support the “contract first” approach in the development of REST APIs. More details at https://swagger.io/blog/api-design/design-first-or-code-first-api-development/.

JSON Syntax

To understand the role of JSON Schema in the JSON game, it is necessary to know more about JSON syntax. The JSON syntax is derived from JavaScript object notation syntax, so the syntax rules are equal to those of JavaScript objects. The JSON rules are the following (source: https://www.w3schools.com/js/js_json_syntax.asp):

- Data must be specified in name/value pairs.
- Data must be separated by commas.
- Curly braces hold objects.
- Square brackets hold arrays.

JSON data consists of a name/value pair with a field name (in double quotes), followed by a colon, followed by a value. Example: "firstName": "John". JSON names require double quotes, and this applies to JavaScript objects and to InterSystems IRIS ObjectScript dynamic objects (%Library.DynamicObject) too. More details at https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GJSON_intro#GJSON_intro_methods. In JSON, values must be one of the following data types:

- A string, which must be between double quotes. Sample: “a string”.
- A number, which must be an integer or decimal value. Sample: 10 or 20.23.
- An object, which must be between {}. Sample: {“firstName”: “John”}.
- An array, which must be between []. Sample: [{“firstName”: “John”},{“firstName”: “Anna”}].
- A Boolean, which must be true or false. Sample: {“isDead”: false}.
- A null, to set no value.

When you apply a JSON document to a language object, these syntax rules are validated, but sometimes it is necessary to enforce business rules too.
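As a quick taste of what such business-rule validation looks like, here is a minimal sketch using the open-source Python jsonschema package (one of the implementations listed later in this article); the schema and document are illustrative only:

from jsonschema import validate, ValidationError

# Business rules: firstName is required and age may not be negative,
# constraints that plain JSON syntax cannot express by itself
schema = {
    "type": "object",
    "required": ["firstName"],
    "properties": {
        "firstName": {"type": "string"},
        "age": {"type": "integer", "minimum": 0},
    },
}

try:
    validate(instance={"firstName": "John", "age": -1}, schema=schema)
except ValidationError as e:
    print(e.message)  # "-1 is less than the minimum of 0"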
JSON Schema is used for exactly this: to expand the basic JSON rules with rules specified in JSON Schema documents. InterSystems IRIS has full support for the basic JSON rules, and JSON content is very elegant to read and write, easier than in any other programming language. See a sample:

InterSystems IRIS ObjectScript writing JSON:

set dynObject1 = ##class(%DynamicObject).%New()
set dynObject1.SomeNumber = 42
set dynObject1.SomeString = "a string"
set dynObject1.SomeArray = ##class(%DynamicArray).%New()
set dynObject1.SomeArray."0" = "an array element"
set dynObject1.SomeArray."1" = 123
write dynObject1.%ToJSON()

Java writing JSON:

// First employee
JSONObject employeeDetails = new JSONObject();
employeeDetails.put("firstName", "Lokesh");
employeeDetails.put("lastName", "Gupta");
employeeDetails.put("website", "howtodoinjava.com");

JSONObject employeeObject = new JSONObject();
employeeObject.put("employee", employeeDetails);

// Second employee
JSONObject employeeDetails2 = new JSONObject();
employeeDetails2.put("firstName", "Brian");
employeeDetails2.put("lastName", "Schultz");
employeeDetails2.put("website", "example.com");

JSONObject employeeObject2 = new JSONObject();
employeeObject2.put("employee", employeeDetails2);

// Add employees to the list
JSONArray employeeList = new JSONArray();
employeeList.add(employeeObject);
employeeList.add(employeeObject2);
employeeList.toJSONString();

While ObjectScript is a dynamic language, allowing you to set JSON properties as object properties, other languages, like Java, force you to set keys and values inside objects. On the other hand, Java and other languages support JSON Schema using open-source and commercial packages, while ObjectScript does not support JSON Schema at the moment. See the list from json-schema.org (source: https://json-schema.org/implementations.html):

.NET:
- Json.NET Schema: 2019-09, draft-07, -06, -04, -03 (AGPL-3.0-only)
- JsonSchema.Net: 2020-12, 2019-09, draft-07, -06 (MIT)

C++:
- f5-json-schema: draft-07 (Boost Software License 1.0)
- JSON schema validator for JSON for Modern C++: draft-07 (MIT)
- Valijson: draft-07; header-only library, works with many JSON parser implementations (BSD-2-Clause)
- jsoncons: draft-07; header-only library (Boost Software License 1.0)

Java:
- Snow: 2019-09, draft-07, -06; uses Maven for the project and Gson under the hood (GNU Affero General Public License v3.0)
- Vert.x Json Schema: 2019-09, draft-07; includes custom keywords support, custom dialect support, asynchronous validation (Apache License 2.0)
- everit-org/json-schema: draft-07, -06, -04 (Apache License 2.0)
- Justify: draft-07, -06, -04 (Apache License 2.0)
- networknt/json-schema-validator: draft-07, -06, -04; supports OpenAPI 3.0 with the Jackson parser (Apache License 2.0)
- jsonschemafriend: 2019-09, draft-07, -06, -04, -03 (Apache License 2.0)

JavaScript:
- Ajv: 2019-09, 2020-12, draft-07, -06, -04; for Node.js and browsers; supports user-defined keywords and $data reference (MIT)
- djv: draft-06, -04; for Node.js and browsers (MIT)
- Hyperjump JSV: 2019-09, 2020-12, draft-07, -06, -04; built for Node.js and browsers; includes support for custom vocabularies (MIT)
- vue-vuelidate-jsonschema: draft-06 (MIT)
- @cfworker/json-schema: 2019-09, draft-07, -06, -04; built for Cloudflare workers, browsers, and Node.js (MIT)

Python:
- jschon: 2019-09, 2020-12 (MIT)
- jsonschema: 2019-09, 2020-12, draft-07, -06, -04, -03 (MIT)
- fastjsonschema: draft-07, -06, -04; great performance thanks to code generation (BSD-3-Clause)
- jsonschema-rs: draft-07, -06, -04; Python bindings to Rust’s jsonschema crate (MIT)

A sample of how to use JSON Schema to validate JSON content in Python can be found in the jschon documentation: https://jschon.readthedocs.io/en/latest/. Fortunately, IRIS allows you to create packages or frameworks using the programming language of your choice (.NET or Java using PEX or the Language Server). So, it is possible to create an IRIS extension package to handle JSON Schema in IRIS. Another possibility is to use Embedded Python and create a JSON validation method class (in version 2021.2+).

Extending InterSystems IRIS to support JSON Schema using the Java Language Server (Java Gateway)

Among Java frameworks, networknt/json-schema-validator is the one used the most to validate JSON using JSON Schema. To use this Java framework, you can get the application https://openexchange.intersystems.com/package/IRIS-JSON-Schema-Validator. This application has the following files and folders:

1. The folder jgw has the necessary files to create a Java Gateway (a bridge that allows communication between Java and IRIS classes);
2. The iris-json-schema-1.0.0.jar has the Java classes and libraries (including json-schema-validator) to service JSON Schema validations;
3. The JSONSchemaValidator.cls has the ObjectScript code to use the Java class and do JSON validations against the JSON Schema validation rules;
4. The Dockerfile and docker-compose.yml run the Java Gateway and IRIS as Docker instances.

The Java class has a validation method, which uses the json-schema-validator framework to validate the JSON Schema and the JSON, and to return the validation results. See the Java code:

Java class for JSON Schema validation:

package dc.irisjsonschema;

import java.util.Set;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.networknt.schema.JsonSchema;
import com.networknt.schema.JsonSchemaFactory;
import com.networknt.schema.SpecVersion;
import com.networknt.schema.ValidationMessage;

public class JSONSchemaService {

    public String validate(String jsonSchemaContent, String jsonContent) {
        // create an instance of the ObjectMapper class
        ObjectMapper objectMapper = new ObjectMapper();

        // create an instance of the JsonSchemaFactory using the 2019-09 version flag
        JsonSchemaFactory schemaFactory = JsonSchemaFactory.getInstance(SpecVersion.VersionFlag.V201909);

        try {
            // read the JSON content and store it into a JsonNode
            JsonNode json = objectMapper.readTree(jsonContent);

            // get the schema from the schema content and store it into a JsonSchema
            JsonSchema schema = schemaFactory.getSchema(jsonSchemaContent);

            // create a set of validation messages and store the result in it
            Set<ValidationMessage> validationResult = schema.validate(json);

            if (validationResult.isEmpty()) {
                return "{\"message\": \"JSON is valid\", \"valid\":true}";
            } else {
                return "{\"message\": \"" + validationResult.toString() + "\", \"valid\":false}";
            }
        } catch (JsonProcessingException e) {
            return e.getMessage();
        }
    }
}

InterSystems IRIS uses this Java validation method to validate JSON content.
To do that, it is necessary to create a JAR file with the validate class inside (iris-json-schema-1.0.0.jar) and to configure a Java Gateway (the bridge for Java-IRIS communication), allowing ObjectScript to call the Java methods. The ObjectScript code which uses the Java class, the JSONSchemaValidator class, is presented here:

The final ObjectScript class, ready to validate JSON inside IRIS:

/// Validate JSON documents using JSON Schema documents
Class dc.irisjsonschema.JSONSchemaValidator
{

/// Get JSON and a JSON Schema and return the validation results
ClassMethod Validate(JSONSchemaContent As %String, JSONContent As %String, Output ValidationResult As %String) As %Status
{
    Set sc = $$$OK
    // connect a Gateway instance to the JavaGateway server on the host machine
    Set GW = ##class(%Net.Remote.Gateway).%New()
    Set st = GW.%Connect("jgw", "55555", "IRISAPP",,)
    // instantiate the Java class
    Set JSONSchemaValidator = ##class(%Net.Remote.Object).%New(GW,"dc.irisjsonschema.JSONSchemaService")
    // validate the JSON using a JSON Schema
    Set ValidationResult = JSONSchemaValidator.validate(JSONSchemaContent, JSONContent)
    Write ValidationResult
    Return sc
}

}

With this ObjectScript class and the Validate class method, it is possible to use any JSON content and any JSON Schema definition to validate basic and advanced validation rules. To see this, execute these steps:

1. Go to https://openexchange.intersystems.com/package/IRIS-JSON-Schema-Validator
2. Git-clone the repository into any local directory:
$ git clone https://github.com/yurimarx/iris-json-schema.git
3. Open the terminal in this directory and run:
$ docker-compose build
4. Run the IRIS container with your project:
$ docker-compose up
5. Go to the IRIS terminal (open a new VSCode terminal):
docker exec -it iris-json-schema_iris_1 bash
iris session iris
6. Change to the IRISAPP namespace:
set $namespace = "IRISAPP"
7. Get a sample JSON Schema:
set jsonSchema = ##class(dc.irisjsonschema.JSONSchemaValidator).GetJSONSchema()
8. Get a sample valid JSON:
set jsonContent = ##class(dc.irisjsonschema.JSONSchemaValidator).GetValidSampleJSON()
9. Get a validation result equal to valid:
set st = ##class(dc.irisjsonschema.JSONSchemaValidator).Validate(jsonSchema,jsonContent,.result)
write result
10. Now, get a sample INVALID JSON:
set jsonContent = ##class(dc.irisjsonschema.JSONSchemaValidator).GetInvalidSampleJSON()
11. After that, get a validation result equal to INVALID:
set st = ##class(dc.irisjsonschema.JSONSchemaValidator).Validate(jsonSchema,jsonContent,.result)
write result
12. The JSON Schema used to define the validation rules was:

{
  "$schema": "https://json-schema.org/draft/2019-09/schema#",
  "type": "object",
  "title": "Movie art",
  "description": "Information about actors",
  "additionalProperties": true,
  "required": [ "name", "artist", "description", "tags" ],
  "properties": {
    "name": { "type": "string", "description": "Painting name" },
    "artist": { "type": "string", "maxLength": 50, "description": "Name of the artist" },
    "description": { "type": [ "string", "null" ], "description": "Painting description" },
    "tags": { "type": "array", "items": { "$ref": "#/$defs/tag" } }
  },
  "$defs": {
    "tag": { "type": "string", "enum": [ "action", "suspense", "UK", "famous", "license", "kill" ] }
  }
}

This JSON Schema configures the required fields and a tags array with a limited set of values (enum). The types of the JSON fields are defined too.
So, for this schema, the following JSON object is valid:

set obj = {}
set obj.name = "Agent 007"
set obj.artist = "Pierce Brosman"
set obj.description = "007 actor"
set tags = ["license","kill"]
set obj.tags = tags

All properties use the right data type, and the tags use values allowed in the schema for this array. Now, see an invalid JSON object:

set obj = {}
set obj.name = "Agent 007"
set obj.artist = "Pierce Brosman"
set obj.description = 1
set tags = []
set tags."0" = "license"
set tags."1" = "love"
set obj.tags = tags

This object sets an integer as the description property (the valid type is string) and sets the value “love”, which is outside the values allowed for the tags array. The site https://json-schema.org/ has tools, samples, and learning resources about validating and writing advanced validation rules with JSON Schema.

Hi Yuri, Great post! I had the challenge to validate incoming JSON for a REST API for which I had created a decent Swagger 2.0 spec, which follows the Draft4 specification of JSON Schema. I implemented a validator that:

- Fetches the Swagger specification from the spec class
- Uses https://github.com/python-jsonschema/jsonschema in embedded Python code
- Creates a specific validator for the Draft4 specification
- Builds a nice JSON array with the errors found

One thing to specifically note is the need to add "additionalProperties": false if you want to check against unexpected properties. I'll paste the code in the next comment. Let me know if you have feedback or questions!

This is the code that was the result:

/// Perform JSON Schema validation based on the API specification
Class MMLOGGINGPKG.Validations.JsonSchemaValidation
{

/// Validate against the schema
ClassMethod JsonIsValid(logregel As %DynamicObject, regelnummer As %String = "", errors As %DynamicArray, classname As %String) As %Boolean
{
    #define SchemaValidator %JSchemaValidator
    if '$Data($$$SchemaValidator) || '$IsObject($$$SchemaValidator) {
        set specification = {}.%FromJSON(##class(%Dictionary.XDataDefinition).%OpenId("MMLOGGINGPKG.API.v1.spec||OpenAPI").Data)
        // Prevent enum validations, these are too verbose
        do specification.definitions.Event.properties.type.%Remove("enum")
        do specification.definitions.Request.properties.method.%Remove("enum")
        set $$$SchemaValidator = ..GetValidator({
            "$ref": "#/definitions/Logregel",
            "definitions": (specification.definitions)
        }.%ToJSON())
    }
    return ..PythonJsonIsValid(logregel.%ToJSON(), $$$SchemaValidator, regelnummer, errors, classname)
}

/// Create a validator using the Python implementation of jsonschema
ClassMethod GetValidator(schema As %String) As %ObjectHandle [ Language = python ]
{
    import json
    from jsonschema import Draft4Validator
    jschema = json.loads(schema)
    return Draft4Validator(schema=jschema)
}

/// Validate JSON using the Python implementation of jsonschema
ClassMethod PythonJsonIsValid(object As %String, validator As %ObjectHandle, regelnummer As %String, errors As %DynamicArray, classname As %String) As %Boolean [ Language = python ]
{
    import json
    import iris
    from jsonschema import Draft4Validator
    jobject = json.loads(object)
    valid = 1
    for error in validator.iter_errors(instance=jobject):
        try:
            valid = 0
            #; print(error.json_path)
            #; e.g. '$' or $.error.status
            path = error.json_path.replace("$.", "").replace("$", "")
            pattern = "Additional properties are not allowed ('"
            if path.count(".") == 0 and error.message.startswith(pattern):
                try:
                    object = error.path.pop() + "."
                except:
                    object = ""
                    pass
                x = error.message.replace(pattern, "")
                x = x.split("' w")[0]
                for attribute in x.split("', '"):
                    iris.cls(classname).AddError(errors, regelnummer, object + attribute, "Additional properties are not allowed")
            else:
                iris.cls(classname).AddError(errors, regelnummer, path, error.message)
                pass
        except Exception as xxend:
            print(xxend)
            pass
    return valid
}

}

A couple of clarifications:

- Replace "MMLOGGINGPKG.API.v1.spec" with your API spec.
- You will have to tweak the schema as passed to GetValidator() to match what you want to validate.
- The trick with $$$SchemaValidator is a way to create and initialize a singleton instance of the validator. The downside of that, obviously, is that when you change the spec you have to make sure to restart the process in which the singleton resides.
- AddError is a classmethod that looks like:

/// AddError
ClassMethod AddError(fouten As %DynamicArray, regelnummer As %Integer, path As %String, error As %String)
{
    do fouten.%Push({
        "regelnummer": (regelnummer),
        "path": (path),
        "error": (error)
    })
}
Announcement
Evgeny Shvarov · Jan 12, 2022

InterSystems Datasets Contest Bonuses Results

Hi contestants! We've introduced a set of bonuses for the projects in the Datasets Contest! Here are the projects that scored them. The bonus categories are: LOAD DATA, Questionnaire, Unique Real Dataset, Docker, ZPM, Online Demo, Code Quality, First Article on DC, Second Article on DC, and Video on YouTube. Points per project, with the total bonus at the end:

Nominal: 4 3 2 4 2 3 2 1 2 1 3 (Total 27)
Medical Datasets: 3 2 2 3 2 1 2 1 3 (Total 19)
Dataset OEX reviews: 2 2 3 1 2 (Total 10)
Dataset Lightweight M:N: 2 2 3 1 2 (Total 10)
openflights_dataset: 3 2 2 1 2 1 (Total 11)
iris-kaggle-socrata-generator: 2 2 3 1 2 1 3 (Total 14)
dataset-covid19-fake-news: 3 2 2 - 1 (Total 8)
Health Dataset: 4 2 2 3 1 2 1 3 (Total 18)
exchange-rate-cbrf: 4 3 2 2 3 2 1 2 1 (Total 20)
iris-python-faker: 2 2 3 2 1 2 (Total 12)
ApacheLog-Dataset: 4 3 2 2 3 2 1 2 1 3 (Total 23)
dataset-finance: 4 3 2 2 3 2 1 2 1 3 (Total 23)

Please apply with your comments here in the posts or in Discord.

I claim the YouTube bonus (the video is linked with the app already) for the app Health Dataset.

Hi Mr. Evgeny, thanks for sharing the bonuses results. Please note that the online demo is available now for the Medical Datasets application. Regards

I claim the second article bonus: https://community.intersystems.com/post/predict-maternal-health-risks and the bonus for the app repo: https://openexchange.intersystems.com/package/Predict-Maternal-Risk

I claim the online demo bonus; my online app is hosted at http://ymservices.tech:52773/csp/sys/UtilHome.csp

But this is the Management Portal, right? Where is the demo?

Hi Community! We decided to change the results for the Online Demo bonus category. The main reason is that an online server with the dataset hosted on it is not a sufficient demonstration. Please re-read the bonuses descriptions post to find out about the additions to the Usage Demo and Online Demo categories. And read the contest chat in Discord.

I set up an online demo ( https://cbrf.demo.community.intersystems.com/apptoolsrest/a/rate&class=a... ) and mentioned it in the readme: https://github.com/SergeyMi37/exchange-rate-cbrf. First article on DC published: https://community.intersystems.com/post/database-exchange-rates-central-bank-russian-federation

Hi Evgeny! I posted a second article on DC: https://community.intersystems.com/post/exchange-rate-cbrf-ui-project-demonstration-application-exchange-rate-database-contest

Hi Evgeny! I added a LOAD DATA example and mentioned it in the readme.
Announcement
Anastasia Dyubaylo · Jun 7, 2021

Special Sauce: InterSystems IRIS Overview

Hi Community, Please welcome the new video from #VSummit20: ⏯ Special Sauce: InterSystems IRIS Overview See what makes InterSystems IRIS data platform so special, learn about the unique features behind the scenes, and identify what InterSystems IRIS can do for you. Follow #InterSystemsIRIS for more. 🗣 Presenter: @Harry.Tong, Senior Solutions Architect, InterSystems Subscribe to InterSystems Developer YouTube and stay tuned! 👍🏼
Announcement
Anastasia Dyubaylo · Aug 9, 2021

Video: Adaptive Analytics in InterSystems IRIS

Hey Developers, Watch the new video on InterSystems Developers YouTube: ⏯ Adaptive Analytics in InterSystems IRIS Get an overview of InterSystems IRIS Adaptive Analytics, which brings ease-of-use and scalability to analytics end-users. Learn more about the benefits and availability of this new offering, announced in October 2020. 🗣 Presenter: @Carmen.Logue, Product Manager - Analytics and AI, InterSystems Enjoy and stay tuned!