Announcement
Anastasia Dyubaylo · Jun 16, 2021

Webinar: Implementing Your Data Fabric with InterSystems IRIS

Hey Community,

Join the next InterSystems webinar, "Implementing Your Data Fabric with InterSystems IRIS," on June 22 at 11:00 AM EDT to learn:

- How you can access, transform, and harmonize data from multiple sources, on demand, and make it usable and actionable for a wide variety of business applications at the convergence of transactions and analytics.
- Why leading analysts are predicting that data fabrics are the future of data management, and how a modern data platform can speed and simplify these kinds of initiatives.
- How the features and benefits in the new release of InterSystems IRIS® data platform and InterSystems IRIS for Health™ 2021.1 can help your organization.

Speakers:
🗣 @Benjamin.DeBoe, Product Manager, InterSystems
🗣 @Joseph.Lichtenberg, Director of Product & Industry Marketing, InterSystems
🗣 @Carmen.Logue, Product Manager, Analytics & AI, InterSystems

Date: Tuesday, June 22, 2021
Time: 11:00 AM EDT

➡️ Save your seat today!
Announcement
Anastasia Dyubaylo · Jun 15, 2021

Webinar: What's New in InterSystems IRIS 2021.1

Hey Developers,

We're pleased to invite you to the upcoming InterSystems webinar "What's New in InterSystems IRIS 2021.1"!

Date: Thursday, June 17, 2021
Time: 11:00 AM EDT

In this webinar, we'll highlight some of the new capabilities of InterSystems IRIS and InterSystems IRIS for Health, version 2021.1:

- Analytics, introducing a new optional add-on component, InterSystems IRIS® Adaptive Analytics, that enables a common virtual data model and allows business users to explore and analyze data using their BI tool of choice (including Microsoft Excel!)
- Development, including more freedom of choice for developing in your language of choice, both server-side and client-side
- Deployment and operations, including the InterSystems Kubernetes Operator
- Interoperability, including the InterSystems API Manager 1.5
- FHIR, including FHIRPath, FHIR Profiles, and more

There will be time for Q&A at the end.

Speakers:
🗣 @Benjamin.DeBoe, Product Manager, InterSystems
🗣 @Joseph.Lichtenberg, Director of Product & Industry Marketing, InterSystems
🗣 @Carmen.Logue, Product Manager, Analytics & AI, InterSystems

➡️ Register for the webinar today!
Article
José Pereira · Sep 15, 2021

Implementing an IMAP Client in InterSystems IRIS - part II

In the first part we got a quick introduction to the IMAP protocol commands; now it's time to use IRIS to implement them and create our own IMAP client!

IRIS Email Framework

The IRIS platform has default interfaces and classes for working with email. Developers originally designed those artifacts for POP3 implementation. However, this doesn't mean that we can't use and extend these interfaces and classes to implement an IMAP client. So let's talk about them:

- %Net.FetchMailProtocol: This is the base class for email retrieval. The IMAP client extends it.
- %Net.MailMessage: This is the MIME message. It extends %Net.MailMessagePart.
- %Net.MailMessagePart: This encapsulates a MIME message part for multipart messages. This class has an array of itself, enabling a tree representation of message subparts.
- %Net.MIMEReader: This utility class has methods to parse a message's MIME content, generating a %Net.MIMEPart instance.
- %Net.MIMEPart: This encapsulates the message's MIME parts and provides methods to get information about them.

Implementing an IMAP Client

In this section, we present implementation details about an IMAP client, an inbound interoperability adapter, and a simple production example. Note that, to save space, we won't show most implementation methods; instead, we link to each one's full implementation details. You can find the complete source code on GitHub.

Creating a Basic IMAP Client

As we discussed before, IMAP is a plain text-based protocol over TCP. This means the base code to implement a client for such a protocol is a TCP client. The IRIS platform provides standard ObjectScript commands to perform I/O operations: OPEN, USE, READ, WRITE, and CLOSE.
Here is a simple example of how to connect to the MS Outlook server, log in, then log out:

```objectscript
ClassMethod SimpleTest()
{
    // connection configuration
    SET dev = "|TCP|993"
    SET host = "outlook.office365.com"
    SET port = "993"
    SET mode = "C"
    SET sslConfig = "ISC.FeatureTracker.SSL.Config"
    SET timeout = 30

    // connect to the MS Outlook IMAP server
    OPEN dev:(host:port:mode:/TLS=sslConfig):timeout
    THROW:('$TEST) ##class(%Exception.General).%New("Sorry, can't connect...")
    USE dev
    READ resp($INCREMENT(resp)):timeout
    WRITE "TAG1 LOGIN user@outlook.com password", !
    READ resp($INCREMENT(resp)):timeout
    WRITE "TAG2 LOGOUT", !
    READ resp($INCREMENT(resp)):timeout
    CLOSE dev

    // come back to the default device (terminal) and print the responses
    USE 0
    ZWRITE resp
}
```

This is its output:

```
USER>d ##class(dc.Demo.Test).SimpleTest()
resp=3
resp(1)="* OK The Microsoft Exchange IMAP4 service is ready. [QwBQ..AA==]"_$c(13,10)
resp(2)="TAG1 OK LOGIN completed."_$c(13,10)
resp(3)="* BYE Microsoft Exchange Server IMAP4 server signing off."_$c(13,10)_"TAG2 OK LOGOUT completed."_$c(13,10)
```

There are some highlights in this code:

- We set the mode variable to C, which is carriage return mode. This setting is mandatory for IMAP.
- The /TLS flag establishes a secure layer of communication (SSL). We must set this flag to a valid IRIS SSL configuration.
- The OPEN command initiates the connection. The special boolean variable $TEST returns 1 when a command with a timeout is successful, or 0 if the timeout expires. In this example, if the OPEN command exceeds 30 seconds, the code throws an exception.
- After a connection is established successfully, the USE command owns the TCP device, redirecting all READ and WRITE commands to this device.
- The WRITE command issues commands to the IMAP server, and the READ command gets their output.
- To finish the connection, we must use the CLOSE command.
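Because a single read may return only part of a server reply, an IMAP client typically keeps reading until it sees the line that carries the command's tag. Here is a minimal Python sketch of that loop; the helper name and the sample response lines are hypothetical illustrations, not the article's ObjectScript code:

```python
def read_until_tag(lines, tag):
    """Collect response lines until the line carrying the command tag arrives.

    `lines` is any iterable yielding decoded response lines; in a real client,
    each item would come from another READ/recv on the TCP device.
    """
    collected = []
    for line in lines:
        collected.append(line)
        if line.startswith(tag + " "):  # the tagged line ends the response
            break
    return collected

# Example: a SELECT response arriving across several reads
resp = read_until_tag(
    ["* 3 EXISTS", "* 0 RECENT", "TAG3 OK SELECT completed."],
    "TAG3",
)
```

The same tag check tells the client when it must keep reading: any response that has not yet produced the tagged line is incomplete.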
After owning the device with the USE dev command, all READ and WRITE commands execute on the device specified in the dev variable. To come back to the terminal and write to it again, you need to issue a USE 0 command first.

Each READ command has a limited buffer to store the server response. When the response size exceeds this limit, you need to issue another READ command to read the rest of the response. It's possible to increase the buffer size, but a better approach is to be ready to deal with such a situation. As we discussed before, IMAP requires a tag for each command. This tag is helpful for checking whether the code has retrieved the complete response or needs to issue another READ command. In this case, we implement the ReadResponse method to ensure the code reads the whole message.

Implementing the %Net.FetchMailProtocol Interface for IMAP

The %Net.FetchMailProtocol abstract class abstracts email retrieval on the IRIS platform. We implement the following methods:

- Connect: This establishes a connection to the IMAP server and logs in a user.
- GetMailBoxStatus: This gets the size of the mailbox and how many messages are in it.
- GetSizeOfMessages: This gets the size of one or all messages identified by a message number.
- GetMessageUIDArray: This gets an array with one or all message UIDs in the inbox.
- GetMessageUID: This gets the UID corresponding to a message number.
- Fetch: This retrieves a message's content, possibly multipart content, identified by a message number. It returns the message content encapsulated in a %Net.MailMessage object.
- FetchFromStream: This is the same as Fetch, but gets content from an EML message encapsulated in a %BinaryStream object, instead of calling the IMAP server.
- FetchMessage: This is the same as Fetch, but returns specific message headers in ByRef variables.
- FetchMessageInfo: This retrieves only message headers and the text of the message.
- DeleteMessage: This adds a message to the deletion array.
- RollbackDeletes: This cleans up the deletion array.
- QuitAndCommit: This deletes all messages in the deletion array and disconnects from the IMAP server.
- QuitAndRollback: This cleans up the deletion array and disconnects from the IMAP server.
- Ping: This pings the IMAP server to keep the session alive.

First, we create a new class to implement the interface: dc.Demo.IMAP. This class inherits several properties, which we must set to establish a connection to the IMAP server.

We also create a helper class, dc.Demo.IMAPHelper. This class contains parsing methods for IMAP responses, gets all parts of a multipart message, and provides peripheral features, including a method to send commands and ensure the entire response is read.

The first method we implement is Connect. This method establishes a connection to the IMAP server using the configuration encapsulated in the class properties, and it issues a login as well. It uses the IRIS platform's OPEN command to establish the connection and the IMAP LOGIN command to authenticate to the server.

The next method we implement is GetMailBoxStatus. This method uses the SELECT command to select a mailbox, which also brings back additional information, like how many messages the mailbox contains. IMAP doesn't have a ready-to-use command to get the total size of all messages. It's possible to iterate through all messages and sum their sizes, but this strategy would probably cause performance issues, so in this implementation we don't retrieve the size of all messages.

The next method is GetSizeOfMessages. This method gets the size of one or more messages in the inbox. When no message number is defined, this method throws an exception due to the same IMAP limitation we explained for GetMailBoxStatus. We use the IMAP command FETCH <message_number> (RFC822.SIZE) to retrieve a message's size by its number.
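For reference, the reply to FETCH <message_number> (RFC822.SIZE) is an untagged line whose two integer fields carry the message number and the size. A small Python sketch of parsing it, assuming the typical reply shape; the helper name and sample line are made up for illustration:

```python
import re

def parse_rfc822_size(fetch_line):
    """Extract (message_number, size_in_bytes) from an untagged FETCH
    size reply such as b'* 7 FETCH (RFC822.SIZE 2543)'."""
    m = re.match(rb"\* (\d+) FETCH \(RFC822\.SIZE (\d+)\)", fetch_line)
    if m is None:
        raise ValueError("unexpected FETCH response: %r" % fetch_line)
    return int(m.group(1)), int(m.group(2))

num, size = parse_rfc822_size(b"* 7 FETCH (RFC822.SIZE 2543)")
```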
The GetMessageUIDArray method comes next. It uses the IMAP commands SELECT and UID SEARCH [ALL | <message_number>] and parses the response to get the UID array.

The next method is GetMessageUID. This method gets the UID for a given message number and uses the same logic as GetMessageUIDArray.

Following this is the Fetch method. It uses the IMAP commands SELECT and FETCH <message_number> BODY to retrieve message content, which is encoded in MIME format. Fortunately, the IRIS platform has a reader for MIME content, the %Net.MIMEReader class. This class takes the message as a stream and returns the parsed message in a %Net.MIMEPart object. After getting the MIME content, the method creates a %Net.MailMessage object, fills it with data from the %Net.MIMEPart object, and returns it. The MIME content encapsulated in the %Net.MIMEPart object is mapped into a %Net.MailMessagePart object by the GetMailMessageParts method in the dc.Demo.IMAPHelper class.

The next method is FetchFromStream. This method receives a stream object containing an EML message and converts it to a %Net.MailMessage object; it does not retrieve content from the server.

Then come the FetchMessage and FetchMessageInfo methods, which are special cases of the Fetch method.

The DeleteMessage method marks a message for deletion, whereas the RollbackDeletes method just cleans up the array of messages marked for deletion.

Next is the QuitAndCommit method. It disconnects from the IMAP server and calls the CommitMarkedAsDeleted method to delete the marked messages. The QuitAndRollback method just disconnects from the IMAP server and cleans up the array of messages marked for deletion.

The last method, Ping, issues a NOOP command to keep the IMAP session alive.

Implementing an Inbound Interoperability Adapter for IMAP

The base class for inbound email interoperability in the IRIS platform is EnsLib.EMail.InboundAdapter.
This inbound adapter requires these configurations:

- The email server host address
- The email server port
- A credential ID, which stores the username and password for accessing the server
- An SSL configuration

We extended this class to create a new IMAP inbound adapter class: dc.Demo.IMAPInboundAdapter. To use this new adapter, we set which mailbox to use in the Mailbox production parameter. Its default value is INBOX.

The implementation is simple: it just overrides the MailServer property and sets its type to the dc.Demo.POP3ToIMAPAdapter IMAP client. This adapter maps the POP3 flow to the IMAP one, as the base adapter class was designed around POP3 commands. Thus, this POP3-to-IMAP adapter enables us to perform all the original inbound adapter logic using IMAP commands instead of POP3 commands.

In the dc.Demo.POP3ToIMAPAdapter class, we use the IMAP client IMAPClient of type dc.Demo.IMAP as a proxy for server communication. However, as dc.Demo.POP3ToIMAPAdapter extends %Net.POP3, it must override all abstract methods in %Net.FetchMailProtocol. We also had to implement new methods that the %Net.POP3 client had implemented directly: ConnectPort and FetchMessageHeaders. In the same way, we created ConnectedGet and SSLConfigurationSet methods to get and set properties that %Net.POP3 also implemented directly.

Setting up a Simple Production

To make all these classes work together, we set up a simple production. Check out Creating a Production for more information about IRIS Interoperability productions. This production includes a business service and a business operation, and it uses the IMAP inbound adapter to check for new messages. This code was inspired by the Demo.Loan.FindRateProduction interoperability sample.
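Stepping back to the adapter idea: dc.Demo.POP3ToIMAPAdapter essentially presents a POP3-shaped surface while delegating to IMAP operations underneath. The following toy Python sketch shows that same adapter pattern with entirely hypothetical class and method names; the real mapping lives in the ObjectScript classes described above:

```python
class Pop3StyleMailbox:
    """Toy adapter: exposes POP3-style calls (stat/dele/quit) on top of an
    IMAP-style client, mirroring the role of dc.Demo.POP3ToIMAPAdapter.
    All names here are hypothetical illustrations."""

    def __init__(self, imap_client):
        self._imap = imap_client

    def stat(self):
        # POP3 STAT (message count) maps to IMAP SELECT on the inbox
        return self._imap.select("INBOX")

    def dele(self, msg_number):
        # POP3 DELE maps to flagging the message \Deleted
        self._imap.store(msg_number, "+FLAGS", r"\Deleted")

    def quit(self):
        # POP3 QUIT maps to EXPUNGE (commit deletions) plus LOGOUT
        self._imap.expunge()
        self._imap.logout()


class FakeImap:
    """In-memory stub standing in for a real IMAP client."""

    def __init__(self):
        self.calls = []

    def select(self, mailbox):
        self.calls.append(("select", mailbox))
        return 3  # pretend the mailbox holds three messages

    def store(self, msg_number, op, flags):
        self.calls.append(("store", msg_number, op, flags))

    def expunge(self):
        self.calls.append(("expunge",))

    def logout(self):
        self.calls.append(("logout",))


imap = FakeImap()
box = Pop3StyleMailbox(imap)
count = box.stat()
box.dele(2)
box.quit()
```

The caller keeps using the POP3-shaped interface it already understands, which is exactly why the base inbound adapter's logic keeps working unchanged.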
In short, this production:

- Uses the GetMessageUIDArray method to get all available messages in the configured mailbox
- Loops over them, tracing their output, fetched by the Fetch method
- Checks whether each message subject matches a criterion (starting with "[IMAP test]")
- Responds to the sender if the message subject matches the criterion, and otherwise ignores the message
- Deletes all of the messages so that it won't analyze them again

In this example, we configure an IMAP server from Yahoo Mail, imap.mail.yahoo.com, on port 993. We also use the default IRIS SSL configuration "ISC.FeatureTracker.SSL.Config". Next, we configure a credential called imap-test containing a username and password.

As the image below shows, the production starts and keeps querying the IMAP server for new messages. When there are new messages, the inbound adapter grabs their information, like the header and subject, and lets the production take further action based on this information. In this example, the production checks whether the message subject starts with "[IMAP test]" and sends a message back to the sender. When a message doesn't match the criterion, the production just ignores it.

Conclusion

In this article, we discussed an IMAP client implementation. First, we explored some essential background on IMAP and its main commands. Then, we detailed the implementation, covering the client itself and how to connect it to the IRIS platform. We also presented an extension to the default interoperability adapter to use IMAP, and a simple production example.

Now that you know more about IMAP and its settings, and you know how to connect it to IRIS, you can set up email capabilities in your applications. To learn more about the IMAP topics we discussed here, explore the resources below.

Resources

- Atmail's IMAP 101: Manual IMAP Sessions
- Fastmail's Why is IMAP better than POP?
- IETF's Internet Message Access Protocol
- IETF's Multipurpose Internet Mail Extensions (MIME) Part One: Format of Internet Message Bodies
- InterSystems' I/O Devices and Commands
- InterSystems' Using the Email Inbound Adapter
- Nylas' Everything you need to know about IMAP
Announcement
Anastasia Dyubaylo · Sep 17, 2021

Video: Getting Up to Speed on InterSystems API Manager

Hi Community,

Enjoy watching the new video on InterSystems Developers YouTube:

⏯ Getting Up to Speed on InterSystems API Manager

Get a walkthrough of the newest features of InterSystems API Manager (IAM), which offers full API management capabilities for the InterSystems IRIS data platform, and get a brief introduction to the basic concepts of API management.

🗣 Presenter: @Stefan.Wittmann, Product Manager, InterSystems

Enjoy and stay tuned!
Announcement
Anastasia Dyubaylo · Dec 21, 2021

InterSystems DataSets Contest Kick-off Webinar

Hi Community,

We are pleased to invite all developers to the upcoming InterSystems Datasets Contest Kick-off Webinar! This webinar is dedicated to the Datasets contest. In this webinar, we'll take a quick tour of the new LOAD DATA feature, chime in on packaging global data or file data with ZPM, and run a data-generation script as part of a method in the zpm install. As always, our experts will answer questions on how to develop, build, and deploy datasets using InterSystems IRIS.

Date & Time: Tuesday, December 28 – 10:00 AM EDT

Speakers:
🗣 @Benjamin.DeBoe, Product Manager, InterSystems
🗣 @Evgeny.Shvarov, InterSystems Developer Ecosystem Manager

So! We will be happy to talk to you at our webinar in Zoom!

✅ JOIN THE KICK-OFF WEBINAR!

Hey everyone, the kick-off will start in 10 minutes! Please join us here: https://us02web.zoom.us/j/9822194974?pwd=bnZBdFhCckZ6c0xOcW5GT1lLdnAvUT09

Or enjoy watching the stream on YouTube: https://youtu.be/IEIGHit-9O8

Hi @Anastasia.Dyubaylo, is the kick-off webinar recording available? Thanks

Hey Developers, the recording of this webinar is available on InterSystems Developers YouTube! Please welcome:

⏯ InterSystems DataSets Contest Kick-off Webinar

Big applause to our speakers! 👏🏼
Article
Yuri Marx · Oct 29, 2021

Enterprise Architecture views with InterSystems IRIS and Zachman Framework

The Zachman Framework™ is an ontology - a theory of the existence of a structured set of essential components of an object for which explicit expressions is necessary and perhaps even mandatory for creating, operating, and changing the object (the object being an Enterprise, a department, a value chain, a "sliver," a solution, a project, an airplane, a building, a product, a profession or whatever or whatever). Source: https://www.zachman.com/about-the-zachman-framework.

In this article, I use the Zachman Framework to detail how you can use InterSystems IRIS to promote your enterprise architecture project.

InterSystems IRIS can represent things important to the business using persistent classes with encapsulated, reusable business logic that can be consumed via REST, Language Gateways (Python, Java, .NET, and Node.js), or IRIS productions (BPL, Business Services, and Business Operations). The analytics dashboards created with IRIS Analytics (DeepSee), IRIS Reports, and IRIS Adaptive Analytics (AtScale) materialize important business things as dashboards for business staff. Finally, InterSystems IRIS Interoperability productions automate things important to the business with BPL, DTL, Business Rules, and Business Services and Operations, in business components consumed via REST, HTTP, and other popular formats and protocols.

InterSystems IRIS automates business processes, including human tasks, using InterSystems IRIS productions with BPL. The business process indicators can be monitored using the InterSystems analytics capabilities (DeepSee, IRIS Reports, and IRIS Adaptive Analytics).

InterSystems IRIS supports the main languages (English, Portuguese, Spanish, Japanese, and others) to operate the business globally. The database, components, and analytics artifacts can be deployed in a distributed network or in the cloud.
InterSystems IRIS supports the definition of roles, people, and resources with an integrated security model in the API Gateway (APIM) and in the database, interoperability, and component layers, using OAuth, JWT, LDAP, RBAC, and other models. In Analytics, users can collaborate and share business artifacts, creating corporate insights.

InterSystems IRIS supports international time zones and can operate on data and applications as real-time or batch-scheduled events, with synchronous or asynchronous requests and responses, using the most popular protocols (Kafka, MQTT, REST, HTTP, SMTP, and others). The data in these events can be monitored and analyzed with the IRIS analytics options (DeepSee, SAM, Adaptive Analytics, and IRIS Reports).

The IRIS analytics options (DeepSee, IRIS Reports, Adaptive Analytics) allow you to create KPIs to analyze the progress of business goals and strategies. IRIS API Management can map business goals and strategies to corporate REST services (digital business assets) and promote their reuse with InterSystems Interoperability productions (creating compositions that realize the strategy).

The IRIS multimodel database is prepared to support relational, class-oriented, analytics, and document data models. IRIS Interoperability productions can use BPL to model and execute business processes and can compose machine learning, Java, .NET, Python, and ObjectScript components and other corporate digital assets consumed using adapters. The IRIS deployment model supports creating business microservices as Docker services distributed into business services or technology services (ESB as a service, analytics as a service, database as a service) in private and public clouds.

The InterSystems IRIS interoperability BPL can model and execute workflows with automated and human tasks.
With InterSystems IRIS you can schedule the deployment of reports (InterSystems Reports), dashboard output (DeepSee), and component/orchestration logic using the InterSystems BPL and Business Rules engines. IRIS supports producing and consuming events and messages too, including MQTT (IoT), Kafka, and JMS.

The business plan can be realized using the IRIS data platform in all layers:

1. Data: planning to get, process, and analyze data can be done with IRIS Interoperability BPL and adapters, with data storage in a multimodel database (SQL, Document/JSON, OLAP).
2. Application: create services and microservices using Java, .NET, Python, and ObjectScript as REST services or interoperability services that realize the planned business requirements.
3. Technology: business continuity is possible because IRIS supports HA and distributed computing in data (shards), in applications (Docker services), and in analytics (Docker analytics services).

InterSystems IRIS creates your ORM model automatically (each class is a table, and properties can be relations between classes), so your logical model is translated to the physical model as SQL and classes at the same time.

The application architecture in InterSystems IRIS can be monolithic, service-based, or microservice-based, because IRIS supports host, Docker, and Kubernetes deployments. The IRIS application architecture is open to the main languages (Java, .NET, Python, ObjectScript, and Node.js/JavaScript). Finally, the IRIS application architecture supports analytics, data, and language services and microservices implementations. IRIS supports AWS, Azure, Google Cloud, and other public and private distributed architectures, and it supports sharding for distributed data repositories too. For applications, IRIS can serve data responses to Angular, Vue, React, React Native, Flutter, and other popular UI options, including an NPM package that allows controller classes to interact with IRIS components using Node.js/JavaScript.
In the analytics area, IRIS delivers analytics insights to Power BI, Tableau, and Excel (Adaptive Analytics). Processing in IRIS can be asynchronous or synchronous, and all processing can be orchestrated using APIM or the IRIS ESB/BPL. Dashboards and reports can be processed in the User Portal or embedded in applications.

InterSystems IRIS has a business rules engine integrated with the BPL workflow, allowing you to use business rules within your business processes or data/service workflows. For AI business rules, it's possible to compose Python and R machine learning components inside the business process (BPL), or you can use IntegratedML to train and execute AI rules using SQL sentences.

The physical data model in IRIS can be monolithic or distributed (shards), and the data is multimodel (SQL, NoSQL (JSON), OLAP, and virtual cubes (AtScale)). InterSystems IRIS allows you to design analytical, data, interoperability, and open-language systems in monolithic or microservice deployments, using private or public clouds.

With IRIS the technology architecture is end-to-end, including:

1. API management with InterSystems API Management;
2. ESB, integration adapters, and workflow with InterSystems Interoperability (Ensemble);
3. Business services and microservices using the most popular languages;
4. Analytics with IRIS Reports, IRIS Adaptive Analytics, and DeepSee;
5. Advanced analytics and data science with IntegratedML and Python/R gateways;
6. Deployment into VMs, Docker, Kubernetes, or hosts.

IRIS delivers responses and processes requests using the REST/API Gateway or the Node.js NPM package. For analytical visualizations, IRIS delivers MDX or SQL data to Power BI, Tableau, and others.

InterSystems IRIS allows you to control:

1. APIs with InterSystems API Management;
2. Services and microservices with InterSystems API Management and InterSystems Interoperability productions (BPL);
3. Services and microservices with Language Gateways (for Java, Python, .NET, and JavaScript/Node.js);
4. Data with the InterSystems IRIS multimodel database (SQL; NoSQL - JSON - DocDB; OLAP - DeepSee);
5. Analytics with IRIS Analytics and Adaptive Analytics (AtScale);
6. Cognitive computing with IRIS IntegratedML and Python/R language support.

InterSystems IRIS also has a rules engine inside InterSystems Interoperability.
Article
José Pereira · Dec 27, 2021

Using Python to Implement an IMAP Client in InterSystems IRIS

In the previous articles, we learned the basics of using the IMAP protocol to handle messages from mailboxes on an email server. That was cool and interesting, but you could also take advantage of implementations created by others, available in ready-to-use libraries. One of the improvements to the IRIS data platform is the ability to write Python code alongside ObjectScript in the same IRIS process. This new feature is called [Embedded Python](https://community.intersystems.com/post/start-learning-about-embedded-python). Embedded Python lets us bring the power of the huge [Python ecosystem's libraries](https://pypi.org/) to our ObjectScript code. In this article, we'll use one of those libraries, called [imaplib](https://docs.python.org/3/library/imaplib.html), to implement an IMAP client and integrate it with the [IRIS Email Framework](https://community.intersystems.com/post/implementing-imap-client-intersystems-iris-part-ii). We'll also review a practical example of how to use Embedded Python to resolve real-world challenges on the IRIS platform with the help of the Python ecosystem. You can find all the code implemented here in this GitHub [repository](https://github.com/jrpereirajr/iris-imap-inbound-adapter-demo), in the [python directory](https://github.com/jrpereirajr/iris-imap-inbound-adapter-demo/tree/main/src/dc/demo/imap/python). Note that Embedded Python only works in recent IRIS versions; in this example, the version used was 2021.1.0.215.3-zpm. You can follow updates about Embedded Python [here](https://community.intersystems.com/tags/python).

## Using Embedded Python

The key to using Embedded Python is the class %SYS.Python.
By using this class, we can:

* Import Python libraries: `##class(%SYS.Python).Import("package-name")`
* Import custom Python modules (*.py files) available on the local system: `##class(%SYS.Python).Import("module-file.py")`
* Get some Python built-in objects to be used in assignments or as parameters, for instance:
  * Python None object: `##class(%SYS.Python).None()`
  * Python True object: `##class(%SYS.Python).True()`
  * Python False object: `##class(%SYS.Python).False()`
* Convert ObjectScript strings to Python bytes objects (8-bit strings): `##class(%SYS.Python).Bytes("ObjectScript string")`

These methods create Python objects and return an ObjectScript object. We can use the Python object's properties and methods directly in our ObjectScript code. For instance, let's see how we can implement this Python [recipe](https://docs.python.org/3/library/secrets.html#recipes-and-best-practices) for using the secrets library to generate passwords:

```
USER>Set string = ##class(%SYS.Python).Import("string")
USER>Set secrets = ##class(%SYS.Python).Import("secrets")
USER>ZWrite secrets // let's check what this object is...
secrets=1@%SYS.Python ; ;
USER>ZWrite string // same for this one...
string=2@%SYS.Python ; ;
USER>Set alphabet = string."ascii_letters" _ string.digits // here we are accessing Python properties of the string object
USER>Set pwd = ""
USER>For i=1:1:8 { Set pwd = pwd _ secrets.choice(alphabet) }
USER>Write pwd
Qv7HuOPV
```

In this code, we use several properties and methods from Python objects to set ObjectScript variables, and we use ObjectScript variables as parameters for Python object methods.

Another key point to using Embedded Python is special attributes and methods, sometimes called magic methods. Because everything in the [Python data model](https://docs.python.org/3/reference/datamodel.html) is an object, these attributes and methods provide the interface used by the Python interpreter itself.
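In plain Python, these special methods are exactly what the familiar syntax dispatches to, which is a quick way to see why calling them explicitly from ObjectScript works. An illustrative snippet, not taken from the article's code:

```python
# The [] and len() syntax in Python dispatch to __getitem__ and __len__;
# calling the special methods explicitly gives the same results, which is
# what the ObjectScript examples below do.
lst = [1, 2, 3]

first = lst.__getitem__(0)   # same as lst[0]
last = lst.__getitem__(2)    # same as lst[2]
length = lst.__len__()       # same as len(lst)
```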
For example, here's how we retrieve an item from a list by its index, using the [\_\_getitem\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_getitem\_\_) special method:

```
USER>Set b = ##class(%SYS.Python).Import("builtins")
USER>Set list = b.list() // creates a Python list
USER>Do list.append(1)
USER>Do list.append(2)
USER>Do list.append(3)
USER>ZWrite list
list=4@%SYS.Python ; [1, 2, 3] ;
USER>Write list."__getitem__"(0) // in Python, indexes are 0-based
1
USER>Write list."__getitem__"(2)
3
```

In the same way, we can get the length of the list by using the \_\_len\_\_ special method:

```
USER>Set listLen = list."__len__"()
USER>ZWrite listLen
listLen=3
```

We can combine them to iterate over the list using ObjectScript:

```
USER>For i=0:1:(listLen - 1) { Write list."__getitem__"(i), ! }
1
2
3
```

If we need to use constant values like None, True, or False, we can use the following methods from the %SYS.Python class:

```
USER>Set none = ##class(%SYS.Python).None()
USER>Set true = ##class(%SYS.Python).True()
USER>Set false = ##class(%SYS.Python).False()
USER>ZWrite none, true, false
none=5@%SYS.Python ; None ;
true=6@%SYS.Python ; True ;
false=7@%SYS.Python ; False ;
```

Similarly, we can convert an ObjectScript string to a Python bytes object:

```
USER>Set bytes = ##class(%SYS.Python).Bytes("This is a string")
USER>ZWrite bytes
bytes=8@%SYS.Python ; b'This is a string' ;
```

Finally, we can define our custom Python modules and import them into the ObjectScript context. You can find more useful resources on how to use Embedded Python [here](https://community.intersystems.com/tags/python). For instance, check out this nice [example](https://community.intersystems.com/post/websocket-client-embedded-python) by [Robert Cemper](https://community.intersystems.com/user/robert-cemper-0).

## Writing an Alternative IMAP Client

To use imaplib to implement our IMAP client, we start from the regular [ObjectScript implementation](https://community.intersystems.com/post/implementing-imap-client-intersystems-iris-part-i).
We override its methods with the imaplib methods instead of implementing the IMAP protocol from scratch.

First, we create a new class named dc.demo.imap.python.IMAPPy. This class uses two properties to store references to Python objects:

```
Class dc.demo.imap.python.IMAPPy Extends dc.demo.imap.IMAP
{

/// Stores the imaplib object reference
Property imaplib As %SYS.Python;

/// Stores the imaplib client instance
Property client As %SYS.Python;
...
```

Next, we import the imaplib library into the ObjectScript context, in the class constructor:

```
Method %OnNew() As %Status [ Private ]
{
    Set ..imaplib = ##class(%SYS.Python).Import("imaplib")
    Return $$$OK
}
```

Now, we can access all imaplib properties and methods through the imaplib class property.

The first method we override is the Connect method. This method uses the imaplib IMAP4_SSL method to make a connection to the IMAP server and stores the imaplib client instance in the client property. The login method of the imaplib client authenticates the login request, as follows:

```
Method Connect(pServer As %String, pUserName As %String, pPassword As %String) As %Status
{
    If ..Connected Return $$$ERROR($$$ConnectedError)
    Set sc = $$$OK
    Try {
        Set ..Server = pServer
        Set ..UserName = pUserName
        Set ..client = ..imaplib."IMAP4_SSL"(..Server)
        Set resp = ..client.login(..UserName, pPassword)
        Set ..Connected = 1
    } Catch ex {
        Set sc = ex.AsStatus()
    }
    Return sc
}
```

The next method we override is the Disconnect method. This method now calls the logout method of the imaplib client:

```
Method Disconnect() As %Status
{
    Set sc = $$$OK
    Try {
        If ..Connected {
            Set tuple = ..client.logout()
            Set ..Connected = 0
        }
    } Catch ex {
        Set sc = ex.AsStatus()
    }
    Return sc
}
```

The GetMailBoxStatus method was overridden to use the select method from imaplib to specify which mailbox to access.
```
Method GetMailBoxStatus(ByRef NumberOfMessages As %Integer, ByRef NumberOfBytes As %Integer) As %Status
{
    Set sc = $$$OK
    Try {
        Do ..CheckConnection()
        Set resp = ..client.select(..MailboxName)
        Set ackToken = resp."__getitem__"(0)
        Set dataArray = resp."__getitem__"(1)
        Set NumberOfMessages = dataArray."__getitem__"(0)
        Set NumberOfBytes = -1
    } Catch ex {
        Set sc = ex.AsStatus()
    }
    Return sc
}
```

Note that the select method returns a tuple, so we use the special method \_\_getitem\_\_ to retrieve information from it. Also, remember that a tuple can store another tuple, so we can apply \_\_getitem\_\_ recursively.

The next method we override is GetSizeOfMessages. It now uses the select method to choose the current mailbox and the fetch method to get the size of the message identified by the MessageNumber parameter:

```
Method GetSizeOfMessages(MessageNumber As %String = "", ByRef ListOfSizes As %ArrayOfDataTypes) As %Status
{
    Set sc = $$$OK
    Try {
        Do ..CheckConnection()

        // select the mailbox
        Set resp = ..client.select(..MailboxName)

        // hack to ensure that MessageNumber is of type %String
        Set MessageNumber = MessageNumber_""

        Set resp = ..client.fetch(MessageNumber, "(RFC822.SIZE)")
        Set ackToken = resp."__getitem__"(0)
        Set dataArray = resp."__getitem__"(1)

        Set:('$ISOBJECT($Get(ListOfSizes))) ListOfSizes = ##class(%ArrayOfDataTypes).%New()
        Set data = dataArray."__getitem__"(0)
        Set msgIdx = +$Piece(data, " ", 1)
        Set size = +$Piece(data, " ", 3)
        Do ListOfSizes.SetAt(size, msgIdx)
    } Catch ex {
        Set sc = ex.AsStatus()
    }
    Return sc
}
```

We override the GetMessageUIDArray method in the same way to use the fetch method, but this time to get the UID codes:

```
Method GetMessageUIDArray(MessageNumber As %String = "", ByRef ListOfUniqueIDs As %ArrayOfDataTypes) As %Status
{
    Set sc = $$$OK
    Try {
        Do ..CheckConnection()

        // select the mailbox
        Set resp = ..client.select(..MailboxName)
        Set mailboxSize = resp."__getitem__"(1)."__getitem__"(0)

        If (mailboxSize > 0) {
            // hack to ensure that MessageNumber is of type %String
            Set MessageNumber = MessageNumber_""

            // then get the mailbox UIDs
            Set param = $CASE(MessageNumber, "":"1:*", :MessageNumber)
            Set resp = ..client.fetch(param, "UID")
            Set ackToken = resp."__getitem__"(0)
            Set dataArray = resp."__getitem__"(1)
            Set len = dataArray."__len__"()
        } Else {
            Set len = 0
        }

        Set:('$ISOBJECT($Get(ListOfUniqueIDs))) ListOfUniqueIDs = ##class(%ArrayOfDataTypes).%New(len)
        For i = 1:1:len {
            Set data = dataArray."__getitem__"(i - 1)
            Set msgIdx = +$Piece(data, " ", 1)
            Set size = +$Piece(data, " ", 3)
            Do ListOfUniqueIDs.SetAt(size, msgIdx)
        }
    } Catch ex {
        Set sc = ex.AsStatus()
    }
    Return sc
}
```

Note the use of the \_\_getitem\_\_ and \_\_len\_\_ methods to iterate over the tuples in the dataArray variable:

```
…
Set len = dataArray."__len__"()
…
For i = 1:1:len {
    Set data = dataArray."__getitem__"(i - 1)
    Set msgIdx = +$Piece(data, " ", 1)
    Set size = +$Piece(data, " ", 3)
    Do ListOfUniqueIDs.SetAt(size, msgIdx)
}
```

Next, we override the Fetch method, which retrieves the whole message body:

```
Method Fetch(MessageNumber As %Integer, ByRef Msg As %Net.MailMessage, Delete As %Boolean, messageStream As %BinaryStream) As %Status
{
    Set sc = $$$OK
    Try {
        Do ..CheckConnection()

        // select the mailbox
        Set resp = ..client.select(..MailboxName)

        // hack to ensure that MessageNumber is of type %String
        Set MessageNumber = MessageNumber_""

        // get the whole message
        Set resp = ..client.fetch(MessageNumber, "BODY.PEEK[]")
        Set rawMsg = ..TransversePythonArray(resp."__getitem__"(1))
        ...
    } Catch ex {
        Set sc = ex.AsStatus()
    }
    Return sc
}
```

Note the call to the method TransversePythonArray. Because the message body returned by the fetch method is a composition of nested collections, we created this method to recursively traverse the collection and flatten it into a single string.
```
ClassMethod TransversePythonArray(pArray As %SYS.Python) As %String
{
    Set acc = ""
    If ($IsObject(pArray)) {
        Set len = pArray."__len__"()
        For i = 1:1:len {
            Set item = pArray."__getitem__"(i - 1)
            If ($IsObject(item)) {
                Set acc = acc_..TransversePythonArray(item)
            } Else {
                Set acc = acc_item
            }
            Set acc = acc_$Char(13, 10)
        }
    } Else {
        Set acc = pArray_$Char(13, 10)
    }
    Return acc
}
```

We also override the Ping method to use the imaplib noop method:

```
Method Ping() As %Status
{
    Set sc = $$$OK
    Try {
        Do ..CheckConnection()
        Set resp = ..client.noop()
    } Catch ex {
        Set sc = ex.AsStatus()
    }
    Return sc
}
```

The last method we override is CommitMarkedAsDeleted. It now uses the store and expunge methods to mark messages for deletion and to commit those deletions:

```
Method CommitMarkedAsDeleted() As %Status [ Internal, Private ]
{
    Set sc = $$$OK
    Try {
        Do ..CheckConnection()

        // select the mailbox
        Set resp = ..client.select(..MailboxName)

        // traverse the array in reverse order to keep the message numbers
        // consistent, that is, to ensure that when a message is deleted no
        // other message can assume its number
        Set messageNumber = $Order(..MarkedAsDeleted(""), -1)
        While (messageNumber '= "") {
            // hack to ensure that messageNumber is of type %String
            Set messageNumber = messageNumber_""
            Set resp = ..client.store(messageNumber, "+FLAGS", "\Deleted")
            Set messageNumber = $Order(..MarkedAsDeleted(messageNumber), -1)
        }
        Kill ..MarkedAsDeleted
        Set resp = ..client.expunge()
    } Catch ex {
        Set sc = ex.AsStatus()
    }
    Return sc
}
```

## Conclusion

This approach is much easier than the original one, where we had to implement each IMAP command manually using IRIS TCP commands. Now that you’ve seen a good example of how to use the rich Python library ecosystem for real-world problems, start powering up your ObjectScript applications!
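As a closing illustration, the flattening performed by TransversePythonArray can be sketched as an equivalent pure-Python function. This is a simplified sketch, not part of the sample project: real imaplib fetch responses contain bytes objects and tuples, which are coerced to text here with str() for brevity.

```python
def traverse_python_array(value):
    """Recursively flatten nested lists/tuples into one CRLF-separated string,
    mirroring the ObjectScript TransversePythonArray class method."""
    if isinstance(value, (list, tuple)):
        acc = ""
        for item in value:
            if isinstance(item, (list, tuple)):
                # nested collection: recurse, like ..TransversePythonArray(item)
                acc += traverse_python_array(item)
            else:
                acc += str(item)
            # $Char(13, 10) appended after each loop iteration
            acc += "\r\n"
        return acc
    # scalar input: value_$Char(13, 10)
    return str(value) + "\r\n"


assert traverse_python_array(["a", ["b", "c"]]) == "a\r\nb\r\nc\r\n\r\n"
assert traverse_python_array("x") == "x\r\n"
```

The trailing empty line after a nested collection matches the ObjectScript behavior, which appends CRLF after every iteration, including ones that returned a recursive result.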
## References

* [imaplib — IMAP4 protocol client](https://docs.python.org/3/library/imaplib.html)
* [Video: Embedded Python in InterSystems IRIS: Sneak Peek](https://community.intersystems.com/post/new-video-embedded-python-intersystems-iris-sneak-peek)
* [Embedded Python: Bring the Python Ecosystem to Your ObjectScript App](https://learning.intersystems.com/course/view.php?id=1572)
* [Learn Python Network Programming: Python - IMAP](https://www.tutorialspoint.com/python_network_programming/python_imap.htm)
* [Python Documentation: Recipes and Best Practices](https://docs.python.org/3/library/secrets.html#recipes-and-best-practices)
* [Python Documentation: Data Model](https://docs.python.org/3/reference/datamodel.html)
* [WebSocket Client with Embedded Python](https://community.intersystems.com/post/websocket-client-embedded-python)
* [InterSystems Developer Community: #Python](https://community.intersystems.com/tags/python)

Hi, @José.Pereira! Is it possible to build a ZPM module for this case? And publish it on Open Exchange?
Announcement
Evgeny Shvarov · Dec 28, 2021

Technology Bonuses for InterSystems IRIS Datasets Contest 2021

Hi Developers! Here're the technology bonuses for the InterSystems IRIS Datasets Contest 2021 that will give you extra points in the voting:

* Dataset Usage Demo Repository - 4
* LOAD DATA Usage - 3
* Questionnaire - 2
* Unique Real Dataset - 4
* Docker container usage - 2
* ZPM Package deployment - 3
* Online Demo - 2
* Code Quality pass - 1
* First Article on Developer Community - 2
* Second Article On DC - 1
* Video on YouTube - 3

See the details below.

Dataset Usage Demo Repository - 4 points

Your application for the contest is a dataset itself, so it's very important and helpful if you submit another repo with a usage example of the data. Just mention the second demo repository in the README.md of your dataset repo and collect 4 extra points.

LOAD DATA Usage - 3 points

The InterSystems IRIS 2021.2 preview release comes with the new LOAD DATA feature. Use it in your dataset or demo repo and collect 3 extra points.

Questionnaire - 2 points

Share your feedback in the questionnaire and collect 2 extra points!

Unique Real Dataset - 4 points

Publish a new real-world dataset that has at least 10k rows and is not trivial, and collect 4 bonus points. A good example of a trivial dataset could be the temperature readings of your Google Nest device. We will count trivial datasets only once per kind; the first one published gets the preference.

Docker container usage - 2 points

The application gets a 'Docker container' bonus if it uses InterSystems IRIS running in a docker container. Here is the simplest template to start from.

ZPM Package deployment - 3 points

You can collect this bonus if you build and publish a ZPM (ObjectScript Package Manager) package for your Full-Stack application so it could be deployed with the zpm "install your-multi-model-solution" command on IRIS with the ZPM client installed. ZPM client. Documentation.

Online Demo of your project - 2 points

Collect 2 more bonus points if you provision your project to the cloud as an online demo.
You can do it on your own or you can use this template - here is an Example. Here is the video on how to use it.

Code quality pass with zero bugs - 1 point

Include the code quality GitHub action for static code control and make it show 0 bugs for ObjectScript.

Article on Developer Community - 2 points

Post an article on the Developer Community that describes the features of your project and collect 2 points for the article.

The second article on Developer Community - 1 point

You can collect one more bonus point for a second article or a translation regarding the application. The 3rd and further articles will not bring more points, but the attention will all be yours.

Video on YouTube - 3 points

Make a YouTube video that demonstrates your product in action and collect 3 bonus points.

The list of bonuses is subject to change. Stay tuned!

Good luck in the competition!

UPD: The Demo repo SHOULD have functionality that shows the usage of the dataset - analytics, a meaningful UI around it, etc. - something that shows why the dataset is good and how it can be used. The same applies to the Online Demo.

What is a Unique Real Dataset? Real data means not generated data. Unique means new and never existed before. In fact, it is a dataset you own - e.g., your collection of coins, or your library. Or a set of observations of any kind.

Thanks! I think dataset-finance should qualify as a Unique Real Dataset.
Announcement
Anastasia Dyubaylo · Jun 8, 2020

Congrats to the winners of the InterSystems Native API Programming Contest!

Hi Developers! The InterSystems IRIS Native API Contest is over. Thank you all for participating in our IRIS Competition! As a result – 8 great apps! And now it's time to announce the winners! A storm of applause goes to these developers and their applications:

🏆 Experts Nomination - winners were determined by a specially selected jury:

🥇 1st place and $2,000 go to the iris-python-suite project by @Renato.Banzai

🥈 2nd place and $1,000 go to the WebSocket Client JS with IRIS Native API as Docker Micro Server project by @Robert.Cemper1003

🥉 3rd place and $500 go to the ObjectScript Kernel project by @Nikita.Mullin

🏆 Community Nomination - an application that received the most votes in total:

🥇 1st place and $1,000 go to the WebSocket Client JS with IRIS Native API as Docker Micro Server project by @Robert.Cemper1003

🥈 2nd place and $500 go to the iris-python-suite project by @Renato.Banzai

Congratulations to all the participants! Thank you for your attention to the contest and the efforts you put into this exciting coding competition!

And what's next? A whole series of programming contests awaits InterSystems Developers! Join the next IRIS Contest later this month! ➡️ More details in this post.

Congratulations to the winners... great job!

I'd like to say a big Thank You! to all participants that gave me their vote. And I forgive the unknown experts who voted for me in the beginning and changed their minds shortly before closing, as allowed by the rules.

Congratulations to all winners. Thank you all so much for organizing the contest. Thanks to all competitors. It was a very exciting experience.

Hi guys, We're pleased to invite you to the Online Meetup with the Winners of the InterSystems IRIS Native API Programming Contest on Friday, June 12 at 11:00 EDT! What awaits you at this virtual Meetup? Please find all the details in this post. Join us! 😉
Announcement
Anastasia Dyubaylo · Jun 10, 2020

New Video: In-Place InterSystems IRIS Conversions

Hi Community, The new video from Global Summit 2019 is already on InterSystems Developers YouTube: ⏯ In-Place InterSystems IRIS Conversions This video provides more detail about the process of converting an existing installation of a Caché/Ensemble-based application to InterSystems IRIS. We will give an overview of the process and do a live demonstration of converting a mirrored deployment. Takeaway: Caché and Ensemble can co-exist with InterSystems IRIS, which makes it easier to do in-place conversions. Presenter: @Andreas.Dieckow, Principal Product Manager, InterSystems You can find additional materials for this video in this InterSystems Online Learning Course. If you would like to explore a wider range of topics related to this presentation, please use the Resource Guides below: Roadmaps (Ours and Yours) Resource Guide - 2019 Enjoy watching this video! Stay tuned! 👍🏼
Announcement
Anastasia Dyubaylo · Nov 6, 2020

New Video: InterSystems IRIS Adaptive Analytics in Action

Hey Developers, See how InterSystems IRIS Adaptive Analytics can be used to aggregate and query billions of records from a virtual cube, applying machine learning and analytics to that data. ⏯ InterSystems IRIS Adaptive Analytics in Action 👉🏼 Subscribe to InterSystems Developers YouTube. Enjoy and stay tuned!
Article
Mikhail Khomenko · Jan 21, 2021

InterSystems Kubernetes Operator Deep Dive: Part 2

In the previous article, we looked at one way to create a custom operator that manages the IRIS instance state. This time, we’re going to take a look at a ready-to-go operator, InterSystems Kubernetes Operator (IKO). The official documentation will help us navigate the deployment steps.

Prerequisites

To deploy IRIS, we need a Kubernetes cluster. In this example, we’ll use Google Kubernetes Engine (GKE), so we’ll need a Google account, a Google Cloud project, and the gcloud and kubectl command-line utilities. You’ll also need to install the Helm 3 utility:

```
$ helm version
version.BuildInfo{Version:"v3.3.4"...}
```

Note: Be aware that on the Google free tier, not all resources are free.

It doesn’t matter in our case which type of GKE we use – zonal, regional, or private. After we create one, let’s connect to the cluster. We’ve created a cluster called “iko” in a project called “iko-project”. Use your own project name in place of “iko-project” in the text below. This command adds the cluster to our local clusters configuration:

```
$ gcloud container clusters get-credentials iko --zone europe-west2-b --project iko-project
```

Install IKO

Let’s deploy IKO into our newly-created cluster. The recommended way to install packages to Kubernetes is Helm. IKO is no exception and can be installed as a Helm chart. Choose Helm version 3, as it's more secure. Download IKO from the WRC page InterSystems Components, creating a free developer account if you do not already have one. At the time of writing, the latest version is 2.0.223.0. Download the archive and unpack it. We will refer to the unpacked directory as the current directory. The chart is in the chart/iris-operator directory.
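Since the steps below lean on several CLI tools (gcloud, kubectl, helm, docker), a quick pre-flight check can save a run that fails halfway through. This is a small hypothetical helper, not part of the official IKO instructions:

```shell
# Hypothetical pre-flight check: print any of the given tools missing from PATH.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  # Prints nothing when everything is installed.
  echo "$missing"
}

# Example usage before proceeding:
#   check_tools gcloud kubectl helm docker
```

If the function prints any names, install those tools before continuing.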
If you just deploy this chart, you will receive an error when describing the deployed pods:

```
Failed to pull image "intersystems/iris-operator:2.0.0.223.0": rpc error: code = Unknown desc = Error response from daemon: pull access denied for intersystems/iris-operator, repository does not exist or may require 'docker login'.
```

So, you need to make the IKO image available from the Kubernetes cluster. Let’s push this image into Google Container Registry first:

```
$ docker load -i image/iris_operator-2.0.0.223.0-docker.tgz
$ docker tag intersystems/iris-operator:2.0.0.223.0 eu.gcr.io/iko-project/iris-operator:2.0.0.223.0
$ docker push eu.gcr.io/iko-project/iris-operator:2.0.0.223.0
```

After that, we need to direct IKO to use this new image, by editing the Helm values file:

```
$ vi chart/iris-operator/values.yaml
...
operator:
  registry: eu.gcr.io/iko-project
...
```

Now, we’re ready to deploy IKO into GKE:

```
$ helm upgrade iko chart/iris-operator --install --namespace iko --create-namespace
$ helm ls --all-namespaces --output json | jq '.[].status'
"deployed"
$ kubectl -n iko get pods # Should be Running with Readiness 1/1
```

Let’s look at the IKO logs:

```
$ kubectl -n iko logs -f --tail 100 -l app=iris-operator
…
I1212 17:10:38.119363       1 secure_serving.go:116] Serving securely on [::]:8443
I1212 17:10:38.122306       1 operator.go:77] Starting Iris operator
```

A Custom Resource Definition, irisclusters.intersystems.com, was created during the IKO deployment. You can look at the API schema it supports, although it is quite long:

```
$ kubectl get crd irisclusters.intersystems.com -oyaml | less
```

One way to look at all available parameters is to use the “explain” command:

```
$ kubectl explain irisclusters.intersystems.com
```

Another way is using jq.
For instance, viewing all top-level configuration settings:

```
$ kubectl get crd irisclusters.intersystems.com -ojson | jq '.spec.versions[].schema.openAPIV3Schema.properties.spec.properties | to_entries[] | .key'
"configSource"
"licenseKeySecret"
"passwordHash"
"serviceTemplate"
"topology"
```

Using jq in this way (viewing the configuration fields and their properties), we can find out the following configuration structure:

```
configSource
  name
licenseKeySecret
  name
passwordHash
serviceTemplate
  metadata
    annotations
  spec
    clusterIP
    externalIPs
    externalTrafficPolicy
    healthCheckNodePort
    loadBalancerIP
    loadBalancerSourceRanges
    ports
    type
topology
  arbiter
    image
    podTemplate
      controller
        annotations
      metadata
        annotations
      spec
        affinity
          nodeAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAntiAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
        args
        env
        imagePullSecrets
        initContainers
        lifecycle
        livenessProbe
        nodeSelector
        priority
        priorityClassName
        readinessProbe
        resources
        schedulerName
        securityContext
        serviceAccountName
        tolerations
    preferredZones
    updateStrategy
      rollingUpdate
      type
  compute
    image
    podTemplate
      controller
        annotations
      metadata
        annotations
      spec
        affinity
          nodeAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAntiAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
        args
        env
        imagePullSecrets
        initContainers
        lifecycle
        livenessProbe
        nodeSelector
        priority
        priorityClassName
        readinessProbe
        resources
          limits
          requests
        schedulerName
        securityContext
        serviceAccountName
        tolerations
    preferredZones
    replicas
    storage
      accessModes
      dataSource
        apiGroup
        kind
        name
      resources
        limits
        requests
      selector
      storageClassName
      volumeMode
      volumeName
    updateStrategy
      rollingUpdate
      type
  data
    image
    mirrored
    podTemplate
      controller
        annotations
      metadata
        annotations
      spec
        affinity
          nodeAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAntiAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
        args
        env
        imagePullSecrets
        initContainers
        lifecycle
        livenessProbe
        nodeSelector
        priority
        priorityClassName
        readinessProbe
        resources
          limits
          requests
        schedulerName
        securityContext
        serviceAccountName
        tolerations
    preferredZones
    shards
    storage
      accessModes
      dataSource
        apiGroup
        kind
        name
      resources
        limits
        requests
      selector
      storageClassName
      volumeMode
      volumeName
    updateStrategy
      rollingUpdate
      type
```

There are many settings, but you don’t need to set them all – the defaults are suitable. You can see examples of configuration in the file iris_operator-2.0.0.223.0/samples. To run a minimal viable IRIS, we need to specify only a few settings, like the IRIS (or IRIS-based application) version, the storage size, and the license key.

A note about the license key: we’ll use a community IRIS, so we don’t need a key. We cannot just omit this setting, but we can create a secret containing a pseudo-license.
License secret generation is simple:

```
$ touch iris.key # remember that a real license file is used in most cases
$ kubectl create secret generic iris-license --from-file=iris.key
```

An IRIS description understandable by IKO is:

```
$ cat iko.yaml
apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: iko-test
spec:
  passwordHash: '' # use the default password SYS
  licenseKeySecret:
    name: iris-license # use the Secret name created above
  topology:
    data:
      image: intersystemsdc/iris-community:2020.4.0.524.0-zpm # take a community IRIS
      storage:
        resources:
          requests:
            storage: 10Gi
```

Send this manifest into the cluster:

```
$ kubectl apply -f iko.yaml

$ kubectl get iriscluster
NAME       DATA   COMPUTE   MIRRORED   STATUS     AGE
iko-test   1                           Creating   76s

$ kubectl -n iko logs -f --tail 100 -l app=iris-operator
db.Spec.Topology.Data.Shards = 0
I1219 15:55:57.989032       1 iriscluster.go:39] Sync/Add/Update for IrisCluster default/iko-test
I1219 15:55:58.016618       1 service.go:19] Creating Service default/iris-svc.
I1219 15:55:58.051228       1 service.go:19] Creating Service default/iko-test.
I1219 15:55:58.216363       1 statefulset.go:22] Creating StatefulSet default/iko-test-data.
```

We see that some resources (a Service, a StatefulSet) are going to be created in the cluster in the “default” namespace. In a few seconds, you should see an IRIS pod in the “default” namespace:

```
$ kubectl get po -w
NAME              READY   STATUS              RESTARTS   AGE
iko-test-data-0   0/1     ContainerCreating   0          2m10s
```

Wait a little until the IRIS image is pulled, that is, until Status becomes Running and Ready becomes 1/1. You can check what type of disk was created:

```
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS   REASON   AGE
pvc-b356a943-219e-4685-9140-d911dea4c106   10Gi       RWO            Delete           Bound    default/iris-data-iko-test-data-0   standard                5m
```

The reclaim policy “Delete” means that when you remove the Persistent Volume, the GCE persistent disk will also be removed.
There is another policy, “Retain”, that allows Google persistent disks to survive the deletion of Kubernetes Persistent Volumes. You can define a custom StorageClass to use this policy and other non-default settings. An example is present in IKO’s documentation: Create a storage class for persistent storage.

Now, let’s check our newly created IRIS. In general, traffic to pods goes through Services or Ingresses. By default, IKO creates a Service of ClusterIP type with a name taken from the iko.yaml metadata.name field:

```
$ kubectl get svc iko-test
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)              AGE
iko-test   ClusterIP   10.40.6.33   <none>        1972/TCP,52773/TCP   14m
```

We can call this service using port-forward:

```
$ kubectl port-forward svc/iko-test 52773
```

Navigate a browser to http://localhost:52773/csp/sys/UtilHome.csp and log in with _system/SYS. You should see a familiar IRIS user interface (UI).

Custom Application

Let’s replace a pure IRIS with an IRIS-based application. First, download the COVID-19 application. We won’t consider a complete, continuous deployment here, just the minimal steps:

```
$ git clone https://github.com/intersystems-community/covid-19.git
$ cd covid-19
$ docker build --no-cache -t covid-19:v1 .
```

As our Kubernetes cluster is running in the Google cloud, let’s use Google Container Registry as image storage. We assume here that you have an account in Google Cloud allowing you to push images. Use your own project name in the commands below:

```
$ docker tag covid-19:v1 eu.gcr.io/iko-project/covid-19:v1
$ docker push eu.gcr.io/iko-project/covid-19:v1
```

Let’s go to the directory with iko.yaml, change the image there, and redeploy. You should remove the previous example first:

```
$ cat iko.yaml
...
    data:
      image: eu.gcr.io/iko-project/covid-19:v1
...
$ kubectl delete -f iko.yaml
$ kubectl -n iko delete deploy -l app=iris-operator
$ kubectl delete pvc iris-data-iko-test-data-0
$ kubectl apply -f iko.yaml
```

This recreates the IRIS pod with the new image.
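To double-check that the recreated pod is actually running the new image, you can inspect the pod spec. Below is a small hypothetical helper (not from the article's repository) that extracts container images from `kubectl get pod <name> -o json` output:

```python
import json

def pod_images(pod_json: str):
    """Return the container images of a pod, given the JSON produced by
    `kubectl get pod <name> -o json`."""
    pod = json.loads(pod_json)
    return [c["image"] for c in pod["spec"]["containers"]]

# Example with a trimmed-down pod manifest:
sample = json.dumps({
    "spec": {"containers": [{"name": "iris",
                             "image": "eu.gcr.io/iko-project/covid-19:v1"}]}
})
assert pod_images(sample) == ["eu.gcr.io/iko-project/covid-19:v1"]
```

The same check is a one-liner with jq (`jq '.spec.containers[].image'`); the Python version is shown only because it is easy to embed in a deployment script.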
This time, let’s provide external access via an Ingress resource. To make it work, we should deploy an Ingress controller (we choose nginx for its flexibility). To provide traffic encryption (TLS), we will also add yet another component – cert-manager. To install both of these components, we use Helm, version 3:

```
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

$ helm upgrade nginx-ingress \
  --namespace nginx-ingress \
  ingress-nginx/ingress-nginx \
  --install \
  --atomic \
  --version 3.7.0 \
  --create-namespace
```

Look at the nginx service IP (it’s dynamic, but you can make it static):

```
$ kubectl -n nginx-ingress get svc
NAME                                     TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
nginx-ingress-ingress-nginx-controller   LoadBalancer   10.40.0.103   xx.xx.xx.xx   80:32032/TCP,443:32374/TCP   88s
```

Note: your IP will differ.

Go to your domain registrar and create a domain name for this IP. For instance, create an A-record:

```
covid19.myardyas.club = xx.xx.xx.xx
```

Some time will pass until this new record propagates across DNS servers.
The end result should be similar to:

```
$ dig +short covid19.myardyas.club
xx.xx.xx.xx
```

Having deployed the Ingress controller, we now need to create an Ingress resource itself (use your own domain name):

```
$ cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: iko-test
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    certmanager.k8s.io/cluster-issuer: lets-encrypt-production # cert-manager will be deployed below
spec:
  rules:
  - host: covid19.myardyas.club
    http:
      paths:
      - backend:
          serviceName: iko-test
          servicePort: 52773
        path: /
  tls:
  - hosts:
    - covid19.myardyas.club
    secretName: covid19.myardyas.club

$ kubectl apply -f ingress.yaml
```

After a minute or so, IRIS should be available at http://covid19.myardyas.club/csp/sys/UtilHome.csp (remember to use your own domain name) and the COVID-19 application at http://covid19.myardyas.club/dsw/index.html (choose namespace IRISAPP).

Note: Above, we’ve exposed the HTTP IRIS port. If you need to expose the TCP super-server port (1972 or 51773) via nginx, you can read the instructions at Exposing TCP and UDP services.

Add Traffic Encryption

The last step is to add traffic encryption. Let’s deploy cert-manager for that:

```
$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/v0.10.0/deploy/manifests/00-crds.yaml

$ helm repo add jetstack https://charts.jetstack.io # the jetstack chart repository, if not already added

$ helm upgrade cert-manager \
  --namespace cert-manager \
  jetstack/cert-manager \
  --install \
  --atomic \
  --version v0.10.0 \
  --create-namespace

$ cat lets-encrypt-production.yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: lets-encrypt-production
spec:
  acme:
    # Set your email. Let’s Encrypt will send notifications about certificate expiration
    email: mvhoma@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: lets-encrypt-production
    solvers:
    - http01:
        ingress:
          class: nginx

$ kubectl apply -f lets-encrypt-production.yaml
```

Wait a few minutes until cert-manager notices the IRIS application's Ingress and goes to Let’s Encrypt for a certificate. You can observe the Order and Certificate resources in progress:

```
$ kubectl get order
NAME                               STATE   AGE
covid19.myardyas.club-3970469834   valid   52s

$ kubectl get certificate
NAME                    READY   SECRET                  AGE
covid19.myardyas.club   True    covid19.myardyas.club   73s
```

This time, you can visit a more secure version of the site - https://covid19.myardyas.club/dsw/index.html:

About the Native Google Ingress Controller and Managed Certificates

Google supports its own ingress controller, GCE, which you can use in place of the nginx controller. However, it has some drawbacks, for instance, a lack of rewrite-rules support, at least at the time of writing. Also, you can use Google-managed certificates in place of cert-manager. It’s handy, but the initial retrieval of a certificate and any update of Ingress resources (like a new path) cause tangible downtime. Also, Google-managed certificates work only with GCE, not with nginx, as noted in Managed Certificates.

Next Steps

We’ve deployed an IRIS-based application into the GKE cluster. To expose it to the Internet, we’ve added an Ingress controller and a certification manager. We’ve tried out the IrisCluster configuration to highlight that setting up IKO is simple. You can read about more settings in the Using the InterSystems Kubernetes Operator documentation. A single data server is good, but the real fun begins when we add ECP, mirroring, and monitoring, which are also available with IKO. Stay tuned and read the upcoming article in our Kubernetes operator series to take a closer look at mirroring.
Article
Mihoko Iijima · Mar 5, 2021

[InterSystems IRIS for the First Time] Interoperability: What a Production is

**This article is a continuation of this post.**

In the previous article, we explained how the Interoperability menu works for system integration. In this article, I would like to explain how to develop a system integration using the Interoperability menu.

To begin with, think about what kind of processing you want to create. With that in mind, you will build the following:

* Production
* Message
* Components
  * Business Services
  * Business Processes
  * Business Operations

A Production is a definition used to specify the components required for system integration and to store the component settings; it is configured using the Management Portal (and internally stored as a class definition for the Production). For example, suppose you are creating a business service that processes files placed in a specified directory at regular intervals. In that case, it is necessary to specify exactly which directories to monitor and which files to process. A Production is prepared to store these settings.

The settings depend on the **adapter** used by the component that sends and receives data. Adapters are classes that simplify the connection to external systems; some are protocol-specific, such as Mail/File/SOAP/FTP/HTTP/SQL/TCP, and some are specific to standards such as HL7. Please refer to the documentation (protocol-specific adapters and adapters related to EDI documents) for more information on adapters.

Since the **Production** defines the necessary components, "Start Production" will start the system integration, and "Stop Production" will stop it.

The development required to complete the Production is the creation of the components necessary for system integration, specifically the following:

* Message
* Components (Business Services, Business Processes, Business Operations)
* Data conversion, etc.

The content above will be explained step by step in the articles that follow.
First of all, let's start the sample **Production** and, while reviewing its settings, process some data and check how messages flow.

The sample can be downloaded from https://github.com/Intersystems-jp/selflearning-interoperability.

To use a container, download the sample code with git clone, navigate to the cloned directory, and run docker-compose up -d — it's that easy! See here for the procedure (it will take some time to create the container).

If you are not using containers, create a new namespace after downloading the sample, and import all class definition files (extension .cls) under the src folder into the created namespace. For more information on creating a namespace, please refer to the video from 07:03 in this article.

Please refer to the README for more details on the sample code.

When you are ready, access the Management Portal (change the web server's port number to match your environment).

Go to the **Management Portal > Interoperability > Configuration > Production**. If you are using a method other than containers, connect to the namespace where you imported the source code, access [Configuration] > [Production], click the [Open] button, select [Start] > [Production], and then click the [Start] button.

※ If you are using something other than a container, you will need to make some initial settings. Please set up the contents described below before trying the following steps.

![image](/sites/default/files/inline/images/image1051jp.png)

The Production page displays each of the "Service", "Process", and "Operation" components as [● Component Name]. Clicking a component name (single click) changes the contents of the "Settings" tab on the right side of the screen. For example, when you click on **Start.GetKionOperation**, the display is as follows.
![image](/sites/default/files/inline/images/image1052jp.png)

This component has [HTTP Server] and [URL] settings for connecting to the Web API. At the bottom of the settings there is an [appid] field, where you enter the API key you obtained. Near [appid] there is a [lang] field, which is set to "ja" (Japanese); [lang] sets the language of the response from OpenWeather. For English, set it to "en". When you have finished entering these settings, click the "Apply" button.

![image](/sites/default/files/inline/images/image1053jp_0.png)

If you are using a container, the setup is now complete. For more information, please click [here](#datasend).

* * *

#### If you are experimenting with something other than containers

Please make the following two settings in advance:

1) Configure the SSL client.

Since the Web API we connect to communicates over HTTPS, configure an SSL client on the IRIS side in advance. To match the settings of the sample production, we will use the name **[openweather]**. The settings in the Production are as follows:

![image](/sites/default/files/inline/images/image1054jp.png)

In the **Management Portal**, click **[System Administration] > [Security] > [SSL/TLS Configuration] > [Create New Configuration]**, enter **"openweather"** in the "Configuration Name" field, and then click the "Save" button to finish.

![image](/sites/default/files/inline/images/image1055jp.png)

2) Create a base URL for REST.

The sample production is configured so that information can be submitted via REST, and the base URL for REST needs to be configured on the IRIS side. In the sample, we use /start as the base URL. Since the Start.REST class exists in the namespace where the sample was imported, we specify this class as the dispatch class and add %All as an application role to skip authentication at access time.
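As a side note, the [appid] and [lang] values entered above map directly onto OpenWeather's query parameters. If you want to sanity-check your API key outside of IRIS first, a small script along these lines can build (and, with a real key, issue) the same kind of request; the endpoint shown is OpenWeather's public current-weather API, and the helper name is made up for this sketch:

```python
from urllib.parse import urlencode

# OpenWeather current-weather endpoint (see the openweathermap.org docs)
BASE = "https://api.openweathermap.org/data/2.5/weather"

def build_weather_url(city: str, appid: str, lang: str = "ja") -> str:
    """Build the same style of request the production's operation sends."""
    return BASE + "?" + urlencode({"q": city, "appid": appid, "lang": lang})

# With a valid key, urllib.request.urlopen(url) would return JSON weather data.
print(build_weather_url("Osaka", "YOUR-API-KEY", lang="en"))
```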
**Management Portal > System Administration > Security > Applications > Web Applications >** click the **"Create New Web Application"** button. In the Name field, specify **/start**; in the Namespace field, specify the namespace into which the sample was imported; in the Dispatch Class field, specify **Start.REST**; for the Allowed Authentication Methods, select **"Unauthenticated"**; then save. After saving, add the **%All** role to the **application roles** on the "Application Roles" tab.

![image](/sites/default/files/inline/images/image1056jp.png)

![image](/sites/default/files/inline/images/image1057jp.png)

* * *

### Try to send data

Once you are all set up, use the business service to send information via REST. The example above is a URL that supposes someone has purchased "Takoyaki" in Osaka City. The screen after execution looks as follows.

![image](/sites/default/files/inline/images/image1124_0.png)

Now check the messages that were sent through the **Production**. In **Management Portal > Interoperability > Configuration > Production**, click on the service below:

![](https://jp.community.intersystems.com/sites/default/files/inline/images/images/image(1060).png)

Select the **"Messages"** tab on the right side of the screen and click any number in the header field column. If you do not see any messages, reload your browser.

![image](/sites/default/files/inline/images/image1059jp.png)

The Visual Trace page shows the **messages** sent and received between components. In the **light blue frame**, you can see that the weather information is retrieved from the Web API and sent back. In this way, you can use tracing to see exactly what data was sent and received, and in what order.

Throughout this article, we used the sample code to confirm that a **Production** defines the components required for system integration, together with their settings.
We also confirmed that we could view the messages flowing through the **Production** in chronological order using the Visual Trace page. In the next articles, we will discuss the concept behind creating the **"message"** shown in this trace and how to actually define it.
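Finally, if you prefer to trigger the sample from a script rather than a browser, a REST call along these lines reproduces the trace shown above. Note that everything past the /start base URL is an assumption for this sketch — check the route map in the Start.REST dispatch class for the actual path — and the port must match your web server:

```python
import urllib.request

IRIS_HOST = "http://localhost:52773"  # adjust to your web server's port

def build_url(product: str, area: str) -> str:
    # "/start" is the REST base URL configured earlier; the trailing segments
    # are hypothetical -- see Start.REST for the real routes.
    return f"{IRIS_HOST}/start/weatherinfo/{product}/{area}"

def send_purchase(product: str, area: str) -> str:
    """Send the purchase event to the business service and return the reply."""
    with urllib.request.urlopen(build_url(product, area)) as resp:
        return resp.read().decode("utf-8")

print(build_url("Takoyaki", "Osaka"))
```

After a call like `send_purchase("Takoyaki", "Osaka")`, the new message should appear on the service's "Messages" tab just as it did when the URL was opened in the browser.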
Announcement
Anastasia Dyubaylo · Mar 9, 2021

Online Meetup with the InterSystems Grand Prix Contest Winners

Hi Community, We're pleased to invite you to the online meetup with the winners of the InterSystems Grand Prix Contest!

Date & Time: Friday, March 12, 2021 – 10:00 EDT

What awaits you at this virtual meetup?

* Our winners' bios.
* Short demos of their applications.
* An open discussion about the technologies used, bonuses, and questions.
* Plans for the next contests.

Our speakers:

* @Dmitry.Maslennikov, Co-founder, CTO and Developer Advocate, CaretDev Corp
* @José.Pereira, Business Intelligence Developer at Shift Consultoria e Sistemas Ltda
* @Henrique, System Management Specialist / Database Administrator, Sao Paulo Federal Court
* @Botai.Zhang, Developer from China
* @Weiwei.Yang, Developer from China
* @Evgeny.Shvarov, InterSystems Developer Ecosystem Manager

You will also have the opportunity to ask our developers any questions in a dedicated webinar chat. We will be happy to talk to you at our Virtual Meetup! ➡️ REGISTER TODAY!

Hey developers, ➡️ Please join today's meetup using this link. See you!

Hey Developers! The recording of this virtual meetup is already on InterSystems Developers YouTube: ⏯ Online Meetup with the InterSystems Grand Prix Contest Winners Big applause to all the speakers! 👏🏼
Question
Sharafath Fazil · Mar 10, 2021

React Native or Flutter which is most compatible with InterSystems

React Native or Flutter — which is most compatible with InterSystems?

Hi @Sharafath.Fazil, I never worked with React Native, but I developed something in Flutter a few years ago. In my humble opinion, it doesn't matter that much, since I'm assuming you'll be consuming data through a REST service from your Flutter/React Native application. It's up to you :) Maybe take into consideration the pros and cons of each of them. Hope that helps.

I think the same; Flutter is easier to develop with in my opinion, but it's just personal taste.

I am not sure what your requirements are, but a third option is to use Ionic (Ionic - Cross-Platform Mobile App Development (ionicframework.com)) and stick with any existing Node.js knowledge and integration you have.