Announcement
Olga Zavrazhnova · Feb 10, 2022
We are thrilled to announce that we’ve launched the InterSystems 🔥 FHIR startup incubator in Prague: Caelestinus. We invite Health Tech startups to apply: let your innovations speak FHIR, and consult on and prove your ideas with leaders in Healthcare and IT!
Why join?
✅ You’ll be able to introduce the FHIR cloud services from InterSystems, one of the world leaders in Health Tech, into your solution!
✅ We partnered with the Institute for Clinical and Experimental Medicine, the largest transplant center in Europe, and the Faculty of Biomedical Engineering of Czech Technical University in Prague, where you’ll have a chance to implement your innovation.
✅ You'll be able to get advice on your innovation from Jeff Fried, Regilo de Souza, Patrick Jamieson, M.D., Qi Li, Russell Leftwich MD FAMIA, Tomas Studenik, Johan Nordstrom, Martin Zubek, Jan Muzik, and other experts in information technology, entrepreneurship, and healthcare in Europe and the USA. See all mentors' bios here.
✅ You’ll have the opportunity to present your innovations at InterSystems Global Summit in Seattle, June 2022!
✅ We provide office space in Prague with a conference room for meetings. Address: Prague, Celetna 19 - stop by for a coffee!
A joint team of CEE Hacks and InterSystems trackers will go through the incubation with you from March to November 2022. ❕We are waiting for your applications until the 20th of February. Apply today ❕
Questions? Ask below!
This looks like a great opportunity for those in Eastern Europe!! I look forward to seeing the results coming out of this incubator program.
It's not only for Eastern and Central Europe, @Benjamin.Spead! Any startup from any region is very welcome! The majority of activities will be online! Besides the Czech Republic, we already have applications from the USA, UK, Netherlands, UAE, and Ukraine, and we invite startups to apply!
@Evgeny.Shvarov - really good to know, thank you for clarifying!
Announcement
Evgeny Shvarov · Feb 7, 2022
Hi Developers!
Here're the technology bonuses for the InterSystems Python Contest 2022 that will give you extra points in the voting:
Embedded Python - 4
Python Native API - 3
Python Pex - 3
NoObjectScriptLine - 5
Questionnaire - 2
Docker container usage - 2
ZPM Package deployment - 2
Online Demo - 2
First Article on Developer Community - 2
Second Article On DC - 1
Video on YouTube - 3
See the details below.
Embedded Python - 4 points
Use Embedded Python in your application and collect 4 extra points. You'll need at least InterSystems IRIS 2021.2 for it.
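For illustration only, here is a minimal, hedged Embedded Python sketch (not part of the contest rules): it assumes IRIS 2021.2+, where the built-in iris module is available inside Embedded Python, and the class name Sample.Person is hypothetical.
# Minimal Embedded Python sketch: the iris module bridges Python to the current namespace.
# Sample.Person is a hypothetical class/table name - replace it with your own.
import iris

def top_people():
    # Run SQL in the current namespace and iterate over the rows
    rs = iris.sql.exec("SELECT TOP 5 Name FROM Sample.Person")
    for row in rs:
        print(row[0])

def add_person(name):
    # Create and save an instance of a persistent class from Python
    person = iris.cls("Sample.Person")._New()
    person.Name = name
    return person._Save()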
Python Native API - 3 points
InterSystems IRIS provides a Python Native API library that lets you interact with IRIS from Python. Use it and collect 3 extra points for your application.
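As a rough illustration (a sketch, not an official sample), here is how the Python Native API can be used from an external Python script; the connection details below (host, port, namespace, credentials) and the global name are assumptions you would replace with your own.
# Hedged sketch of the Python Native API (irisnative); adjust connection settings to your instance.
import irisnative

# Connect to a running IRIS instance (default superserver port assumed)
connection = irisnative.createConnection("localhost", 1972, "USER", "_SYSTEM", "SYS")
iris_native = irisnative.createIris(connection)

# Read and write globals directly from Python
iris_native.set("Hello from Python", "^pyDemo", "greeting")
print(iris_native.get("^pyDemo", "greeting"))

# Call a class method of an ObjectScript class
print(iris_native.classMethodValue("%SYSTEM.Version", "GetVersion"))

connection.close()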
Python Pex - 3 points
InterSystems IRIS has a Python PEX module that lets you develop InterSystems Interoperability productions in Python. Use it and collect 3 extra points for your application. It's also OK to use the alternative python.pex wheel introduced by Guillaume Ronguier.
NoObjectScriptLine - 5 points
We are introducing several Python APIs in this contest! And this bonus is yet another challenge: build your Python solution with InterSystems IRIS and try to avoid using even a line of ObjectScript! IRIS classes with only Embedded Python methods are OK, though. Do it and collect 5 more bonus points!
Questionnaire - 2
Share your feedback in this questionnaire and collect 2 extra points!
Docker container usage - 2 points
The application gets a 'Docker container' bonus if it uses InterSystems IRIS running in a docker container. Here is the simplest template to start from.
ZPM Package deployment - 2 points
You can collect this bonus if you build and publish a ZPM (InterSystems Package Manager) package for your application so it can be deployed with:
zpm "install your-multi-model-solution"
command on IRIS with the ZPM client installed.
ZPM client. Documentation.
Online Demo of your project - 2 points
Collect 2 more bonus points if you provision your project to the cloud as an online demo. You can do it on your own or you can use this template - here is an Example. Here is the video on how to use it.
Article on Developer Community - 2 points
Post an article on Developer Community that describes the features of your project and collect 2 points for the article.
The Second article on Developer Community - 1 point
You can collect one more bonus point for a second article or a translation related to the application. The 3rd and subsequent articles will not bring more points, but the attention will all be yours.
Video on YouTube - 3 points
Make a YouTube video that demonstrates your product in action and collect 3 bonus points for each video.
The list of bonuses is subject to change. Stay tuned!
Good luck in the competition!
Announcement
Michelle Spisak · Jul 20, 2021
And now, you can help us!
Take a 5-minute survey and tell us what YOU want to see in our learning resource names!
Article
Sergey Lukyanchikov · Jul 22, 2021
Fixing the terminology
A robot is not expected to be huge, humanoid, or even material (in disagreement with Wikipedia, although the latter softens its initial definition a paragraph later and admits a virtual form of a robot). A robot is an automaton: from an algorithmic viewpoint, an automaton for the autonomous (algorithmic) execution of concrete tasks. A light detector that triggers street lights at night is a robot. Email software that separates e-mails into “external” and “internal” is also a robot. Artificial intelligence (in an applied and narrow sense; Wikipedia again interprets it differently) consists of algorithms for extracting dependencies from data. It will not execute any tasks on its own; for that, one would need to implement it as concrete analytic processes (input data, plus models, plus output data, plus process control). The analytic process acting as an “artificial intelligence carrier” can be launched by a human or by a robot. It can be stopped by either of the two as well, and managed by either of them too.
Interaction with the environment
Artificial intelligence needs data that is suitable for analysis. When an analyst starts developing an analytic process, the data for the model is prepared by the analyst himself. Usually, he builds a dataset that has enough volume and features to be used for model training and testing. Once the accuracy (and, less frequently, the “local stability” over time) of the obtained result becomes satisfactory, a typical analyst considers his work done. Is he right? In reality, the work is only half-done. It remains to secure “uninterrupted and efficient running” of the analytic process, and that is where our analyst may experience difficulties. The tools used for developing artificial intelligence and machine learning mechanisms are, except in the simplest cases, not suitable for efficient interaction with the external environment. For example, we can (for a short period of time) use Python to read and transform sensor data from a production process. But Python will not be the right tool for overall monitoring of the situation and switching control among several production processes, scaling the corresponding computation resources up and down, and analyzing and treating all types of “exceptions” (e.g., non-availability of a data source, infrastructure failure, user interaction issues, etc.). To do that we need a data management and integration platform. And the more loaded and the more varied our analytic process is, the higher the bar of our expectations for the platform’s integration and “DBMS” components. An analyst bred on scripting languages and traditional model-building environments (including utilities like “notebooks”) will face the near impossibility of securing an efficient production implementation for his analytic process.
Adaptability and adaptiveness
Environment changeability manifests itself in different ways. In some cases, the essence and nature of the things managed by artificial intelligence will change (e.g., entry by an enterprise into new business areas, requirements imposed by national and international regulators, evolution of customer preferences relevant to the enterprise, etc.). In other cases, the information signature of the data coming from the external environment will become different (e.g., new equipment with new sensors, more performant data transmission channels, availability of new data “labeling” technologies, etc.). Can an analytic process “reinvent itself” as the structure of the external environment changes? Let us simplify the question: how easy is it to adjust the analytic process if the external environment structure changes? Based on our experience, the answer is plain and sad: in most known implementations (not by us!) it will be necessary to at least rewrite the analytic process, and most probably to rewrite the AI it contains. Well, end-to-end rewriting may not be the final verdict, but programming something to reflect the new reality or changing the “modeling part” may indeed be needed. And that could mean a prohibitive overhead, especially if environment changes are frequent.
Agency: the limit of autonomy?
The reader may have noticed already that we are proceeding toward a more and more complex reality proposed to artificial intelligence, while taking note of the possible “instrument-side consequences”, in the hope of finally being able to respond to the emerging challenges. We are now approaching the necessity to equip an analytic process with a level of autonomy such that it can cope not just with the changeability of the environment, but also with the uncertainty of its state. No reference to a quantum nature of the environment is intended here (we will discuss it in one of our further publications); we simply consider the probability for an analytic process to encounter the expected state at the expected moment in the expected “volume”. For example: the process “thought” that it would manage to complete a model training run before the arrival of new data to apply the model to, but “failed” to complete it (e.g., for several objective reasons, the training sample contained more records than usual). Another example: the labeling team has added a batch of new material to the process, a vectorization model has been trained using that new material, while the neural network is still using the previous vectorization and is treating some extremely relevant information as “noise”. Our experience shows that overcoming such situations requires splitting what previously used to be a single analytic process into several autonomous components and creating for each of the resulting agent processes its own “buffered projection” of the environment. Let us call this action (goodbye, Wikipedia) the agenting of an analytic process. And let us call agency the quality acquired by an analytic process (or rather by a system of analytic processes) due to agenting.
A task for the robot
At this point, we will try to come up with a task that would need a robotized AI with all the qualities mentioned above. It will not take a long journey to get to ideas, especially because of the wealth of very interesting cases, and solutions for those cases, published on the Internet; we will simply re-use one such case/solution (to obtain both the task and the solution formulation). The scenario we have chosen is the classification of postings (“tweets”) in the Twitter social network based on their sentiment. To train the models, we have rather large samples of “labeled” tweets (i.e., with the sentiment specified), while classification will be performed on “unlabeled” tweets (i.e., without the sentiment specified):
Figure 1 Sentiment-based text classification (sentiment analysis) task formulation
An approach to creating mathematical models able to learn from labeled texts and classify unlabeled texts with unknown sentiment is presented in a great example published on the Web. The data for our scenario has also been kindly made available on the Web. With all the above at hand, we could start “assembling a robot”; however, we prefer to complicate the classical task by adding a condition: both labeled and unlabeled data are fed to the analytic process as standard-size files, as the process “consumes” the already fed files. Therefore, our robot will need to begin operating on minimal volumes of training data and continually improve classification accuracy by repeating model training on gradually growing data volumes.
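As a rough, hedged illustration of this incremental-training idea only (the article itself uses an RNN with a separate vectorization model, not the simpler stand-in below), here is a Python sketch where accuracy can improve as each new standard-size file of labeled tweets is consumed; the file names and column names are hypothetical.
# Hedged sketch of incremental training on gradually growing data; a simplified
# stand-in (hashing vectorizer + linear model) for the RNN described in the article.
import glob
import pandas as pd
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
model = SGDClassifier(loss="log_loss")   # logistic loss (named "log" in older scikit-learn)

def train_on_file(csv_path):
    # Consume one standard-size file of labeled tweets and update the model in place
    batch = pd.read_csv(csv_path)         # hypothetical columns: text, sentiment (0 or 1)
    model.partial_fit(vectorizer.transform(batch["text"]),
                      batch["sentiment"], classes=[0, 1])

def score_file(csv_path):
    # Return the "probability to be positive" for a file of unlabeled tweets
    batch = pd.read_csv(csv_path)
    return model.predict_proba(vectorizer.transform(batch["text"]))[:, 1]

# The robot repeats training as new labeled files arrive, so accuracy grows over time
for path in sorted(glob.glob("labeled_tweets_*.csv")):
    train_on_file(path)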
To InterSystems workshop
Taking the scenario just formulated as an example, we will demonstrate that InterSystems IRIS and the ML Toolkit, a set of extensions, can robotize artificial intelligence: they help achieve efficient interaction with the external environment for the analytic processes we create, while keeping them adaptable, adaptive, and agent (the “three A”). Let us begin with agency. We deploy four business processes in the platform:
Figure 2 Configuration of an agent-based system of business processes with a component for interaction with Python
GENERATOR – as previously generated files get consumed by the other processes, generates new files with input data (labeled – positive and negative tweets – as well as unlabeled tweets)
BUFFER – as already buffered records are consumed by the other processes, reads new records from the files created by GENERATOR and deletes the files after having read records from them
ANALYZER – consumes records from the unlabeled buffer and applies the trained RNN (recurrent neural network) to them, transferring the “applied” records, with the respective “probability to be positive” values added, to the monitoring buffer; it also consumes records from the labeled (positive and negative) buffers and trains the neural network on them
MONITOR – consumes records processed and transferred to its buffer by ANALYZER, evaluates the classification error metrics demonstrated by the neural network after the last training, and triggers new training by ANALYZER
Our agent-based system of processes can be illustrated as follows:
Figure 3 Data flows in the agent-based system
All the processes in our system function independently of one another but listen to each other’s signals. For example, the signal for the GENERATOR process to start creating a new file with records is the deletion of the previous file by the BUFFER process. Now let us look at adaptiveness. The adaptiveness of the analytic process in our example is implemented via “encapsulation” of the AI as a component that is independent from the logic of the carrier process and whose main functions, training and prediction, are isolated from one another:
Figure 4 Isolation of the AI’s main functions in an analytic process – training and prediction using mathematical models
Since the above-quoted fragment of the ANALYZER process is part of an “endless loop” (triggered at process startup and functioning until the whole agent-based system is shut down), and since the AI functions are executed concurrently, the process is capable of adapting its use of AI to the situation: it trains models if the need arises and otherwise predicts using the available version of the trained models. The need to train the models is signaled by the adaptive MONITOR process, which functions independently from the ANALYZER process and applies its own criteria to estimate the accuracy of the models trained by ANALYZER:
Figure 5 Recognition of the model type and application of the respective accuracy metrics by MONITOR process
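Purely as a conceptual sketch (not the article's actual Python Gateway code, and with hypothetical names), the isolation just described can be pictured as two functions that share nothing except an atomically swapped model object; the carrier process decides which one to call.
# Conceptual sketch of isolating training from prediction; hypothetical, not the article's code.
import threading

_lock = threading.Lock()
_current_model = None            # latest trained model version, swapped atomically

def train(fit_model, X, y):
    # Train a fresh model off to the side, then publish it; the model in use is never touched
    new_model = fit_model(X, y)
    global _current_model
    with _lock:
        _current_model = new_model

def predict(X):
    # Predict with whatever trained version is currently available
    with _lock:
        model = _current_model
    if model is None:
        raise RuntimeError("no trained model is available yet")
    return model.predict_proba(X)[:, 1]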
We continue with adaptability. An analytic process in InterSystems IRIS is a business process that has a graphical or XML representation in the form of a sequence of steps. The steps in turn can be sequences of other steps, loops, condition checks, and other process controls. The steps can execute code or transmit information (which can be code as well) for treatment by other processes and external systems. If there is a need to change an analytic process, we can do that in either the graphical editor or the IDE. Changing the analytic process in the graphical editor allows adapting the process logic without programming:
Figure 6 ANALYZER process in the graphical editor with the menu open for adding process controls
Finally, there is interaction with the environment. In our case, the most important element of the environment is the mathematical toolset Python. For interaction with Python and R, the corresponding functional extensions, Python Gateway and R Gateway, were developed. Their key functionality is enabling comfortable interaction with a concrete toolset. We could already see the component for interaction with Python in the configuration of our agent-based system. We have demonstrated that business processes containing AI implemented in the Python language can interact with Python. The ANALYZER process, for instance, carries the model training and prediction functions implemented in InterSystems IRIS using the Python language, as shown below:
Figure 7 Model training function implemented in ANALYZER process in InterSystems IRIS using Python
Each of the steps in this process is responsible for a specific interaction with Python: a transfer of input data from the InterSystems IRIS context to the Python context, a transfer of code for execution to Python, or a return of output data from the Python context to the InterSystems IRIS context. The most used type of interaction in our example is the transfer of code for execution in Python:
Figure 8 Python code deployed in ANALYZER process in InterSystems IRIS is sent for execution to Python
In some interactions there is a return of output data from the Python context to the InterSystems IRIS context:
Figure 9 Visual trace of ANALYZER process session with a preview of the output returned by Python in one of the process steps
Launching the robot
Launching the robot right here in this article? Why not: here is the recording of our webinar in which (besides other interesting AI stories relevant to robotization!) the example discussed in this article was demoed. Webinar time is always limited, unfortunately, and we prefer to showcase our work as illustratively, though briefly, as possible, so below we share a more complete overview of the outputs produced (7 training runs, including the initial training, instead of just 3 in the webinar):
Figure 10 Robot reaching a steady AUC above 0.8 on prediction
These results are in line with our intuitive expectations: as the training dataset gets filled with “labeled” positive and negative tweets, the accuracy of our classification model improves (this is proven by the gradual increase of the AUC values shown on prediction). What conclusions can we draw at the end of this article?
• InterSystems IRIS is a powerful platform for the robotization of processes involving artificial intelligence
• Artificial intelligence can be implemented both in the external environment (e.g., Python or R with their modules containing ready-to-use algorithms) and in the InterSystems IRIS platform (using native function libraries or by writing algorithms in the Python and R languages). InterSystems IRIS secures interaction with external AI toolsets, allowing their capabilities to be combined with its native functionality
• InterSystems IRIS robotizes AI by applying the “three A”: adaptable, adaptive, and agent business processes (or, in other words, analytic processes)
• InterSystems IRIS operates external AI (Python, R) via kits of specialized interactions: transfer/return of data, transfer of code for execution, etc. One analytic process can interact with several mathematical toolsets
• InterSystems IRIS consolidates input and output modeling data on a single platform and maintains historization and versioning of calculations
• Thanks to InterSystems IRIS, artificial intelligence can be used both as specialized analytic mechanisms and built into OLTP and integration solutions
For those who have read this article and become interested in the capabilities of InterSystems IRIS as a platform for developing and deploying machine learning and artificial intelligence mechanisms, we propose a further discussion of the potential scenarios that are relevant to your company and a collaborative definition of the next steps. The contact e-mail of our AI/ML expert team is MLToolkit@intersystems.com.
Announcement
Anastasia Dyubaylo · Sep 20, 2021
Hey Community,
New video for an easy start on InterSystems Partner Directory:
⏯ How to list a solution on InterSystems Partner Directory
Stay tuned and see you at https://partner.intersystems.com!
Article
Evgeny Shvarov · Sep 20, 2021
Hi folks!
Sometimes we need to import data into InterSystems IRIS from CSV. It can be done e.g. via csvgen tool that generates a class and imports all the data into it.
But what if you already have your own class and want to import data from CSV into your existing table?
There are numerous ways to do that, but you can use csvgen (or csvgen-ui) again! I prepared an example and am happy to share it. Here we go!
The concept is the following: I have class A, and file.csv contains the data for a column I need in my class.
The steps:
create Class B using csvgen,
perform SQL Update to add class B data to the class A
delete Class B.
To demonstrate the concept, I created a simple demo project. The project imports the Countries dataset, which contains the dc_data.Country class with different information on countries, including GNP.
But the data on GNP is outdated, and I have more recent data in this CSV. Here is a method that shows the GNP, e.g. for Angola:
ClassMethod ShowGNP() As %Status
{
Set sc = $$$OK
&sql(
SELECT TOP 1 name,gnp into :name,:gnp from dc_data.Country
)
if SQLCODE < 0 throw ##class(%Exception.SQL).CreateFromSQLCODE(SQLCODE,"Show Country GNP")
write "Country ",name," gnp=",gnp,!
Return sc
}
So I import the CSV into a generated class with one line:
ClassMethod ImportCSV() As %Status
{
set sc = ##class(community.csvgen).GenerateFromURL("https://raw.githubusercontent.com/evshvarov/test-ipad/master/gnp.csv",",","dc.data.GNP")
Return sc
}
and with the second line I perform an SQL query that imports the updated GNP data into my dc_data.Country class.
ClassMethod UpdateGNP() As %Status
{
Set sc = $$$OK
&sql(
UPDATE dc_data.Country
SET Country.gnp=GNP."2020"
FROM
dc_data.Country Country
INNER JOIN dc_data.GNP GNP
On Country.name=GNP.CountryName
)
if SQLCODE < 0 throw ##class(%Exception.SQL).CreateFromSQLCODE(SQLCODE,"Importing data")
w "Changes to GNP are made from dc.data.GNP",!
Return sc
}
And then I delete the generated class with its data, as I don't need it any more.
ClassMethod DropGNP() As %Status
{
Set sc = $$$OK
&sql(
DROP TABLE dc_data.GNP
)
if SQLCODE < 0 throw ##class(%Exception.SQL).CreateFromSQLCODE(SQLCODE,"Drop csv table")
write "dc.data.DNP class is deleted.",!
Return sc
}
Well, here is the method that does it all at once:
ClassMethod RunAll() As %Status
{
Set sc = $$$OK
zw ..ImportDataset()
zw ..ShowGNP()
zw ..ImportCSV()
zw ..UpdateGNP()
zw ..ShowGNP()
zw ..DropGNP()
Return sc
}
Of course it's just one approach to the problem, but I hope it can be helpful. Looking forward to your feedback!
Stay tuned for a dedicated LOAD DATA command in IRIS SQL coming very soon :-) Now csvgen supports LOAD DATA internally and is still useful for generating the class from scratch for an arbitrary CSV.
Announcement
Anastasia Dyubaylo · Nov 22, 2021
Hey Community,
Welcome to the second InterSystems technical article writing competition! Write an article on any topic related to InterSystems technology:
🎄 InterSystems Tech Article Contest: Christmas Edition 🎄
Duration: November 25 – December 25, 2021
Prizes for everyone: Everyone who publishes an article on Dev Community during this period will receive a special prize pack!
Main Prizes: Apple AirPods Max / Oculus Quest 2 (VR Headset) / Amazon Kindle / Apple AirPods Pro / Raspberry Pi
Join our new contest and your content will be seen by over 55K monthly readers! Details below.
Prizes
1. Everyone is a winner in InterSystems Tech Article Contest! Any user who writes an article during the competition period will receive special prizes:
🎁 InterSystems Branded T-shirt
🎁 InterSystems Branded Coffee Cup
2. Expert Awards – articles will be judged by InterSystems experts:
🥇 1st place: Apple AirPods Max
🥈 2nd place: Oculus Quest 2 (VR Headset)
🥉 3rd place: Amazon Kindle 8G Paperwhite / Apple AirPods Pro / Raspberry Pi 4 8GB with InterSystems IRIS Community Edition ARM installed
Alternatively, any winner can choose a prize from a lower prize tier than their own.
3. Developer Community Award – article with the most likes. The winner will have an option to choose one from the following prizes:
🎁 Apple AirPods Pro
🎁 Amazon Kindle 8G Paperwhite
🎁 Raspberry Pi 4 8GB with InterSystems IRIS Community Edition ARM installed
Note: an author can win only one place in each nomination (in total, one author can win two prizes: one in the Expert and one in the Community nomination).
Who can participate?
Any Developer Community member, except for InterSystems employees. Create an account!
Contest Period
📝 November 25 - December 25: Publication of articles and voting time.
Publish an article (or several) throughout this period. DC members can vote for published articles with Likes; these count as votes in the Community award.
Note: The sooner you publish an article(s), the more time you will have to collect both Experts & Community votes.
🎉 December 26: Winners announcement.
What are the requirements?
❗️ Any article written during the contest period and satisfying the requirements below will automatically enter the competition:
The article must be related to InterSystems technology
The article must be in English
The article must be 100% new (it can be a continuation of an existing article)
The article should not be plagiarized or translated (translations of your own DC articles from another language are allowed)
Article size: >1,000 characters (links are not counted towards character limit)
Team size: individual (multiple entries from the same author are allowed)
What to write about?
❗️ You can choose any tech topic related to InterSystems technology.
🎯 NEW BONUS: If your article is on a topic from the list of proposed topics, you will receive a bonus of 5 Expert votes (for comparison, 1st place selected by an Expert = 3 votes).
Here are some possible areas for choosing your article topic. These are just examples; you are free to choose anything you want.
1. Embedded Python Introduction: Embedded Python is an exciting new feature of InterSystems IRIS allowing developers to write methods, SQL procedures, and more in Python.
2. Embedded Python from Interoperability: Explore how Embedded Python can be leveraged from an Interoperability production.
3. Embedded Python: Translating by Language Constructs: While we aim for seamless Embedded Python integration, there are some tips & tricks to smooth things over: underscore methods, dictionaries, lists, and others. What are the best ways of calling Python features from ObjectScript?
4. Intro to InterSystems Reports Designer: Continuation of this article. This article should cover: catalog creation; creation of the basic report types, namely chart (bar, pie, line, gauge, heatmap, ...), table (summary and detailed), and crosstab; publishing reports to the Reports Server; creating a schedule. A good tutorial to start with: Getting Started with InterSystems Reports.
5. Calling Reports from Interoperability/IRIS: An article describing how to execute (and get) an InterSystems Reports report from IRIS or from an Interoperability production.
6. Map Reports with InterSystems: An article describing how to build an InterSystems Reports report with geospatial data. The HoleFoods dataset contains locations for transactions that you can use.
7. How to do CI/CD with InterSystems IRIS
8. Change Data Capture with Kafka Connect: An example that shows how to set up Kafka Connect and export & import SQL data via the Kafka Connect JDBC connector.
9. Applying analytics / ML to the SQL Statement Index
10. My favourite maintenance tasks, automated
11. Leveraging the Audit database
12. The three steps to set up GitHub Actions that make your app invincible
13. OAuth2 authorization in IRIS instance
14. Setup mirroring on K8s
15. Using %MDX and %KPI instead of Subject Area in IRIS Analytics
16. Trying External Language Gateways / compare to the gateways of old: Example
17. Streaming events to Kafka from IAM
18. IntegratedML walkthrough
19. Integrating cloud services with productions: e.g. MS Azure Cognitive Services or Amazon Rekognition.
20. Working with IKO
21. IKO IRIS on AWS Kubernetes with Hugepages
22. Incorporating backups with IKO
23. IKO – Create a cluster with compute nodes, SAM, and no sharding: Include the CPF file to set up our best practices.
24. Data Science shared workgroup setup with ECP: There is a data server and each data scientist has a compute node on their desktop. Show that the data is available when disconnected and syncs when you reconnect.
25. Article discussing storage options for cloud deployments (performance differences between local storage, block storage, etc.) and trade-offs (you might not need mirrors if using block storage, etc.)
26. Building IRIS images with Docker Build Mounts: Details
27. InterSystems IRIS CUDA image: There's a way to use GPUs/CUDA from inside the container. Describe how to build an InterSystems IRIS image with CUDA support.
Note: Articles on the same topic from different authors are allowed.
Feel free to submit your topic ideas in the comments to this post.
So,
We're waiting for your great articles!
Good luck and let the power of Pulitzer be with you! ✨
WOW!
Already 7 articles are in the game! Who's next?))
btw, if you publish an article for your Open Exchange contest application, it automatically goes to the Tech Article writing competition. So you have a double win! 😉
I'd like some clarification on the prompt "Using %MDX and %KPI instead of Subject Area in IRIS Analytics". I don't typically think of %MDX and %KPI as alternatives to creating/using a subject area in the way that the prompt implies - is the intention just to have an article that explains how to use %MDX and %KPI, or is there a specific use case that you are hoping it will explain?
edit: Doesn't look like I'm eligible to participate, but maybe someone else wants to write this...
Hey Developers!
We have a lot of new articles in our InterSystems Tech Article Contest: Christmas Edition 🎄!
A program to prohibit the use of old passwords. by @MikhailenkoSergey
Changes to the security level of the system by @MikhailenkoSergey
Deploying solutions without source code from ZPM by @MikhailenkoSergey
Data anonymization, introducing iris-Disguise by @Henry.HamonPereira
The power of XDATA applied to the API Security by @Yuri.Gomes
Leveraging the Audit database by @Yuri.Gomes
Traditional Debugging in ObjectScript by @Robert.Cemper1003
VSCode-ObjectScript on GitHub by @Dmitry.Maslennikov
Why? How? What's zap-api-scan-sample? by @Henrique.GonçalvesDias
OAuth2 and Basic Authentication, Authorization AND Auditing by code from Web Application by @Muhammad.Waseem
How secure is password? by @Dmitry.Maslennikov
Previewing Server Manager 3.0 for VS Code by @John.Murray
Server Manager now showcasing VS Code's new support for pre-release extensions by @John.Murray
ObjectScript REST API Cookbook by @Yuri.Gomes
OAuth2 Authentication with GitHub account from IRIS Web Application by @Muhammad.Waseem
Invite the FHIR® Accelerator Service to your Kubernetes Microservice Party by @Ron.Sweeney1582
Please, go check out those articles!
And vote for the articles you like with your thumbs up here!
Hey Community!
Another article is in the game:
17. MULTIEXCEL by @alex.kosinets
Who will be the next?))
Support the articles you like: https://community.intersystems.com/contests/2 👍
Hey everyone,
Next week, our InterSystems experts will start voting for articles in the Expert Awards. So, if you have not yet taken part in the 2nd Tech Article Contest, hurry up! 😉
Here're the articles that collect our new bonus of 5 Expert votes:
🎯 IntegratedML hands-on lab by @José.Pereira
🎯 IntegratedML walkthrough by @Yuri.Gomes
NEW BONUS: If your article is on the topic from the list of the proposed topics, you will receive a bonus of 5 Expert votes (vs 1st place selected by an Expert = 3 votes).
Who else? There are a few days left until the end of the contest!
BTW, is it really 1000 characters or words? Because 1000 characters are around 260 words.
Yes, 1000 characters. In the next competition, we will increase it ;)
New articles participating in the competition:
18. IntegratedML walkthrough by @Yuri.Gomes
19. Holiday Reading: What Lies Beneath! by @Rob.Tweed
20. IntegratedML hands-on lab by @José.Pereira
21. ZAP API scan GitHub action by @José.Pereira
22. Using IRIS at university (and a fun task) by @Irene.Mikhaylova
4 days left until the end of the contest! Support your favourites with your likes! 👍
4 days left? 😃 Do we have until the end of the year this time?
Announcement
Ben Spead · Oct 28, 2021
Happy #VSummit21 week!!
CCR users should check out the following Virtual Summit '21 session by @Jean.Millette:
Drinking Our Own Champagne: InterSystems AppServices Move from Zen Reports to InterSystems Reports
- Find out how we use the CCR application to make changes to the CCR application (for reports)
- Find out how we moved from Zen Reports to InterSystems Reports for the CCR application
Feel free to ask questions on the content.
Nice work Jean!!
Thanks Ben for the “shout out” and big thanks to you, @Matthew.Giesmann, @Timothy.Leavitt, @Philip.Cantwell, @Paul.Collins, and others on the AppServices and Trak teams for building key parts of the framework that enabled CCR’s move to InterSystems Reports.
My pleasure @Jean.Millette - it is great work which is worthy of a "shout out"!! :)
Announcement
Anastasia Dyubaylo · Dec 20, 2021
Hi Community,
Join us for this walk-through of InterSystems Package Manager ZPM advanced features for developing and deploying InterSystems IRIS solutions:
⏯ InterSystems Package Manager Advanced Topics
🗣 Presenter: @Timothy.Leavitt, Development Manager, Application Services, InterSystems
Subscribe and watch on InterSystems Developers YouTube channel!
Announcement
Anastasia Dyubaylo · Aug 11, 2021
Hi Community,
See a demonstration of InterSystems IRIS Adaptive Analytics and get a detailed description of this new offering for analytics end-users:
⏯ Demonstration: Adaptive Analytics in InterSystems IRIS
🗣 Presenter: @Amir.Samary, Director - Solution Architecture, InterSystems
Subscribe to InterSystems Developers YouTube and stay tuned!
Discussion
Evgeny Shvarov · Aug 10, 2021
Hi developers!
Do you know a CRM that is built with InterSystems IRIS, Caché, or Ensemble on the backend?
Hi. These are two Brazilian software development companies that have products developed with InterSystems technology. They definitely have ERP; I don't know if they have CRM.
http://www.innovatium.com.br/
https://consistem.com.br/
Several companies in Belgium as well:
https://www.datam.be/producten.html
https://www.asci.be/customer-relationship-management/
Thank you, Marcio!
Is there a typo on innovatium? I see the following:
Thanks, Danny!
One of our ISVs in China is also building one.
What information do you need? Is there a site available?
It is a start-up company; they don't have an official site yet.
Announcement
Anastasia Dyubaylo · Aug 5, 2021
Hey everyone,
We've made a video tutorial for an easy start on InterSystems Partner Directory:
⏯ How to list a company on InterSystems Partner Directory
In this video, you will see a few simple steps on how to list your company on InterSystems Partner Directory.
Stay tuned!
Article
Yuri Marx · Sep 6, 2021
The OKR methodology (Objectives and Key Results) is used by the largest companies in the world (such as Google, Netflix, Spotify, BMW, Linkedin, etc.) for agile performance management. It was created in the 1970s by Andrew Grove, president of Intel, and introduced to the general public in his famous book “High Output Management”.
Around 1998 John Doerr, one of the world's top venture capitalists, after coming into contact with Intel's OKR, introduced the model to Larry Page and Sergey Brin, who started a small company called Google.
Sergey and Larry saw the great value of the methodology and began writing the first OKRs for Google and then their individual OKRs.
Since then, the practice has become a quarterly routine at the company. According to Rick Klau (Google Ventures), “Google wasn't Google” until it started practicing OKRs in its early days.
The benefits of OKR are:
It has a simple process;
It works with short cycles;
It involves the entire team;
Brings clarity in direction;
Increases the chance of success;
Encourages high performance;
Increases focus;
Makes measuring results easier
To design your OKR, start with the objectives, they should be:
Limited in time (3 to 6 months) and scope;
Clear and understood by everyone;
Aligned with the company or product/project strategy;
Measurable.
For each objective, define 2 to 5 key outcomes that are also clear, time-limited, and measurable.
The Analytics-OKR app (https://openexchange.intersystems.com/package/Analytics-OKR) is a simple sample that uses IRIS BI (DeepSee) to monitor OKRs.
To see it, follow these steps:
1. Get the source code:
git clone https://github.com/yurimarx/analytics-okr.git
2. Build and start the project:
docker-compose build
docker-compose up -d
3. Open the Dashboards in the User Portal:
http://localhost:32792/csp/irisapp/_DeepSee.UserPortal.Home.zen?$NAMESPACE=IRISAPP&$NAMESPACE=IRISAPP&
4. Open the OKR Expanded Dashboard and see it:
5. You can see OKRs about strategies to grow the InterSystems DC. Check out the other dashboards and pivot tables and enjoy!
Announcement
Anastasia Dyubaylo · Oct 18, 2021
Hey Developers,
This week is a voting week for the InterSystems Interoperability contest! So, it's time to give your vote to the best solutions built with InterSystems IRIS.
🔥 You decide: VOTING IS HERE 🔥
How to vote? Details below.
Experts nomination:
An experienced InterSystems jury will choose the best apps to nominate for the prizes in the Experts Nomination. Please welcome our experts:
⭐️ @Stefan.Wittmann, Product Manager
⭐️ @Robert.Kuszewski, Product Manager
⭐️ @Nicholai.Mitchko, Manager, Solution Partner Sales Engineering
⭐️ @Renan.Lourenco, Solutions Engineer
⭐️ @Jose-Tomas.Salvador, Sales Engineer Manager
⭐️ @Eduard.Lebedyuk, Sales Engineer
⭐️ @Alberto.Fuentes, Sales Engineer
⭐️ @Evgeny.Shvarov, Developer Ecosystem Manager
Community nomination:
For each user, the higher score from the two categories below is selected:
Conditions (points for 1st / 2nd / 3rd place):
If you have an article posted on DC and an app uploaded to Open Exchange (OEX): 9 / 6 / 3
If you have at least 1 article posted on DC or 1 app uploaded to OEX: 6 / 4 / 2
If you make any valid contribution to DC (posted a comment/question, etc.): 3 / 2 / 1
Level (points for 1st / 2nd / 3rd place):
VIP Global Masters level or ISC Product Managers: 15 / 10 / 5
Ambassador GM level: 12 / 8 / 4
Expert GM level or DC Moderators: 9 / 6 / 3
Specialist GM level: 6 / 4 / 2
Advocate GM level or ISC Employees: 3 / 2 / 1
Blind vote!
The number of votes for each app will be hidden from everyone. Once a day we will publish the leaderboard in the comments to this post.
The order of projects on the Contest Page will be as follows: the earlier an application was submitted to the competition, the higher it will be in the list.
P.S. Don't forget to subscribe to this post (click on the bell icon) to be notified of new comments.
To take part in the voting, you need:
Sign in to Open Exchange – DC credentials will work.
Make any valid contribution to the Developer Community – answer or ask questions, write an article, contribute applications on Open Exchange – and you'll be able to vote. Check this post on the options to make helpful contributions to the Developer Community.
If you changed your mind, cancel the choice and give your vote to another application!
Support the application you like!
Note: contest participants are allowed to fix bugs and make improvements to their applications during the voting week, so don't miss out and subscribe to application releases!
Hi, Developers!
Here are the results after the first day of voting:
Expert Nomination, Top 3
IRIS Interoperability Message Viewer by @Henrique.GoncalvesDias
appmsw-telealerts by @MikhailenkoSergey
IRIS Big Data SQL Adapter by @YURI MARX GOMES
➡️ Voting is here.
Community Nomination, Top 3
IRIS Interoperability Message Viewer by @Henrique.GoncalvesDias
appmsw-telealerts by @MikhailenkoSergey
IRIS Big Data SQL Adapter by @YURI MARX GOMES
➡️ Voting is here.
If you want to support an application with your like, please read our voting rules first 😊
Here are the results after 2 days of voting:
Expert Nomination, Top 3
IRIS Interoperability Message Viewer by @Henrique Dias
Node-RED node for InterSystems IRIS by @Dmitry.Maslennikov
appmsw-telealerts by @Sergey Mikhailenko
➡️ Voting is here.
Community Nomination, Top 3
IRIS Interoperability Message Viewer by @Henrique Dias
Node-RED node for InterSystems IRIS by @Dmitry.Maslennikov
appmsw-telealerts by @Sergey Mikhailenko
➡️ Voting is here.
So, the voting continues.
Please support the application you like!
Voting for the InterSystems Interoperability contest goes ahead!
And here're the results at the moment:
Expert Nomination, Top 3
Node-RED node for InterSystems IRIS by @Dmitry Maslennikov
IRIS Big Data SQL Adapter by @Yuri.Gomes
IRIS Interoperability Message Viewer by @Henrique Dias
➡️ Voting is here.
Community Nomination, Top 3
IRIS Interoperability Message Viewer by @Henrique Dias
Node-RED node for InterSystems IRIS by @Dmitry Maslennikov
IRIS Big Data SQL Adapter by @Yuri.Gomes
➡️ Voting is here.
Hey Developers!
3 days left before the end of voting!
Please check out the Contest Board and vote for the applications you like! 👍🏼