Question
Manoj Krishnamoorthy · Jul 19, 2017
Hi, I'm interested in participating in InterSystems Global Masters! Can anyone send me an invitation to join InterSystems Global Masters? Thanks. Thanks, Evgeny. How do I invite a colleague to Global Masters? Will I be able to earn points by sharing the link? Hi, Manoj! You are invited! Check the mail.
Question
Laura Cavanaugh · Aug 1, 2017
My boss would like to change the Ensemble logo that one sees in the management portal, because it's part of the DeepSee Analyzer. I can see where it lives in the generated HTML. I know that you can set the logo in the User Portal settings in DeepSee -- you can specify a URL for your logo. But we'd like to go one step further and change the "Ensemble by InterSystems" to our own logo / company name. Is it possible to change this in the code? Is there a configuration setting to change this? Thanks, Laura  Is that uhmm... an empty rectangle? For me there is the InterSystems Ensemble logo - NOW  Yeah, you can't upload photos directly to this forum and it is somewhat bad, even though there are a lot of image-related configs that most won't even think about using... The only way is hosting in the cloud. Yep, now it's displaying for me too. You can upload images using this button: Thanks Ed! I just went through old DC posts for half an hour+ to find it. It's much better than my link to Facebook. Regards  I can't believe how much I look forward to these discussions. Unfortunately, there is no EnsembleLogo* anywhere in the InterSystems directory. It almost looks like it's simple text, made to look like a logo using CSS. This is from the page source: <div class="portalLogoBox"> <div class="portalLogo" title="Powered by Zen">Ensemble</div> <div class="portalLogoSub">by InterSystems</div> The class portalLogoBox must do all that CSS/HTML stuff to make it look cool. I was wondering if I can change the text from Ensemble to something else. If the user is in the DeepSee Analyzer, it will say DeepSee by InterSystems instead, but with similar CSS modifications. I figured out why he wants to do this; we have a few "portal" users who have access to all of the clients' namespaces. For a demo, they will go into the DeepSee User Portal, then exit to the management portal to switch namespaces, then go back into the DeepSee User Portal. The DeepSee User Portal has settings where you can change the logo that is displayed (what are those classes? I need to be able to change them programmatically rather than manually 50 times!), but when the portal users go back to the management portal, our company logo is lost; instead the lovely Ensemble by InterSystems (Powered by Zen) is there. I personally think IS should get the credit, but my boss is wondering if we can change it for the purposes of these demos. If not, that's OK; but now I'm simply curious. Thanks! Laura  I could swear it was related to this widget's toolbar. It seems they have changed it to an image for newer releases. (Even Caché is now an image instead of pure CSS.) Well, I do agree with crediting IS; however, it should be something like "Powered by InterSystems ENSEMBLE", like a single message, since the focus is to demonstrate your work, not theirs. Do you look for this one? %ENSInstallDir%\CSP\broker\portal\EnsembleLogo210x50.png  I often forget to mention that we are on 2014. Thanks - at least I can tell him we can't do it now, but maybe in a future release we can change the IS logo. Thanks, Laura
Announcement
Evgeny Shvarov · Aug 2, 2017
Hi, Community!
As you know, we will have a meetup in Boston on the 8th of August with the following agenda:
| Time | Topic | Presenter |
|---------|----------------------------------------------|------------------------------------------------|
| 5:30 pm | Registration and welcome coffee | |
| 5:55 pm | Opening | Evgeny Shvarov |
| 6:00 pm | Atelier 1.1 | Andreas Dieckow, Joyce Zhang, Michelle Stolwyk |
| 6:30 pm | REST API in Caché | Fabian Haupt |
| 7:00 pm | Coffee break | |
| 7:30 pm | Online Learning and Developer Community | Douglas Foster, Evgeny Shvarov |
| 8:00 pm | End of the Meetup. Coffee, beverages | |
Let me share some details about the presenters.
Andreas Dieckow (product manager), Joyce Zhang (tech lead), and Michelle Stolwyk (UX designer) from the Atelier team will share the benefits of InterSystems' new IDE and introduce the features coming in the next release.
Fabian Haupt is an InterSystems senior support engineer; you might have worked with him on one of your WRC issues. Fabian is also an honored author on the Developer Community. He will share some best practices on implementing REST API backends using InterSystems Caché, Ensemble, and HealthShare.
Douglas Foster, manager of the InterSystems Online Learning department, will present the latest news about online learning and how the courses have evolved. He will demonstrate the resources available now and will gather feedback about what to include in the future online learning catalog.
And you can present your own topic at the next meetup and share the success you have built with InterSystems technology.
Join us on the 8th of August!
You forgot Evgeny Shvarov, community manager of InterSystems who will persuade you to join and contribute to our beloved Developers Community! 8th of July? Thank you, Sergey! Will do! ) 8th of August of course! Thanks) So I've used my moderator superpowers to correct the first line of the post. Thanks, John!
Article
Niyaz Khafizov · Jul 27, 2018
Hi all. Today we are going to upload an ML model into IRIS Manager and test it.
Note: I have done the following on Ubuntu 18.04, Apache Zeppelin 0.8.0, Python 3.6.5.
Introduction
These days, many different data mining tools let you develop predictive models and analyze your data with unprecedented ease. The InterSystems IRIS Data Platform provides a stable foundation for your big data and fast data applications, offering interoperability with modern data mining tools.
In this series of articles we explore Data mining capabilities available with InterSystems IRIS. In the first article we configured our infrastructure and got ready to start. In the second article we built our first predictive model that predicts species of flowers using instruments from Apache Spark and Apache Zeppelin. In this article we will build a KMeans PMML model and test it in InterSystems IRIS.
InterSystems IRIS provides PMML execution capabilities, so you can upload your model and test it against any data using SQL queries. It will show accuracy, precision, F-score, and more.
Check requirements
First, download JPMML (look at the table and select a suitable version) and move it to any directory. If you use Scala, this is all you need.
If you use Python, run the following in the terminal:
pip3 install --user --upgrade git+https://github.com/jpmml/pyspark2pmml.git
After the success message, go to Spark Dependencies and add a dependency on the downloaded JPMML jar:
Create KMeans model
PMML builder uses pipelines, so I changed the code written in the previous article a bit. Run the following code in Zeppelin:
%pyspark
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans
from pyspark.ml import Pipeline
from pyspark.ml.feature import RFormula
from pyspark2pmml import PMMLBuilder

dataFrame = spark.read.format("com.intersystems.spark").\
    option("url", "IRIS://localhost:51773/NEWSAMPLE").\
    option("user", "dev").\
    option("password", "123").\
    option("dbtable", "DataMining.IrisDataset").load()  # load iris dataset

(trainingData, testData) = dataFrame.randomSplit([0.7, 0.3])  # split the data into two sets
assembler = VectorAssembler(inputCols=["PetalLength", "PetalWidth", "SepalLength", "SepalWidth"], outputCol="features")  # add a new column with features

kmeans = KMeans().setK(3).setSeed(2000)  # clustering algorithm that we use

pipeline = Pipeline(stages=[assembler, kmeans])  # passed data will first run through the assembler and then through kmeans
modelKMeans = pipeline.fit(trainingData)  # pass training data

pmmlBuilder = PMMLBuilder(sc, dataFrame, modelKMeans)
pmmlBuilder.buildFile("KMeans.pmml")  # create the pmml model
It will create a model that predicts Species using PetalLength, PetalWidth, SepalLength, and SepalWidth as features, and save it in PMML format.
PMML is an XML-based predictive model interchange format that provides a way for analytic applications to describe and exchange predictive models produced by data mining and machine learning algorithms. It allows us to separate model building from model execution.
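To give a feel for the format, here is a minimal, hand-abbreviated sketch of what a KMeans clustering model looks like in PMML (field names and numbers are illustrative only, not the actual output of the pipeline above):

```xml
<PMML xmlns="http://www.dmg.org/PMML-4_3" version="4.3">
  <Header description="Example KMeans model"/>
  <DataDictionary numberOfFields="4">
    <DataField name="PetalLength" optype="continuous" dataType="double"/>
    <!-- ... remaining fields ... -->
  </DataDictionary>
  <ClusteringModel functionName="clustering" modelClass="centerBased" numberOfClusters="3">
    <MiningSchema>
      <MiningField name="PetalLength"/>
      <!-- ... remaining fields ... -->
    </MiningSchema>
    <Cluster id="1">
      <Array type="real" n="4">1.46 0.24 5.01 3.42</Array>
    </Cluster>
    <!-- ... remaining clusters ... -->
  </ClusteringModel>
</PMML>
```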
In the output, you will see a path to the PMML model.
Upload and test the PMML model
Open IRIS Manager -> Menu -> Manage Web Applications -> click on your namespace -> enable Analytics -> Save.
Now, go to Analytics -> Tools -> PMML Model Tester
You should see something like the image below:
Click on New -> enter a class name, upload the PMML file (the path was in the output), and click on Import. Paste the following SQL query into Custom data source:
SELECT PetalLength, PetalWidth, SepalLength, SepalWidth, Species,
       CASE Species WHEN 'Iris-setosa' THEN 0 WHEN 'Iris-versicolor' THEN 2 ELSE 1 END AS prediction
FROM DataMining.IrisDataset
We use CASE here because KMeans clustering returns clusters as numbers (0, 1, 2), and if we do not map species to numbers, the results will be counted incorrectly. Please comment if you know how I can replace the cluster number with a species name.
My result is below:
There you can look at detailed analytics:
If you want a better understanding of what true positive, false negative, etc. mean, read about precision and recall.
Conclusion
We have found that the PMML Model Tester is a very useful tool for testing your model against data. It provides detailed analytics, graphs, and an SQL executor, so you can test your model without any external tools.
Links
Previous article
PySpark2PMML
JPMML
ML Pipelines
Apache Spark documentation

Note that in InterSystems IRIS 2018.2, you'll be able to save a PMML model straight into InterSystems IRIS from SparkML, through a simple iscSave() method we added to the PipelineModel interface. You can already try it for yourself in the InterSystems IRIS Experience using Spark. Also, besides this point-and-click batch test page, you can invoke PMML models stored in IRIS programmatically from your applications and workflows, as explained in the documentation. We have a number of customers using it in production, for example to score patient risk models for current inpatient lists at HBI Solutions.  This article is considered an InterSystems Data Platform Best Practice.
Announcement
Evgeny Shvarov · Sep 25, 2018
Hi Community!
I'm pleased to announce that InterSystems IRIS is available on Microsoft Azure Marketplace!
→ InterSystems IRIS on Azure Marketplace
This is the second public cloud offering of InterSystems IRIS, after the recently announced Google Cloud Platform support.
What's inside?
You can find InterSystems IRIS 2018.1.2 for Ubuntu there, with a BYOL (Bring Your Own License) option.
Stay tuned!
Announcement
Anastasia Dyubaylo · Oct 9, 2018
Hi Community! I want to share with you my impressions of Global Summit 2018. It was my first time in the USA and my first Global Summit, and I can say that it was a really amazing trip and a great experience! Here are some key highlights of my choice:
1. This year we had a special Global Masters Meeting Point under my direction. There, all the summit participants could learn more about our gamification platform, ask questions, share ideas, check out some samples of rewards and, of course, register and get their first points. What is more, many of our Advocates were awarded special badges for their active participation in Global Masters and Developer Community life. It was great to meet Developer Community Heroes in person; congratulations again!
2. Developer Community Flash Talks and the launch of InterSystems Open Exchange — a marketplace of solutions, interfaces, plugins and tools built with and built for InterSystems Data Platforms. Big applause goes to all the speakers, @Eduard.Lebedyuk, @Dmitry.Maslennikov, @John.Murray, @Daniel.Tamajon, @Evgeny.Shvarov and @Peter.Steiwer, thank you guys!
3. The Global Summit Partner Pavilion was really full of people involved in interesting conversations. Please find more photos on DC Twitter.
4. A great activity on social networks, especially on Twitter. Here are some facts: more than 1,000 tweets were posted with our special #GlobalSummit18 hashtag and garnered about 2 million impressions. The hashtag #GlobalSummit18 was in the top 20 Twitter hashtags worldwide (11th place). We had 9 broadcasts on DC Twitter and 2 broadcasts on YouTube, with more than 3,000 views in total.
And... what about you? We're waiting for your stories from InterSystems Global Summit 2018! Please share your impressions with our special #GlobalSummit18 hashtag!
Announcement
Jeff Fried · Nov 12, 2018
Note: there are more recent updates, see https://community.intersystems.com/post/updates-our-release-cadence
InterSystems is adopting a new approach to releasing InterSystems IRIS. This blog explains the new release model and what customers should expect to see. We laid this out at Global Summit at the end of the InterSystems IRIS roadmap session and have received a lot of positive feedback from customers.
With this new model we offer two release streams:
1) An annual traditional release that we call EM (for Extended Maintenance)
2) A quarterly release that is tagged CD (for Continuous Delivery) and will be available only in a container format.
Why change? Speed and predictability
The pace of change in our industry is increasing, and this model ensures that we can publish the latest features very quickly to be responsive and competitive in the market. Many customers have told us they want two things:
Quicker time from requesting a new feature to having it available
A predictable schedule that allows them to plan for updates
Our new release cadence, inspired by continuous delivery principles, is similar to two-stream models used at many major software companies and a substantial fraction of enterprise-ready open source projects. Those that have successfully adopted this approach resoundingly report that they have higher quality and lower-risk releases, as well as a faster response time.
What’s not changing? Traditional releases are the same
Traditional releases (the "EM" releases) work the same way that our customers are used to. They receive ongoing maintenance releases, are the basis for adhocs as needed, and are supported on all platforms. Full product installation kits are available through the WRC Software Distribution portal as usual. Field tests will be available for major releases as we have done in the past. Maintenance releases will be made available for EM releases using the same ground rules we’ve used traditionally.
The difference is that these are released once per year, at a predictable time. Version 2019.1 of InterSystems IRIS is scheduled for March of 2019, version 2020.1 is scheduled for March 2020, and so on, as shown in the diagram below.
New quarterly releases are container-only
Every three months, new features and functionality will be available via a new quarterly release stream, denoted with a “CD”. For example, InterSystems IRIS version 2018.2 CD is scheduled for November 2018, version 2019.1 CD is scheduled for February 2019, version 2019.2 CD is scheduled for May 2019 and so on, as shown in the following diagram.
There are restrictions on these CD releases:
They are only available as container images, using the Open Container Initiative (OCI) format. This is widely used and supported by many companies including Docker, Amazon, Microsoft, Google, and IBM.
They only run on OCI-compatible infrastructure. Docker is the most common OCI runtime, so InterSystems provides and supports Docker containers built on Ubuntu Linux. These run on a wide variety of platforms: all major cloud platforms (Amazon AWS, Microsoft Azure, Google GCP, IBM Cloud), essentially all flavors of Linux, Windows Server 2016 and 2019, and more. InterSystems supports deploying containers on Windows 10 and macOS for development only, using Docker for Windows and Docker for Mac respectively. (The most notable platform that does not currently support OCI containers is AIX.)
Because these are containers, there is no install and no image upgrade. You can use the container images provided by InterSystems and compose your own images based on them. To deploy, you simply replace containers. If any data upgrades are needed, InterSystems will supply them along with the release.
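For illustration only, replacing a running container with a new quarterly image might look roughly like this with Docker (the image name and tag are placeholders; actual CD images are currently distributed through the WRC):

```
docker pull <registry>/intersystems/iris:2019.1-cd   # pull the new quarterly image (hypothetical tag)
docker stop iris && docker rm iris                   # remove the container running the old release
docker run --name iris -d -p 52773:52773 <registry>/intersystems/iris:2019.1-cd
```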
With CD releases, InterSystems will not provide maintenance releases, security fixes, or adhocs. If you want to get a change, you can simply take the next release. There is a new release with the latest changes every three months, so you don’t need to wait longer than that for important fixes.
CD releases are fully supported by InterSystems for development, test, and production. Along with each CD release, InterSystems will have a preview program and provide preview images ahead of the final release. Preview images are supported for development and test purposes, but not for production.
Although Containers are relatively new, they are now widely used and provide many benefits. Customers do not need to use the CD releases or adopt containers, but there are many resources available from InterSystems to help with using InterSystems IRIS in containers (including multiple online videos) as well as a large ecosystem around containers in the industry at large.
In addition to providing rapid delivery of new features, CD releases will help with the predictability and stability of the traditional (EM) releases. The first CD release of the year has a corresponding EM release (which is the same except for any platform-specific capabilities), and these include all the functionality of the previous CD release plus more. Developers can work with CD releases and be confident that their code will work with traditional releases as well. Even if you never touch a CD release, you can track what features are released with InterSystems IRIS each quarter and plan confidently.  Hello @Jeffrey.Fried, quick question: will the CD releases be Docker images downloadable only from the WRC, or will there be another public site (Docker marketplace)? Thanks  Hello @David.Reche - we will have these available in public marketplaces, for sure. We will announce here when that is available; currently the image is available through the WRC.  Hi @David Reche - We do plan to make these available from the Docker store as well as the major cloud marketplaces and, of course, via the WRC. The idea is to fit smoothly into the DevOps pipelines and toolsets our customers use. We're working on this, so stay tuned, and in the meantime please use the WRC. Any and all feedback welcome.
Article
Gevorg Arutiunian · Nov 16, 2018
The data model of your solution based on InterSystems platforms constantly changes over time. But what do you do with the data that was entered before? Back then, the data was valid, but what’s happening to it now after a number of data model changes? The answer to this question can be provided by the IDP DV tool that checks the property data of persistent and serial classes according to the types of these properties. In case any discrepancies are found, the tool generates a detailed error report for the user.
## Capabilities
### IDP DV offers three methods of scanning classes:
- Scanning of all classes
- Scanning of all the subclasses of the class
- Scanning of all classes corresponding to the SQL mask
## Usage
To install the tool, just download and import its classes from the repository into the necessary namespace and call the corresponding function.
```
s st = ##class(IDP.DV).ScanAllClasses(.Oid) //for all classes
s st = ##class(IDP.DV).ScanSubclassesOf(Class, .Oid) //for subclasses
s st = ##class(IDP.DV).ScanMatchingClasses(Mask, .Oid) //for SQL
```
### where:
*Oid* — the output structure holding data about invalid objects found in the classes
*Class* — the class whose subclasses (and the class itself) will be scanned
*Mask* — a parameter of the “SELECT ID FROM %Dictionary.ClassDefinition Where ID LIKE ?” SQL query
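For instance, to scan every class in the Sample package with the SQL-mask variant (the mask value here is just an illustration), the call could look like this:

```
s st = ##class(IDP.DV).ScanMatchingClasses("Sample.%", .Oid)
```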
### Example
For our example, let’s use the Sample.Person class and the ScanSubclassesOf method. Let’s start this method without making any changes to it and with making a single change in one of its properties.
### No changes
```
SAMPLES>s st = ##class(IDP.DV).ScanSubclassesOf("Sample.Person", .Oid)
Sample.Employee has 0 invalid objects
Sample.Person has 0 invalid objects
```
### Having changed the Name property from string to integer:
```
SAMPLES>s st = ##class(IDP.DV).ScanSubclassesOf("Sample.Person", .Oid)
Class: Sample.Person
Object: 1
-----------------------------------------------------------------
ERROR #7207: The value 'Ramsay, Jeff X.' of the data type is not a valid number
> ERROR #5802: Checking of the data type failed for property 'Sample.Employee:Name' with the value equal to 'Ramsay, Jeff X.'
...
Class: Sample.Person
Object: 200
----------------------------------------------------------------
ERROR #7207: The value 'Kovalev, Howard F.' of the data type is not a valid number
> ERROR #5802: Checking of the data type failed for property 'Sample.Employee:Name' with the value equal to 'Kovalev, Howard F.'
```
Comments and suggestions are very much welcome.
The [application with source code and documentation is available on Open Exchange](https://openexchange.intersystems.com/index.html#!/package/IDP-DV)
Link to OE shows an error. UPD: Link should be https://openexchange.intersystems.com/index.html#!/package/IDP-DV  Thanks! I fixed it
Article
Alex Woodhead · Jun 13, 2023
Yet another example of applying LangChain, to give some inspiration for the new community Grand Prix contest.
I was initially looking to build a chain to achieve dynamic search over the HTML of the documentation site, but in the end it was simpler to borg the static PDFs instead.
Create new virtual environment
mkdir chainpdf
cd chainpdf
python -m venv .
scripts\activate
pip install openai
pip install langchain
pip install wget
pip install lancedb
pip install tiktoken
pip install pypdf
set OPENAI_API_KEY=[ Your OpenAI Key ]
python
Prepare the docs
import glob
import wget;
url='https://docs.intersystems.com/irisforhealth20231/csp/docbook/pdfs.zip';
wget.download(url)
# extract docs
import zipfile
with zipfile.ZipFile('pdfs.zip','r') as zip_ref:
zip_ref.extractall('.')
# get a list of files
pdfFiles=[file for file in glob.glob("./pdfs/pdfs/*")]
Load docs into Vector Store
import lancedb
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import LanceDB
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.prompts.prompt import PromptTemplate
from langchain import OpenAI
from langchain.chains import LLMChain
embeddings = OpenAIEmbeddings()
db = lancedb.connect('lancedb')
table = db.create_table("my_table", data=[
{"vector": embeddings.embed_query("Hello World"), "text": "Hello World", "id": "1"}
], mode="overwrite")
documentsAll=[]
pdfFiles=[file for file in glob.glob("./pdfs/pdfs/*")]
for file_name in pdfFiles:
    loader = PyPDFLoader(file_name)
    pages = loader.load_and_split()
    # Strip unwanted padding
    for page in pages:
        del page.lc_kwargs
        page.page_content = "".join(page.page_content.split('\xa0'))
    documents = CharacterTextSplitter().split_documents(pages)
    # Ignore the cover pages
    for document in documents[2:]:
        documentsAll.append(document)
# This will take couple of minutes to complete
docsearch = LanceDB.from_documents(documentsAll, embeddings, connection=table)
Prep the search template
_GetDocWords_TEMPLATE = """Answer the Question: {question}
By considering the following documents:
{docs}
"""
PROMPT = PromptTemplate(
input_variables=["docs","question"], template=_GetDocWords_TEMPLATE
)
llm = OpenAI(temperature=0, verbose=True)
chain = LLMChain(llm=llm, prompt=PROMPT)
Are you sitting down? Let's talk with the documentation.
"What is a File adapter?"
# Ask the question
# First query the vector store for matching content
query = "What is a File adapter"
docs = docsearch.similarity_search(query)
# Only using the first two documents to reduce token search size on openai
chain.run(docs=docs[:2],question=query)
Answer:
'\nA file adapter is a type of software that enables the transfer of data between two different systems. It is typically used to move data from one system to another, such as from a database to a file system, or from a file system to a database. It can also be used to move data between different types of systems, such as from a web server to a database.
"What is a lock table?"
# Ask the question
# First, query the vector store for matching content
query = "What is a lock table"
docs = docsearch.similarity_search(query)
# Only using the first two documents to reduce token search size on openai
chain.run(docs=docs[:2],question=query)
Answer:
'\nA lock table is a system-wide, in-memory table maintained by InterSystems IRIS that records all current locks and the processes that have owned them. It is accessible via the Management Portal, where you can view the locks and (in rare cases, if needed) remove them.'
I will leave formatting a user interface for this functionality as a future exercise.  Amazing...
Can we do the same against DC pages? Impressive, and I have some UI ideas. But your example doesn't use IRIS in any way, right? Correct. There is no use of IRIS though the example could be extended to profile metadata from classes. With generative AI the line between Class Dictionary and Documentation could be less distinct. The example doesn't make use of vector metadata yet. The documentation has description of settings, and a deployment has a cpf / api to allude setting values, so maybe it is possible for AI to describe the nature of a deployment or support further information regarding messages / application / event log.
( Deliberately suggesting too many options for a single competition candidate to attempt to take on )
Announcement
Anastasia Dyubaylo · Jul 5, 2023
InterSystems is currently looking for a U.S.-based Developer and Startup Evangelist!
Are you a strong developer who loves writing, speaking and teaching other developers about technology while taking a meaningful role in shaping the experience they have with the platform itself?
Join InterSystems and help us delight developers within the open source community. You will be the subject matter expert and evangelist for modern developers, focused on the end-to-end experience through the entire development cycle, and ensure that we meet the needs of current and future developers.
As the InterSystems Developer and Startup Evangelist, your mission is to educate and engage early adopters and the broader developer community around the InterSystems IRIS data platform and back it up with practical examples. That means doing whatever it takes to bring them into our community of developers and keep them actively and happily building within it while learning from their experiences.
This is a role that will allow you to contribute to building real software and also take a public-facing role representing the platform to developer communities. It is part marketing, part engineering and part community building with a healthy sprinkle of education and public relations.
Job Responsibilities
Expand the number of active and influential open source projects using InterSystems’ technology
Create and deliver impactful content (sample code, blog posts, presentations, etc.) that help developers learn concepts and build successful apps and integrations on our data management platform.
Develop performance indicators and results for awareness and engagement, and create a reporting cadence.
Collaborate across the organization to create videos, courses, or exercises to meet the needs of new-to-InterSystems developers.
Support various development events/hackathons and evangelize our IRIS Data Platform
Skills & Experience
5+ years in software engineering at top companies.
2+ years in developer relations/advocacy/evangelism
A track record of speaking, presenting and teaching highly technical topics.
Continual learner who thrives on figuring out new technology.
Broad understanding of the open source software space.
Superior oral and written communication skills in English.
Experience with a mix of database technologies (SQL/relational, NoSQL/document, key-value, unstructured).
Experience in a variety of programming languages
Willingness to travel and ability to work autonomously.
Bachelor's degree in a STEM-related field
Bonus points for having experience with InterSystems technology
>> APPLY HERE <<
Article
Evgeniy Potapov · Jul 10, 2023
In this article, we will explore the use of parameters, formulas and labels in Logi Report Designer (formerly Logi JReport Designer). What are they for and how to create them?
Using the basic functionality of InterSystems Reports Designer, parameters, formulas and labels, you can significantly improve the detail and information content of the generated report. In addition, these tools allow you to automate some of the processes, which greatly speeds up and facilitates the creation of reports.
Let's analyze each tool separately now.
Parameters are individually configurable variables. They can store both static and dynamic data. For a dynamic parameter, you can set your own SQL query, which will be independent of the SQL query of the main dataset. In this way, you can output data without creating multiple datasets, thus keeping your design organized and clean. Parameters can be used both as a part of functions and SQL queries and as an independent indicator with the following syntax: “@YourParameter”. This quality is simply indispensable when the accuracy and detail of data are required.
A static parameter is a predefined value or a list of values used as a condition or filter on the output. A static parameter can also be used in SQL queries, functions and labels using the same syntax: “@YourParameter”.
Formulas are fully programmable functions in the Java language. This powerful tool greatly expands the possibilities of analytics, allowing you to perform complex calculations and set logical conditions for the output data. Formulas are created in the built-in Java IDE inside Logi with a built-in set of functions for working with such data types as Array, Date / Time, Financial, Math, String, etc. Formulas can work with all data available in InterSystems Reports. The built-in IDE understands parameters, calculated fields, and even other formulas.
A label is a text widget. It serves to display any text and variables. Its purpose speaks for itself: it is used in headers and footers, as a custom legend for a chart or table; in a word, wherever data needs to be titled. Just like all other InterSystems Reports tools, the label is a very flexible widget. It can be placed anywhere on the page, inside a graph or a table.
Here we will look at examples of the most basic use of these tools.
We will create a parameter that returns the number of developers out of the total number of members of the InterSystems community.
In order to create a parameter, you need to click on the Catalog Manager button in the upper left corner of the toolbar.
In the window that opens, on the left, select the Parameters item, then click on the “New Parameter” button in the upper left corner.
The parameter creation window will open.
In the first line, we need to set the parameter name. It is recommended to choose a name that reflects the purpose of the parameter as accurately as possible and at the same time is short enough. It is necessary because during the development process, you will create an abundance of different parameters, and there is a risk of overloading the lists of parameters and functions. In this article, we will analyze the dynamic parameter, so in the second line in the Value Setting we will select Bind with Single Column. In the next Data Source line, we will choose the table from which the selection will occur. In our case, it is Members. Then in the Bind Column, we will select the columns from which we will return the value.
There isn't a separate column that can count the number of developers for us in the Members table. Yet, thanks to the ability to set a special SQL query, we can establish a condition for this particular selection. To do that, scroll down the list of properties, find the line Import SQL and click on it.
A request creation window will appear. It has already pre-recorded the selection string of the members_count column - the total number of participants. We only need to add the condition “where Developer = 1”. We can check the request by clicking on the Check button, and if it is successful, you should click OK.
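Assuming the pre-generated SELECT pulls the members_count column from the Members table as described above, the final Import SQL statement would look roughly like this (a sketch, not the exact generated text):

```sql
SELECT members_count FROM Members WHERE Developer = 1
```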
After completing the previous steps, click OK in the parameter creation window, and the new Developer_member parameter will appear in the directory on the left. At this point, close the Catalog Manager window and try the newly created parameter. To do that, drag it to the report page and click View. After completing this step, you will be prompted to accept the value of the parameter taken from the current database, so click OK.
Ready! Now we can see how many people in the community are developers. This setting will automatically get updated every time the data changes.
Now we will create a formula. This formula will calculate the percentage of developers compared to the total number of participants.
To do that, you should repeat the same steps we took when creating the parameter. Let me remind you. Click on the Catalog Manager, select Formulas, and then New Formula in the upper left corner.
Before the formula creation window appears, we will be prompted to enter a name for the formula.
After that, the IDE will open for you to write code in Java. Built in InterSystems Reports, the compiler allows you to write short functions without defining classes and methods. In order to calculate the percentage, we need to divide our new parameter by the total number of members and multiply by 100. We have already created the CountTotalMembers indicator. We did it with the tool called InterSystems Adaptive Analytics (powered by AtScale). You can find out more about it here (link).
Thus, we got the following formula. Pay attention to the function allowing you to write comments to the code. Well-documented code will make it easier for others to work with your formulae.
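As a rough sketch of what that formula body could look like (the references @Developer_member and @CountTotalMembers follow the naming used above and are assumptions about how the parameter and indicator appear in your catalog):

```java
// Share of developers among all community members, in percent
@Developer_member / @CountTotalMembers * 100
```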
After writing, you can check whether the resulting code has any errors by clicking the checkmark button on the toolbar. The tool will check whether the syntax is correct and whether the specified parameters are present.
After the function has been tested, it can be saved. To make that happen, click the Save button, and the formula will appear in the catalog.
Following the parameter example, our next step will be to drag the new formula onto the page and find out the share of developers in the total number of participants.
This function can be utilized as an indicator for graphs and tables. I will give you an example of using our new formula in a bar graph. You can learn more about charts and how to create them here (link).
In this example, I used a new formula and derived the proportion of developers over the past five months.
It's a bar graph, with the formula developers_to_members on the y-axis and the dimension month_year_num on the x-axis. The result is a very clear visual trend, and we built it in just a few minutes.
Now it is time to look at the labels.
They are embedded into the report page from the Insert tab with the Label button.
A window to enter the text will appear on the page.
The built-in functionality allows you to edit many parameters for the new Label. In this article, we used this widget to enhance our new table.
There are a lot of parameters placed on the panel on the right that can help you design a label. To make them appear, select the Label widget.
To set the borders for the widget, scroll down to the Border category. Select all 4 borders: Bottom Line, Left Line, Right Line, and Top Line, and set all of them to Solid.
In order to fill the background, you need to scroll through the properties to the Color category and select the desired shade. The same can be done in the Format tab on the toolbar at the top.
If you wish to select the font size, stay on the same Format tab, click the drop-down menu with sizes, and pick the desired one. It can also set the font type and the location of the text inside the widget.
For more precise positioning relative to other widgets, use the coordinates the Label has inside the page. By default, sheet sizes are in inches. The positioning settings are located in the Geometry category on the widget's properties panel on the left.
In this article, we tried to cover three basic features of InterSystems Reports (powered by Logi Report). By now, we expect you to be able to create formulae, parameters, and labels with great confidence.  Great article! @Jean.Millette / @Timothy.Leavitt - are we leveraging all of these features in our use of InterSystems Reports?  Hi,
I think you missed some link references:
Btw, great article! :)
Article
Evgeniy Potapov · Mar 18, 2024
Pandas is not just a popular software library. It is a cornerstone in the Python data analysis landscape. Renowned for its simplicity and power, it offers a variety of data structures and functions that are instrumental in transforming the complexity of data preparation and analysis into a more manageable form. It is particularly relevant in such specialized environments as ObjectScript for Key Performance Indicators (KPIs) and reporting, especially within the framework of the InterSystems IRIS platform, a leading data management and analysis solution. In the realm of data handling and analysis, Pandas stands out for several reasons. In this article, we aim to explore those aspects in depth:
Key Benefits of Pandas for Data Analysis:
In this part, we will delve into the various advantages of using Pandas. It includes intuitive syntax, efficient handling of large datasets, and the ability to work seamlessly with different data formats. The ease with which Pandas integrates into existing data analysis workflows is also a significant factor that enhances productivity and efficiency.
Solutions to Typical Data Analysis Tasks with Pandas:
Pandas is versatile enough to tackle routine data analysis tasks, ranging from simple data aggregation to complex transformations. We will explore how Pandas can be used to solve these typical challenges, demonstrating its capabilities in data cleaning, transformation, and exploratory data analysis. This section will provide practical insights into how Pandas simplifies these tasks.
Using Pandas Directly in ObjectScript KPIs in IRIS:
The integration of Pandas with ObjectScript for the development of KPIs in the IRIS platform is simply a game-changer. This part will cover how Pandas can be utilized directly within ObjectScript, enhancing the KPI development process. We will also explore practical examples of how Pandas can be employed to analyze and visualize data, thereby contributing to more robust and insightful KPIs.
Recommendations for Implementing Pandas in IRIS Analytic Processes:
Implementing a new tool in an existing analytics process can be challenging. For that reason, this section aims to provide best practices and recommendations for integrating Pandas into the IRIS analytics ecosystem as smoothly as possible. From setup and configuration to optimization and best practices, we will cover essential guidelines to ensure a successful integration of Pandas into your data analysis workflow. Pandas is a powerful data analytics library in the Python programming language. Below, you can find a few benefits of Pandas for data analytics:
Ease of use: Pandas provides a simple and intuitive interface for working with data. It is built on top of the NumPy library and provides such high-level data structures as DataFrames, which makes it easy to work with tabular data.
Data Structures: The principal data structures in Pandas are Series and DataFrame. Series is a one-dimensional array with labels, whereas DataFrame is a two-dimensional table representing a set of Series. These data structures combined allow convenient storage and manipulation of data.
Handling missing data: Pandas provides convenient methods for detecting and handling missing data (NaN or None). It includes some methods for deleting, filling, or replacing missing values, simplifying your work with real data.
Data grouping and aggregation: With Pandas it is easy to group data by features and apply aggregation functions (sum, mean, median, etc.) to each data group.
Powerful indexing capabilities: Pandas provides flexible tools for indexing data. You can use labels, numeric indexes, or multiple levels of indexing. It allows you to filter, select, and manipulate data efficiently.
Reading and writing data: Pandas supports multiple data formats, including CSV, Excel, SQL, JSON, HTML, etc. It facilitates the process of reading and writing data from/to various sources.
Extensive visualization capabilities: Pandas is integrated with such visualization libraries as Matplotlib and Seaborn, making it simple to create graphs and visualize data, especially with the help of DeepSeeWeb through integration via embedded Python.
Efficient time management: Pandas provides multiple features for working with time series, including powerful tools for working with timestamps and periods.
Extensive data manipulation capabilities: The library provides various functions for filtering, sorting, and reshaping data, as well as joining and merging tables, which makes it a powerful tool for data manipulation.
Excellent performance: Pandas is purposefully optimized to handle large amounts of data. It provides high performance by using Cython and enhanced data structures.
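As a tiny, self-contained illustration of a few of the points above (missing-data handling, grouping, and aggregation), assuming nothing beyond Pandas itself:

```python
import pandas as pd

# A small DataFrame with a missing value
df = pd.DataFrame({
    "member": ["a", "b", "c", "d"],
    "is_developer": [1, 0, 1, None],
    "posts": [10, 3, 7, 5],
})

df["is_developer"] = df["is_developer"].fillna(0)                      # handle missing data
by_role = df.groupby("is_developer")["posts"].agg(["count", "mean"])  # group and aggregate
print(by_role)
```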
Let's look at an example of Pandas' implementation in an ObjectScript environment. We will employ VS Code as our development environment. The choice of IDE in this case was determined by the availability of the InterSystems ObjectScript Extension Pack, which provides a debugger and editor for ObjectScript. First of all, let's create a KPI class:
Class BI.KPI.pandasKpi Extends %DeepSee.KPI
{
}
Then, we should make an XML document defining the type, name, and number of columns and filters of our KPI:
XData KPI [ XMLNamespace = "http://www.intersystems.com/deepsee/kpi" ]
{
<!-- 'manual' KPI type will tell DeepSee that data will be gathered from the class method defined by us-->
<kpi name="MembersPandasDemo" sourceType="manual">
<!-- we are going to need only one column for our KPI query -->
<property columnNo="1" name="Members" displayName="Community Members"/>
<!-- and lastly we should define a filter for our members -->
<filter name="InterSystemsMember"
displayName="InterSystems member"
sql="SELECT DISTINCT ISCMember from Community.Member"/>
</kpi>
}
The next step is to define the python function, write the import, and create the necessary variables:
ClassMethod MembersDF(sqlstring) As %Library.DynamicArray [ Language = python ]
{
# First of all, we import the most important library in our script: IRIS.
# IRIS library provides syntax for calling ObjectScript classes.
# It simplifies Python-ObjectScript integration.
# With the help of the library we can call any class and class method, and
# it returns whatever data type we like, and ObjectScript understands it.
import iris
# Then, of course, import the pandas itself.
import pandas as pd
# Create three empty arrays:
Id_list = []
time_list = []
ics_member = []
Next step: define a query against the database:
# Define SQL query for fetching data.
# The query can be as simple as possible.
# All the work will be done by pandas:
query = """
SELECT
id as ID, CAST(TO_CHAR(Created, 'YYYYMM') as Int) as MonthYear, ISCMember as ISC
FROM Community.Member
order by Created DESC
"""
Then, we need to save the resulting data into an array group:
# Call the class specified for executing SQL statements.
# We use embedded Python library to call the class:
sql_class = iris.sql.prepare(query)
# We use it again to call dedicated class methods:
rs = sql_class.execute()
# Then we use pandas directly on the result set to make dataFrame:
data = rs.dataframe()
We can also pass an argument to filter our DataFrame.
# Filter example
# We take an argument sqlstring which, in this case, contains boolean data.
# With a handy function .loc filtering all the data
if sqlstring is not False:
    data = data.loc[data["ISC"] == int(sqlstring)]
Now, we should group the data and define the x-axis for it:
# Group data by date displayed like MonthYear:
grouped_data = data.groupby(["MonthYear"]).count()
Unfortunately, we cannot take the date column directly from the grouped DataFrame, so, instead, we take the date column from the original DataFrame and process it.
# Filter out duplicate dates and append them to a list.
# After grouping by MonthYear, pandas automatically filters off duplicate dates.
# We should do the same to match our arrays:
sorted_filtered_dates = [item for item in set(data["MonthYear"])]
# Reverse the dates from left to right:
date = sorted(sorted_filtered_dates, reverse=True)
# Convert the Series of counts to a list:
id = grouped_data["ID"].tolist()
# Reverse values according to the date array:
id.reverse()
# In order to return the appropriate object to ObjectScript so that it understands it,
# we call '%Library.DynamicArray' (it is the closest one to python and an easy-to-use type of array).
# Again, we use IRIS library inside python code:
OBJIDList = iris.cls('%Library.DynamicArray')._New()
OBJtimeList = iris.cls('%Library.DynamicArray')._New()
# Append all data to DynamicArray class methods Push()
for i in date:
    OBJtimeList._Push(i)
for i in id:
    OBJIDList._Push(i)
return OBJIDList, OBJtimeList
}
The next step is to define the KPI-specific method so that DeepSee understands what data to take:
// Define method. The method must always be %OnLoadKPI(). Otherwise, the system will not recognise it.
Method %OnLoadKPI() As %Status
{
//Define string for the filter. Set the default to zero
set sqlstring = 0
//Call %filterValues method to fetch any filter data from the widget.
if $IsObject(..%filterValues) {
if (..%filterValues.InterSystemsMember'="")
{
set sqlstring=..%filterValues.%data("InterSystemsMember")
}
}
//Call pandas function, pass filter value if any, and receive dynamic arrays with data.
set sqlValue = ..MembersDF(sqlstring)
//Assign each tuple to a variable.
set idList = sqlValue.GetAt(1)
set timeList = sqlValue.GetAt(2)
//Calculate size of x-axis. It will be rows for our widget:
set rowCount = timeList.%Size()
//Since we need only one column, we assign variable to 1:
set colCount = 1
set ..%seriesCount=rowCount
//Now, for each row, assign time value and ID value of our members:
for rows = 1:1:..%seriesCount
{
set ..%seriesNames(rows)=timeList.%Get(rows-1)
for col = 1:1:colCount
{
set ..%data(rows,"Members")=idList.%Get(rows-1)
}
}
quit $$$OK
}
At this point, compile the KPI and create a widget on a dashboard using the KPI as its data source.
That's it! We have successfully navigated through the process of integrating and utilizing Pandas in our ObjectScript applications on InterSystems IRIS. This journey has taken us from fetching and formatting data to filtering and displaying it, all within a single, streamlined function. This demonstration highlights the efficiency and power of Pandas in data analysis. Now, let's explore some practical recommendations for implementing Pandas within the IRIS environment and conclude with insights on its transformative impact.
Recommendations for Practical Application of Pandas in IRIS
Start with Prototyping:
Begin your journey with Pandas by using example datasets and utilities. This approach helps you understand the basics and nuances of Pandas in a controlled and familiar environment. Prototyping allows us to experiment with different Pandas functions and methods without the risks associated with live data.
Gradual Implementation:
Introduce Pandas incrementally into your existing data processes. Instead of a complete overhaul, identify the areas where Pandas can enhance or simplify data handling and analysis. It could be simple tasks like data cleaning and aggregation, or a more complex analysis where Pandas' capabilities can be fully leveraged.
Optimize Pandas Use:
Prior to working with large datasets, it is crucial to optimize your Pandas code. Efficient code can significantly reduce processing time and resource consumption, which is especially important in large-scale data analysis. Techniques such as vectorized operations, using appropriate data types, and avoiding loops in data manipulation can significantly enhance performance.
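A minimal sketch of the difference between a Python-level loop and a vectorized operation (numbers and column names are made up; timings will vary with your environment):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"members": np.random.randint(0, 100, 1_000_000)})

# Loop-based (slow): apply a Python function row by row
pct_slow = df["members"].apply(lambda x: x / 100)

# Vectorized (fast): operate on the whole column at once
pct_fast = df["members"] / 100
```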
Conclusion
The integration of Pandas into ObjectScript applications on the InterSystems IRIS platform marks a significant advancement in the field of data analysis. Pandas brings us an array of powerful tools for data processing, analysis, and visualization, which are now at the disposal of IRIS users. This integration not only accelerates and simplifies the development of KPIs and analytics but also paves the way for more sophisticated and advanced data analytical capabilities within the IRIS ecosystem. With Pandas, analysts and developers can explore new horizons in data analytics, leveraging its extensive functionalities to gain deeper insights from their data. The ability to process and analyze large datasets efficiently, coupled with the ease of creating compelling visualizations, empowers users to make more informed decisions and uncover trends and patterns that were previously difficult to detect. In summary, Pandas integration into the InterSystems IRIS environment is a transformative step, enhancing the capabilities of the platform and offering users an expanded toolkit for tackling the ever-growing challenges and complexities of modern data analysis.
Article
Eduard Lebedyuk · Apr 18
For my hundredth article on the Developer Community, I wanted to present something practical, so here's a comprehensive implementation of the [GPG Interoperability Adapter for InterSystems IRIS](https://github.com/intersystems-community/GPG).
Every so often, I would encounter a request for some GPG support, so I had several code samples written up already, and I thought I would combine all of them and add the missing GPG functionality for fairly complete coverage. That said, this Business Operation primarily covers data actions, skipping management actions such as key generation, export, and retrieval, as they are usually one-off and performed manually anyway. However, this implementation does support key imports, for obvious reasons. Well, let's get into it.
# What is GPG?
[GnuPG](https://gnupg.org/) is a complete and free implementation of the OpenPGP standard as defined by RFC4880 (also known as PGP). GnuPG allows you to encrypt and sign your data and communications and perform the corresponding tasks of decryption and signature verification.
For the InterSystems Interoperability adapter, I will be using Embedded Python and, specifically, the [gnupg](https://gnupg.readthedocs.io/en/latest/) Python library.
> The gnupg module allows Python programs to make use of the functionality provided by the GNU Privacy Guard (abbreviated GPG or GnuPG). Using this module, Python programs can encrypt and decrypt data, digitally sign documents and verify digital signatures, manage (generate, list and delete) encryption keys, using Public Key Infrastructure (PKI) encryption technology based on OpenPGP.
# Disclaimer
This project, whenever possible, aims to use GPG defaults. Your organization's security policies might differ. Your jurisdiction or your organization's compliance with various security standards might require you to use GPG with different settings or configurations. The user is wholly responsible for verifying that cryptographic settings are adequate and fully compliant with all applicable regulations. This module is provided under an MIT license. Author and InterSystems are not responsible for any improper or incorrect use of this module.
# Installation
## OS and Python
First, we'll need to install `python-gnupg`, which can be done using `pip` or `irispip`:
```
irispip install --target C:\InterSystems\IRIS\mgr\python python-gnupg
```
If you're on Windows, you should [install GPG itself](https://gnupg.org/download/index.html). GPG binaries must be in the path, and you must restart IRIS after GPG installation. If you're on Linux or Mac, you likely already have GPG installed.
## InterSystems IRIS
After that, load the [code](https://github.com/intersystems-community/GPG) into any Interoperability-enabled namespace and compile it. The code is in `Utils.GPG` package and has the following classes:
- `Operation`: main Business Operation class
- `*Request`: Interoperability request classes
- `*Response`: Interoperability response classes
- `File*`: Interoperability request and response classes using `%Stream.FileBinary` for payload
- `Tests`: code for manual testing, samples
Each request has two properties:
- `Stream` — set that to your payload. In File* requests, your stream must be of the `%Stream.FileBinary` class; for non-file requests, it must be of the `%Stream.GlobalBinary` class.
- `ResponseFilename` — (Optional) If set, the response will be written into a file at the specified location. If not set, for File requests, the response will be placed into a file with `.asc` or `.txt` added to the request filename. If not set, for global stream requests, the response will be a global stream.
The request type determines the GPG operation to perform. For example, `EncryptionRequest` is used to encrypt plaintext payloads.
Each response (except for Verify) has a `Stream` property, which holds the response, and a `Result` property, which holds a serializable subset of a GPG result object converted into IRIS persistent object. The most important property of a `Result` object is a boolean `ok`, indicating overall success.
## Sample key generation
Next, you need a sample key; skip this step if you already have one ([project repo](https://github.com/intersystems-community/GPG) also contains sample keys, you can use them for debugging, passphrase is `123456`):
Use any Python shell (for example, `do $system.Python.Shell()`):
```python
import gnupg
gpg_home = 'C:\InterSystems\IRIS\Mgr\pgp'
gpg = gnupg.GPG(gnupghome=gpg_home)
input_data = gpg.gen_key_input(key_type="RSA", key_length=2048)
master_key = gpg.gen_key(input_data)
public_key = 'C:\InterSystems\IRIS\Mgr\keys\public.asc'
result_public = gpg.export_keys(master_key.fingerprint, output=public_key)
private_key = 'C:\InterSystems\IRIS\Mgr\keys\private.asc'
result_private = gpg.export_keys(master_key.fingerprint, True, passphrase="", output=private_key)
```
You must set `gpg_home`, `private_key`, and `public_key` to valid paths. Note that a private key can only be exported with a passphrase.
# Production configuration
Add `Utils.GPG.Operation` to your Production, there are four custom settings available:
- `Home`: writable directory for GPG to keep track of an internal state.
- `Key`: path to a key file to import
- `Credentials`: if a key file is passphrase protected, select a Credential with a password to be used as a passphrase.
- `ReturnErrorOnNotOk`: If this is `False` and the GPG operation fails, the response will be returned with all the info we managed to collect. If this is `True`, any GPG error will result in an exception.
On startup, the operation loads the key and logs `GPG initialized` if everything is okay. After that, it can accept all request types based on an imported key (a public key can only encrypt and verify).
# Usage
Here's a sample encryption request:
```objectscript
/// do ##class(Utils.GPG.Tests).Encrypt()
ClassMethod Encrypt(target = {..#TARGET}, plainFilename As %String, encryptedFilename As %String)
{
if $d(plainFilename) {
set request = ##class(FileEncryptionRequest).%New()
set request.Stream = ##class(%Stream.FileBinary).%New()
$$$TOE(sc, request.Stream.LinkToFile(plainFilename))
} else {
set request = ##class(EncryptionRequest).%New()
set request.Stream = ##class(%Stream.GlobalBinary).%New()
do request.Stream.Write("123456")
$$$TOE(sc, request.Stream.%Save())
}
if $d(encryptedFilename) {
set request.ResponseFilename = encryptedFilename
}
set sc = ##class(EnsLib.Testing.Service).SendTestRequest(target, request, .response, .sessionid)
zw sc, response, sessionid
}
```
In the same manner, you can perform Decryption, Sign, and Verification requests. Check `Utils.GPG.Tests` for all the examples.
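For instance, a decryption call could mirror the encryption example above. The class name below follows the `*Request` naming pattern described earlier and should be checked against `Utils.GPG.Tests` in the repo; treat this as a sketch rather than the definitive API:

```objectscript
// Minimal sketch: send a previously encrypted global stream for decryption
set request = ##class(Utils.GPG.DecryptionRequest).%New()
set request.Stream = ##class(%Stream.GlobalBinary).%New()
do request.Stream.CopyFrom(encryptedStream) // encryptedStream holds the ciphertext from an earlier EncryptionRequest
$$$TOE(sc, request.Stream.%Save())
set sc = ##class(EnsLib.Testing.Service).SendTestRequest("Utils.GPG.Operation", request, .response, .sessionid)
zw sc, response
```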
# Why Business Operation?
While writing this, I received a very interesting question about why GPG needs to be a separate Business Host and not a part of a Business Process. As this can be very important for any cryptography code, I wanted to include my rationale on that topic.
I would like to start with how Business Processes work and why this is a crucial consideration for cryptography code.
Consider a simple Business Process, `User.BPL`, whose context has a single property `x` and whose body consists of two assign activities, setting `context.x` to 1 and then to 2.
It will generate the `Context` class with one property:
```objectscript
Class User.BPL.Context Extends Ens.BP.Context
{
Property x As %Integer;
}
```
And a `State` class with two methods (simplified):
```objectscript
Method S1(process As User.BPL, context As User.BPL.Context)
{
Set context.x=1
Set ..%NextState="S2"
Quit ..ManageState()
}
Method S2(process As User.BPL, context As User.BPL.Context)
{
Set context.x=2
Set ..%NextState="S3"
Quit ..ManageState()
}
```
Since BP is a state machine, it will simply call the first state and then whatever is set in `%NextState`. Each state has information on all possible next states—for example, one next state for a true path and another for a false path in the if block state.
Between state invocations, however, the BP engine manages (persists) the state. In our case, it saves the `User.BPL.Context` object, which holds the entire context: the property `x`.
But there is no guarantee that, after the state of a particular BP invocation is saved, the next state of that invocation will be executed immediately.
The BP engine might wait for a reply from a BO or another BP, work on another invocation, or even work on another process entirely if we are using shared pool workers. Even with a dedicated worker pool, another worker might grab the same process invocation to continue working on it.
This is usually fine, since the worker's first action before executing the next state is loading the saved context back from the database: in our example, an object of the `User.BPL.Context` class with one integer property `x`, which works.
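Because the generated context class is an ordinary persistent object (it extends `Ens.BP.Context`), any worker job can reload it before running the next state; here is a rough illustration, where `contextId` stands for a hypothetical saved context ID:
```objectscript
// The context saved between states is just a persistent object,
// so another worker can open it and carry on; contextId is hypothetical
set context = ##class(User.BPL.Context).%OpenId(contextId)
write context.x
```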
But in the case of any cryptography library, the context must contain something along the lines of:
```objectscript
/// Private Python object holding GPG module
Property %GPG As %SYS.Python;
```
This is a runtime Python object that cannot be persisted. It also likely cannot be pickled or even dilled, since we initialize the crypto context to hold a key; the library is rather pointless without it, after all.
So, while it could theoretically work if the entire cryptography workload (idempotent init, idempotent key load, encryption, signing) is handled within one state, that is a consideration that must always be observed carefully. In many cases it will work in low-load environments (i.e., dev), where there is no queue to speak of and one BP invocation will likely progress from beginning to end uninterrupted. But when the same code is promoted to a high-load environment with queues and resource contention (i.e., live), the BP engine is far likelier to switch between different invocations to speed things up.
That's why I highly recommend extracting your cryptography code into a separate business operation. Since one business operation can handle multiple message types, you can have one business operation that processes PGP signing/encryption/verification requests. Since BOs (and BSes) are not state machines, once you load the library and key(s) in the init code, they will not be unloaded until your BH job expires one way or another.
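To make the contrast concrete, here is a rough sketch (not the adapter's actual code) of the business operation shape: the Python module and key are loaded once in `OnInit` and stay in memory for every message the job handles:
```objectscript
/// Rough sketch only; the real implementation is Utils.GPG.Operation
Class Demo.GPG.SketchOperation Extends Ens.BusinessOperation
{

/// Runtime-only Python handle; it never needs to be persisted
Property %GPG As %SYS.Python;

Method OnInit() As %Status
{
    // Done once per job: load the Python module (and, in the real
    // adapter, create the GPG object and import the key)
    set ..%GPG = ##class(%SYS.Python).Import("gnupg")
    quit $$$OK
}

Method Encrypt(request As Ens.Request, Output response As Ens.Response) As %Status
{
    // ..%GPG is still loaded here: no state machine ever unloaded it
    quit $$$OK
}

XData MessageMap
{
<MapItems>
    <MapItem MessageType="Ens.Request">
        <Method>Encrypt</Method>
    </MapItem>
</MapItems>
}

}
```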
# Conclusion
The GPG Interoperability Adapter for InterSystems IRIS lets you use GPG easily whenever you need encryption/decryption or signing/verification.
# Documentation
- [GnuPG](https://gnupg.org/)
- [Python GnuPG](https://gnupg.readthedocs.io/en/latest/)
- [OpenExchange](https://openexchange.intersystems.com/package/GPG)
- [Repo](https://github.com/intersystems-community/GPG)
Announcement
Anastasia Dyubaylo · Apr 7
Hi Community,
It's time to announce the winners of the AI Programming Contest: Vector Search, GenAI and AI Agents!
Thanks to all our amazing participants who submitted 15 applications 🔥
Now it's time to announce the winners!
Experts Nomination
🥇 1st place and $5,000 go to the bg-iris-agent app by @geotat, @Elena.Karpova, @Alena.Krasinskiene
🥈 2nd place and $2,500 go to the mcp-server-iris app by @Dmitry.Maslennikov
🥉 3rd place and $1,000 go to the langchain-iris-tool app by @Yuri.Gomes
🏅 4th place and $500 go to the Facilis app by @Henrique, @henry, @José.Pereira
🏅 5th place and $300 go to the toot app by @Alex.Woodhead
🌟 $100 go to the iris-AgenticAI app by @Muhammad.Waseem
🌟 $100 go to the iris-easybot app by @Eric.Fortenberry
🌟 $100 go to the oncorag app by Patrick Salome
🌟 $100 go to the AiAssistant app by @XININGMA
🌟 $100 go to the iris-data-analysis app by @lando.miller
Community Nomination
🥇 1st place and $1,000 go to the AiAssistant app by @XININGMA
🥈 2nd place and $600 go to the bg-iris-agent app by @geotat, @Elena.Karpova, @Alena.Krasinskiene
🥉 3rd place and $300 go to the iris-data-analysis app by @lando.miller
🏅 4th place and $200 go to the Facilis app by @Henrique, @henry, @José.Pereira
🏅 5th place and $100 go to the langchain-iris-tool app by @Yuri.Gomes
Our sincerest congratulations to all the participants and winners!
Join the fun next time ;)
Article
Yuri Marx · Nov 1, 2021
InterSystems IRIS is a great data platform, and it meets the current requirements of the market. In this article, you will see the top 10 features:
Note: this list was updated because many features have been added to IRIS over the last 3 years (thanks @Kristina.Lauer).
| Rank | Feature | Why | Learning more about it |
|------|---------|-----|------------------------|
| 1 | Democratized analytics | InterSystems IRIS Adaptive Analytics delivers virtual cubes with centralized business semantics, abstracted from technical details and modeling, so business users can easily and quickly build their analyses in Excel or their preferred analytics product (Power BI, Tableau, etc.), with no consumption restrictions per user. InterSystems Reports is a low-code report designer for delivering operational data reports embedded in any application or in a web report portal. | Overview of Adaptive Analytics, Adaptive Analytics Essentials, Introduction to InterSystems Reports, Delivering Data Visually with InterSystems Reports |
| 2 | API Manager | Digital assets are consumed through REST APIs, so reuse, security, consumption, the asset catalog, the developer ecosystem, and other aspects need to be governed from a central point. The API Manager is the right tool for this, which is why every company has, or wants to have, one. | Hands-On with API Manager for Devs |
| 3 | Scalable Databases | Sharded databases: the total amount of data created, captured, copied, and consumed globally is growing rapidly; it reached 64.2 zettabytes in 2020 and is projected to exceed 180 zettabytes by 2025 (source: https://www.statista.com/statistics/871513/worldwide-data-created/). In this scenario, it is critical for the business to process data in a distributed way (in shards, as Hadoop or MongoDB do) to maintain and increase performance. IRIS is also about 3 times faster than Caché, and faster than AWS databases in the AWS cloud. Columnar storage: stores repeating data in columns instead of rows, allowing up to 10x higher performance, especially for aggregated (analytical) workloads. | Planning and Deploying a Sharded Cluster, Scaling for Data Volume with Sharding, Increasing Analytical Query Speed Using Columnar Storage, Using Columnar Storage |
| 4 | Python support | Python is the most popular language for AI, and AI is at the center of business strategy because it delivers new insights, higher productivity, and lower costs (see the short embedded Python sketch after this table). | Writing Python Applications with InterSystems, Leveraging Embedded Python in Interoperability Productions |
| 5 | Native APIs (Java, .NET, Node.js, Python) and PEX | The US has nearly 1 million open IT jobs (source: https://www.cnbc.com/2019/11/06/how-switching-careers-to-tech-could-solve-the-us-talent-shortage.html), and it is very hard to find an ObjectScript developer. It is therefore important to be able to use IRIS features, such as interoperability, from the development team's primary programming language (Python, Java, .NET, etc.). | Creating Interoperability Productions Using PEX, InterSystems IRIS for Coders, Node.js QuickStart, Using the Native API for Python |
| 6 | Interoperability, FHIR and IoT | Businesses constantly connect and exchange data, and departments need to work connected to deliver business processes with more strategic value and lower cost. The best technology for this is interoperability tooling: ESB, integration adapters, business process automation engines (BPL), data transformation tools (DTL), and market interoperability standards such as FHIR and MQTT/IoT. InterSystems interoperability supports all of this (for FHIR, use IRIS for Health). | Receiving and Routing Data in a Production, Building Basic FHIR Integrations with InterSystems, Monitoring Remotely with MQTT, Building Business Integrations with InterSystems IRIS |
| 7 | Cloud, Docker & Microservices | Everyone now wants a cloud microservices architecture: breaking up monoliths into projects that are smaller, less complex, less coupled, more scalable, reusable, and independent. IRIS lets you deploy data, application, and analytics microservices thanks to its support for sharding, Docker, Kubernetes, distributed computing, DevOps tools, and low CPU/memory consumption (IRIS even supports ARM processors!). Microservices do, however, require API management, using API Manager, to stay aligned with the business. | Deploying InterSystems IRIS in Containers and the Cloud, Deploying and Testing InterSystems Products Using CI/CD Pipelines |
| 8 | Vector Search and Generative AI | Vectors are mathematical representations of data and textual semantics (NLP), and they are the raw material that generative AI applications use to understand questions and tasks and return correct answers. Vector repositories and searches store these vectors (AI processing) so that, for each new task or question, what has already been produced can be retrieved (AI memory or knowledge base), making everything faster and cheaper. | Developing Generative AI Applications, Using Vector Search |
| 9 | VSCode support | VSCode is the most popular IDE, and InterSystems IRIS has a good set of tools for it. | Developing on an InterSystems Server Using VS Code |
| 10 | Data Science | The ability to apply data science to data, integrations, and transaction requests and responses, using Python, R, and IntegratedML (AutoML), brings AI to the moment the business needs it. InterSystems IRIS delivers AI with Python, R, and IntegratedML (AutoML). | Hands-On with IntegratedML, Developing in Python or R within InterSystems IRIS, Predicting Outcomes with IntegratedML in InterSystems IRIS |
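As a small illustration of the embedded Python support mentioned in row 4 of the table, here is a hypothetical class whose method body is plain Python but is callable like any ObjectScript method:
```objectscript
/// Hypothetical example of embedded Python inside an IRIS class
Class Demo.EmbeddedPython Extends %RegisteredObject
{

ClassMethod Normalize(text As %String) As %String [ Language = python ]
{
    # Plain Python running inside the IRIS process
    return " ".join(text.lower().split())
}

}
```
Calling `write ##class(Demo.EmbeddedPython).Normalize("  Hello   WORLD ")` from an ObjectScript terminal would return `hello world`.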
nice summary :) 💡 This article is considered an InterSystems Data Platform Best Practice. I updated this article, thanks @Kristina.Lauer