Announcement
Evgeny Shvarov · Jul 14

Technology Bonuses for the InterSystems Developer Tools Contest 2025

Here are the technology bonuses for the InterSystems Developer Tools Contest 2025 that will give you extra points in the voting:

- IRIS Vector Search usage - 3
- Embedded Python usage - 3
- InterSystems Interoperability - 3
- InterSystems IRIS BI - 3
- VSCode Plugin - 3
- FHIR Tools - 3
- Docker container usage - 2
- ZPM Package Deployment - 2
- Implement InterSystems Community Idea - 4
- Find a bug in Embedded Python - 2
- Article on Developer Community - 2
- The second article on Developer Community - 1
- Video on YouTube - 3
- First Time Contribution - 3

See the details below.

IRIS Vector Search - 3 points
Starting from the 2024.1 release, InterSystems IRIS includes a new vector search technology that lets you build vectors over InterSystems IRIS data and search the indexed vectors. Use it in your solution and collect 3 bonus points. Here is the demo project that leverages it.

Embedded Python - 3 points
Use Embedded Python in your application and collect 3 extra points. You'll need at least InterSystems IRIS 2021.2 for it.

InterSystems Interoperability - 3 points
Make a tool to enhance the developer experience or to maintain, monitor, or use the InterSystems Interoperability engine. Interoperability tool example. Interoperability adapter example. Basic Interoperability template. Python Interoperability template.

InterSystems IRIS BI - 3 points
Develop a tool that improves the developer experience or uses the InterSystems IRIS BI feature of the IRIS platform. Examples: DeepSeeWeb, IRIS BI Utils, AnalyzeThis. IRIS BI Analytics template.

VSCode Plugin - 3 points
Develop a plugin for the Visual Studio Code editor that helps developers work with InterSystems IRIS. Examples: VSCode ObjectScript, CommentToObjectScript, OEX-VSCode-snippets-Example, irislab, vscode-snippets-template and more.

FHIR Tools - 3 points
Develop a tool that helps to develop and maintain FHIR applications in InterSystems IRIS, or that helps with FHIR enablement tools such as FHIR SQL Builder, FHIR Object Model, and InterSystems FHIR Server. Here is a basic InterSystems FHIR template and examples of FHIR-related tools.

Docker container usage - 2 points
The application gets a 'Docker container' bonus if it uses InterSystems IRIS running in a Docker container. Here is the simplest template to start from.

ZPM Package deployment - 2 points
You can collect the bonus if you build and publish a ZPM (ObjectScript Package Manager) package for your full-stack application so it can be deployed with the zpm "install your-multi-model-solution" command on IRIS with the ZPM client installed. ZPM client. Documentation.

Implement Community Opportunity Idea - 4 points
Implement any developer-tools-related idea from the InterSystems Community Ideas portal that has the "Community Opportunity" status. Only ideas created before the contest was announced will be accepted. This will give you 4 additional bonus points.

Find a bug in Embedded Python - 2 points
We want broader adoption of InterSystems Embedded Python, so we encourage you to report the bugs you encounter while developing your Python application with IRIS so that they can be fixed. Please submit the bug here in the form of an issue, along with steps to reproduce it. You can collect 2 bonus points for the first reproducible bug.

Article on Developer Community - 2 points
Write a brand new article on the Developer Community that describes the features of your project and how to work with it. The minimum article length is 1000 characters. Collect 2 points for the article.
*This bonus is subject to the discretion of the experts, whose decision is final.

The second article on Developer Community - 1 point
You can collect one more bonus point for a second article or a translation of the first article. The second article should go into detail about a feature of your project. The third and further articles will not bring more points, but the attention will be all yours.
*This bonus is subject to the discretion of the experts, whose decision is final.

Video on YouTube - 3 points
Make a YouTube video that demonstrates your product in action and collect 3 bonus points for each. Examples.

First-Time Contribution - 3 points
Collect 3 bonus points if you are participating in InterSystems Open Exchange contests for the first time!

The list of bonuses is subject to change. Stay tuned! Good luck with the competition!

The Java language is very important for IRIS; many native features of IRIS use Java libraries, like the XML engine. But only Embedded Python has a bonus (+3) and Java doesn't have any! I suggest creating a bonus for Java as well.
And I'm using NodeJS, and the official driver has so many issues, quite critical ones, that it is barely usable, but only bugs in Embedded Python are worth extra points. Does not look fair.
NodeJS support requires a lot of attention too, not only Python.
I agree with you.
Announcement
Carmen Logue · Jul 22

InterSystems Reports Version 25.1 Release Announcement

InterSystems Reports version 25.1 is now available from the InterSystems Software Distribution site in the Components section. The software is labeled InterSystems Reports Designer and InterSystems Reports Server and is available for macOS, Windows, and Linux operating systems. This new release brings enhancements and fixes from our partner, insightsoftware. InterSystems Reports 25.1 is powered by Logi Report Version 25.1, which includes:

- support for dynamic subject construction in scheduled email distribution of reports
- the ability to easily switch between metric and imperial units of measure
- several improvements to ease of alignment when designing pixel-perfect page reports

For more information about these features and others, see the release notes available from insightsoftware. Note that the InterSystems Reports 25.1 installation requires JDK version 11 or 17 to execute. If you are using JDK 8, please upgrade before installing InterSystems Reports. For more information about InterSystems Reports, see the InterSystems documentation and learning services content.
Announcement
Anastasia Dyubaylo · Jul 28

Time to vote in the InterSystems Developer Tools Contest 2025

Hi Community, It's voting time! Cast your votes for the best applications in our InterSystems Developer Tools Contest: 🔥 VOTE FOR THE BEST APPS 🔥 How to vote? Details below. Experts nomination: An experienced jury from InterSystems will choose the best apps to nominate for the prizes in the Experts Nomination. Community nomination: All active members of the Developer Community with a “trusted” status in their profile are eligible to vote in the Community nomination. To check your status, please click on your profile picture at the top right, and you will see it under your picture. To become a trusted member, you need to participate in the Community at least once. Blind vote! The number of votes for each app will be hidden from everyone. We will publish the leaderboard in the comments section of this post daily. Experts may vote at any time, so it is possible that the places change dramatically at the last moment. The same applies to bonus points. The order of projects on the contest page will be determined by the order in which applications were submitted to the competition, with the earliest submissions appearing higher on the list. P.S. Don't forget to subscribe to this post (click on the bell icon) to be notified of new comments. To take part in the voting, you need to: Sign in to Open Exchange – DC credentials will work. Make any valid contribution to the Developer Community – answer or ask questions, write an article, contribute applications on Open Exchange – and you'll be able to vote. Check this post on the options to make helpful contributions to the Developer Community. If you change your mind, cancel the choice and give your vote to another application! Support the application you like! Note: Contest participants are allowed to fix bugs and make improvements to their applications during the voting week, so be sure to subscribe to application releases!

So! After the first day of the voting, we have:

Community Nomination, Top 5
- InterSystems Testing Manager for VS Code by @John Murray
- typeorm-iris by @Dmitry Maslenniko
- toolqa by @André.DienesFriedrich, @Andre.LarsenBarbosa
- Global-Inspector by @Robert.Cemper1003
- dc-artisan by @Henrique, @José.Pereira, @henry
➡️ Voting is here.

Expert Nomination, Top 5
- typeorm-iris by @Dmitry Maslenniko
- InterSystems Testing Manager for VS Code by @John Murray
- IPM Explorer for VSCode by @John.McBrideDev
- PyObjectscript Gen by @Antoine.Dh
- dc-artisan by @Henrique, @José.Pereira, @henry
➡️ Voting is here.

Experts, we are waiting for your votes! 🔥 Participants, improve & promote your solutions!

Voting for the InterSystems Developer Tools Contest 2025 goes ahead! And here are the results at the moment:

Community Nomination, Top 5
- InterSystems Testing Manager for VS Code by @John Murray
- toolqa by @André.DienesFriedrich, @Andre Larsen Barbosa
- typeorm-iris by @Dmitry Maslenniko
- dc-artisan by @Henrique Dias, @José Pereira, @Henry Pereira
- IPM Explorer for VSCode by @John McBride
➡️ Voting is here.

Expert Nomination, Top 5
- typeorm-iris by @Dmitry Maslenniko
- InterSystems Testing Manager for VS Code by @John Murray
- IPM Explorer for VSCode by @John McBride
- PyObjectscript Gen by @Antoine.Dh
- dc-artisan by @Henrique Dias, @José Pereira, @Henry Pereira
➡️ Voting is here.

Hi Devs! 
At the moment, we can see the following results of the voting:

Community Nomination, Top 5
- InterSystems Testing Manager for VS Code by @John Murray
- dc-artisan by @Henrique Dias, @José Pereira, @Henry Pereira
- toolqa by @André.DienesFriedrich, @Andre Larsen Barbosa
- IPM Explorer for VSCode by @John McBride
- typeorm-iris by @Dmitry Maslenniko
➡️ Voting is here.

Expert Nomination, Top 5
- IPM Explorer for VSCode by @John McBride
- typeorm-iris by @Dmitry Maslenniko
- InterSystems Testing Manager for VS Code by @John Murray
- dc-artisan by @Henrique Dias, @José Pereira, @Henry Pereira
- iris4word by @Yuri.Gomes
➡️ Voting is here.

Developers, only a few days remain until the end of the voting period! Don't forget to check for Technological Bonuses; you can gain some extra points from them!

Developers! Today is the last day of voting! The winners will be announced tomorrow! Community and experts, vote for the applications! And devs, check the technology bonuses to earn more points! Good luck, participants!
Announcement
Kate Lau · Aug 12

[Demo Video] Data Transformation Adventures with InterSystems IRIS

#InterSystems Demo Games entry ⏯️ Data Transformation Adventures with InterSystems IRIS

Navigating interoperability and healthcare can be an exciting adventure. In this demo, we will show you how InterSystems IRIS can be the perfect tool to get data, wherever it may lie and in whatever format, into a format of your choice, including FHIR! Once that is done, analytics is a piece of cake with the FHIR SQL Builder and DeepSeeWeb. Let the questing begin!

Presenters:
🗣 @Kate.Lau, Sales Engineer, InterSystems
🗣 @Merlin, Sales Engineer, InterSystems
🗣 @Martyn.Lee, Sales Engineer, InterSystems
🗣 @Bryan.Hoon, Sales Engineer, InterSystems

🗳 If you like this video, don't forget to vote for it in the Demo Games!
Announcement
Anastasia Dyubaylo · Jul 11

[Video] Discovering InterSystems Products - A High Level Overview

Hi Community, Enjoy the new video on InterSystems Developers YouTube: ⏯ Discovering InterSystems Products - A High Level Overview @ Global Summit 2024 Watch this video for a high-level overview of the full InterSystems product portfolio, especially if you're looking to understand how the various technologies connect and support real-world use cases. 🗣 Presenter: @joe.gallant, Senior Technical Trainer, InterSystems Tune in to the latest innovations and subscribe to catch what’s coming!
Article
Hannah Kimura · Jun 23

Transforming with Confidence: A GenAI Assistant for InterSystems OMOP

## INTRO

Barricade is a tool developed by ICCA Ops to streamline and scale support for FHIR-to-OMOP transformations for InterSystems OMOP. Our clients will be using InterSystems OMOP to transform FHIR data to the OMOP structure. As a managed service, our job is to troubleshoot any issues that come with the transformation process. Barricade is the ideal tool to aid us in this process for a variety of reasons. First, effective support demands knowledge across FHIR standards, the OHDSI OMOP model, and InterSystems-specific operational workflows—all highly specialized areas. Barricade helps bridge knowledge gaps by leveraging large language models to provide expertise regarding FHIR and OHDSI. In addition, even when detailed explanations are provided to resolve specific transformation issues, that knowledge is often buried in emails or chats and lost for future use. Barricade can capture, reuse, and scale that knowledge across similar cases. Lastly, we often don’t have access to the source data, which means we must diagnose issues without seeing the actual payload—relying solely on error messages, data structure, and transformation context. This is exactly where Barricade excels: by drawing on prior patterns, domain knowledge, and contextual reasoning to infer the root cause and recommend solutions without direct access to the underlying data.

## IMPLEMENTATION OVERVIEW

Barricade is built on Vanna.AI. So, while I use "Barricade" throughout this article to refer to the AI agent we created, the underlying model is really a Vanna model. At its core, Vanna is a Python package that uses retrieval augmentation to help you generate accurate SQL queries for your database using LLMs. We customized our Vanna model not only to generate SQL queries, but also to answer conceptual questions. Setting up Vanna is extremely easy and quick. For our setup, we used ChatGPT as the LLM, ChromaDB as our vector database, and Postgres as the database that stores the data we want to query (our run data -- data related to the transformation from FHIR to OMOP). You can choose from many different options for your LLM, vector database, and SQL database. Valid options are detailed here: [Quickstart with your own data](https://vanna.ai/docs/postgres-openai-standard-chromadb/).

## HOW THIS IS DIFFERENT FROM CHATGPT

ChatGPT is a standalone large language model (LLM) that generates responses based solely on patterns learned during its training. It does not access external data at inference time. Barricade, on the other hand, is built on Vanna.ai, a Retrieval-Augmented Generation (RAG) system. RAG enhances LLMs by layering dynamic retrieval over the model, which allows it to query external sources for real-time, relevant information and incorporate those results into its output. By integrating our Postgres database directly with Vanna.ai, Barricade can access:

- The current database schema.
- Transformation run logs.
- Internal documentation.
- Proprietary transformation rules.

This live access is critical when debugging production data issues, because the model isn't just guessing; it's seeing and reasoning with real data. In short, Barricade merges the language fluency of ChatGPT with the real-time accuracy of direct data access, resulting in more reliable, context-aware insights.

## HOW BARRICADE WAS CREATED

### STEP 1: CREATE VIRTUAL ENVIRONMENT

This step creates all the files that make up Vanna.
```bash
virtualenv --python="/usr/bin/python3.10" barricade
source barricade/bin/activate
pip install ipykernel
python -m ipykernel install --user --name=barricade
jupyter notebook --notebook-dir=.
```

For Barricade, we will be editing many of these files to customize our experience. Notable files include:

- `barricade/lib/python3.13/site-packages/vanna/base/base.py`
- `barricade/lib/python3.13/site-packages/vanna/openai/openai_chat.py`
- `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py`
- `barricade/lib/python3.13/site-packages/vanna/flask/assets.py`

### STEP 2: INITIALIZE BARRICADE

This step includes importing the packages needed and initializing the model. The minimum imports needed are:

```python
from vanna.chromadb import ChromaDB_VectorStore
from vanna.openai import OpenAI_Chat
```

NOTE: Each time you create a new model, make sure to remove all remnants of old training data or vector DBs. To use the code below, also import os and shutil.

```python
if os.path.exists("chroma.sqlite3"):
    os.remove("chroma.sqlite3")
    print("Unloading Vectors...")
else:
    print("The file does not exist")

base_path = os.getcwd()
for root, dirs, files in os.walk(base_path, topdown=False):
    if "header.bin" in files:
        print(f"Removing directory: {root}")
        shutil.rmtree(root)
```

Now, it's time to initialize our model:

```python
class MyVanna(ChromaDB_VectorStore, OpenAI_Chat):
    def __init__(self, config=None):
        # initialize the vector store (this calls VannaBase.__init__)
        ChromaDB_VectorStore.__init__(self, config=config)
        # initialize the chat client (this also calls VannaBase.__init__ but more importantly sets self.client)
        OpenAI_Chat.__init__(self, config=config)
        self._model = "barricade"

vn = MyVanna(config={'api_key': CHATGPT_API_KEY, 'model': 'gpt-4.1-mini'})
```

### STEP 3: TRAINING THE MODEL

This is where the customization begins. We begin by connecting to the Postgres tables that hold our run data. Fill in the arguments with your host, dbname, username, password, and port for the Postgres database.

```python
vn.connect_to_postgres(host='POSTGRES DATABASE ENDPOINT', dbname='POSTGRES DB NAME', user='', password='', port='')
```

From there, we trained the model on the information schemas.

```python
df_information_schema = vn.run_sql("SELECT * FROM INFORMATION_SCHEMA.COLUMNS")
plan = vn.get_training_plan_generic(df_information_schema)
vn.train(plan=plan)
```

After that, we went even further and decided to send Barricade to college. We loaded Chapters 4-7 from The Book of OHDSI to give Barricade a good understanding of OMOP core principles. We loaded FHIR documentation, specifically explanations of various FHIR resource types, which describe how healthcare data is structured, used, and interrelated, so Barricade understood FHIR resources. We loaded FHIR-to-OMOP mappings, so Barricade understood which FHIR resource maps to which OMOP table(s). And finally, we loaded specialized knowledge regarding edge cases that need to be understood for FHIR-to-OMOP transformations.
Here is a brief overview of how that training looked:

**FHIR resource example**:

```python
def load_fhirknowledge(vn):
    url = "https://build.fhir.org/datatypes.html#Address"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    text_only = soup.get_text(separator="\n", strip=True)
    vn.train(documentation=text_only)
```

**MAPPING example:**

```python
def load_mappingknowledge(vn):
    # Transform
    url = "https://docs.intersystems.com/services/csp/docbook/DocBook.UI.Page.cls?KEY=RDP_transform"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    text_only = soup.get_text(separator="\n", strip=True)
    vn.train(documentation=text_only)
```

**The Book of OHDSI example**:

```python
def load_omopknowledge(vn):
    # Chapter 4
    url = "https://ohdsi.github.io/TheBookOfOhdsi/CommonDataModel.html"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    text_only = soup.get_text(separator="\n", strip=True)
    vn.train(documentation=text_only)
```

**Specialized knowledge example**:

```python
def load_opsknowledge(vn):
    vn.train(documentation="Our business is to provide tools for generating evidence for the transformation Runs")
```

Finally, we train Barricade to understand the SQL queries that are commonly used. This will help the system understand the context of the questions that are being asked:

```python
cdmtables = ["conversion_warnings", "conversion_issues", "ingestion_report", "care_site",
             "cdm_source", "cohort", "cohort_definition", "concept", "concept_ancestor",
             "concept_class", "concept_relationship", "concept_synonym", "condition_era",
             "condition_occurrence", "cost", "death", "device_exposure", "domain", "dose_era",
             "drug_era", "drug_exposure", "drug_strength", "episode", "episode_event",
             "fact_relationship", "location", "measurement", "metadata", "note", "note_nlp",
             "observation", "observation_period", "payer_plan_period", "person",
             "procedure_occurrence", "provider", "relationship", "source_to_concept_map",
             "specimen", "visit_detail", "visit_occurrence", "vocabulary"]

for table in cdmtables:
    vn.train(sql="SELECT * FROM omopcdm54." + table)
```

### STEP 4: CUSTOMIZATIONS (OPTIONAL)

Barricade is quite different from the Vanna base model, and these customizations are a big reason for that.

**UI CUSTOMIZATIONS**

To customize anything related to the UI (what you see when you spin up the Flask app and Barricade comes to life), you should edit the `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py` and `barricade/lib/python3.13/site-packages/vanna/flask/assets.py` paths. For Barricade, we customized the suggested questions and the logos. To edit the suggested questions, modify the `generate_questions` function in `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py`. We added this:

```python
if hasattr(self.vn, "_model") and self.vn._model == "barricade":
    return jsonify(
        {
            "type": "question_list",
            "questions": [
                "What are the transformation warnings?",
                "What does a fhir immunization resource map to in omop?",
                "Can you retrieve the observation period of the person with PERSON_ID 61?",
                "What are the mappings for the ATC B03AC?",
                "What is the Common Data Model, and why do we need it for observational healthcare data?"
            ],
            "header": "Here are some questions you can ask:",
        }
    )
```

Note that these suggested questions will only come up if you set `self._model = "barricade"` in STEP 2. To change the logo, you must edit assets.py. This is tricky, because assets.py contains the *compiled* JS and CSS code.
To find the logo you want to replace, go to your browser that has the Flask app running, and inspect the element. Find the SVG block that corresponds to the logo, and replace that block in assets.py with the SVG block of the new image you want.

We also customized the graph response. For relevant questions, a graph is generated using Plotly. The default prompts were generating graphs that were almost nonsensical. To fix this, we edited the `generate_plotly_figure` function in `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py`. We specifically changed the prompt:

```python
def generate_plotly_figure(user: any, id: str, df, question, sql):
    table_preview = df.head(10).to_markdown()  # or .to_string(), or .to_json()
    prompt = (
        f"Here is a preview of the result table (first 10 rows):\n{table_preview}\n\n"
        "Based on this data, generate Plotly Python code that produces an appropriate chart."
    )
    try:
        question = f"{question}. When generating the chart, use these special instructions: {prompt}"
        code = vn.generate_plotly_code(
            question=question,
            sql=sql,
            df_metadata=f"Running df.dtypes gives:\n {df.dtypes}",
        )
        self.cache.set(id=id, field="plotly_code", value=code)
        fig = vn.get_plotly_figure(plotly_code=code, df=df, dark_mode=False)
        fig_json = fig.to_json()
        self.cache.set(id=id, field="fig_json", value=fig_json)
        return jsonify(
            {
                "type": "plotly_figure",
                "id": id,
                "fig": fig_json,
            }
        )
    except Exception as e:
        import traceback
        traceback.print_exc()
        return jsonify({"type": "error", "error": str(e)})
```

The last UI customization we did was to provide our own index.html. You specify the path to your index.html file when you initialize the Flask app (otherwise it will use the default index.html).

**PROMPTING/MODEL/LLM CUSTOMIZATIONS**:

We made *many* modifications in `barricade/lib/python3.13/site-packages/vanna/base/base.py`. Notable modifications include setting the temperature for the LLM dynamically (depending on the task), creating different prompts based on whether the question was conceptual or one that required SQL generation, and adding a check for hallucinations in the SQL generated by the LLM.

Temperature controls the randomness of text that is generated by LLMs during inference. A lower temperature essentially makes the tokens with the highest probability more likely to be selected; a higher temperature increases the model's likelihood of selecting less probable tokens. For tasks such as generating SQL, we want the temperature to be lower to prevent hallucinations. Hallucinations are when the LLM makes up something that doesn't exist. In SQL generation, a hallucination may look like the LLM querying a column that doesn't exist. This renders the query unusable and throws an error. Thus, we edited the `generate_sql` function to change the temperature dynamically. The temperature is between 0 and 1. For questions deemed conceptual, we set the temperature to 0.6, and for questions that require generating SQL, we set it to 0.2. For tasks such as generating graphs and summaries, the default temperature is 0.5; we decided on 0.5 for those tasks because they require more creativity. We added two helper functions: `decide_temperature` and `is_conceptual`. The keyword that indicates a conceptual question is the user saying "barricade" in their question.
```python
def is_conceptual(self, question: str):
    q = question.lower()
    return (
        "barricade" in q
        and any(kw in q for kw in ["how", "why", "what", "explain", "cause", "fix", "should", "could"])
    )

def decide_temperature(self, question: str) -> float:
    if "barricade" in question.lower():
        return 0.6  # Conceptual reasoning
    return 0.2  # Precise SQL generation
```

If a question is conceptual, SQL is not generated, and the LLM response is returned. We specified this in the prompt for the LLM. The prompt is different depending on whether the question requires SQL generation or is conceptual. We do this in `get_sql_prompt`, which is called in `generate_sql`:

```python
def get_sql_prompt(self, initial_prompt: str, question: str, question_sql_list: list, ddl_list: list, doc_list: list, conceptual: bool = False, **kwargs):
    if initial_prompt is None:
        if not conceptual:
            initial_prompt = f"You are a {self.dialect} expert. " + \
                "Please help to generate a SQL query to answer the question. Your response should ONLY be based on the given context and follow the response guidelines and format instructions. "
        else:
            initial_prompt = "Your name is barricade. If someone says the word barricade, it is not a part of the question, they are just saying your name. You are an expert in FHIR to OMOP transformations. Do not generate SQL. Your role is to conceptually explain the issue, root cause, or potential resolution. If the user mentions a specific table or field, provide interpretive guidance — not queries. Focus on summarizing, explaining, and advising based on known documentation and transformation patterns."

    initial_prompt = self.add_ddl_to_prompt(
        initial_prompt, ddl_list, max_tokens=self.max_tokens
    )

    if self.static_documentation != "":
        doc_list.append(self.static_documentation)

    initial_prompt = self.add_documentation_to_prompt(
        initial_prompt, doc_list, max_tokens=self.max_tokens
    )

    if not conceptual:
        initial_prompt += (
            "===Response Guidelines \n"
            "1. If the provided context is sufficient, please generate a valid SQL query without any explanations for the question. \n"
            "2. If the provided context is almost sufficient but requires knowledge of a specific string in a particular column, please generate an intermediate SQL query to find the distinct strings in that column. Prepend the query with a comment saying intermediate_sql \n"
            "3. If the provided context is insufficient, please explain why it can't be generated. \n"
            "4. Please use the most relevant table(s). \n"
            "5. If the question has been asked and answered before, please repeat the answer exactly as it was given before. \n"
            f"6. Ensure that the output SQL is {self.dialect}-compliant and executable, and free of syntax errors. \n"
        )
    else:
        initial_prompt += (
            "===Response Guidelines \n"
            "1. Do not generate SQL under any circumstances. \n"
            "2. Provide conceptual explanations, interpretations, or guidance based on FHIR-to-OMOP transformation logic. \n"
            "3. If the user refers to warnings or issues, explain possible causes and common resolutions. \n"
            "4. If the user references a table or field, provide high-level understanding of its role in the transformation process. \n"
            "5. Be concise but clear. Do not make assumptions about schema unless confirmed in context. \n"
            "6. If the question cannot be answered due to lack of context, state that clearly and suggest what additional information would help. \n"
        )

    message_log = [self.system_message(initial_prompt)]

    for example in question_sql_list:
        if example is None:
            print("example is None")
        else:
            if example is not None and "question" in example and "sql" in example:
                message_log.append(self.user_message(example["question"]))
                message_log.append(self.assistant_message(example["sql"]))

    message_log.append(self.user_message(question))
    return message_log

def generate_sql(self, question: str, allow_llm_to_see_data=True, **kwargs) -> str:
    temperature = self.decide_temperature(question)
    conceptual = self.is_conceptual(question)
    question = re.sub(r"\bbarricade\b", "", question, flags=re.IGNORECASE).strip()

    if self.config is not None:
        initial_prompt = self.config.get("initial_prompt", None)
    else:
        initial_prompt = None

    question_sql_list = self.get_similar_question_sql(question, **kwargs)
    ddl_list = self.get_related_ddl(question, **kwargs)
    doc_list = self.get_related_documentation(question, **kwargs)

    prompt = self.get_sql_prompt(
        initial_prompt=initial_prompt,
        question=question,
        question_sql_list=question_sql_list,
        ddl_list=ddl_list,
        doc_list=doc_list,
        conceptual=conceptual,
        **kwargs,
    )
    self.log(title="SQL Prompt", message=prompt)
    llm_response = self.submit_prompt(prompt, temperature, **kwargs)
    self.log(title="LLM Response", message=llm_response)

    if 'intermediate_sql' in llm_response:
        if not allow_llm_to_see_data:
            return "The LLM is not allowed to see the data in your database. Your question requires database introspection to generate the necessary SQL. Please set allow_llm_to_see_data=True to enable this."
        if allow_llm_to_see_data:
            intermediate_sql = self.extract_sql(llm_response, conceptual)
            try:
                self.log(title="Running Intermediate SQL", message=intermediate_sql)
                df = self.run_sql(intermediate_sql, conceptual)
                prompt = self.get_sql_prompt(
                    initial_prompt=initial_prompt,
                    question=question,
                    question_sql_list=question_sql_list,
                    ddl_list=ddl_list,
                    doc_list=doc_list + [f"The following is a pandas DataFrame with the results of the intermediate SQL query {intermediate_sql}: \n" + df.to_markdown()],
                    **kwargs,
                )
                self.log(title="Final SQL Prompt", message=prompt)
                llm_response = self.submit_prompt(prompt, **kwargs)
                self.log(title="LLM Response", message=llm_response)
            except Exception as e:
                return f"Error running intermediate SQL: {e}"

    return self.extract_sql(llm_response, conceptual)
```

Even with a low temperature, the LLM would still sometimes generate hallucinations. To further prevent hallucinations, we added a check for them before returning the SQL. We created a helper function, `clean_sql_by_schema`, which takes the generated SQL and finds any columns that do not exist. It then removes that SQL and returns the cleaned version with no hallucinations. For cases where the SQL is something like "SELECT cw.id FROM omop.conversion_issues", it uses `extract_alias_mapping` to map cw to the conversion_issues table. Here are those functions for reference:

```python
def extract_alias_mapping(self, sql: str) -> dict[str, str]:
    """
    Parse the FROM and JOIN clauses to build alias → table_name.
    """
    alias_map = {}
    # pattern matches FROM schema.table alias or FROM table alias
    for keyword in ("FROM", "JOIN"):
        for tbl, alias in re.findall(
            rf'{keyword}\s+([\w\.]+)\s+(\w+)',
            sql,
            flags=re.IGNORECASE
        ):
            # strip schema if present:
            table_name = tbl.split('.')[-1]
            alias_map[alias] = table_name
    return alias_map

def clean_sql_by_schema(self, sql: str, schema_dict: dict[str, list[str]]) -> str:
    """
    Returns a new SQL where each SELECT-line is kept only if its
    alias.column is in the allowed columns for that table.
    schema_dict: { 'conversion_warnings': [...], 'conversion_issues': [...] }
    """
    alias_to_table = self.extract_alias_mapping(sql)
    lines = sql.splitlines()
    cleaned = []
    in_select = False
    for line in lines:
        stripped = line.strip()
        # detect start/end of the SELECT clause
        if stripped.upper().startswith("SELECT"):
            in_select = True
            cleaned.append(line)
            continue
        if in_select and re.match(r'FROM\b', stripped, re.IGNORECASE):
            in_select = False
            cleaned.append(line)
            continue
        if in_select:
            # try to find alias.column in this line
            m = re.search(r'(\w+)\.(\w+)', stripped)
            if m:
                alias, col = m.group(1), m.group(2)
                table = alias_to_table.get(alias)
                if table and col in schema_dict.get(table, []):
                    cleaned.append(line)
                else:
                    # drop this line entirely
                    continue
            else:
                # no alias.column here (maybe a comma, empty line, etc)
                cleaned.append(line)
        else:
            cleaned.append(line)

    # re-join and clean up any dangling commas before FROM
    out = "\n".join(cleaned)
    out = re.sub(r",\s*\n\s*FROM", "\nFROM", out, flags=re.IGNORECASE)
    self.log("RESULTING SQL:" + out)
    return out
```

### STEP 5: Initialize the Flask App

Now we are ready to bring Barricade to life! The code below will spin up a Flask app, which lets you communicate with the AI agent, Barricade. As you can see, we specified our own index_html_path, subtitle, title, and more. This is all optional. These arguments are defined here: [web customizations](https://vanna.ai/docs/web-app/)

```python
from vanna.flask import VannaFlaskApp

app = VannaFlaskApp(vn,
                    chart=True,
                    sql=False,
                    allow_llm_to_see_data=True,
                    ask_results_correct=False,
                    title="InterSystems Barricade",
                    subtitle="Your AI-powered Transformation Cop for InterSystems OMOP",
                    index_html_path=current_dir + "/static/index.html"
                    )
```

Once your app is running, you will see Barricade:

## RESULTS/DEMO

Barricade can help you gain a deeper understanding of OMOP and FHIR, and it can also help you debug transformation issues that you run into when you are trying to transform your FHIR data to OMOP. To showcase Barricade's ability, I will show a real-life example. A few months ago, we got an iService ticket with the following description:

To test Barricade, I copied this description into Barricade, and here is the response:

First, Barricade gave me a table documenting the issues:

Then, Barricade gave me a graph to visualize the issues:

And, most importantly, Barricade gave me a description of the exact issue that was causing problems AND told me how to fix it:

### READY 2025 demo Infographic

Here is a link to download the handout from our READY 2025 demo: Download the handout.
Article
Nikolay Solovyev · Jul 29

Dynamic Templated Emails in InterSystems IRIS with templated_email

Sending emails is a common requirement in integration scenarios — whether for client reminders, automatic reports, or transaction confirmations. Static messages quickly become hard to maintain and personalize. This is where the templated_email module comes in, combining InterSystems IRIS Interoperability with the power of Jinja2 templates.

Why Jinja2 for Emails

Jinja2 is a popular templating engine from the Python ecosystem that enables fully dynamic content generation. It supports:

- Variables — inject dynamic data from integration messages or external sources
- Conditions (if/else) — change the content based on runtime data
- Loops (for) — generate tables, lists of items, or repeatable sections
- Filters and macros — format dates, numbers, and reuse template blocks

Example of a simple email body template:

```
Hello {{ user.name }}!
{% if orders %}
You have {{ orders|length }} new orders:
{% for o in orders %}
- Order #{{ o.id }}: {{ o.amount }} USD
{% endfor %}
{% else %}
You have no new orders today.
{% endif %}
```

With this approach, your email content becomes dynamic, reusable, and easier to maintain.

Email Styling and Rendering Considerations

Different email clients (Gmail, Outlook, Apple Mail, and others) may render the same HTML in slightly different ways. To achieve a consistent appearance across platforms, it is recommended to:

- Use inline CSS instead of relying on external stylesheets
- Prefer table-based layouts for the overall structure, as modern CSS features such as flexbox or grid may not be fully supported
- Avoid scripts and complex CSS rules, as many email clients block or ignore them
- Test emails in multiple clients and on mobile devices

Capabilities of the templated_email Module

The templated_email module is designed to simplify creating and sending dynamic, professional emails in InterSystems IRIS. Its key capabilities include:

- Rendering Jinja2 templates to generate dynamic email content
- Flexible usage — can be used both from Interoperability productions and directly from ObjectScript code
- Built-in Business Operation — a ready-to-use operation for production scenarios with SMTP integration
- Automatic inline styling — converts CSS styles into inline attributes for better email client compatibility
- Markdown support — allows creating clean, maintainable templates that are rendered to HTML before sending

These features make it easy to produce dynamic, well-formatted emails that display consistently across clients.

Module templated_email

The module provides the following key components:

- TemplatedEmail.BusinessOperation — a Business Operation to send templated emails via SMTP
- TemplatedEmail.EmailRequest — an Interoperability Message for triggering email sends, with fields for templates, data, and recipients
- TemplatedEmail.MailMessage — an extension of %Net.MailMessage that adds:
  - applyBodyTemplate() — renders and sets the email body using a Jinja2 + Markdown template
  - applySubjectTemplate() — renders and sets the email subject using a Jinja2 template

This allows you to replace %Net.MailMessage with TemplatedEmail.MailMessage in ObjectScript and gain dynamic templating:

```objectscript
set msg = ##class(TemplatedEmail.MailMessage).%New()
do msg.applySubjectTemplate(data,"New order from {{ customer.name }}")
do msg.applyBodyTemplate(data,,"order_template.tpl")
```

More details and usage examples can be found in the GitHub repository.

Templates Are Not Just for Emails

The template rendering methods can also be used for generating HTML reports, making it easy to reuse the same templating approach outside of email.
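To make the Jinja2 mechanics above concrete, here is a minimal, self-contained Python sketch that renders the sample body template with the standard jinja2 package; the user/orders data structure is made up for illustration and is not part of the module:

```python
from jinja2 import Template

# Sample data shaped the way the template above expects (illustrative only)
data = {
    "user": {"name": "Alice"},
    "orders": [
        {"id": 1001, "amount": 250},
        {"id": 1002, "amount": 99},
    ],
}

body_template = """Hello {{ user.name }}!
{% if orders %}
You have {{ orders|length }} new orders:
{% for o in orders %}
- Order #{{ o.id }}: {{ o.amount }} USD
{% endfor %}
{% else %}
You have no new orders today.
{% endif %}"""

# Render the template with the data; this is the same kind of rendering
# templated_email performs before the message body is sent.
print(Template(body_template).render(**data))
```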
With templated_email, you can create dynamic, professional, and maintainable emails for InterSystems IRIS — while leveraging the full flexibility of Jinja2 templates.
Announcement
Adam Coppola · Aug 7

[Video] Using Entity Framework Core with InterSystems IRIS

Hi, Community! Need to connect your .NET application to the InterSystems IRIS® data platform? See how Entity Framework Core can help: Using Entity Framework Core with InterSystems IRIS 🌐

With EF Core, you can:
- Map .NET objects to relational tables in InterSystems IRIS.
- Generate .NET object definitions from InterSystems IRIS tables.

In this demo by @Summer.Gerry, you'll see how to set up a .NET project and dependencies, establish a connection, and use the connection for code-first and database-first application development.
Article
sween · Feb 18

OMOP Odyssey - InterSystems OMOP, The Cloud Service (Troy)

InterSystems OMOP, The Cloud Service (Troy)

An implementer's approach into the OHDSI (pronounced "Odyssey") Community through an expiring trial of the InterSystems OMOP Cloud Service.

What is it?

The InterSystems OMOP service, available as a HealthShare service through the InterSystems Cloud Services Portal, transforms HL7® FHIR® data into the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM). InterSystems OMOP looks at FHIR data stored in an S3 bucket and automatically transforms and sends the data to the cloud-hosted repository in the OMOP CDM format. You can then use external Observational Health Data Sciences and Informatics (OHDSI) tools, such as ATLAS or HADES, in conjunction with a database driver, such as JDBC, to perform analytical tasks on your data.

Abridged: it transforms S3-hosted FHIR bulk export data to the OMOP CDM in a cloud-hosted IRIS or a Postgres-flavored database of your choice. Going to take the above for a spin here from "soup to nuts", as they say, and go end to end with an implementation surrounded by modern powerful tools and the incredible ecosystem of applications from the OHDSI Community. Will try not to re-hash the docs, neither here nor there, and surface some foot guns 👣 🔫 along the way.

Everything Good Starts with a Bucket

When you first provision the service, you may immediately feel that you are in a chicken-and-egg situation when you get to the creation dialog and are prompted for S3 information right out of the gate. You can fake this as best you can and update it later, or take a less hurried approach where you understand how you are provisioning an Amazon S3 bucket for transformation use. It's a modern approach implemented in most cloud-based data solutions to share data: you provision the source location yourself, then grant the service access to interact with it.

1. Provision the bucket and initial policy stack
2. Create the deployment for the service
3. Update the bucket policy constrained to the deployment

We can click the console to death, or do this with an example stack.

s3-omop-fhir-stack.yaml

```yaml
---
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS Cloudformation Stack that will create the S3 Bucket and Policy for the OMOP Server
Parameters:
  BucketName:
    Description: The name of the bucket to use to upload FHIR Bundles for Transformation.
    Type: String
    AllowedPattern: "^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$"
  PolicyfromOMOPConfiguration:
    Description: The deployment role from the OMOP configuration that is granted access to the bucket.
    Default: "arn:aws:iam::1234567890:role/skipper-deployment-*-Role"
    Type: String
Resources:
  OMOPFHIRTransactionBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${BucketName}
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
  OMOPBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref OMOPFHIRTransactionBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - !Sub ${PolicyfromOMOPConfiguration}
            Action:
              - "s3:GetObject"
              - "s3:GetBucketLocation"
              - "s3:ListBucket"
            Resource:
              - !Sub "arn:aws:s3:::${BucketName}"   # Bucket itself
              - !Sub "arn:aws:s3:::${BucketName}/*" # FHIR Objects within the bucket
```

Create the stack any way you want to; one way is to use the AWS CLI.
```bash
aws cloudformation create-stack --stack-name omopfhir --template-body s3-omop-fhir-bucket.yaml --parameters ParameterKey=BucketName,ParameterValue=omop-fhir
```

Create some initial keys in the bucket to use for provisioning and the source folder for FHIR ingestion.

```bash
aws s3api put-object --bucket omop-fhir --key Transactions/in --profile pidtoo
aws s3api put-object --bucket omop-fhir --key termz --profile pidtoo
```

You should now be set up to provision the service with the following. Pay attention to the field asking for the ARN: it is actually asking for the ARN of the bucket despite the description asking for the name... small 👣🔫 here.

After the deployment is created, head over to the "Configurations" navigation item inside the "FHIR to OMOP Pipeline" deployment and grab the policy by copying it to your clipboard. You can just follow the directions supplied there and wedge this into your current policy, or just snag the value of the role and update your stack.

```bash
aws cloudformation update-stack --stack-name omopfhir --template-body s3-omop-fhir-bucket.yaml --parameters ParameterKey=PolicyfromOMOPConfiguration,ParameterValue="arn:aws:iam::1234567890:role/skipper-deployment-4a358965ec38ba179ebeeeeeeeeeee-Role"
```

Either way, you should end up with a policy that looks like this on your source bucket under permissions... (account number, role fuzzed)

Recommended Policy

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Principal": {
                "AWS": "arn:aws:iam::123456789099:role/skipper-deployment-33e46da9bf8464bbe5d1411111-Role"
            },
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::omop-fhir",
                "arn:aws:s3:::omop-fhir/*"
            ],
            "Sid": "IRIS Cloud Bucket Access"
        }
    ]
}
```

I used a more open policy that allowed entire root accounts (three of them), but constrained on the buckets. This way I could support multiple deployments with a single bucket (or multiple buckets). Not advised, I guess, but a second example for reference to support multiple environments in a single policy for IaC purposes.

Root Account

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111:root",
                    "arn:aws:iam::222:root",
                    "arn:aws:iam::333:root"
                ]
            },
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::omop-fhir",
                "arn:aws:s3:::omop-fhir/*",
                "arn:aws:s3:::omop-fhir2",
                "arn:aws:s3:::omop-fhir2/*"
            ],
            "Condition": {
                "StringLike": {
                    "aws:PrincipalArn": [
                        "arn:aws:iam::111:role/skipper-deployment-*-Role",
                        "arn:aws:iam::222:role/skipper-deployment-*-Role",
                        "arn:aws:iam::333:role/skipper-deployment-*-Role"
                    ]
                }
            }
        }
    ]
}
```

That's our source for the transformation; now let's move on to the target, the OMOP database.

Meet OMOP

Let's take a quick look over at the other deployment, "OMOP on IRIS", and meet the Common Data Model. The OMOP (Observational Medical Outcomes Partnership) database is a monument to boiling the ridiculous complexity of supporting multiple sources down into a common data model, referred to as the CDM. Any further explanation outside of the community would be an exercise in cut and paste (or, even worse, generative content), and the documentation in this community is really, really well done. Navigate to "SQL Query Tools" and you can see the InterSystems implementation of the Common Data Model, shown here next to the infamous diagram of the OMOP schema from the OHDSI community.
That's as far as we go with this work of art; let's investigate another option for using the service for transformation purposes only.

BYODB (Bring Your Own Database)

We got a database for free when we provisioned last time, but if we want to target another database, we can surely do that, as the service at the time of writing supports transforming to flavors of Postgres as well. For this we will outline how to wrangle an external database, via Amazon RDS, and connect it to the service.

Compute

I'll throw a flag here and call another 👣🔫 I refer to as "Biggie Smalls", in regard to sizing your database for the service if you bring your own. InterSystems does a pretty good job of matching the transform side to the database side, so you will have to follow suit and consider that your transform performance depends on the SQL instance you procure to write to, so size accordingly. This may be obvious to some, but I witnessed it and thought I'd call it out, as I went cheap with RDS, Google Cloud SQL, et al., and the persistence times of the FHIR bundles to the OMOP database were impacted. Having said all that, I do exactly the opposite and give Jeff Bezos the least amount of money possible for the task regardless, with a db.t4g.micro Postgres RDS instance. We expose it publicly and head over to download the certificate bundle for the region your database is in... make sure it's in .pem format. Next, however you interact with databases these days, connect to your DB instance and create a DATABASE and SCHEMA.

Load OMOP CDM 5.4

Now we get a little help from our friends in the OHDSI Community and provision the supported OMOP Common Data Model schema, version 5.4, from RStudio using the OHDSI tools.

```r
install.packages("devtools")
devtools::install_github("OHDSI/CommonDataModel")
install.packages("DatabaseConnector")
install.packages("SqlRender")
Sys.setenv("DATABASECONNECTOR_JAR_FOLDER" = "/home/sween/Desktop/OMOP/iris-omop-extdb/jars")
library(DatabaseConnector)
downloadJdbcDrivers("postgresql")
```

We now have what we need and can connect to our Postgres instance and create the tables in the OMOPCDM54 database we provisioned above.

Connect:

```r
cd <- DatabaseConnector::createConnectionDetails(dbms = "postgresql",
                                                 server = "extrdp-ops.biggie-smalls.us-east-2.rds.amazonaws.com/OMOPCDM54",
                                                 user = "postgres",
                                                 password = "REDACTED",
                                                 pathToDriver = "./jars"
)
```

Create:

```r
CommonDataModel::executeDdl(connectionDetails = cd,
                            cdmVersion = "5.4",
                            cdmDatabaseSchema = "omopcdm54"
)
```

Barring a "sea of red", it should have executed successfully. Now let's check our work; we should have an external Postgres OMOP database suitable for use with the service.

Configure OMOP Cloud Service

We have the sources, we have the targets; let's configure the service to glue them together and complete the transformation pipeline from FHIR to the external database.

InterSystems OMOP Cloud Service

Should be all set! The OMOP Journey continues...
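As a small companion to the bucket setup above, here is a minimal boto3 sketch for dropping a FHIR bulk-export file into the bucket's Transactions/in prefix so the pipeline can pick it up. The bucket name, prefix, and profile come from the examples above, while the local file name is an assumption for illustration:

```python
import boto3

# Assumptions: the "omop-fhir" bucket and Transactions/in prefix from the stack above;
# the local bundle file name is made up for illustration.
session = boto3.Session(profile_name="pidtoo")
s3 = session.client("s3")

local_file = "patients_bulk_export.ndjson"          # hypothetical FHIR bulk export file
key = f"Transactions/in/{local_file}"

s3.upload_file(local_file, "omop-fhir", key)
print(f"Uploaded {local_file} to s3://omop-fhir/{key}")
```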
Article
Evgeny Shvarov · Aug 27

Importing CSV Data Into InterSystems IRIS and preserving IDs

Hi folks!

It is very easy to import CSV data into IRIS. But what if we want to preserve the original IDs from the CSV? Recently I came across a situation where I needed to import two CSVs into IRIS that were linked by one column referencing a column in the other CSV: a typical Foreign Key and Primary Key situation, where csv1 contains this column as a Primary Key, and csv2 contains it as a Foreign Key with IDs related to csv1. The image is generated by ChatGPT, so don't blame it; it tried its best to generate countries as primary keys with a countries.csv-cities.csv relationship :)

I know at least three useful utilities to import CSV: csvgen, csvgen-python, and bdb-sql-utils. But if I import both CSVs, e.g., via csvgen, there will be two classes in IRIS with the data imported, a generated internal ID, and an IDKey. And it is not possible to change the IDKey to another index in the class once you have data in it. So it turned out it's not that obvious how to import a CSV and preserve a column with ID data as the IDKey in IRIS. Of course it is possible, and I'm sure you know a lot of ways to do it, and it is now possible to import a CSV and preserve its existing IDs as ID keys in both csvgen and csvgen-python.

To generate an IRIS class and import data from a given CSV with a primary key, provide the name of the column in the pkey parameter (the last one), so the utility will add an IDKey, PrimaryKey index to the class. E.g., if we import countries.csv and want to make the Name column an IDKey and Primary Key, call csvgen as follows:

```objectscript
//primary key name is the 11th parameter :)
zw ##class(community.csvgen).Generate("path/to/countries.csv",",","package.Countries",,,,,,,,"Name")
```

What it does under the hood can be listed as follows:

- generates a class with properties as usual
- deletes all data
- deletes the DDLBEIndex bitmap if it exists (it prevents creating an alternative IDKey to the existing one)
- sets (temporarily) the system-wide option DDLPKeyNotIDKey=0
- adds a Primary Key index for the given column name

And as a result, you have a newly generated class with data and a primary-key IDKey on the given column. Here is the code in csvgen.

So, how do you connect the two generated classes? In my case I needed swizzling of class1 instances in class2.property, so I just changed the datatype of the property in the generated class to the class with the PrimaryKey/IdKey.

Here is the example demo app that analyses potato consumption and imports in different countries (don't ask me why I invented such an example; maybe I was hungry). The countries are real, but the consumption is generated by GPT, which said it is close to reality, as it turned out to be pretty difficult to obtain this data. Here are the countries.csv and potatos_sales.csv. This is how I import data and generate classes:

```objectscript
zpm "install csvgen"
set file="/home/irisowner/dev/data/countries.csv"
zw ##class(community.csvgen).Generate(file,",","esh.csvpkey.Countries",,,,,,,,"Name")
set file="/home/irisowner/dev/data/potato_sales.csv"
zw ##class(community.csvgen).Generate(file,",","esh.csvpkey.Potatos",,,,,1)
```

It generates a countries class with a PrimaryKey:

```objectscript
Class esh.csvpkey.Countries Extends %Persistent [ ClassType = persistent, DdlAllowed, Final, Owner = {irisowner}, ProcedureBlock, SqlRowIdPrivate, SqlTableName = Countries ]
{
Property Name As %Library.String(MAXLEN = 250) [ SqlColumnNumber = 2 ];

Property Code As %Library.String(MAXLEN = 250) [ SqlColumnNumber = 3 ];

...

Index COUNTRIESPKEY1 On Name [ IdKey, PrimaryKey, SqlName = COUNTRIES_PKEY1, Unique ];

...
}
```

Then I changed the generated property Country from %String to reference the countries class:

```objectscript
Property Country As esh.csvpkey.Countries [ SqlColumnNumber = 2 ];
```

And I've built a very obvious IRIS BI/DSW demo to see how it's going with potatoes in countries through the years:

Hope you found this helpful and entertaining ;)
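As a side note, before importing it can be handy to confirm that every foreign-key value in the second CSV actually exists as a primary key in the first one. A minimal pandas sketch, assuming the same countries.csv / potato_sales.csv files; the "Country" column name in the sales file is an assumption for illustration:

```python
import pandas as pd

# "Name" is the primary key column in countries.csv (as in the article);
# "Country" is the assumed foreign key column in potato_sales.csv.
countries = pd.read_csv("countries.csv")
sales = pd.read_csv("potato_sales.csv")

orphans = sales.loc[~sales["Country"].isin(countries["Name"]), "Country"].unique()
if len(orphans):
    print("Foreign keys with no matching country:", orphans)
else:
    print("All foreign keys resolve; safe to import with csvgen.")
```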
Article
Vishal Pallerla · Jul 17

No More Local Limits: Exposing InterSystems IRIS with ngrok

At hackathons that InterSystems participated in and I supported, many students were asking how all their teammates could use the same IRIS database that they spun up in a container. I suggested using ngrok to expose their localhost IRIS and realized we don't have documentation on that. Hence, I thought this would be a great way to let more people know about this powerful technique for enhancing collaboration during development and testing.

## Step-by-Step Guide to Exposing InterSystems IRIS with ngrok

This guide will walk you through the process of exposing your local InterSystems IRIS instance using ngrok. Follow these steps to get started quickly.

### Step 1: Set Up Your IRIS Container

1. **Install Docker**: Ensure that Docker is installed on your machine.
2. **Run the IRIS Container**: Use the following command to start an InterSystems IRIS container:

```bash
docker run --name iris -d --publish 52773:52773 containers.intersystems.com/intersystems/iris-community:latest
```

This command pulls the latest version of the IRIS Community Edition and runs it on port 52773.

### Step 2: Install ngrok

1. **Download ngrok**: Go to the [ngrok website](https://ngrok.com/download) and download the appropriate version for your operating system.
2. **Install ngrok**:
   - For **MacOS**: Use Homebrew:
     ```bash
     brew install ngrok/ngrok/ngrok
     ```
   - For **Windows**: Use Chocolatey:
     ```bash
     choco install ngrok
     ```
   - For **Linux**: Follow the installation instructions provided on the ngrok website.

### Step 3: Configure ngrok

1. **Authenticate ngrok**: After installing, you need to authenticate your ngrok account. Run the following command:

```bash
ngrok config add-authtoken YOUR_AUTHTOKEN
```

Replace `YOUR_AUTHTOKEN` with your actual token from the ngrok dashboard.

### Step 4: Start the Tunnel

1. **Expose Your IRIS Instance**: Run this command to create a tunnel to your local IRIS instance:

```bash
ngrok http 52773
```

2. **Access the Public URL**: After running the command, ngrok will provide a public URL (e.g., `https://abc123.ngrok.io`). This URL can be accessed by anyone over the internet.

### Step 5: Share Access

- Share the public URL with your teammates or collaborators so they can access the IRIS database running on your local machine.

## Best Practices

- **Security**: Implement authentication and authorization for your IRIS instance to protect sensitive data.
- **Temporary Use**: Remember that ngrok is primarily for development and testing; avoid using it for production environments.
- **Monitor Connections**: Keep an eye on the ngrok dashboard for connection statistics and potential issues.

## Conclusion

Exposing your InterSystems IRIS container using ngrok is a straightforward process that enhances collaboration during development. By following this step-by-step guide, you can easily make your local database accessible to teammates, facilitating better teamwork and innovation. Always prioritize security when exposing local services, and enjoy seamless development with IRIS and ngrok!
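To round this off, here is a small Python sketch a teammate could run to confirm the tunnel is reachable before pointing their tools at it; the ngrok URL is the placeholder example from Step 4, and the path is the standard Management Portal entry point:

```python
import requests

# Placeholder public URL from Step 4; replace with the URL ngrok actually prints.
base_url = "https://abc123.ngrok.io"

# The Management Portal landing page is a simple way to verify the tunnel end to end.
resp = requests.get(f"{base_url}/csp/sys/UtilHome.csp", timeout=10)
print(resp.status_code)  # 200 (or a redirect to the login page) means the tunnel works
```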
Discussion
Admin GlobalMasters · Jul 3

InterSystems READY 2025 Memes You've Sent!

You’ve been dropping memes into our inbox — here's our top 20! Get ready to laugh, nod in agreement, and maybe spot the one you created! 👀 Let us know in the comments which meme is your favorite!

1. Author: Robert Cemper
2. Author: Aya Heshmat
3. Author: Matthew Thornhill
4. Author: Henry Ames
5. Author: Ben Spead
6. Author: Jonathan Zhou
7. Author: Alessandra Carena
8. Author: Haddon Baker
9. Author: Liam Evans
10. Author: Macey Minor
11. Author: Marco Di Giglio
12. Author: FRANCISCO LOPEZ
13. Author: Pietro Montorfano
14. Author: David Cho
15. Author: Henry Ames
16. Author: Andre Larsen Barbosa
17. Author: Liam Evans
18. Author: Mindy Caldwell
19. Author: Dinesh
20. Author: Mathieu Delagnes

love it! great work everyone :) awesome !! 😁 🤣🤣🤣🤣🤣🤣 These are fun, great work by all the authors :)
Article
Anastasia Dyubaylo · Jan 26, 2023

How to add InterSystems certification to your DC profile

Hello Community,

Some of you have passed the InterSystems Official Certification and would like to get a nifty green tick on your profile avatar and all your certificates in your DC profile so that others know that you know... you know what we mean. So, to add certification to your DC profile, you need to take 3 easy steps:

1️⃣ Go to your DC profile
2️⃣ Go to the InterSystems Certification section
3️⃣ Click on the Load my certification(s) button

and that's it! The system will send the request to Credly with your DC email. If your certification is linked to the same email, your certificates will be loaded automatically. If not, please follow the detailed steps described on the page. And you're done. Now everyone knows that you know... ;)

Congratulations on adding the Certification and on actually passing it. Well done! Ooooh, I have a green tick. That's one up on Twitter :) Yeah, congrats!! 🎉 The certifications came across on my profile, but for some reason I am not seeing the green checkmark. Is this supposed to work for InterSystems employees? Never mind - it shows up now for some reason :) Yes, it may take some time for the checkmark to load on your profile ;)
Announcement
Celeste Canzano · May 13

InterSystems IRIS Development Professional Exam is now LIVE!

Hello community, The Certification Team of InterSystems Learning Services is excited to announce the release of our new InterSystems IRIS Development Professional exam. It is now available for purchase and scheduling in the InterSystems exam catalog. Potential candidates can review the exam topics and the practice questions to help orient them to exam question approaches and content. Candidates who successfully pass the exam will receive a digital certification badge that can be shared on social media accounts like LinkedIn. If you are new to InterSystems Certification, please review our program pages that include information on taking exams, exam policies, FAQ and more. If you have ideas about creating new certifications that can help you advance your career, the Certification Team of InterSystems Learning Services is always open to ideas and suggestions. Please contact us at certification@intersystems.com if you would like to share any ideas. Looking forward to celebrating your success, Celeste Canzano - Certification Operations Specialist, InterSystems It's a great exam. I'm not just saying that because I helped develop it, but it has some great questions and topics on it that really make you think. I can't say I enjoyed the process of taking my last InterSystems exam (not this one), but I certainly enjoyed the feeling after I passed it. :) It's great to get that validation that you do actually know what you're talking about!