Announcement
Anastasia Dyubaylo · May 22

[Webinar] Unlocking the Power of InterSystems Data Fabric Studio

Hey Community,

We're excited to invite you to the next InterSystems UKI Tech Talk webinar:

👉 Unlocking the Power of InterSystems Data Fabric Studio

⏱ Date & Time: Thursday, May 29, 2025, 10:30-11:30 UK
👨‍🏫 Speaker: @Saurav.Gupta, Data Platform Team Lead, InterSystems

Business leaders across every industry – including healthcare, financial services, supply chain, manufacturing, and the public sector – struggle with accessing unified, timely data spread across applications, data feeds, data warehouses, data lakes, and data marts. Join us for our next exclusive Tech Talk Webinar hosted by InterSystems expert Saurav Gupta as he introduces InterSystems Data Fabric Studio™, the latest innovation designed to offer a new approach to accessing data, delivering the right data to the right person at the right time, in a secure and controlled environment. It's a fully managed, self-service, cloud solution with all the necessary components to create and maintain an enterprise data fabric.

An enterprise data fabric is a new architectural approach that speeds and simplifies access to data assets across the entire business. It accesses, transforms, and harmonises data from multiple sources, on demand, to make it usable and actionable for a wide variety of business applications.

>> REGISTER HERE <<
Announcement
Anthony Jackson · Jun 3

InterSystems HealthShare Interoperability Developer/Lead

Summary:

Duties and Responsibilities:
- Design and implement healthcare data integration solutions using the InterSystems/HealthShare platform, ensuring data interoperability across various healthcare systems.
- Develop and maintain data mappings and transformations to ensure accurate data exchange between systems, leveraging IRIS APIs, HL7, FHIR, and other healthcare data standards.
- Build and maintain interfaces to connect health information systems, including clinical applications, EHRs, and other healthcare data sources.
- Work closely with cross-functional teams, including developers, business analysts, and healthcare professionals, to gather requirements, design solutions, and implement integrations.
- Create and maintain comprehensive documentation for integration solutions, including interface specifications, data mappings, and test plans.
- Identify opportunities to improve integration processes and solutions, contributing to the overall efficiency and effectiveness of healthcare data exchange.

Education and Skills:
- Experience with interoperability tools such as InterSystems IRIS, NextGen Mirth, Availity, Infor, Cloverleaf, Rhapsody, or Ensemble.
- Experience and expertise with InterSystems IRIS and InterSystems HealthShare.
- Familiarity/experience working with medical claims data.
- Experience with Amazon Web Services (AWS).
- Experience developing Continuous Integration / Continuous Delivery (CI/CD).
- Knowledge and understanding of agile methodologies.

Please send your profile to: Antony.Jackson@infinite.com antonyjacks@gmail.com
Question
Laura Blázquez García · Jun 9

Protocol Error between the Web Gateway and InterSystems IRIS

I have created a new Docker stack with webgateway and IRIS for Health 2025.1. I have mapped the ports of the webgateway like this: 8743:443, 8780:80. I can access the IRIS portal through 8743 without problems. I have also created a FHIR repository, and I'm able to access it through the 8743 port. I have a web application, on another server with another domain, that connects to this FHIR repository. I have configured the allowed origin in the FHIR endpoint to the domain of this application. But when I try to connect from this application to the FHIR repository, I get this error in the Web Gateway: Protocol Error between the Web Gateway and InterSystems IRIS. This is the second instance that I'm configuring; with the first one I didn't see this error. Could this be because the first instance runs on port 8443? Or maybe it is because of the 2025.1 version? I don't know what to do...
Announcement
Anastasia Dyubaylo · Jun 10

[Video] Foreign Tables in InterSystems IRIS 2025.1

Hi Community, Our Project Managers created a series of videos to highlight some of the interesting features of the new 2025.1 release. Please welcome the first one of them: ⏯ Foreign Tables in InterSystems IRIS 2025.1 The InterSystems IRIS 2025.1 release introduces powerful new features for data management, including full support for foreign tables in production. It adds fine-grained privileges for managing access to foreign servers and querying them, plus a new command for sending direct SQL queries to external relational databases. New table conversion options also allow switching between row and column layouts for better analytics performance. 🗣 Presenter: @Benjamin.DeBoe, Manager, Analytics Product Management, InterSystems Want to stay up to date? Watch the video and subscribe for more!
Article
Hannah Kimura · Jun 23

Transforming with Confidence: A GenAI Assistant for InterSystems OMOP

## INTRO

Barricade is a tool developed by ICCA Ops to streamline and scale support for FHIR-to-OMOP transformations for InterSystems OMOP. Our clients will be using InterSystems OMOP to transform FHIR data to the OMOP structure. As a managed service, our job is to troubleshoot any issues that come with the transformation process. Barricade is the ideal tool to aid us in this process for a variety of reasons. First, effective support demands knowledge across FHIR standards, the OHDSI OMOP model, and InterSystems-specific operational workflows—all highly specialized areas. Barricade helps bridge knowledge gaps by leveraging large language models to provide expertise regarding FHIR and OHDSI. In addition, even when detailed explanations are provided to resolve specific transformation issues, that knowledge is often buried in emails or chats and lost for future use. Barricade can capture, reuse, and scale that knowledge across similar cases. Lastly, we often don't have access to the source data, which means we must diagnose issues without seeing the actual payload—relying solely on error messages, data structure, and transformation context. This is exactly where Barricade excels: by drawing on prior patterns, domain knowledge, and contextual reasoning to infer the root cause and recommend solutions without direct access to the underlying data.

## IMPLEMENTATION OVERVIEW

Barricade is built on Vanna.AI. So, while I use "Barricade" throughout this article to refer to the AI agent we created, the underlying model is really a Vanna model. At its core, Vanna is a Python package that uses retrieval augmentation to help you generate accurate SQL queries for your database using LLMs. We customized our Vanna model to not only generate SQL queries, but also to answer conceptual questions. Setting up Vanna is extremely easy and quick.
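To make the "retrieval augmentation" idea concrete, here is a minimal, self-contained sketch of the loop Vanna performs under the hood: retrieve the most relevant stored context, then inject it into the prompt before calling the LLM. The doc store, scoring function, and names below are illustrative stand-ins, not Vanna's actual API.

```python
# Toy retrieval-augmented prompt builder (illustrative, not Vanna's API).
DOC_STORE = [
    "Table conversion_warnings has columns: id, run_id, warning_text.",
    "Table person has columns: person_id, birth_datetime, gender_concept_id.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank stored documentation by word overlap with the question."""
    q_tokens = set(question.lower().replace("?", "").split())

    def score(doc: str) -> int:
        d_tokens = set(doc.lower().replace(",", " ").replace(":", " ").split())
        return len(q_tokens & d_tokens)

    return sorted(DOC_STORE, key=score, reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Prepend retrieved context to the question -- the step that makes this
    retrieval-augmented generation rather than a plain LLM call."""
    context = "\n".join(retrieve(question))
    return f"===Context\n{context}\n===Question\n{question}\nGenerate SQL."

prompt = build_prompt("Which person rows have a missing birth_datetime?")
```

A real RAG system replaces the word-overlap scorer with vector similarity over embeddings (ChromaDB in our setup), but the shape of the loop is the same.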
For our setup, we used ChatGPT as the LLM, ChromaDB as our vector database, and Postgres as the database that stores the data we want to query (our run data -- data related to the FHIR-to-OMOP transformation). You can choose from many different options for your LLM, vector database, and SQL database. Valid options are detailed here: [Quickstart with your own data](https://vanna.ai/docs/postgres-openai-standard-chromadb/).

## HOW THIS IS DIFFERENT FROM CHATGPT

ChatGPT is a standalone large language model (LLM) that generates responses based solely on patterns learned during its training. It does not access external data at inference time. Barricade, on the other hand, is built on Vanna.ai, a Retrieval-Augmented Generation (RAG) system. RAG enhances LLMs by layering dynamic retrieval over the model, which allows it to query external sources for real-time, relevant information and incorporate those results into its output. By integrating our Postgres database directly with Vanna.ai, Barricade can access:

- The current database schema.
- Transformation run logs.
- Internal documentation.
- Proprietary transformation rules.

This live access is critical when debugging production data issues, because the model isn't just guessing: it's seeing and reasoning with real data. In short, Barricade merges the language fluency of ChatGPT with the real-time accuracy of direct data access, resulting in more reliable, context-aware insights.

## HOW BARRICADE WAS CREATED

### STEP 1: CREATE VIRTUAL ENVIRONMENT

This step creates all the files that make up Vanna.

```shell
virtualenv --python="/usr/bin/python3.10" barricade
source barricade/bin/activate
pip install ipykernel
python -m ipykernel install --user --name=barricade
jupyter notebook --notebook-dir=.
```

For Barricade, we will be editing many of these files to customize our experience.
Notable files include:

- `barricade/lib/python3.13/site-packages/vanna/base/base.py`
- `barricade/lib/python3.13/site-packages/vanna/openai/openai_chat.py`
- `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py`
- `barricade/lib/python3.13/site-packages/vanna/flask/assets.py`

### STEP 2: INITIALIZE BARRICADE

This step includes importing the packages needed and initializing the model. The minimum imports needed are:

```python
from vanna.chromadb import ChromaDB_VectorStore
from vanna.openai import OpenAI_Chat
```

NOTE: Each time you create a new model, make sure to remove all remnants of old training data or vector DBs. To use the code we used below, import `os` and `shutil`.

```python
import os
import shutil

if os.path.exists("chroma.sqlite3"):
    os.remove("chroma.sqlite3")
    print("Unloading Vectors...")
else:
    print("The file does not exist")

base_path = os.getcwd()
for root, dirs, files in os.walk(base_path, topdown=False):
    if "header.bin" in files:
        print(f"Removing directory: {root}")
        shutil.rmtree(root)
```

Now, it's time to initialize our model:

```python
class MyVanna(ChromaDB_VectorStore, OpenAI_Chat):
    def __init__(self, config=None):
        # initialize the vector store (this calls VannaBase.__init__)
        ChromaDB_VectorStore.__init__(self, config=config)
        # initialize the chat client (this also calls VannaBase.__init__ but more importantly sets self.client)
        OpenAI_Chat.__init__(self, config=config)
        self._model = "barricade"

vn = MyVanna(config={'api_key': 'CHATGPT API KEY', 'model': 'gpt-4.1-mini'})
```

### STEP 3: TRAINING THE MODEL

This is where the customization begins. We begin by connecting to the Postgres tables, which hold our run data. Fill in the arguments with your host, dbname, username, password, and port for the Postgres database.

```python
vn.connect_to_postgres(host='POSTGRES DATABASE ENDPOINT', dbname='POSTGRES DB NAME', user='', password='', port='')
```

From there, we trained the model on the information schemas.
```python
df_information_schema = vn.run_sql("SELECT * FROM INFORMATION_SCHEMA.COLUMNS")
plan = vn.get_training_plan_generic(df_information_schema)
vn.train(plan=plan)
```

After that, we went even further and decided to send Barricade to college. We loaded all of Chapters 4-7 from The Book of OHDSI to give Barricade a good understanding of OMOP core principles. We loaded FHIR documentation, specifically explanations of various FHIR resource types, which describe how healthcare data is structured, used, and interrelated, so Barricade understood FHIR resources. We loaded FHIR-to-OMOP mappings, so Barricade understood which FHIR resource mapped to which OMOP table(s). And finally, we loaded specialized knowledge regarding edge cases that need to be understood for FHIR-to-OMOP transformations. Here is a brief overview of how that training looked:

**FHIR resource example**:

```python
import requests
from bs4 import BeautifulSoup

def load_fhirknowledge(vn):
    url = "https://build.fhir.org/datatypes.html#Address"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    text_only = soup.get_text(separator="\n", strip=True)
    vn.train(documentation=text_only)
```

**MAPPING example:**

```python
def load_mappingknowledge(vn):
    # Transform
    url = "https://docs.intersystems.com/services/csp/docbook/DocBook.UI.Page.cls?KEY=RDP_transform"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    text_only = soup.get_text(separator="\n", strip=True)
    vn.train(documentation=text_only)
```

**The Book of OHDSI example**:

```python
def load_omopknowledge(vn):
    # Chapter 4
    url = "https://ohdsi.github.io/TheBookOfOhdsi/CommonDataModel.html"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    text_only = soup.get_text(separator="\n", strip=True)
    vn.train(documentation=text_only)
```

**Specialized knowledge example**:

```python
def load_opsknowledge(vn):
    vn.train(documentation="Our business is to provide tools for generating evidence for the transformation Runs")
```

Finally, we train Barricade to understand the SQL queries that are commonly used. This helps the system understand the context of the questions being asked:

```python
cdmtables = ["conversion_warnings", "conversion_issues", "ingestion_report", "care_site",
             "cdm_source", "cohort", "cohort_definition", "concept", "concept_ancestor",
             "concept_class", "concept_relationship", "concept_synonym", "condition_era",
             "condition_occurrence", "cost", "death", "device_exposure", "domain",
             "dose_era", "drug_era", "drug_exposure", "drug_strength", "episode",
             "episode_event", "fact_relationship", "location", "measurement", "metadata",
             "note", "note_nlp", "observation", "observation_period", "payer_plan_period",
             "person", "procedure_occurrence", "provider", "relationship",
             "source_to_concept_map", "specimen", "visit_detail", "visit_occurrence",
             "vocabulary"]

for table in cdmtables:
    vn.train(sql="SELECT * FROM omopcdm54." + table)
```

### STEP 4: CUSTOMIZATIONS (OPTIONAL)

Barricade is quite different from the Vanna base model, and these customizations are a big reason for that.

**UI CUSTOMIZATIONS**

To customize anything related to the UI (what you see when you spin up the Flask app and Barricade comes to life), you should edit `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py` and `barricade/lib/python3.13/site-packages/vanna/flask/assets.py`. For Barricade, we customized the suggested questions and the logos.

To edit the suggested questions, modify the `generate_questions` function in `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py`. We added this:

```python
if hasattr(self.vn, "_model") and self.vn._model == "barricade":
    return jsonify(
        {
            "type": "question_list",
            "questions": [
                "What are the transformation warnings?",
                "What does a fhir immunization resource map to in omop?",
                "Can you retrieve the observation period of the person with PERSON_ID 61?",
                "What are the mappings for the ATC B03AC?",
                "What is the Common Data Model, and why do we need it for observational healthcare data?"
            ],
            "header": "Here are some questions you can ask:",
        }
    )
```

Note that these suggested questions will only come up if you set `self._model = "barricade"` in STEP 2.

To change the logo, you must edit assets.py. This is tricky, because assets.py contains the *compiled* js and css code. To find the logo you want to replace, go to the browser that has the Flask app running, and inspect element. Find the SVG block that corresponds to the logo, and replace that block in assets.py with the SVG block of the new image you want.

We also customized the graph response. In relevant questions, a graph is generated using plotly. The default prompts were generating graphs that were almost nonsensical. To fix this, we edited the `generate_plotly_figure` function in `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py`. We specifically changed the prompt:

```python
def generate_plotly_figure(user: any, id: str, df, question, sql):
    table_preview = df.head(10).to_markdown()  # or .to_string(), or .to_json()
    prompt = (
        f"Here is a preview of the result table (first 10 rows):\n{table_preview}\n\n"
        "Based on this data, generate Plotly Python code that produces an appropriate chart."
    )
    try:
        question = f"{question}. When generating the chart, use these special instructions: {prompt}"
        code = vn.generate_plotly_code(
            question=question,
            sql=sql,
            df_metadata=f"Running df.dtypes gives:\n {df.dtypes}",
        )
        self.cache.set(id=id, field="plotly_code", value=code)
        fig = vn.get_plotly_figure(plotly_code=code, df=df, dark_mode=False)
        fig_json = fig.to_json()
        self.cache.set(id=id, field="fig_json", value=fig_json)
        return jsonify(
            {
                "type": "plotly_figure",
                "id": id,
                "fig": fig_json,
            }
        )
    except Exception as e:
        import traceback
        traceback.print_exc()
        return jsonify({"type": "error", "error": str(e)})
```

The last UI customization we did was to provide our own index.html. You specify the path to your index.html file when you initialize the Flask app (otherwise it will use the default index.html).
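The logo swap described above can be scripted rather than pasted by hand. Here is a hypothetical sketch, assuming you have already copied the exact old SVG block out of assets.py via inspect element; `replace_svg_block` and both SVG snippets are placeholder names of ours, not part of Vanna.

```python
def replace_svg_block(assets_text: str, old_svg: str, new_svg: str) -> str:
    """Swap one compiled-in SVG block for another, verbatim.

    assets_text: the full contents of vanna/flask/assets.py
    old_svg:     the exact <svg>...</svg> block copied from inspect element
    new_svg:     the replacement <svg>...</svg> block
    """
    if old_svg not in assets_text:
        # Fail loudly rather than silently leaving the old logo in place.
        raise ValueError("old SVG block not found; copy it exactly from assets.py")
    return assets_text.replace(old_svg, new_svg)

# Usage sketch (path is illustrative):
# from pathlib import Path
# assets = Path("barricade/lib/python3.13/site-packages/vanna/flask/assets.py")
# assets.write_text(replace_svg_block(assets.read_text(), OLD_SVG, NEW_SVG))
```

Because assets.py is compiled output, re-running this after a package upgrade requires re-copying the current SVG block first.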
**PROMPTING/MODEL/LLM CUSTOMIZATIONS**:

We made *many* modifications in `barricade/lib/python3.13/site-packages/vanna/base/base.py`. Notable modifications include setting the temperature for the LLM dynamically (depending on the task), creating different prompts based on whether the question was conceptual or one that required SQL generation, and adding a check for hallucinations in the SQL generated by the LLM.

Temperature controls the randomness of text generated by LLMs during inference. A lower temperature makes the tokens with the highest probability more likely to be selected; a higher temperature increases the model's likelihood of selecting less probable tokens. For tasks such as generating SQL, we want the temperature to be lower to prevent hallucinations. Hallucinations are when the LLM makes up something that doesn't exist; in SQL generation, a hallucination may look like the LLM querying a column that doesn't exist. This renders the query unusable and throws an error. Thus, we edited the `generate_sql` function to change the temperature dynamically. The temperature is between 0 and 1. For questions deemed conceptual, we set the temperature to 0.6, and for questions that require generating SQL, we set it to 0.2. For tasks such as generating graphs and summaries, the default temperature is 0.5; we decided on 0.5 for those tasks because they require more creativity.

We added two helper functions: `decide_temperature` and `is_conceptual`. The keyword that indicates a conceptual question is the user saying "barricade" in their question.
```python
def is_conceptual(self, question: str):
    q = question.lower()
    return (
        "barricade" in q
        and any(kw in q for kw in ["how", "why", "what", "explain", "cause", "fix", "should", "could"])
    )

def decide_temperature(self, question: str) -> float:
    if "barricade" in question.lower():
        return 0.6  # Conceptual reasoning
    return 0.2  # Precise SQL generation
```

If a question is conceptual, SQL is not generated, and the LLM response is returned. We specified this in the prompt for the LLM. The prompt differs depending on whether the question requires SQL generation or is conceptual. We do this in `get_sql_prompt`, which is called in `generate_sql`:

```python
def get_sql_prompt(self, initial_prompt: str, question: str,
                   question_sql_list: list, ddl_list: list, doc_list: list,
                   conceptual: bool = False, **kwargs):
    if initial_prompt is None:
        if not conceptual:
            initial_prompt = f"You are a {self.dialect} expert. " + \
                "Please help to generate a SQL query to answer the question. Your response should ONLY be based on the given context and follow the response guidelines and format instructions. "
        else:
            initial_prompt = "Your name is barricade. If someone says the word barricade, it is not a part of the question, they are just saying your name. You are an expert in FHIR to OMOP transformations. Do not generate SQL. Your role is to conceptually explain the issue, root cause, or potential resolution. If the user mentions a specific table or field, provide interpretive guidance — not queries. Focus on summarizing, explaining, and advising based on known documentation and transformation patterns."

    initial_prompt = self.add_ddl_to_prompt(
        initial_prompt, ddl_list, max_tokens=self.max_tokens
    )

    if self.static_documentation != "":
        doc_list.append(self.static_documentation)

    initial_prompt = self.add_documentation_to_prompt(
        initial_prompt, doc_list, max_tokens=self.max_tokens
    )

    if not conceptual:
        initial_prompt += (
            "===Response Guidelines \n"
            "1. If the provided context is sufficient, please generate a valid SQL query without any explanations for the question. \n"
            "2. If the provided context is almost sufficient but requires knowledge of a specific string in a particular column, please generate an intermediate SQL query to find the distinct strings in that column. Prepend the query with a comment saying intermediate_sql \n"
            "3. If the provided context is insufficient, please explain why it can't be generated. \n"
            "4. Please use the most relevant table(s). \n"
            "5. If the question has been asked and answered before, please repeat the answer exactly as it was given before. \n"
            f"6. Ensure that the output SQL is {self.dialect}-compliant and executable, and free of syntax errors. \n"
        )
    else:
        initial_prompt += (
            "===Response Guidelines \n"
            "1. Do not generate SQL under any circumstances. \n"
            "2. Provide conceptual explanations, interpretations, or guidance based on FHIR-to-OMOP transformation logic. \n"
            "3. If the user refers to warnings or issues, explain possible causes and common resolutions. \n"
            "4. If the user references a table or field, provide high-level understanding of its role in the transformation process. \n"
            "5. Be concise but clear. Do not make assumptions about schema unless confirmed in context. \n"
            "6. If the question cannot be answered due to lack of context, state that clearly and suggest what additional information would help. \n"
        )

    message_log = [self.system_message(initial_prompt)]
    for example in question_sql_list:
        if example is None:
            print("example is None")
        else:
            if example is not None and "question" in example and "sql" in example:
                message_log.append(self.user_message(example["question"]))
                message_log.append(self.assistant_message(example["sql"]))

    message_log.append(self.user_message(question))
    return message_log

def generate_sql(self, question: str, allow_llm_to_see_data=True, **kwargs) -> str:
    temperature = self.decide_temperature(question)
    conceptual = self.is_conceptual(question)
    question = re.sub(r"\bbarricade\b", "", question, flags=re.IGNORECASE).strip()
    if self.config is not None:
        initial_prompt = self.config.get("initial_prompt", None)
    else:
        initial_prompt = None
    question_sql_list = self.get_similar_question_sql(question, **kwargs)
    ddl_list = self.get_related_ddl(question, **kwargs)
    doc_list = self.get_related_documentation(question, **kwargs)
    prompt = self.get_sql_prompt(
        initial_prompt=initial_prompt,
        question=question,
        question_sql_list=question_sql_list,
        ddl_list=ddl_list,
        doc_list=doc_list,
        conceptual=conceptual,
        **kwargs,
    )
    self.log(title="SQL Prompt", message=prompt)
    llm_response = self.submit_prompt(prompt, temperature, **kwargs)
    self.log(title="LLM Response", message=llm_response)

    if 'intermediate_sql' in llm_response:
        if not allow_llm_to_see_data:
            return "The LLM is not allowed to see the data in your database. Your question requires database introspection to generate the necessary SQL. Please set allow_llm_to_see_data=True to enable this."
        if allow_llm_to_see_data:
            intermediate_sql = self.extract_sql(llm_response, conceptual)
            try:
                self.log(title="Running Intermediate SQL", message=intermediate_sql)
                df = self.run_sql(intermediate_sql, conceptual)
                prompt = self.get_sql_prompt(
                    initial_prompt=initial_prompt,
                    question=question,
                    question_sql_list=question_sql_list,
                    ddl_list=ddl_list,
                    doc_list=doc_list + [f"The following is a pandas DataFrame with the results of the intermediate SQL query {intermediate_sql}: \n" + df.to_markdown()],
                    **kwargs,
                )
                self.log(title="Final SQL Prompt", message=prompt)
                llm_response = self.submit_prompt(prompt, **kwargs)
                self.log(title="LLM Response", message=llm_response)
            except Exception as e:
                return f"Error running intermediate SQL: {e}"

    return self.extract_sql(llm_response, conceptual)
```

Even with a low temperature, the LLM would still sometimes generate hallucinations. To further prevent hallucinations, we added a check for them before returning the SQL. We created a helper function, `clean_sql_by_schema`, which takes the generated SQL and finds any columns that do not exist. It then removes that SQL and returns the cleaned version with no hallucinations. For cases where the SQL is something like "SELECT cw.id FROM omop.conversion_issues", it uses `extract_alias_mapping` to map cw to the conversion_issues table. Here are those functions for reference:

```python
def extract_alias_mapping(self, sql: str) -> dict[str, str]:
    """
    Parse the FROM and JOIN clauses to build alias → table_name.
    """
    alias_map = {}
    # pattern matches FROM schema.table alias or FROM table alias
    for keyword in ("FROM", "JOIN"):
        for tbl, alias in re.findall(
            rf'{keyword}\s+([\w\.]+)\s+(\w+)',
            sql,
            flags=re.IGNORECASE
        ):
            # strip schema if present:
            table_name = tbl.split('.')[-1]
            alias_map[alias] = table_name
    return alias_map

def clean_sql_by_schema(self, sql: str,
                        schema_dict: dict[str, list[str]]) -> str:
    """
    Returns a new SQL where each SELECT-line is kept only if its
    alias.column is in the allowed columns for that table.
    schema_dict: { 'conversion_warnings': [...], 'conversion_issues': [...] }
    """
    alias_to_table = self.extract_alias_mapping(sql)
    lines = sql.splitlines()
    cleaned = []
    in_select = False
    for line in lines:
        stripped = line.strip()
        # detect start/end of the SELECT clause
        if stripped.upper().startswith("SELECT"):
            in_select = True
            cleaned.append(line)
            continue
        if in_select and re.match(r'FROM\b', stripped, re.IGNORECASE):
            in_select = False
            cleaned.append(line)
            continue
        if in_select:
            # try to find alias.column in this line
            m = re.search(r'(\w+)\.(\w+)', stripped)
            if m:
                alias, col = m.group(1), m.group(2)
                table = alias_to_table.get(alias)
                if table and col in schema_dict.get(table, []):
                    cleaned.append(line)
                else:
                    # drop this line entirely
                    continue
            else:
                # no alias.column here (maybe a comma, empty line, etc)
                cleaned.append(line)
        else:
            cleaned.append(line)
    # re-join and clean up any dangling commas before FROM
    out = "\n".join(cleaned)
    out = re.sub(r",\s*\n\s*FROM", "\nFROM", out, flags=re.IGNORECASE)
    self.log("RESULTING SQL:" + out)
    return out
```

### STEP 5: Initialize the Flask App

Now we are ready to bring Barricade to life! The code below will spin up a Flask app, which lets you communicate with the AI agent, Barricade. As you can see, we specified our own index_html_path, subtitle, title, and more. This is all optional.
These arguments are defined here: [web customizations](https://vanna.ai/docs/web-app/)

```python
from vanna.flask import VannaFlaskApp

app = VannaFlaskApp(
    vn,
    chart=True,
    sql=False,
    allow_llm_to_see_data=True,
    ask_results_correct=False,
    title="InterSystems Barricade",
    subtitle="Your AI-powered Transformation Cop for InterSystems OMOP",
    index_html_path=current_dir + "/static/index.html"
)
```

Once your app is running, you will see Barricade:

## RESULTS/DEMO

Barricade can help you gain a deeper understanding of OMOP and FHIR, and it can also help you debug transformation issues that you run into when trying to transform your FHIR data to OMOP. To showcase Barricade's ability, I will show a real-life example. A few months ago, we got an iService ticket with the following description:

To test Barricade, I copied this description into Barricade, and here is the response. First, Barricade gave me a table documenting the issues. Then, Barricade gave me a graph to visualize the issues. And, most importantly, Barricade gave me a description of the exact issue that was causing problems AND told me how to fix it.

### READY 2025 demo Infographic

Here is a link to download the handout from our READY 2025 demo: Download the handout.
Announcement
Anastasia Dyubaylo · Jun 25

[Video] Using Character Slice Index in InterSystems IRIS

Hi Community, Enjoy the new video on InterSystems Developers YouTube from our Tech Video Challenge: ⏯ Using Character Slice Index in InterSystems IRIS Learn about the Character Slice Index, first introduced back in June 2008. Inspired by bit-slice indexing for numerical data, this approach offers a unique way to index and search for individual characters within texts—ideal for languages like Japanese and Chinese, where single characters carry substantial meaning. In this presentation, you'll learn how character slicing works, leveraging a specialized data type that defines language-specific word patterns to enhance text indexing performance. We’ll run a live demo using SQL on a test dataset, comparing the performance and efficiency of character indexing versus traditional approaches. 🗣 Presenter: @Robert.Cemper1003 Stay ahead of the curve! Watch the video and subscribe to keep learning! 👍
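For a rough intuition of what a character slice index does (the description above notes it was inspired by bit-slice indexing; the actual IRIS implementation works at the storage level), here is a toy Python sketch of ours: each character maps to the set of rows containing it, and a multi-character search becomes a set intersection. All names and data are illustrative.

```python
# Toy character-slice index: one posting set per distinct character.
from collections import defaultdict

rows = {
    1: "東京タワー",
    2: "京都駅",
    3: "大阪城",
}

index = defaultdict(set)
for row_id, text in rows.items():
    for ch in text:
        index[ch].add(row_id)

def search(chars: str) -> set[int]:
    """Rows containing every character in `chars`, via set intersection."""
    result = None
    for ch in chars:
        postings = index.get(ch, set())
        result = postings if result is None else result & postings
    return result or set()
```

Searching for "京" finds rows 1 and 2, while "京都" narrows to row 2 only, which is why this approach suits languages where single characters carry substantial meaning.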
Announcement
Celeste Canzano · May 13

InterSystems IRIS Development Professional Exam is now LIVE!

Hello community,

The Certification Team of InterSystems Learning Services is excited to announce the release of our new InterSystems IRIS Development Professional exam. It is now available for purchase and scheduling in the InterSystems exam catalog. Potential candidates can review the exam topics and the practice questions to help orient them to exam question approaches and content. Candidates who successfully pass the exam will receive a digital certification badge that can be shared on social media accounts like LinkedIn.

If you are new to InterSystems Certification, please review our program pages that include information on taking exams, exam policies, FAQ and more. If you have ideas about creating new certifications that can help you advance your career, the Certification Team of InterSystems Learning Services is always open to ideas and suggestions. Please contact us at certification@intersystems.com if you would like to share any ideas.

Looking forward to celebrating your success,
Celeste Canzano - Certification Operations Specialist, InterSystems

It's a great exam. I'm not just saying that because I helped develop it, but it has some great questions and topics on it that really make you think. I can't say I enjoyed the process of taking my last InterSystems exam (not this one), but I certainly enjoyed the feeling after I passed it. :) It's great to get that validation that you do actually know what you're talking about!
Announcement
Anastasia Dyubaylo · Jun 30

[Webinar] The Future of Healthcare Integration with Health Connect & InterSystems

Hey Community,

We're excited to invite you to the next InterSystems UKI Tech Talk webinar:

👉 The Future of Healthcare Integration with Health Connect & InterSystems

⏱ Date & Time: Thursday, July 3, 2025 10:30-11:30 UK

Speakers:
👨‍🏫 @Mark.Massias, Senior Sales Engineer, InterSystems
👨‍🏫 Mike Fitzgerald, Head of Customer Solutions, ReStart

As the NHS continues to advance its digital transformation agenda, seamless interoperability has become a crucial priority for healthcare IT professionals. InterSystems Health Connect is leading the way as a superior integration engine, offering enhanced scalability, security, and performance to support evolving data exchange needs. This exclusive webinar will explore how you can overcome interoperability challenges, streamline system migrations, and benefit from real-world success stories—ensuring you stay ahead of NHS standards for connected care.

Join us for an insightful session where InterSystems and our highly experienced implementation partner ReStart will showcase the benefits of Health Connect, providing actionable strategies for seamless integration and transitioning from legacy solutions. Discover how Health Connect supports NHS standards out of the box and enhances connectivity across trust and regional healthcare networks. This is your opportunity to future-proof your healthcare IT strategy and drive smarter, more efficient data integration.

>> REGISTER HERE <<

When I looked yesterday there was a "Learning Services, Dev Community and Partner Portal Overview" listed for 24th July. When I went to register for both the future of healthcare integration and the learning services, Dev Community etc one, the latter has disappeared. Any plans to reschedule?
Discussion
Admin GlobalMasters · Jul 3

InterSystems READY 2025 Memes You've Sent!

You’ve been dropping memes into our inbox — here's our top 20! Get ready to laugh, nod in agreement, and maybe spot the one you created! 👀 Let us know in the comments which meme is your favorite!

1. Author: Robert Cemper
2. Author: Aya Heshmat
3. Author: Matthew Thornhill
4. Author: Henry Ames
5. Author: Ben Spead
6. Author: Jonathan Zhou
7. Author: Alessandra Carena
8. Author: Haddon Baker
9. Author: Liam Evans
10. Author: Macey Minor
11. Author: Marco Di Giglio
12. Author: FRANCISCO LOPEZ
13. Author: Pietro Montorfano
14. Author: David Cho
15. Author: Henry Ames
16. Author: Andre Larsen Barbosa
17. Author: Liam Evans
18. Author: Mindy Caldwell
19. Author: Dinesh
20. Author: Mathieu Delagnes
Announcement
Anastasia Dyubaylo · Jul 10

InterSystems Developer Ecosystem News, Q2 2025

Hello and welcome to the Developer Ecosystem News! The second quarter of the year was full of exciting activities in the InterSystems Developer Ecosystem. In case you missed something, we've prepared a selection of the hottest news and topics for you to catch up on!

News

🎇 Introducing Developer Community AI Bot
🎆 Chat with Developer Community AI!
💡 InterSystems Ideas News #21 and #22
🏌 Code Golf - Maximum Coverage
👨‍💻 Early Access Program: OAuth2 improvements
📝 Try the new AskMe learning chatbot!
📝 InterSystems IRIS Development Professional Exam is now LIVE!
📝 InterSystems Platforms Update Q2-2025
📝 June 10, 2025 – Advisory: Namespace Switching and Global Display Failures
📝 InterSystems announces InterSystems IRIS support for Red Hat Enterprise Linux 10
📝 InterSystems Cloud Services - Release Notes - 27 June 2025
📝 Alert: InterSystems IRIS 2024.3 – AIX JSON Parsing Issue & IntegratedML Incompatibilities

Contests & Events

AI Programming Contest: Announcement, Kick-off Webinar, Technology Bonuses, Time to Vote, Technological Bonuses Results, Winners, Online Meetup with the Winners

FHIR and Digital Health Interoperability Contest: Announcement, Kick-off Webinar, Technology Bonuses, Time to Vote, Technology Bonuses Results, Winners, Online Meetup

Code Your Way to InterSystems READY 2025: Announcement, Winners

InterSystems READY 2025:
Registration
Early bird discount for the InterSystems READY 2025
Get READY, get set, get certified!
Hear from Our Community: Why You Should Join READY 2025
Be READY with personalized learning
InterSystems Ready 2025 sessions to sink your teeth into
Look what’s coming to the Tech Exchange @ InterSystems READY 2025
Meet the Winners of the AI Programming Contest at InterSystems READY 2025 @ Tech Exchange!
Stream / watch keynotes from the InterSystems Ready 2025 online
First half of the InterSystems Ready 2025
Second half of the InterSystems Ready 2025
The last day of the InterSystems Ready 2025
Did You Do It All at InterSystems READY 2025? Check Your Bingo!
📰 Fourth Spanish Technical Article Contest
🤝 Developer Meetup in Cambridge - GenAI & AI Agents
🤝 Cambridge Developer Meetup - AI Coding Assistants & MCP
🤝 FHIR France Meetup #13
📺 [Webinar] SQL Cloud Services
📺 [Webinar] Unlocking the Power of InterSystems Data Fabric Studio
📺 [Hebrew Webinar] Discover the All-New UI in Version 2025.1 — and More!

Latest Releases

⬇️ InterSystems API Manager (IAM) 3.10 Release Announcement
⬇️ IKO 3.8 Release
⬇️ New Point Release for IRIS 2025.1.0: Critical Interoperability Fix
⬇️ Point Releases Available to Address Namespace Switching and Global Display Issues in Recent 2025.1.0, 2024.1.4, 2023.1.6, and 2022.1.7 Versions
⬇️ Maintenance Releases 2024.1.4 and 2023.1.6 of InterSystems IRIS, IRIS for Health, & HealthShare HealthConnect are now available

Best Practices & Key Questions

🔥 Best practices:
What more can be done with lists in SQL (%DLIST, %INLIST, FOR SOME)
Creating Unit Tests in ObjectScript for HL7 pipelines using %UnitTest class
Adding OpenAI to Interoperability Production
InterSystems IRIS® CloudSQL Metrics to Google Cloud Monitoring
How to send messages to Microsoft Teams
Connecting to Cloud SQL with DBeaver using SSL/TLS
Kinds of properties in IRIS

❓ Key Questions: April, May, June

People and Companies to Know About

👩‍🏫 Celebrating a Lifelong Contributor of the Developer Community
💼 CACHÉ #Job opportunity
💼 InterSystems HealthShare developer interoperability Developer/Lead
💼 IRIS Data Platform Engineer
👨‍💻 InterSystems developer Job

So... Here is our take on the most interesting and important things! What were your highlights from this past quarter? Share them in the comments section and let's remember the fun we've had!
Announcement
Anastasia Dyubaylo · Jul 11

[Video] Discovering InterSystems Products - A High Level Overview

Hi Community, Enjoy the new video on InterSystems Developers YouTube: ⏯ Discovering InterSystems Products - A High Level Overview @ Global Summit 2024 Watch this video for a high-level overview of the full InterSystems product portfolio, especially if you're looking to understand how the various technologies connect and support real-world use cases. 🗣 Presenter: @joe.gallant, Senior Technical Trainer, InterSystems Tune in to the latest innovations and subscribe to catch what’s coming!
Announcement
Jonathan Gern · Jun 16, 2020

InterSystems Solutions Engineer - Full Time

My organization is looking for a full-time InterSystems Solutions Engineer to join the team. Based in NY, the Solutions Engineer will design technical architectures to support efforts across the health system, focused on clinical data exchange with other provider organizations and payors.
Question
Robert Bee · Feb 13, 2019

InterSystems Caché and MS Access Passthrough

Edit: May have found the issue but not the solution.

"SELECT * FROM wmhISTORYdETAIL" runs as a passthrough without asking for the DSN, but

'SELECT Count([wmhISTORYdETAIL].[HistHMNumber] AS CountOfHistHMNumber FROM [wmhISTORYdETAIL] WHERE ((([wmhISTORYdETAIL].[HistMovType])='Receipt') AND (([wmhISTORYdETAIL].[HistMovDate])>=Date()-1) AND (([wmhISTORYdETAIL].[HistMovDate])<Date()));'

asks for the DSN, even though both are linked to a table that has the password saved. Any ideas, please?

Rob

Hi,

I have created an MS Access database with a passthrough query to our InterSystems Caché WMS system. If I use "SELECT * FROM thetable" as the passthrough query, I can use VB.NET to query the passthrough and it works fine, but the dataset is getting rather large, so I changed it to "SELECT field1, field2, field3 FROM thetable". The passthrough no longer works as it did: it works in MS Access but not from the VB.NET app.

The VB.NET query is:

SELECT Count([xxx].[HistHMNumber] AS CountOfHistHMNumber FROM [xxx] WHERE ((([xxx].[HistMovType])='Receipt') AND (([xxx].[HistMovDate])>=Date()-1) AND (([xxx].[HistMovDate])<Date()));

where [xxx] is the passthrough query, but now I get an ODBC error in the VB.NET app:

"System.Data.OleDb.OleDbException: 'ODBC--call failed.'"

The error/issue appears to be in the SQL, but if I lift it and paste it into the MS Access database, it works?!?! Any help would be appreciated.

Many thanks,
Rob

Hello Robert, did you resolve this or log it with our helpdesk? Regards, David Underhill @ Chess
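One thing worth checking in the failing query above: the Count( opener is never closed before the AS alias, which Access's own query engine sometimes tolerates but the ODBC path does not. This may simply be a transcription slip in the post, but if the unbalanced parenthesis is present in the live query, a balanced form (table and column names unchanged from the original) would be:

```sql
-- Balanced version of the aggregate: Count(...) is closed before the alias.
SELECT Count([xxx].[HistHMNumber]) AS CountOfHistHMNumber
FROM [xxx]
WHERE ((([xxx].[HistMovType])='Receipt')
  AND (([xxx].[HistMovDate])>=Date()-1)
  AND (([xxx].[HistMovDate])<Date()));
```

Note that Date() is Access SQL; if the statement is ever sent straight through to Caché rather than evaluated by Access, the date functions would need translating as well.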
Question
Kurt Hofman · Jul 3, 2019

Using InterSystems Caché as an LDAP server

We would like to use our Caché server as the source for our PABX address book. The PABX only supports LDAP. Is it possible to use our Caché instance as an LDAP server? Regards, Kurt Hofman.

The native Caché LDAP support is only for an LDAP client.
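In other words, Caché can talk to an existing directory via the %SYS.LDAP class, but it does not expose an LDAP listener itself, so the PABX cannot bind to Caché directly; you would need a separate LDAP server (OpenLDAP, for example) populated from Caché data. A minimal client-side sketch follows; the hostname, bind DN, and password are hypothetical placeholders, and the exact %SYS.LDAP method signatures should be checked against the class reference for your Caché version:

```objectscript
 // Hedged sketch: binding to an external LDAP directory from Caché.
 // Server, bind DN, and password below are placeholders, not real values.
 Set LD=##class(%SYS.LDAP).Init("ldap.example.com",389)
 If LD=0 { Write "Could not initialize LDAP handle",! Quit }
 // Simple synchronous bind; a zero status indicates success.
 Set Status=##class(%SYS.LDAP).SimpleBindS(LD,"cn=Manager,dc=example,dc=com","secret")
 If Status'=0 { Write "Bind failed, status: ",Status,! }
 // Release the connection when done.
 Do ##class(%SYS.LDAP).UnBinds(LD)
```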
Announcement
Neerav Verma · Mar 15, 2019

InterSystems Technologies: Connect with fellow individuals

Hello All, I have been associated with InterSystems technologies for over a decade, working on Caché, Zen, Ensemble, etc. This is a very niche field and a lovely community. I wanted to extend my hand to connect with people who are in the same field or related to it. Here is my LinkedIn profile. Please feel free to send me an invite or drop me a message: https://www.linkedin.com/in/vneerav/

Hi Neerav! Perhaps we just need to add a LinkedIn field to a member's profile? Would it help? What do you think?

Yes, definitely.

Issue is filed.