
Announcement
Anastasia Dyubaylo · Jun 10

[Video] Foreign Tables in InterSystems IRIS 2025.1

Hi Community, Our Project Managers created a series of videos to highlight some of the interesting features of the new 2025.1 release. Please welcome the first one of them: ⏯ Foreign Tables in InterSystems IRIS 2025.1 The InterSystems IRIS 2025.1 release introduces powerful new features for data management, including full support for foreign tables in production. It adds fine-grained privileges for managing access to foreign servers and querying them, plus a new command for sending direct SQL queries to external relational databases. New table conversion options also allow switching between row and column layouts for better analytics performance. 🗣 Presenter: @Benjamin.DeBoe, Manager, Analytics Product Management, InterSystems Want to stay up to date? Watch the video and subscribe for more!
Article
Hannah Kimura · Jun 23

Transforming with Confidence: A GenAI Assistant for InterSystems OMOP

## INTRO

Barricade is a tool developed by ICCA Ops to streamline and scale support for FHIR-to-OMOP transformations for InterSystems OMOP. Our clients use InterSystems OMOP to transform FHIR data into the OMOP structure. As a managed service, our job is to troubleshoot any issues that arise during the transformation process. Barricade is the ideal tool to aid us in this process for a variety of reasons.

First, effective support demands knowledge across FHIR standards, the OHDSI OMOP model, and InterSystems-specific operational workflows—all highly specialized areas. Barricade helps bridge knowledge gaps by leveraging large language models to provide expertise regarding FHIR and OHDSI. In addition, even when detailed explanations are provided to resolve specific transformation issues, that knowledge is often buried in emails or chats and lost for future use. Barricade can capture, reuse, and scale that knowledge across similar cases. Lastly, we often don’t have access to the source data, which means we must diagnose issues without seeing the actual payload—relying solely on error messages, data structure, and transformation context. This is exactly where Barricade excels: by drawing on prior patterns, domain knowledge, and contextual reasoning to infer the root cause and recommend solutions without direct access to the underlying data.

## IMPLEMENTATION OVERVIEW

Barricade is built on Vanna.AI. So, while I use "Barricade" throughout this article to refer to the AI agent we created, the underlying model is really a Vanna model. At its core, Vanna is a Python package that uses retrieval augmentation to help you generate accurate SQL queries for your database using LLMs. We customized our Vanna model to not only generate SQL queries, but also to answer conceptual questions.

Setting up Vanna is extremely easy and quick. For our setup, we used ChatGPT as the LLM, ChromaDB as our vector database, and Postgres as the database that stores the data we want to query (our run data -- data related to the FHIR-to-OMOP transformation). You can choose from many different options for your LLM, vector database, and SQL database. Valid options are detailed here: [Quickstart with your own data](https://vanna.ai/docs/postgres-openai-standard-chromadb/).

## HOW THIS IS DIFFERENT THAN CHATGPT

ChatGPT is a standalone large language model (LLM) that generates responses based solely on patterns learned during its training. It does not access external data at inference time. Barricade, on the other hand, is built on Vanna.AI, a Retrieval-Augmented Generation (RAG) system. RAG enhances LLMs by layering dynamic retrieval over the model, which allows it to query external sources for real-time, relevant information and incorporate those results into its output. By integrating our Postgres database directly with Vanna.AI, Barricade can access:

- The current database schema.
- Transformation run logs.
- Internal documentation.
- Proprietary transformation rules.

This live access is critical when debugging production data issues, because the model isn't just guessing: it's seeing and reasoning with real data. In short, Barricade merges the language fluency of ChatGPT with the real-time accuracy of direct data access, resulting in more reliable, context-aware insights.
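To make the retrieve-then-prompt pattern above concrete, here is a minimal, illustrative sketch in Python. It is not Vanna's internal implementation: the `TinyVectorStore` class, its methods, and the embedding model name are hypothetical placeholders (the chat model name simply mirrors the one used later in this article), and you would swap in whatever LLM and vector store you actually use.

```python
# A minimal, illustrative retrieve-then-prompt sketch (NOT Vanna's internal code).
# TinyVectorStore and its methods are hypothetical; the embedding model name is an assumption.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")

class TinyVectorStore:
    """Toy in-memory store holding (text, embedding) pairs."""
    def __init__(self, client):
        self.client = client
        self.docs = []

    def _embed(self, text):
        resp = self.client.embeddings.create(model="text-embedding-3-small", input=text)
        return resp.data[0].embedding

    def add(self, text):
        self.docs.append((text, self._embed(text)))

    def top_k(self, question, k=3):
        q = self._embed(question)
        dot = lambda v: sum(a * b for a, b in zip(q, v))  # embeddings are unit-length, so dot product ~ cosine similarity
        return [text for text, emb in sorted(self.docs, key=lambda d: dot(d[1]), reverse=True)[:k]]

def ask(store, question):
    # 1. Retrieve relevant context (schema snippets, run logs, docs) at inference time.
    context = "\n".join(store.top_k(question))
    # 2. Layer the retrieved context into the prompt so the LLM reasons over real data, not just training memory.
    messages = [
        {"role": "system", "content": f"Answer using only this context:\n{context}"},
        {"role": "user", "content": question},
    ]
    return client.chat.completions.create(model="gpt-4.1-mini", messages=messages).choices[0].message.content

# Example usage:
# store = TinyVectorStore(client)
# store.add("conversion_warnings holds one row per warning per transformation run")
# print(ask(store, "Which table records transformation warnings?"))
```

Vanna wraps this same pattern behind its training and asking APIs, adding SQL-specific prompting and schema awareness on top.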
## HOW BARRICADE WAS CREATED

### STEP 1: CREATE VIRTUAL ENVIRONMENT

This step creates all the files that make up Vanna.

```
virtualenv --python="/usr/bin/python3.10" barricade
source barricade/bin/activate
pip install ipykernel
python -m ipykernel install --user --name=barricade
jupyter notebook --notebook-dir=.
```

For Barricade, we will be editing many of these files to customize our experience. Notable files include:

- `barricade/lib/python3.13/site-packages/vanna/base/base.py`
- `barricade/lib/python3.13/site-packages/vanna/openai/openai_chat.py`
- `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py`
- `barricade/lib/python3.13/site-packages/vanna/flask/assets.py`

### STEP 2: INITIALIZE BARRICADE

This step includes importing the packages needed and initializing the model. The minimum imports needed are:

```python
from vanna.chromadb import ChromaDB_VectorStore
from vanna.openai import OpenAI_Chat
```

NOTE: Each time you create a new model, make sure to remove all remnants of old training data or vector DBs. To use the code below, also import `os` and `shutil`.

```python
import os
import shutil

if os.path.exists("chroma.sqlite3"):
    os.remove("chroma.sqlite3")
    print("Unloading Vectors...")
else:
    print("The file does not exist")

base_path = os.getcwd()
for root, dirs, files in os.walk(base_path, topdown=False):
    if "header.bin" in files:
        print(f"Removing directory: {root}")
        shutil.rmtree(root)
```

Now, it's time to initialize our model:

```python
class MyVanna(ChromaDB_VectorStore, OpenAI_Chat):
    def __init__(self, config=None):
        # initialize the vector store (this calls VannaBase.__init__)
        ChromaDB_VectorStore.__init__(self, config=config)
        # initialize the chat client (this also calls VannaBase.__init__ but more importantly sets self.client)
        OpenAI_Chat.__init__(self, config=config)
        self._model = "barricade"

vn = MyVanna(config={'api_key': 'CHATGPT API KEY', 'model': 'gpt-4.1-mini'})
```

### STEP 3: TRAINING THE MODEL

This is where the customization begins. We begin by connecting to the Postgres tables, which hold our run data. Fill in the arguments with your host, dbname, username, password, and port for the Postgres database.

```python
vn.connect_to_postgres(host='POSTGRES DATABASE ENDPOINT', dbname='POSTGRES DB NAME', user='', password='', port='')
```

From there, we trained the model on the information schemas:

```python
df_information_schema = vn.run_sql("SELECT * FROM INFORMATION_SCHEMA.COLUMNS")
plan = vn.get_training_plan_generic(df_information_schema)
vn.train(plan=plan)
```

After that, we went even further and decided to send Barricade to college. We loaded Chapters 4-7 of The Book of OHDSI to give Barricade a good understanding of OMOP core principles. We loaded FHIR documentation, specifically explanations of various FHIR resource types, which describe how healthcare data is structured, used, and interrelated, so Barricade understood FHIR resources. We loaded FHIR-to-OMOP mappings, so Barricade understood which FHIR resource maps to which OMOP table(s). And finally, we loaded specialized knowledge regarding edge cases that need to be understood for FHIR-to-OMOP transformations.
Here is a brief overview of how that training looked.

**FHIR resource example**:

```python
import requests
from bs4 import BeautifulSoup

def load_fhirknowledge(vn):
    url = "https://build.fhir.org/datatypes.html#Address"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    text_only = soup.get_text(separator="\n", strip=True)
    vn.train(documentation=text_only)
```

**MAPPING example**:

```python
def load_mappingknowledge(vn):
    # Transform
    url = "https://docs.intersystems.com/services/csp/docbook/DocBook.UI.Page.cls?KEY=RDP_transform"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    text_only = soup.get_text(separator="\n", strip=True)
    vn.train(documentation=text_only)
```

**The Book of OHDSI example**:

```python
def load_omopknowledge(vn):
    # Chapter 4
    url = "https://ohdsi.github.io/TheBookOfOhdsi/CommonDataModel.html"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    text_only = soup.get_text(separator="\n", strip=True)
    vn.train(documentation=text_only)
```

**Specialized knowledge example**:

```python
def load_opsknowledge(vn):
    vn.train(documentation="Our business is to provide tools for generating evidence for the transformation Runs")
```

Finally, we train Barricade on the SQL queries that are commonly used. This helps the system understand the context of the questions being asked:

```python
cdmtables = ["conversion_warnings", "conversion_issues", "ingestion_report", "care_site", "cdm_source",
             "cohort", "cohort_definition", "concept", "concept_ancestor", "concept_class",
             "concept_relationship", "concept_synonym", "condition_era", "condition_occurrence", "cost",
             "death", "device_exposure", "domain", "dose_era", "drug_era", "drug_exposure",
             "drug_strength", "episode", "episode_event", "fact_relationship", "location", "measurement",
             "metadata", "note", "note_nlp", "observation", "observation_period", "payer_plan_period",
             "person", "procedure_occurrence", "provider", "relationship", "source_to_concept_map",
             "specimen", "visit_detail", "visit_occurrence", "vocabulary"]

for table in cdmtables:
    vn.train(sql="SELECT * FROM omopcdm54." + table)
```

### STEP 4: CUSTOMIZATIONS (OPTIONAL)

Barricade is quite different from the Vanna base model, and these customizations are a big reason for that.

**UI CUSTOMIZATIONS**

To customize anything related to the UI (what you see when you spin up the Flask app and Barricade comes to life), edit `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py` and `barricade/lib/python3.13/site-packages/vanna/flask/assets.py`. For Barricade, we customized the suggested questions and the logos.

To edit the suggested questions, modify the `generate_questions` function in `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py`. We added this:

```python
if hasattr(self.vn, "_model") and self.vn._model == "barricade":
    return jsonify(
        {
            "type": "question_list",
            "questions": [
                "What are the transformation warnings?",
                "What does a fhir immunization resource map to in omop?",
                "Can you retrieve the observation period of the person with PERSON_ID 61?",
                "What are the mappings for the ATC B03AC?",
                "What is the Common Data Model, and why do we need it for observational healthcare data?",
            ],
            "header": "Here are some questions you can ask:",
        }
    )
```

Note that these suggested questions will only come up if you set `self._model = "barricade"` in STEP 2. To change the logo, you must edit assets.py. This is tricky, because assets.py contains the *compiled* JS and CSS code.
To find the logo you want to replace, go to the browser running the Flask app and inspect the element. Find the SVG block that corresponds to the logo, and replace that block in assets.py with the SVG block of the new image you want.

We also customized the graph response. For relevant questions, a graph is generated using Plotly. The default prompts were generating graphs that were almost nonsensical. To fix this, we edited the `generate_plotly_figure` function in `barricade/lib/python3.13/site-packages/vanna/flask/__init__.py`. We specifically changed the prompt:

```python
def generate_plotly_figure(user: any, id: str, df, question, sql):
    table_preview = df.head(10).to_markdown()  # or .to_string(), or .to_json()
    prompt = (
        f"Here is a preview of the result table (first 10 rows):\n{table_preview}\n\n"
        "Based on this data, generate Plotly Python code that produces an appropriate chart."
    )
    try:
        question = f"{question}. When generating the chart, use these special instructions: {prompt}"
        code = vn.generate_plotly_code(
            question=question,
            sql=sql,
            df_metadata=f"Running df.dtypes gives:\n {df.dtypes}",
        )
        self.cache.set(id=id, field="plotly_code", value=code)
        fig = vn.get_plotly_figure(plotly_code=code, df=df, dark_mode=False)
        fig_json = fig.to_json()
        self.cache.set(id=id, field="fig_json", value=fig_json)
        return jsonify(
            {
                "type": "plotly_figure",
                "id": id,
                "fig": fig_json,
            }
        )
    except Exception as e:
        import traceback
        traceback.print_exc()
        return jsonify({"type": "error", "error": str(e)})
```

The last UI customization: we chose to provide our own index.html. You specify the path to your index.html file when you initialize the Flask app (otherwise it will use the default index.html).

**PROMPTING/MODEL/LLM CUSTOMIZATIONS**

We made *many* modifications in `barricade/lib/python3.13/site-packages/vanna/base/base.py`. Notable modifications include setting the temperature for the LLM dynamically (depending on the task), creating different prompts based on whether the question was conceptual or one that required SQL generation, and adding a check for hallucinations in the SQL generated by the LLM.

Temperature controls the randomness of the text generated by an LLM during inference. A lower temperature makes the highest-probability tokens more likely to be selected; a higher temperature increases the model's likelihood of selecting less probable tokens. For tasks such as generating SQL, we want the temperature to be lower to prevent hallucinations. Hallucinations are when the LLM makes up something that doesn't exist; in SQL generation, a hallucination may look like the LLM querying a column that doesn't exist, which renders the query unusable and throws an error. Thus, we edited the `generate_sql` function to change the temperature dynamically. The temperature is between 0 and 1. Questions deemed conceptual get a temperature of 0.6, questions that require generating SQL get 0.2, and tasks such as generating graphs and summaries use a default of 0.5, because they require more creativity. We added two helper functions, `decide_temperature` and `is_conceptual`; the keyword that marks a question as conceptual is the user saying "barricade" in their question.
```python
def is_conceptual(self, question: str):
    q = question.lower()
    return (
        "barricade" in q
        and any(kw in q for kw in ["how", "why", "what", "explain", "cause", "fix", "should", "could"])
    )

def decide_temperature(self, question: str) -> float:
    if "barricade" in question.lower():
        return 0.6  # Conceptual reasoning
    return 0.2  # Precise SQL generation
```

If a question is conceptual, SQL is not generated, and the LLM response is returned. We specified this in the prompt for the LLM. The prompt differs depending on whether the question requires SQL generation or is conceptual. We do this in `get_sql_prompt`, which is called in `generate_sql`:

```python
def get_sql_prompt(self, initial_prompt: str, question: str, question_sql_list: list, ddl_list: list, doc_list: list, conceptual: bool = False, **kwargs):
    if initial_prompt is None:
        if not conceptual:
            initial_prompt = f"You are a {self.dialect} expert. " + \
                "Please help to generate a SQL query to answer the question. Your response should ONLY be based on the given context and follow the response guidelines and format instructions. "
        else:
            initial_prompt = "Your name is barricade. If someone says the word barricade, it is not a part of the question, they are just saying your name. You are an expert in FHIR to OMOP transformations. Do not generate SQL. Your role is to conceptually explain the issue, root cause, or potential resolution. If the user mentions a specific table or field, provide interpretive guidance — not queries. Focus on summarizing, explaining, and advising based on known documentation and transformation patterns."

    initial_prompt = self.add_ddl_to_prompt(
        initial_prompt, ddl_list, max_tokens=self.max_tokens
    )

    if self.static_documentation != "":
        doc_list.append(self.static_documentation)

    initial_prompt = self.add_documentation_to_prompt(
        initial_prompt, doc_list, max_tokens=self.max_tokens
    )

    if not conceptual:
        initial_prompt += (
            "===Response Guidelines \n"
            "1. If the provided context is sufficient, please generate a valid SQL query without any explanations for the question. \n"
            "2. If the provided context is almost sufficient but requires knowledge of a specific string in a particular column, please generate an intermediate SQL query to find the distinct strings in that column. Prepend the query with a comment saying intermediate_sql \n"
            "3. If the provided context is insufficient, please explain why it can't be generated. \n"
            "4. Please use the most relevant table(s). \n"
            "5. If the question has been asked and answered before, please repeat the answer exactly as it was given before. \n"
            f"6. Ensure that the output SQL is {self.dialect}-compliant and executable, and free of syntax errors. \n"
        )
    else:
        initial_prompt += (
            "===Response Guidelines \n"
            "1. Do not generate SQL under any circumstances. \n"
            "2. Provide conceptual explanations, interpretations, or guidance based on FHIR-to-OMOP transformation logic. \n"
            "3. If the user refers to warnings or issues, explain possible causes and common resolutions. \n"
            "4. If the user references a table or field, provide high-level understanding of its role in the transformation process. \n"
            "5. Be concise but clear. Do not make assumptions about schema unless confirmed in context. \n"
            "6. If the question cannot be answered due to lack of context, state that clearly and suggest what additional information would help. \n"
        )

    message_log = [self.system_message(initial_prompt)]

    for example in question_sql_list:
        if example is None:
            print("example is None")
        else:
            if example is not None and "question" in example and "sql" in example:
                message_log.append(self.user_message(example["question"]))
                message_log.append(self.assistant_message(example["sql"]))

    message_log.append(self.user_message(question))

    return message_log

def generate_sql(self, question: str, allow_llm_to_see_data=True, **kwargs) -> str:
    temperature = self.decide_temperature(question)
    conceptual = self.is_conceptual(question)
    question = re.sub(r"\bbarricade\b", "", question, flags=re.IGNORECASE).strip()

    if self.config is not None:
        initial_prompt = self.config.get("initial_prompt", None)
    else:
        initial_prompt = None

    question_sql_list = self.get_similar_question_sql(question, **kwargs)
    ddl_list = self.get_related_ddl(question, **kwargs)
    doc_list = self.get_related_documentation(question, **kwargs)

    prompt = self.get_sql_prompt(
        initial_prompt=initial_prompt,
        question=question,
        question_sql_list=question_sql_list,
        ddl_list=ddl_list,
        doc_list=doc_list,
        conceptual=conceptual,
        **kwargs,
    )
    self.log(title="SQL Prompt", message=prompt)
    llm_response = self.submit_prompt(prompt, temperature, **kwargs)
    self.log(title="LLM Response", message=llm_response)

    if 'intermediate_sql' in llm_response:
        if not allow_llm_to_see_data:
            return "The LLM is not allowed to see the data in your database. Your question requires database introspection to generate the necessary SQL. Please set allow_llm_to_see_data=True to enable this."

        if allow_llm_to_see_data:
            intermediate_sql = self.extract_sql(llm_response, conceptual)

            try:
                self.log(title="Running Intermediate SQL", message=intermediate_sql)
                df = self.run_sql(intermediate_sql, conceptual)

                prompt = self.get_sql_prompt(
                    initial_prompt=initial_prompt,
                    question=question,
                    question_sql_list=question_sql_list,
                    ddl_list=ddl_list,
                    doc_list=doc_list + [f"The following is a pandas DataFrame with the results of the intermediate SQL query {intermediate_sql}: \n" + df.to_markdown()],
                    **kwargs,
                )
                self.log(title="Final SQL Prompt", message=prompt)
                llm_response = self.submit_prompt(prompt, **kwargs)
                self.log(title="LLM Response", message=llm_response)
            except Exception as e:
                return f"Error running intermediate SQL: {e}"

    return self.extract_sql(llm_response, conceptual)
```

Even with a low temperature, the LLM would still sometimes generate hallucinations. To further prevent them, we added a check before returning the SQL. We created a helper function, `clean_sql_by_schema`, which takes the generated SQL and finds any columns that do not exist. It removes that SQL and returns a cleaned version with no hallucinations. For cases where the SQL is something like "SELECT cw.id FROM omop.conversion_issues", it uses `extract_alias_mapping` to map `cw` to the conversion_issues table. Here are those functions for reference:

```python
def extract_alias_mapping(self, sql: str) -> dict[str, str]:
    """
    Parse the FROM and JOIN clauses to build alias → table_name.
    """
    alias_map = {}
    # pattern matches FROM schema.table alias or FROM table alias
    for keyword in ("FROM", "JOIN"):
        for tbl, alias in re.findall(
            rf'{keyword}\s+([\w\.]+)\s+(\w+)',
            sql,
            flags=re.IGNORECASE
        ):
            # strip schema if present:
            table_name = tbl.split('.')[-1]
            alias_map[alias] = table_name
    return alias_map

def clean_sql_by_schema(self,
                        sql: str,
                        schema_dict: dict[str, list[str]]
                        ) -> str:
    """
    Returns a new SQL where each SELECT-line is kept only if its
    alias.column is in the allowed columns for that table.
    schema_dict: { 'conversion_warnings': [...], 'conversion_issues': [...] }
    """
    alias_to_table = self.extract_alias_mapping(sql)

    lines = sql.splitlines()
    cleaned = []
    in_select = False

    for line in lines:
        stripped = line.strip()
        # detect start/end of the SELECT clause
        if stripped.upper().startswith("SELECT"):
            in_select = True
            cleaned.append(line)
            continue
        if in_select and re.match(r'FROM\b', stripped, re.IGNORECASE):
            in_select = False
            cleaned.append(line)
            continue

        if in_select:
            # try to find alias.column in this line
            m = re.search(r'(\w+)\.(\w+)', stripped)
            if m:
                alias, col = m.group(1), m.group(2)
                table = alias_to_table.get(alias)
                if table and col in schema_dict.get(table, []):
                    cleaned.append(line)
                else:
                    # drop this line entirely
                    continue
            else:
                # no alias.column here (maybe a comma, empty line, etc)
                cleaned.append(line)
        else:
            cleaned.append(line)

    # re-join and clean up any dangling commas before FROM
    out = "\n".join(cleaned)
    out = re.sub(r",\s*\n\s*FROM", "\nFROM", out, flags=re.IGNORECASE)
    self.log("RESULTING SQL:" + out)
    return out
```

### STEP 5: Initialize the Flask App

Now we are ready to bring Barricade to life! The code below spins up a Flask app, which lets you communicate with the AI agent, Barricade. As you can see, we specified our own index_html_path, subtitle, title, and more. This is all optional. These arguments are defined here: [web customizations](https://vanna.ai/docs/web-app/)

```python
from vanna.flask import VannaFlaskApp

app = VannaFlaskApp(
    vn,
    chart=True,
    sql=False,
    allow_llm_to_see_data=True,
    ask_results_correct=False,
    title="InterSystems Barricade",
    subtitle="Your AI-powered Transformation Cop for InterSystems OMOP",
    index_html_path=current_dir + "/static/index.html"
)
```

Once your app is running, you will see Barricade.

## RESULTS/DEMO

Barricade can help you gain a deeper understanding of OMOP and FHIR, and it can also help you debug transformation issues that you run into when transforming your FHIR data to OMOP. To showcase Barricade's ability, I will show a real-life example. A few months ago, we got an iService ticket with a problem description. To test Barricade, I copied this description into Barricade. First, Barricade gave me a table documenting the issues. Then, Barricade gave me a graph to visualize the issues. And, most importantly, Barricade gave me a description of the exact issue that was causing problems AND told me how to fix it.

### READY 2025 demo Infographic

Here is a link to download the handout from our READY 2025 demo: Download the handout.
Announcement
Anastasia Dyubaylo · Jun 25

[Video] Using Character Slice Index in InterSystems IRIS

Hi Community, Enjoy the new video on InterSystems Developers YouTube from our Tech Video Challenge: ⏯ Using Character Slice Index in InterSystems IRIS Learn about the Character Slice Index, first introduced back in June 2008. Inspired by bit-slice indexing for numerical data, this approach offers a unique way to index and search for individual characters within texts—ideal for languages like Japanese and Chinese, where single characters carry substantial meaning. In this presentation, you'll learn how character slicing works, leveraging a specialized data type that defines language-specific word patterns to enhance text indexing performance. We’ll run a live demo using SQL on a test dataset, comparing the performance and efficiency of character indexing versus traditional approaches. 🗣 Presenter: @Robert.Cemper1003 Stay ahead of the curve! Watch the video and subscribe to keep learning! 👍
Announcement
Celeste Canzano · May 13

InterSystems IRIS Development Professional Exam is now LIVE!

Hello community, The Certification Team of InterSystems Learning Services is excited to announce the release of our new InterSystems IRIS Development Professional exam. It is now available for purchase and scheduling in the InterSystems exam catalog. Potential candidates can review the exam topics and the practice questions to help orient them to exam question approaches and content. Candidates who successfully pass the exam will receive a digital certification badge that can be shared on social media accounts like LinkedIn. If you are new to InterSystems Certification, please review our program pages, which include information on taking exams, exam policies, FAQs, and more. If you have ideas about creating new certifications that can help you advance your career, the Certification Team of InterSystems Learning Services is always open to ideas and suggestions. Please contact us at certification@intersystems.com if you would like to share any ideas. Looking forward to celebrating your success, Celeste Canzano - Certification Operations Specialist, InterSystems

It's a great exam. I'm not just saying that because I helped develop it; it has some great questions and topics on it that really make you think. I can't say I enjoyed the process of taking my last InterSystems exam (not this one), but I certainly enjoyed the feeling after I passed it. :) It's great to get that validation that you do actually know what you're talking about!
Announcement
Anastasia Dyubaylo · Jun 30

[Webinar] The Future of Healthcare Integration with Health Connect & InterSystems

Hey Community, We're excited to invite you to the next InterSystems UKI Tech Talk webinar: 👉 The Future of Healthcare Integration with Health Connect & InterSystems ⏱ Date & Time: Thursday, July 3, 2025, 10:30-11:30 UK Speakers: 👨‍🏫 @Mark.Massias, Senior Sales Engineer, InterSystems 👨‍🏫 Mike Fitzgerald, Head of Customer Solutions, ReStart

As the NHS continues to advance its digital transformation agenda, seamless interoperability has become a crucial priority for healthcare IT professionals. InterSystems Health Connect is leading the way as a superior integration engine, offering enhanced scalability, security, and performance to support evolving data exchange needs. This exclusive webinar will explore how you can overcome interoperability challenges, streamline system migrations, and benefit from real-world success stories—ensuring you stay ahead of NHS standards for connected care. Join us for an insightful session where InterSystems and our highly experienced implementation partner ReStart will showcase the benefits of Health Connect, providing actionable strategies for seamless integration and transitioning from legacy solutions. Discover how Health Connect supports NHS standards out of the box and enhances connectivity across trust and regional healthcare networks. This is your opportunity to future-proof your healthcare IT strategy and drive smarter, more efficient data integration. >> REGISTER HERE <<

When I looked yesterday there was a "Learning Services, Dev Community and Partner Portal Overview" listed for 24th July. When I went to register for both the future of healthcare integration webinar and the Learning Services / Dev Community one, the latter had disappeared. Any plans to reschedule?
Discussion
Admin GlobalMasters · 5 hr ago

InterSystems READY 2025 Memes You've Sent!

You’ve been dropping memes into our inbox — here's our top 20! Get ready to laugh, nod in agreement, and maybe spot the one you created! 👀 Let us know in the comments which meme is your favorite!

1. Author: Robert Cemper
2. Author: Aya Heshmat
3. Author: Matthew Thornhill
4. Author: Henry Ames
5. Author: Ben Spead
6. Author: Jonathan Zhou
7. Author: Alessandra Carena
8. Author: Haddon Baker
9. Author: Liam Evans
10. Author: Macey Minor
11. Author: Marco Di Giglio
12. Author: FRANCISCO LOPEZ
13. Author: Pietro Montorfano
14. Author: David Cho
15. Author: Henry Ames
16. Author: Andre Larsen Barbosa
17. Author: Liam Evans
18. Author: Mindy Caldwell
19. Author: Dinesh
20. Author: Mathieu Delagnes
Announcement
Jonathan Gern · Jun 16, 2020

Intersystems Solutions Engineer- Full Time

My organization is looking for a full time Intersystems Solutions Engineer to join the team. Based in NY, the Solutions Engineer will design technical architectures to support efforts across the health system focused on clinical data exchange with other provider organizations and payors.
Question
Robert Bee · Feb 13, 2019

Intersystems Cache and MS Access Passthrough

Edit: May have found the issue but not the solution. "SELECT * FROM wmhISTORYdETAIL" runs as a passthrough without asking for the DSN, but 'SELECT Count([wmhISTORYdETAIL].[HistHMNumber] AS CountOfHistHMNumber FROM [wmhISTORYdETAIL] WHERE ((([wmhISTORYdETAIL].[HistMovType])='Receipt') AND (([wmhISTORYdETAIL].[HistMovDate])>=Date()-1) AND (([wmhISTORYdETAIL].[HistMovDate])<Date()));' asks for the DSN, even though both are linked to a table that has the password saved. Any ideas please? Rob

Hi, I have created an MS Access database with a passthrough query to our InterSystems Cache WMS system. If I use "SELECT * from thetable" as the passthrough query, I can use VB.NET to query the passthrough and it works fine, but this dataset is getting rather large, so I changed it to "Select field1, field2, field3 from thetable", and the passthrough no longer works as it did: it works in MS Access but not from the VB.NET app. The VB.NET query is:

SELECT Count([xxx].[HistHMNumber] AS CountOfHistHMNumber FROM [xxx] WHERE ((([xxx].[HistMovType])='Receipt') AND (([xxx].[HistMovDate])>=Date()-1) AND (([xxx].[HistMovDate])<Date()));

where [xxx] is the passthrough query. But now I get an ODBC error in the VB.NET app: "System.Data.OleDb.OleDbException: 'ODBC--call failed.'" The error/issue appears to be in the SQL, but if I lift it and paste it into the MS Access database, it works?!?! Any help would be appreciated. Many thanks, Rob

Hello Robert, Did you resolve this or log it with our helpdesk? Regards, David Underhill @ Chess
Question
Kurt Hofman · Jul 3, 2019

Using Intersystems Caché as a LDAP server

We would like to use our Caché server as the source for our PABX address book. The PABX only supports LDAP. Is it possible to use our Caché instance as an LDAP server? Regards, Kurt Hofman.

The native Caché LDAP support is only for an LDAP client.
Announcement
Neerav Verma · Mar 15, 2019

Intersystems Technologies : Connect with fellow individuals

Hello All, I have been associated with InterSystems technologies for over a decade, working on Cache, Zen, Ensemble, etc. This is a very niche field and a lovely community. I wanted to extend my hand to connect with people who are in the same field or related to it. Here is my LinkedIn profile. Please feel free to send me an invite or drop me a message: https://www.linkedin.com/in/vneerav/

Hi Neerav! Perhaps we just need to add a LinkedIn field to a member's profile? Would it help? What do you think?

Yes. Definitely.

Issue is filed.
Announcement
Neerav Verma · Jan 27, 2021

Certified Intersystems Professional Available on Contract

Hello fellow community members, I would like to offer my services as an InterSystems Professional and am available to work on projects. I have more than a decade of experience with the InterSystems stack of technologies, including IRIS, Ensemble, HealthShare, HealthConnect, Cache ObjectScript, Mumps, Zen, Analytics, etc., with companies spread over the US and UK and involved in multiple domains.

KEY SKILLS:
- Cloud Computing (AWS, MS Azure, GCP)
- InterSystems Technology Stack (IRIS, Ensemble, HealthShare, Cache, Mumps, CSP, ZEN, Analytics)
- Databases (Modelling & Backend database design, SQL, PL/SQL)
- SOAP & RESTful APIs
- Analytics & Dashboards
- Healthcare Interoperability Standards (HL7, FHIR, EDI X12)
- Notations (XML, JSON)
- Agile Frameworks & Tools (Scrum, Kanban, JIRA, Confluence)
- Docker | Linux

Recent Certifications:
- InterSystems IRIS Core Solutions Developer Specialist
- InterSystems Health Connect HL7 Interface Specialist
- Microsoft Azure Solutions Architect Expert
- Certified Scrum Master

I am keen and open to work on exciting projects which are not only focused on the InterSystems stack; projects using cloud and having AI/ML functionality would be wonderful. My ideal role would be a position where I am able to make a strong impact on the project.

Current availability: 20 hours a week.
Location: London, UK

Please feel free to drop me a line and say hello: nv@nv-enterprises.biz / https://www.linkedin.com/in/vneerav/

Regards, Neerav Verma
Question
Padmini D · Nov 6, 2020

Unable to find Intersystems Cache` software

Hi All, I am new to InterSystems Cache and want to explore the database features for one of the use cases we have. I am trying to find the community version of it at https://download.InterSystems.com but only found InterSystems IRIS and InterSystems IRIS for Health community versions. Please help me to download and install this. Regards, Sireesha

You can download these versions. Community Edition just has a few limitations, but it can still be used. And look at the installation guides.

Hello Dimitry/Team, can you please let me know the difference between InterSystems Cache DB and InterSystems IRIS? We are evaluating it in our POC to implement as an application solution.

Hi Dmitriy/Team, what is the difference between InterSystems Cache DB and InterSystems IRIS? I am looking for Cache DB installation details, but am finding IRIS only everywhere. Thanks, Kranthi.

IRIS is a kind of replacement for Caché, which is no longer under active development. So, while you are evaluating it, you should not look for Caché, and should switch to IRIS. Very generally speaking: there is nothing in Caché that you can't do with IRIS. The only thing you might miss eventually are some ancient compatibility hooks back to the previous millennium. https://cedocs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=GCI_windows
Question
Abdul-Rashid Yakubu · Mar 22, 2022

Finding the median in intersystems cache SQL

Hi, Is there a way to find the median in InterSystems Cache SQL? I know it is not available as an aggregate function. Also, in SQL Server I could try something like:

SELECT (
  (SELECT MAX(Score) FROM (SELECT TOP 50 PERCENT Score FROM Posts ORDER BY Score) AS BottomHalf)
  +
  (SELECT MIN(Score) FROM (SELECT TOP 50 PERCENT Score FROM Posts ORDER BY Score DESC) AS TopHalf)
) / 2 AS Median

However, there is no PERCENT keyword in Cache either. Any suggestions? Thanks

See Median in SQL

As of IRIS 2021.1, we allow users to create their own aggregate functions. Perhaps there's a beautiful community contribution in there? :-) You could build something simple where you just stuff all values in a temporary global (the name of which you pass as a state) and sort them (using $sortbegin/$sortend), maintaining a count, and then in the FINALIZE method gather the actual median value. Two caveats:

- Don't bother implementing a MERGE function. We don't support parallel execution just yet.
- In some query execution plans, the FINALIZE method may be called more than once (e.g. if the aggregate is used in the SELECT list and, say, a HAVING clause). So you may want to cache your result somewhere (a PPG will do, as this is in the last single-process mile of query processing, typically mere milliseconds apart).

We'll be removing these annoyances in a future version.

SELECT TOP 1 AVG(main.age) AS _Average, min(main.age) AS _Min,
  CASE WHEN %vid = count(main.age)/2 THEN main.age else 0 END
    + MAX(CASE WHEN %vid = count(main.age)/2 THEN main.age else 0 END) AS _Median,
  max(main.age) AS _Max
FROM (
  SELECT TOP all a.Age FROM Sample.Person a ORDER BY a.Age
) main

Thanks Randy!
Announcement
Todd Patterson · Sep 24, 2021

Looking for an accomplished Intersystems Software Developer

Intersystems Software Developer – Grand Traverse Plastics Corp.

Location: Williamsburg, MI
Note: This position is an ‘on site’ position.

We are looking for an accomplished InterSystems developer to join our team. Grand Traverse Plastics is a fast growing and leading edge plastics injection molder. With 145 employees and 35 million in annual sales we offer an excellent place to work in one of the nicest areas in the Midwest. The candidate will assist in the development of our custom ERP system running on Cache. The software has continually evolved over 20+ years and is involved in every aspect of our business. We are looking for a candidate that can leverage their skill to help us interface with things from best of breed accounting packages, BI systems and IoT type devices on our plant floors.

Qualifications:
- 5+ years with Intersystems Cache/Iris
- Cache Object Script as well as Object oriented class development
- In depth knowledge of data storage and design with Globals and Classes
- SQL, Angular, Java, Python experience helpful
- 2+ years with Ensemble Interoperability with various connectors and protocols (APIs, REST, SOAP, XML, EDI)
- Linux knowledge a plus
- Experience with any of the following: Deep See, Tableau, Power BI, Crystal Reports, Adaptive Analytics
- Creative and Innovative mindset
- Strong verbal, written and inter-personal skills

Salary: 70k to 90k depending on skill level

If interested, please forward your resume to tpatterson@grand-t.com. Learn more about our exciting company here: https://www.gtpplastics.com
Question
Andy Stobirski · Dec 13, 2021

Log4Shell Apache exploit / Intersystems products

Hi everyone, I see that a new Apache bug has been discovered, and since various InterSystems products use an Apache webserver, have InterSystems released any news or updates on this? I'm not seeing any updates or press releases from them. Anyone know anything? Andy

The Apache HTTP Server is not written in Java (see this StackExchange post). The security exploit refers to a very popular Java logging implementation, log4j. Log4j is published under the Apache Foundation's name, but is not to be confused with the Apache HTTP server (also called httpd occasionally). That said, you might want to check if you are using any Java libraries in your InterSystems products via the Java gateway - and if they are bundled with log4j for logging. Also check if you have log4j directly in your Java classpath. What you are looking for is the log4j.jar. If you want to check a library, you can download the jar of the library and open it with 7zip or similar tools, then take a look and check if it contains log4j.jar. If it does, you should get in touch with the creator of the library. Disclaimer: I am not part of InterSystems, this is of course not an official statement. I am just a Java developer who had to deal with this a bit today!

We got an answer from ISC: ====IRIS and Cache do use log4j but our products do not include versions affected by this vulnerability. This vulnerability affects versions from 2.0-beta9 to 2.14.1. The log4j versions used in Cache and IRIS product are based on version 1.x of log4j which is not affected by this issue.==== But of course one can use Log4j 2.* in your own Java applications. You can also open your log4j.jar as you would a zip file, go to the META-INF folder, open MANIFEST.MF, and look for "Implementation-Version" to see which version of log4j it is.

I'm surprised you got an answer, as I was unable to get one over the weekend until ISC makes any official statement. However, re: the 1.x comment: 2031667 – (CVE-2021-4104) CVE-2021-4104 log4j: Remote code execution in Log4j 1.x when application is configured to use JMSAppender (redhat.com)

The only usage of log4j I could find within an ISC platform was on Clinical Viewer. Curious if you could share where it is otherwise seen as being used? Maybe compiled into one of their own libraries and not directly exposed, however.

Please see the following page for official InterSystems guidance! https://community.intersystems.com/post/december-13-2021-advisory-vulnerability-apache-log4j2-library-affecting-intersystems-products

That's interesting! @Dmitry.Maslennikov posted a quick grep on the community discord and found a few occurrences in the machine learning and fop parts. So I guess these parts are those that might potentially be affected - but actually not, since they are still log4j v1!
I'll just repost @Dmitry.Maslennikov's grep from the community discord here, which might give you a hint where to look until ISC updates the official statement:

```
$ grep -ir log4j /usr/irissys/
/usr/irissys/lib/RenderServer/runwithfop.bat:rem set LOGCHOICE=-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger
Binary file /usr/irissys/dev/java/lib/h2o/h2o-core-3.26.0.jar matches
Binary file /usr/irissys/dev/java/lib/uima/uimaj-core-2.10.3.jar matches
Binary file /usr/irissys/dev/java/lib/1.8/intersystems-integratedml-1.0.0.jar matches
Binary file /usr/irissys/dev/java/lib/1.8/intersystems-cloudclient-1.0.0.jar matches
Binary file /usr/irissys/dev/java/lib/1.8/intersystems-cloud-manager-1.2.12.jar matches
Binary file /usr/irissys/dev/java/lib/datarobot/datarobot-ai-java-2.0.8.jar matches
/usr/irissys/fop/fop:# LOGCHOICE=-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger
/usr/irissys/fop/fop.bat:rem set LOGCHOICE=-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger
Binary file /usr/irissys/fop/lib/commons-logging-1.0.4.jar matches
Binary file /usr/irissys/fop/lib/avalon-framework-impl-4.3.1.jar matches
/usr/irissys/fop/lib/README.txt: (Logging adapter for various logging backends like JDK 1.4 logging or Log4J)
Binary file /usr/irissys/fop/lib/pdfbox-app-2.0.21.jar matches
```
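If you want to run a similar check against your own installation, here is a small, hypothetical Python sketch (not from this thread) that applies the advice above: it walks a directory for .jar files, looks for bundled log4j entries, and reads META-INF/MANIFEST.MF for an Implementation-Version. The /usr/irissys default path simply mirrors the grep above; adjust it to your environment.

```python
import sys
import zipfile
from pathlib import Path

def scan_jar(jar_path):
    """Return (log4j_entries, manifest_version) if the jar mentions log4j, else None."""
    with zipfile.ZipFile(jar_path) as jar:
        names = jar.namelist()
        hits = [n for n in names if "log4j" in n.lower()]
        if not hits:
            return None
        version = "unknown"
        # Note: the manifest version identifies the jar itself; it is only the
        # log4j version when the jar *is* log4j (as suggested in the thread).
        if "META-INF/MANIFEST.MF" in names:
            manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8", errors="replace")
            for line in manifest.splitlines():
                if line.startswith("Implementation-Version:"):
                    version = line.split(":", 1)[1].strip()
        return hits, version

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else "/usr/irissys")
    for jar in sorted(root.rglob("*.jar")):
        result = scan_jar(jar)
        if result:
            hits, version = result
            print(f"{jar}: {len(hits)} log4j-related entries, manifest Implementation-Version={version}")
```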