Article
Brad Nissenbaum · Jul 13
**☤ Care 🩺 Compass 🧭 - Proof-of-Concept - Demo Games Contest Entry**
# Introducing Care Compass: AI-Powered Case Prioritization for Human Services
In today’s healthcare and social services landscape, caseworkers face overwhelming challenges. High caseloads, fragmented systems, and disconnected data often lead to missed opportunities to intervene early and effectively. This results in worker burnout and preventable emergency room visits, which are both costly and avoidable.
Care Compass was created to change that.
*Disclaimer: Care Compass project is a technical demonstration developed by sales engineers and solution engineers. It is intended for educational and prototyping purposes only. We are not medical professionals, and no part of this project should be interpreted as clinical advice or used for real-world patient care without appropriate validation and consultation with qualified healthcare experts.*
## The Problem
Twelve percent of Medicaid beneficiaries account for 38 percent of all Medicaid emergency department (ED) visits. These visits are often driven by unmet needs related to housing instability, mental illness, and substance use. Traditional case management tools rarely account for these upstream risk factors, making it difficult for caseworkers to identify who needs help most urgently.
This data comes from a 2013 study published in *The American Journal of Emergency Medicine*, which highlights how a small portion of the Medicaid population disproportionately contributes to system-wide costs ([Capp et al., 2013](https://doi.org/10.1016/j.ajem.2013.05.050), [PMID: 23850143](https://pubmed.ncbi.nlm.nih.gov/23850143)).
Too often, decisions are reactive and based on incomplete information.
## Our Solution
Care Compass is an AI-powered assistant that helps caseworkers make better decisions based on a complete picture of a client’s medical and social needs. It combines Retrieval-Augmented Generation (RAG) and large language models to interpret data and generate actionable insights.
The assistant assesses real-time information, summarizes key risk factors, calculates dynamic risk scores, and recommends possible next steps and resources. Instead of combing through disconnected records, caseworkers get a unified view of their caseload, prioritized by urgency and context.
## How It Works
The platform integrates a large language model, real-time data retrieval, and custom reasoning logic. Information from structured and unstructured sources is synthesized into readable summaries that explain not only the level of risk, but why a client is considered high-risk.
An intuitive user interface makes it easy for caseworkers to interact with the assistant, review insights, and take appropriate action. The emphasis is on transparency and trust. The system doesn’t just score risk; it explains its reasoning in plain language.
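To make the "score plus plain-language reasoning" idea concrete, here is a purely hypothetical sketch in TypeScript. The factor names, weights, and scoring formula are all invented for illustration; Care Compass's actual logic is not described in this article.

```typescript
// Hypothetical illustration only: factor names, weights, and formula are invented.
interface RiskFactor {
  name: string      // e.g. "housing instability"
  present: boolean  // did the retrieved data surface evidence of this factor?
  weight: number    // relative contribution to overall risk
}

// Combine weighted factors into a 0-100 score plus a plain-language rationale.
function scoreClient(factors: RiskFactor[]): { score: number; rationale: string } {
  const active = factors.filter(f => f.present)
  const total = factors.reduce((sum, f) => sum + f.weight, 0)
  const raw = active.reduce((sum, f) => sum + f.weight, 0)
  const score = total === 0 ? 0 : Math.round((raw / total) * 100)
  const rationale = active.length > 0
    ? `Risk factors present: ${active.map(f => f.name).join(", ")}.`
    : "No tracked risk factors detected."
  return { score, rationale }
}

// Example with invented data:
console.log(scoreClient([
  { name: "housing instability", present: true, weight: 0.4 },
  { name: "recent ED visit", present: true, weight: 0.35 },
  { name: "medication adherence gap", present: false, weight: 0.25 },
]))
```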
## Lessons Learned
Building Care Compass has taught us that raw model accuracy is only part of the equation. We’ve learned that:
- Small datasets limit the effectiveness of retrieval-based methods
- Structured data is often inconsistent or incomplete
- Fine-tuning models does not always improve performance
- Interpretability is essential—especially for systems that guide care decisions
- HIPAA compliance and data privacy must be built into the system from the beginning
## Looking Ahead
Our next steps include expanding our dataset with more diverse and representative cases, experimenting with different embedding models, and incorporating evaluation metrics that reflect how useful and understandable the assistant’s outputs are in practice.
We’re also exploring how to better communicate uncertainty and strengthen the ethical foundations of the system, especially when working with vulnerable populations.
Care Compass is our response to a widespread need in health and human services: to prioritize what matters, before it becomes a crisis. It empowers caseworkers with the clarity and tools they need to act earlier, intervene more effectively, and deliver more equitable outcomes.
**To see more about how we implemented the solution, please watch our YouTube video:**
https://youtu.be/hjCKJxhckbs

This is an impressive proof-of-concept! Great work by the team! 👏

Happy to have worked with Brad Nissenbaum, Fan Ji, Lynn Wu, and Andrew Wardly. Please let us know if you have any questions or feedback!
Article
Dmitry Maslennikov · Jul 28
Overview
The typeorm-iris project provides experimental support for integrating TypeORM with InterSystems IRIS, enabling developers to interact with IRIS using TypeORM’s well-known decorators and repository abstractions. This allows a more familiar development experience for JavaScript and TypeScript developers building Node.js applications with IRIS as the backend database.
While the project implements key integration points with TypeORM and supports basic entity operations, it’s not yet battle-tested or suitable for production environments.
Why typeorm-iris?
The official InterSystems IRIS Node.js driver does not provide native SQL query execution in the way that other database drivers (e.g., for PostgreSQL or MySQL) do. Instead, you must use an ObjectScript-based API (e.g., %SQL.Statement) to prepare and execute SQL commands.
This becomes problematic when building modern applications that rely on Object-Relational Mapping (ORM) tools like TypeORM. TypeORM expects a lower-level driver capable of preparing and executing raw SQL in a single connection session, which is not currently available with IRIS’s JavaScript tooling.
To overcome these limitations, typeorm-iris implements the necessary pieces to bridge IRIS and TypeORM, using the available ObjectScript SQL execution interfaces under the hood.
Early Stage & Known Issues
This project is in its initial phase and has only been tested with a limited number of cases. Expect instability, missing features, and breaking changes in future iterations.
Notable limitations observed during development include:
1. Excessive Network Roundtrips
Executing SQL through the %SQL.Statement class from JavaScript involves multiple network messages between the Node.js process and the IRIS server. For example, a single logical SQL operation may require multiple steps like:
- Preparing the statement
- Executing the query
- Fetching metadata
- Fetching rows individually
Each of these can result in separate messages over the network, resulting in significantly more overhead compared to using a native SQL driver.
2. No True Async/Parallel Support
The official IRIS Node.js driver does not support asynchronous usage in a multithreaded or worker-based context:
- Reconnecting in the same process often fails or causes unpredictable behavior.
- Spawning worker threads and using the driver inside them leads to issues.
- Only one connection per process works reliably.
These constraints make it unsuitable for modern concurrent Node.js applications. In practice, this limits how well the driver can scale with concurrent workloads, and it significantly restricts architectural choices.
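One common way applications cope with a single usable connection is to serialize all access to it behind a queue. The sketch below is a generic promise-chain pattern, not part of typeorm-iris; the connection object and its query method are stand-ins for whatever single-connection API you are wrapping.

```typescript
// Generic single-connection serializer: work is chained onto one promise so
// only one operation touches the underlying connection at a time.
class ConnectionQueue<Conn> {
  private tail: Promise<unknown> = Promise.resolve()

  constructor(private readonly conn: Conn) {}

  run<T>(work: (conn: Conn) => Promise<T>): Promise<T> {
    const next = this.tail.then(() => work(this.conn))
    // Keep the chain alive even if one task rejects.
    this.tail = next.catch(() => undefined)
    return next
  }
}

// Usage with a stand-in connection object (the real IRIS driver API differs):
const queue = new ConnectionQueue({ query: async (sql: string) => [] as unknown[] })
queue.run(conn => conn.query("SELECT 1")).then(rows => console.log(rows))
```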
Usage Guide
Because it uses the latest IRIS SQL features, typeorm-iris requires IRIS 2025.1+ to work.
You can install typeorm-iris via npm:
```bash
npm install typeorm-iris
```
Because this driver is not officially supported by TypeORM, using it requires a workaround for setting up the DataSource. You cannot directly use new DataSource() or createConnection() as you would with official drivers.
Custom DataSource Setup
```typescript
import { IRISDataSource, IRISConnectionOptions } from "typeorm-iris"

const dataSourceOptions: IRISConnectionOptions = {
  name: "iris",
  type: "iris",
  host: "localhost",
  port: 1972,
  username: "_SYSTEM",
  password: "SYS",
  namespace: "USER",
  logging: true,
  dropSchema: true,
}

export function createDataSource(options: any): IRISDataSource {
  // @ts-ignore
  const dataSource = new IRISDataSource({ ...dataSourceOptions, ...options })
  return dataSource
}
```
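The instance then has to be initialized before use. A minimal sketch, assuming IRISDataSource follows TypeORM's standard initialize()/destroy() lifecycle:

```typescript
const dataSource = createDataSource({})

await dataSource.initialize()
try {
  // ...work with repositories here...
} finally {
  await dataSource.destroy()
}
```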
Once initialized, you can use TypeORM decorators as usual:
```typescript
import { Entity, PrimaryGeneratedColumn, Column } from "typeorm"

@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id: number

  @Column()
  name: string

  @Column()
  email: string
}
```
Using repositories works similarly:
```typescript
const userRepository = dataSource.getRepository(User)

const newUser = userRepository.create({ name: "Alice", email: "alice@example.com" })
await userRepository.save(newUser)
```
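Reading data back uses the same repository API. A short sketch, assuming TypeORM's standard find methods behave as usual through this driver:

```typescript
const allUsers = await userRepository.find()
const alice = await userRepository.findOneBy({ email: "alice@example.com" })
console.log(allUsers.length, alice?.name)
```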
Sample Projects
The GitHub repository includes a sample/ folder with several fully working examples:
- sample1-simple-entity
- sample2-one-to-one
- sample3-many-to-one
- sample4-many-to-many
- sample16-indexes
These cover basic persistence, relationships, and schema features, offering practical usage demonstrations.
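As a flavor of what the relationship samples cover, here is a minimal one-to-many/many-to-one pairing in standard TypeORM decorators; the entity names are invented for illustration and are not taken from the samples:

```typescript
import { Entity, PrimaryGeneratedColumn, Column, ManyToOne, OneToMany } from "typeorm"

@Entity()
export class Author {
  @PrimaryGeneratedColumn()
  id: number

  @Column()
  name: string

  @OneToMany(() => Post, post => post.author)
  posts: Post[]
}

@Entity()
export class Post {
  @PrimaryGeneratedColumn()
  id: number

  @Column()
  title: string

  @ManyToOne(() => Author, author => author.posts)
  author: Author
}
```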
Unit Tests
Initial testing includes the following use cases:
- Entity Model
  - should save successfully and use static methods successfully
  - should reload given entity successfully
  - should reload exactly the same entity
  - should upsert successfully
- Entity Schema > Indices
  - basic
- Persistence
  - basic functionality
  - entity updation
  - insert > update-relation-columns-after-insertion
  - many-to-many
  - one-to-one
These tests are limited in scope and more coverage will be added as the project matures.
Supported Features
- Entity decorators: @Entity(), @Column(), @PrimaryGeneratedColumn()
- Repositories: create, save, find, delete, etc.
- Schema drop and sync (experimental)
- Partial support for relations and custom queries
Again, these features are early-stage and may not cover the full range of TypeORM’s capabilities.
Real-World Constraints
InterSystems IRIS Node.js Driver Limitations
- Only one usable connection per process
- No proper support for parallel operations or threading
- Lack of native SQL API support (via SQL protocol)
- Heavy reliance on message-based communication using a proprietary protocol
Until InterSystems updates the official driver with support for proper SQL execution and concurrent operations, this project will be fundamentally limited in terms of performance and scalability.
Feedback & Contribution
As this is an experimental driver, your feedback is crucial. Whether you're trying it out for a small side project or evaluating it for broader use, please share issues and suggestions on GitHub:
➡️ github.com/caretdev/typeorm-iris/issues
Pull requests, test cases, and documentation improvements are welcome.
What's Next
Planned future improvements include:
- Expanding test coverage for real-world queries and schema designs
- Handling more TypeORM query builder features
- Investigating batching optimizations
- Improving schema introspection for migrations
Conclusion
typeorm-iris brings much-needed TypeORM support to InterSystems IRIS for Node.js developers. While it’s not production-ready today and inherits severe limitations from the current driver infrastructure, it provides a foundation for further experimentation and potentially wider adoption in the IRIS developer community.
If you're an IRIS developer looking to integrate with a modern Node.js backend using TypeORM, this is the starting point.
And if you found this useful, please vote for it in the InterSystems Developer Tools Contest!
Article
sween · Jul 5, 2022
How to include IRIS Data into your Google Big Query Data Warehouse and in your Data Studio data explorations. In this article we will be using Google Cloud Dataflow to connect to our InterSystems Cloud SQL Service and build a job to persist the results of an IRIS query in Big Query on an interval.
If you were lucky enough to get access to Cloud SQL at Global Summit 2022, as mentioned in "InterSystems IRIS: What's New, What's Next", it makes the example a snap, but you can pull this off with any publicly or VPC-accessible listener you have provisioned instead.
Prerequisites
Provision InterSystems Cloud SQL for temporary use
You may need to make some phone calls or request access through the portal, as I did, to take InterSystems Cloud SQL for a spin, but it is a very fast way to get up and running in seconds to carry out this demonstration or your IRIS workloads.

Inspecting your deployment, head over to the "External Connections" pane on the overview tab, build yourself a connection URL, and retain your credentials. We went wide open for public access (0.0.0.0/0) to the listener and chose not to encrypt it either. From the above, you'll need the following information:

- ConnectionURL: jdbc:IRIS://k8s-c5ce7068-a4244044-265532e16d-2be47d3d6962f6cc.elb.us-east-1.amazonaws.com:1972/USER
- User/Pass: SQLAdmin/Testing12!
- DriverClassName: com.intersystems.jdbc.IRISDriver
Setup Google Cloud Platform
Provision a GCP Project

```bash
gcloud projects create iris-2-datastudio --set-as-default
```
Enable Big Query, Dataflow, and Cloud Storage

```bash
gcloud services enable bigquery.googleapis.com
gcloud services enable dataflow.googleapis.com
gcloud services enable storage.googleapis.com
```
Create a Cloud Storage Bucket

```bash
gsutil mb gs://iris-2-datastudio
```
Upload the latest connection driver to the root of the bucket

```bash
wget https://github.com/intersystems-community/iris-driver-distribution/raw/main/intersystems-jdbc-3.3.0.jar
gsutil cp intersystems-jdbc-3.3.0.jar gs://iris-2-datastudio
```
Create a Big Query DataSet

```bash
bq --location=us mk \
  --dataset \
  --description "sqlaas to big query" \
  iris-2-datastudio:irisdata
```
Create a Big Query Destination Table
Now this is where a super-powerful advantage becomes somewhat of a nuisance for us. Big Query can create tables on the fly if you supply a schema along with your payload, which is great inside pipelines and solutions, but in our case we need to establish the table beforehand. The process is straightforward: export a CSV from the IRIS database (easily done with something like DBeaver), then invoke the "create table" dialog underneath the dataset you created and use the CSV to create your table. Make sure "auto generate schema" is checked at the bottom of the dialog. This completes the Google Cloud Platform setup, and we should be ready to configure and run our Dataflow job.
Google Dataflow Job

If you followed the steps above, you should have everything in your inventory to execute the job that reads your InterSystems IRIS data and ingests it into Google Big Query using Google Dataflow. In the Google Cloud Console, head over to Dataflow and select "Create Job from Template". This is a rather unnecessary/exhaustive illustration of how to fill out a form with the generated prerequisites, but it calls out the source of the components...
... to round it out, make sure you expand the bottom section and supply your credentials for IRIS.
For those who found those screenshots offensive to your intelligence, here is the alternate route, keeping you inside your CLI comfort zone, to run the job:
```bash
gcloud dataflow jobs run iris-2-bq-dataflow \
  --gcs-location gs://dataflow-templates-us-central1/latest/Jdbc_to_BigQuery \
  --region us-central1 --num-workers 2 \
  --staging-location gs://iris-2-datastudio/tmp \
  --parameters connectionURL=jdbc:IRIS://k8s-c5ce7068-a4244044-265532e16d-2be47d3d6962f6cc.elb.us-east-1.amazonaws.com:1972/USER,driverClassName=com.intersystems.jdbc.IRISDriver,query=SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE, SELF_REFERENCING_COLUMN_NAME, REFERENCE_GENERATION, USER_DEFINED_TYPE_CATALOG, USER_DEFINED_TYPE_SCHEMA, USER_DEFINED_TYPE_NAME, IS_INSERTABLE_INTO, IS_TYPED, CLASSNAME, DESCRIPTION, OWNER, IS_SHARDED FROM INFORMATION_SCHEMA.TABLES;,outputTable=iris-2-datastudio:irisdata.dataflowtable,driverJars=gs://iris-2-datastudio/intersystems-jdbc-3.3.0.jar,bigQueryLoadingTemporaryDirectory=gs://iris-2-datastudio/input,username=SQLAdmin,password=Testing12!
```
Once you have kicked off your job, you can bask in the glory of a successful job run:
Results

Taking a look at our source data and query in InterSystems Cloud SQL...
... and then Inspecting the results in Big Query, it appears we do in fact, have InterSystems IRIS Data in Big Query.
Once we have the data in Big Query, it is trivial to include our IRIS data into Data Studio by selecting Big Query as the data source... this example below is missing some flair, but you can quickly see the IRIS data ready for manipulation in your Data Studio project.
First InterSystems IRIS Cloud use case! Whoahoo! Thanks, Ron Sweeney!

Hi Ron, do you have an example of doing the reverse? That is, having IRIS extract data from Google Big Query? Thanks 💡

This article is considered an InterSystems Data Platform Best Practice.
Announcement
Alberto Fuentes · 4 hr ago
#InterSystems Demo Games entry
⏯️ MyVitals: Connecting Wearable Health Data to EHRs with InterSystems
MyVitals is a Personal Health Device Hub that bridges the gap between wearable health devices and clinical workflows. In this demo, we show how healthcare organizations can enroll patients, collect real-time data from personal devices via mobile apps, and seamlessly integrate that information into EHR systems using FHIR, HL7, and InterSystems technology. Built on IRIS for Health and FHIR repositories, MyVitals turns disconnected wearable data into actionable clinical insights—securely, efficiently, and at scale.
Presenters:
🗣 @Alberto.Fuentes, Sales Engineer, InterSystems
🗣 @Pierre-Yves.Duquesnoy, Senior Sales Engineer, InterSystems
🗣 @LuisAngel.PérezRamos, Sales Engineer, InterSystems
🗳 If you like this video, don't forget to vote for it in the Demo Games!
Article
Yuri Marx · Jun 10, 2020
If you need to write your organization's data architecture and map it to InterSystems IRIS, consider the following data architecture diagram and these references to the InterSystems IRIS documentation:
Architecture mapping:
SQL Database: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GSQL
Managed Files: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=AFL_mft and https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=SETEDIGuides
IoT Broker, Events and Sensors: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=EMQTT
Messages: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=EMQS
NoSQL: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GDOCDB
API and Web Services: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GREST, https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GSOAP, https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=AFL_iam and https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=PAGE_interoperability
ETL: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=SETAdapters, https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=EDTL and https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=EBPL
EAI Connectors: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=SETAdapters
XEP Events: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=BJAVXEP and https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=BNETXEP
Big Data Ingestion: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=BSPK
AI: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=PAGE_text_analytics, https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=APMML, https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=PAGE_python_native, https://www.intersystems.com/br/resources/detail/machine-learning-made-easy-intersystems-integratedml/
Processes: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=EBPL
Corporate Service: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=EESB and https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=AFL_iam
In memory: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GSCALE_ecp
Content: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GDOCDB
Textual: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=AFL_textanalytics
Protection: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=SETSecurity, https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=TSQS_Applications, https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCDI and https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCAS
Inventory: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GSA_using_portal and https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GOBJ_xdata
Privacy: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GCAS_encrypt
IT Lifecycle, Backup and Restore: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GSA_using_portal, https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GCDI_backup
Access Management: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=TSQS_Authentication, https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=TSQS_Authorization, https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=TSQS_Applications
Replication and HA: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=PAGE_high_availability
Monitoring: https://docs.intersystems.com/sam/csp/docbook/DocBook.UI.Page.cls?KEY=ASAM and https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=PAGE_monitoring
IT Operation: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=PAGE_platform_mgmt
Visualization: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=PAGE_bi
Great reference!

Fantastic Article, thanks @Yuri.Gomes!

This is really excellent! Where did you get the graphics? Can we use them?

Send an e-mail to yurimarx@gmail.com and I will reply with the ppt slide.

Could you explain how to understand the graphic?
Question
Amit Kumar Thakur · May 26, 2020
Requesting assistance on Intersystems Cache Managed Key Encryption.
We have configured the KMIP Server.
The KMIP server is an external HSM box. I was not able to find any info on key rotation or on which approach the encryption follows, i.e., 1-tier or 2-tier.
Can someone please assist with this?

Are you asking about the KMIP server and how it works? If so, I don't think this is the right place to ask, and I would recommend you talk to people who know more about your KMIP system. Or are you asking about how Caché handles key rotation for encrypted databases and/or managed key encryption? If so, this is mostly up to the user. There isn't automatic re-keying of databases on a schedule.

In most DBs, we rotate the master key from time to time for security reasons. In my case, I have created a KMIP server and encrypted the database. Now I have to rotate the master key, but I can't find any documentation on rotating it, only on activating, listing, and deleting it.
By master key, do you mean the database encryption key, i.e., the one Caché is using to encrypt the database? If so, you need to re-key the database manually if this is something you want to do. This option is available in the ^EncryptionKey utility in the %SYS namespace. (The older cvencrypt utility will also re-key, but it is slower and does not have KMIP functionality.) The InterSystems IRIS docs cover using ^EncryptionKey for re-keying here:
https://irisdocs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GCAS_cvencrypt
If you mean a different key than the database encryption one, can you explain which one? For example, you authenticate to the KMIP server using a public/private key pair. Is this the one you mean? Something else?
Announcement
Anastasia Dyubaylo · Aug 5, 2020
Hey Community,
We're pleased to invite you to the first webinar dedicated to our Global Masters Advocate Hub! Please join:
➡️ InterSystems Global Masters Advocate Hub Overview webinar ⬅️
Date & Time: Friday, August 14 – 10:00 EDT
Let's see the Global Masters Advocate Hub from the inside and talk about how you can benefit from participation in this program. We'll go through:
- Demo of the Hub
- Recognition for your contribution to the Developer Community and Open Exchange
- Levels and privileges
- Rewards! Current and... tsss... upcoming!
- Your questions and ideas
Also, a secret code will be announced for all webinar attendees for getting 250 GM points!
Speaker:🗣 @Olga.Zavrazhnova2637, InterSystems Customer Advocacy Manager
So!
We will be happy to talk to you at our webinar!
✅ REGISTER TODAY

We will start soon!
➡️ JOIN THE WEBINAR HERE

Hey Developers!
Now this webinar recording is available on InterSystems Developers YouTube Channel:
Enjoy watching this video! 👍🏼
Question
Alex Van Schoyck · Jan 31, 2019
Problem

I got a quick answer from this forum yesterday, so I'm going to try my luck again today! I've hit an error in another table when trying to extract through the Caché ODBC driver, but this one gives me fewer details, and I'm struggling to pinpoint what might be causing it. The table I am trying to extract is called REF_TABLE_ONE. Here's the error:
```
[Cache Error: <<NOLINE>%0AmBx1^%sqlcq.PRD.2249>]
[Location: <ServerLoop - Query Fetch>]
```
Research/Trial & Error
Based on my research, it looks like the error results from something missing in a routine. I have limited access to this system. I am able to open up the corresponding Class in Cache Management Studio and edit/compile the code there. However, I do not have access to the "Routes" section of the Management Web Portal.
When I run the SQL Query in the Management Web Portal, I get the same error listed above, and only the column headers show up, no rows are populated.
Based on my limited analysis of the Class file, I think the issue is somewhere inside the SQLMap. It looks like there is use of Subscripts that are pretty complex. I'm wondering if there's something wrong in there?
I have the whole SQLMap code copied out, but I didn't want to post it unless someone needs it, as it is a bit long and would take up a lot of room in the post. Here's a screenshot of the visual representation of the SQLMap in case that helps:
Question
Is there anything I can do with the class code in Studio to debug or get more info about this error?
Any and all help is greatly appreciated!
Additional Info
Cache Version: Cache for OpenVMS/IA64 V8.4 (Itanium) 2012.1.5 (Build 956 + Adhoc 12486) 17-APR-2013 19:49:58.07

Hey John, thank you for the reply, I appreciate the help! I found the query in the cache and purged it. Unfortunately, when I ran it again, I encountered the same error, and it recreated the query in the cache, but with today's date as the creation date, obviously. Any other ideas? Thanks again, I appreciate the help :-)

Please share the code from
%0AmBx1^%sqlcq.PRD.2249
To do that, open the %sqlcq.PRD.2249 routine (the .int routine; AFAIK the .mac routine wouldn't have this).
If you can't find the routine, check that the system saves sources for cached queries. If it doesn't, enable the setting to save routines, purge this query, and run it again. After that you should be able to see the source.
Also can you determine which part of SQL causes this error?
Perhaps there's a faulty cached query. The 2012.1 docs here describe how to use the Portal to see what's in the cache, and how to purge items from it.
Question
Scott Roth · Jan 7, 2019
Can someone tell me if intersystems-ru/deepsee-sysmon-dashboards is developed for a specific version of Ensemble? It looks like it could be useful to my group, but we aren't upgrading till later this year, and we are on 2015.2.2.
Thanks,
Scott

Hi Scott! First, thanks for mentioning Open Exchange, I appreciate it :) Here is the link to Sysmon Dashboards on OEX. Also, here is the article by @Semen.Makarov, which could help. The tool just visualizes the data you have in %SYSMONMGR; I believe the utility appeared in early versions of Caché. The visualization is better with DeepSee Web, which requires at least Caché 2014 for REST and JSON. I'll also ping @Daniel.Kutac, who initially introduced the tool, for more details.
Question
Kurt Hofman · May 13, 2019
Hello, I'm trying to get ODBC running on a Mac but I can't get it working. Does someone have an overview of how to install, configure, and use it in Excel on macOS? Regards, Kurt Hofman.

Thanks, but I can't find the correct way to install Caché ODBC. Do I have to extract it in a special folder and then follow the UNIX-like installation, or something else?

It actually does not matter where you install the drivers. My Caché runs in Docker, so I downloaded the ODBC drivers from the FTP, extracted them (just in Downloads), and ran ODBCInstall there from a terminal. With ODBCInstall I got the file mgr/cacheodbc.ini, which I copied to ~/Library/ODBC/odbc.ini, where it serves as a User DSN (/Library/ODBC/odbc.ini for a System DSN). DYLD_LIBRARY_PATH should point to the bin folder in the extracted ODBC drivers folder; in my case:
```bash
export DYLD_LIBRARY_PATH=/Users/daimor/Downloads/ODBC-2018.1.2.309.0-macx64/bin
```
You can check the connection with iODBC Manager, running it right from the terminal:
```bash
DYLD_LIBRARY_PATH=/Users/daimor/Downloads/ODBC-2018.1.2.309.0-macx64/bin open /Applications/iODBC/iODBC\ Administrator64.app
```
and open Excel the same way:
```bash
DYLD_LIBRARY_PATH=/Users/daimor/Downloads/ODBC-2018.1.2.309.0-macx64/bin open /Applications/Microsoft\ Excel.app
```

So, you'll be able to test settings without relogging in.

What do you have already? Did you configure the ODBC DSN and DYLD_LIBRARY_PATH? Some information about configuring ODBC can be found here. First of all, I think you need iODBC installed on your Mac, then the Caché ODBC drivers, with the DYLD_LIBRARY_PATH environment variable set correctly; do not forget to relogin after setting the variable. As proof that it's possible: done with Caché 2018.1.
Question
Nagarjuna Reddy Kurmayyagari · May 6, 2020
After loading the intersystems/iam:0.34-1-1 image locally, I am trying to do the next step of the configuration.
2) Configure your InterSystems IRIS instance
2a) Enable the /api/IAM web application
2b) Enable the IAM user
2c) Change the password for the IAM user

Do we need to change these settings in the iris.cpf file? Any input on where we have to configure this? I have IRIS 2019.1.1 installed on my machine.
Thanks,
Nag.

Hello Nagarjuna,
IAM runs on InterSystems IRIS Data Platform 2019.2 and higher. You need to upgrade your server.
Once you're done, you will find everything (the /api/iam application, the IAM user) in the System Management Portal.
HTH
Dan

2019.1.1 supports IAM. 2019.1.0 does not.
So no need to upgrade.
Docs:
Web Application management
User Management
Question
Andrew Brown · Apr 7, 2021
We are looking at moving from Caché to IRIS. If we do this, we will want to use InterSystems Reports, and we have a lot of Crystal Reports to convert. Is there a conversion tool or best practice for doing the conversion?

Hello! We are currently carrying out a similar project to transfer a system from Caché to IRIS, including the task of converting reports from Crystal Reports to JReport format (Logi Report, https://www.logianalytics.com/jreport/). The project is nearing completion and we can share our experience.

We are also interested in converting SAP BusinessObjects Crystal, Webi, etc. reports to the JReport format. Please share any insights, tools, articles, etc. that you have!

We only used the official documentation. Our developments on this topic have not yet been presented in the form of an article. Perhaps, after the end of the project, we will make a publication on this topic.

I just received confirmation that Logi/JReport used to provide a converter; unfortunately they have not supported it for a few years and it is no longer available.
Article
Athanassios Hatzis · Feb 16, 2017
Hi,
I would like to draw your attention on a recently published article, titled "A Quick Guide on How to Prevail in the Graph Database Arena", that has been posted also at LinkedIn. Intersystems Caché has been referenced several times. In the "Multi-model Database Engine" section of this article, there is a quick description of Caché as an
> object database with relational access, integrated support for JSON documents and a multidimensional key-value storage mechanism that can be easily extended to cover Graph data model
I believe strongly that Intersystems Caché is an exceptional database product. Its architecture is unique and powerful. I have emphasized this in the following section of the same article,
> No matter what is their physical implementation, i.e. hash tables or trees, based on this abstract data type you can model all four NoSQL database types (Key/Value, Tabular/Columnar, Document, Graph). For one reason or another, we are of the opinion that associative/multidimensional arrays will eventually prevail in the world of databases.
These two key points justify that graph data model can be a perfect fit on the architecture of Intersystems Caché. Although I have not seen any movement from Intersystems towards that area, I am confident there is a great potential in business value by integrating/expanding the multi-model functionality of Cache. I presume the database market is matured enough for graph databases.
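To make that claim concrete, a multidimensional array keyed by subscripts (the shape of Caché globals) can hold a graph directly. Below is a toy sketch in TypeScript using nested Maps in place of a global like ^edge(from,to)=weight; the names and structure are invented for illustration:

```typescript
// Toy adjacency store: edge(from, to) = weight, mimicking a two-subscript global.
const edges = new Map<string, Map<string, number>>()

function addEdge(from: string, to: string, weight: number): void {
  if (!edges.has(from)) edges.set(from, new Map())
  edges.get(from)!.set(to, weight)
}

function neighbors(node: string): Array<[string, number]> {
  return [...(edges.get(node) ?? new Map<string, number>())]
}

addEdge("alice", "bob", 1)
addEdge("alice", "carol", 2)
console.log(neighbors("alice")) // [["bob", 1], ["carol", 2]]
```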
But the problem with this database sector is that it is both cluttered and perplexed with many different kinds of graph databases. Let me focus on the conceptual/logical layer where my work is based. Depending on the structure of nodes and edges you get different graph topologies and stores.
- Property Graph Data Model: entity centric, with embedded properties and edges with BIDIRECTIONAL LINKING; a Directed Labeled Graph. Examples: Neo4j, OrientDB, ArangoDB, etc.
- Triple/Quadruple Data Model: edge centric, with UNIDIRECTIONAL LINKING on vertices; a Directed Labeled Graph. Examples: GraphDB, AllegroGraph, OpenLink Virtuoso.
- Associative Data Model: hypernodes, hyperedges, BIDIRECTIONAL LINKING; a Hypergraph/Bipartite Graph. Examples: Topic Maps, R3DM/S3DM, X10SYS (AtomicDB), HypergraphDB, Qlik.
I will not get into analyzing their differences in detail here; I have covered their differentiation criteria to some extent in the article mentioned already. My research work and development have been on the third type above, the Associative Data Model, also known as the hypergraph or bipartite graph model. Currently there are only two major players in the market with products based on associative technology.
In "Associative Data Modeling Demystified" series of posts that is written with a hands-on practice style, I introduce my audience to the design of R3DM/S3DM and I am making an attempt to clear the information glut of many-to-many relationships (a.k.a associations) with a thorough examination of well-known data models and graph/associative software products.
Part 6 - R3DM/S3DM: Build Powerful, Meaningful, Cohesive Relationships Easily
Part 5 - Qlik Associative Model
Part 4 - Association in RDF Data Model
Part 3 - Association in Property Graph Data Model - (Intersystems Caché is mentioned at this section)
Part 2 - Association in Topic Map Data Model
Part 1 - Relation, Relationship and Association
So what is the objective of writing in this community?
Well, there is a two-pronged approach:
- To promote the fundamental principles of the R3DM/S3DM framework in the design of hypergraph/associative-based DBMSs
- To implement this technology on top of a suitable DBMS
What is the ultimate goal?
The ultimate goal is to have a single operational database that acts as a unification platform for all data from other data sources and enables 360-degree, self-service, dynamic data visualization and analysis with any part of the database, without ETL and without queries.
What is the current status of R3DM/S3DM?
There are currently two implementations :
- A working prototype with OrientDB as the back-storage and Wolfram Language as the front-end API
- A partial implementation with InterSystems Caché as the back-storage, with R3DM/S3DM subsystems and low-level commands constructed in Caché ObjectScript
In the sixth and last part of our series detailed information about the architectural design and API commands of R3DM/S3DM are exposed to the public.
How can you help?
You may actively take part in any discussion within this community about graph data modeling with Intersystems Cache. Make a start here. In particular, you may be interested in R3DM/S3DM project, then I suggest you get connected with me at LinkedIn.
I am also actively looking for partners and investors for this technology so perhaps you would like to discuss your role or contribution.
Vendors and experts of the database domain are interested in this project. This is encouraging, but the plan of course is to reach production and deployment level, and that is where they can help.

Hello @athanassios.hatzis! I thought about "Graphs on Caché". I attached my point of view on how to store a simple graph DB.
Slides: https://drive.google.com/open?id=0B7nnNs0_XYaSWXp1S3BoUjFXdFk
Question
Murali krishnan · Apr 13, 2017
Please let me know how to invoke a Java program from InterSystems. Also, let me know how to expose/consume web services/APIs from InterSystems.

Various options...
1. Call out to Java on the command line using $ZF: https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RCOS_fzf-1
2. Access POJOs directly using Jalapeño: https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GBJJ
3. Consume a web service using the Caché SOAP wizard: https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=ESOAP_web_client
4. Publish a web service from Caché: https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=ESOAP_web_service

From InterSystems Ensemble you can also use the Java Gateway: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EJVG
Question
Murali krishnan · May 9, 2017
Please let me know if it is feasible to integrate security tools (W3af and IronWASP) with InterSystems. If so, please let me know how to do that.

Hi Murali,
from looking at both of their websites, it seems they are just web scanners? If so, you don't need to do anything different to run them against your InterSystems-powered web pages. You can use them the same as with any other webpage you're scanning.
Cheers,
Fab