Technical Documentation — Quarkus IRIS Monitor System
1. Purpose and Scope
This module integrates Quarkus-based Java applications with InterSystems IRIS’s native performance monitoring facilities.
It lets a developer annotate methods with @PerfmonReport; the module then runs IRIS’s ^PERFMON routine around the annotated method automatically, generating performance reports without manual intervention.
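A minimal usage sketch is shown below. The annotation’s package (io.example.quarkus.iris.monitor) and its parameterless form are assumptions made for illustration; substitute the module’s actual coordinates and attributes.

    package org.acme.orders;

    import jakarta.enterprise.context.ApplicationScoped;

    // Assumed import path for the annotation; adjust to the real module package.
    import io.example.quarkus.iris.monitor.PerfmonReport;

    @ApplicationScoped
    public class OrderService {

        // The interceptor bound to @PerfmonReport is expected to start ^PERFMON
        // collection before the method runs and to stop it (writing the report)
        // when the method returns or throws.
        @PerfmonReport
        public void processDailyOrders() {
            // Business logic whose IRIS access should be profiled.
        }
    }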
The ObjectScript language has strong JSON support through classes such as %DynamicObject and %JSON.Adaptor, a reflection of how thoroughly JSON displaced XML as the dominant interchange format: it is less verbose and easier for people to read. YAML pushed further in the same direction and has become one of the most widely used formats for configuration files, precisely because of that readability and minimal verbosity.
Given a JSON schema at runtime, I want to programmatically create persistent classes with corresponding properties and indices. Is there a way to dynamically compile and load such classes?
We require user-specific row access (row-level security). How can we enforce this in SQL and ObjectScript using custom class parameters and dynamic WHERE-clause injection?
The built-in task manager is limited. How can I implement a robust, distributed job scheduler in IRIS with support for dependencies, CRON syntax, and failover recovery?
We want to use IRIS as a backend for a lightweight blockchain or DLT. Can we model blocks using deterministic globals and implement consensus (PBFT, Raft) externally while using IRIS as the ledger?
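For the hash-linked block structure that question describes (with consensus such as PBFT or Raft still running outside IRIS), here is a minimal, language-agnostic sketch in Java; the Block record and its fields are hypothetical.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.HexFormat;

    // Hypothetical block type: the hash is derived deterministically from the
    // index, the previous block's hash, and the payload, so every node that
    // writes the same inputs produces the same ledger entry.
    public record Block(long index, String previousHash, String payload, String hash) {

        public static Block create(long index, String previousHash, String payload)
                throws NoSuchAlgorithmException {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            String material = index + "|" + previousHash + "|" + payload;
            String hash = HexFormat.of()
                    .formatHex(digest.digest(material.getBytes(StandardCharsets.UTF_8)));
            return new Block(index, previousHash, payload, hash);
        }
    }

Persisting each block under a global subscripted by its index (via JDBC or the Native SDK) would then provide the append-only ledger; that mapping is not shown here.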
We’re encountering occasional deadlocks in a mirrored IRIS deployment. How can we track down which global or object write caused the lock cycle, and how does lock propagation work internally in a mirror?
We require automatic injection of security predicates at runtime, depending on the user or API token. Is there a supported or hackable mechanism to manipulate SQL parsing/compilation before execution?
We need to authenticate users via Azure AD or Okta. What are the best practices to implement federated authentication using OAuth2/OIDC or SAML in IRIS Management Portal or custom web apps?
We use Apache Flink for complex event processing. Is there a way to integrate IRIS (as a source/sink) with Flink’s streaming API, possibly using the IRIS Native API or JDBC?
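One plausible path is plain JDBC: the sketch below writes a Flink stream into an IRIS table through the standard flink-connector-jdbc sink. The table App.Event, the credentials, and the connection URL are placeholders.

    import org.apache.flink.connector.jdbc.JdbcConnectionOptions;
    import org.apache.flink.connector.jdbc.JdbcExecutionOptions;
    import org.apache.flink.connector.jdbc.JdbcSink;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class IrisFlinkSinkJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Placeholder event stream; in practice this would be a Kafka or CEP source.
            DataStream<String> events = env.fromElements("evt-1", "evt-2", "evt-3");

            events.addSink(JdbcSink.sink(
                    // Placeholder target table, created beforehand in IRIS.
                    "INSERT INTO App.Event (Payload) VALUES (?)",
                    (statement, payload) -> statement.setString(1, payload),
                    JdbcExecutionOptions.builder()
                            .withBatchSize(500)
                            .withBatchIntervalMs(1000)
                            .build(),
                    new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                            .withUrl("jdbc:IRIS://localhost:1972/USER")
                            .withDriverName("com.intersystems.jdbc.IRISDriver")
                            .withUsername("_SYSTEM")
                            .withPassword("SYS")
                            .build()));

            env.execute("iris-jdbc-sink");
        }
    }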
Can the IRIS SQL engine be extended with custom optimization rules (e.g., prioritizing certain indices or join orders)? If not, is there a supported way to influence cost models?
We’re encountering occasional deadlocks when accessing persistent objects. How can I trace lock acquisition and identify cyclic dependencies in real time?
We want to expose both REST and GraphQL endpoints over the same data models. Is there a way to implement or integrate GraphQL with ObjectScript and map to class methods?
I need to ingest streaming data over WebSockets. How can I create a custom inbound adapter for IRIS productions that supports many concurrent socket connections?
I have large joins involving millions of rows. How can I profile and tune the SQL engine’s parallel execution? Are there EXPLAIN plan features to inspect threading and task distribution?
I want to build an event store using raw globals and replay events for rebuilding state. Can I hook into IRIS journaling or write my own event log and replay engine?
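Independent of how the log is persisted, the replay mechanics are simple; the in-memory sketch below illustrates them, leaving out the IRIS-specific parts the question raises (storing events in a global, hooking into journaling).

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.BiFunction;

    // Minimal event-sourcing illustration: state is never stored directly;
    // it is rebuilt by replaying the ordered event log through a reducer.
    public final class EventLog<E> {
        private final List<E> log = new ArrayList<>();

        public void append(E event) {
            log.add(event);
        }

        public <S> S replay(S initialState, BiFunction<S, E, S> apply) {
            S state = initialState;
            for (E event : log) {
                state = apply.apply(state, event);
            }
            return state;
        }

        public static void main(String[] args) {
            EventLog<Integer> deposits = new EventLog<>();
            deposits.append(100);
            deposits.append(-30);
            // Rebuild the balance from scratch by replaying the log.
            System.out.println(deposits.replay(0, Integer::sum)); // prints 70
        }
    }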
For geographically distributed nodes using async mirroring or ECP, how can I detect and resolve data conflicts manually (custom logic) while maintaining eventual consistency?
The built-in %JSONExport() fails with circular references in deeply nested objects. How can I write a custom serializer that supports circular detection and reference tracking?
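The underlying technique is language-independent: track already-visited objects by identity and emit a reference marker instead of recursing. A compact Java sketch, with a hypothetical Node type standing in for the real object graph:

    import java.util.IdentityHashMap;
    import java.util.Map;

    public final class CycleAwareSerializer {

        /** Minimal node type used only for this demonstration. */
        public static final class Node {
            final String name;
            Node next;
            Node(String name) { this.name = name; }
        }

        public static String serialize(Node node) {
            return serialize(node, new IdentityHashMap<>());
        }

        private static String serialize(Node node, Map<Node, Integer> seen) {
            if (node == null) {
                return "null";
            }
            // Already visited: emit a reference instead of recursing forever.
            Integer refId = seen.get(node);
            if (refId != null) {
                return "{\"$ref\":" + refId + "}";
            }
            seen.put(node, seen.size());
            return "{\"$id\":" + seen.get(node)
                    + ",\"name\":\"" + node.name + "\""
                    + ",\"next\":" + serialize(node.next, seen) + "}";
        }

        public static void main(String[] args) {
            Node a = new Node("a");
            Node b = new Node("b");
            a.next = b;
            b.next = a; // circular reference
            // Prints {"$id":0,"name":"a","next":{"$id":1,"name":"b","next":{"$ref":0}}}
            System.out.println(serialize(a));
        }
    }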
When writing dynamic SQL or embedded SQL queries, how can I ensure that filter conditions are pushed down to the data access layer rather than evaluated in memory?
We’re ingesting high-volume HL7 messages and converting them to FHIR in near-real-time. How do we design a streaming ETL pipeline using interoperability (productions) that scales horizontally?
I’m using recursive CTEs for hierarchical data, but the planner seems to produce inefficient plans. Can I influence or extend the query optimizer behavior in IRIS?
We want to iterate over large global structures in real-time without blocking or locking readers. How can we safely use $Order() and implement a lock-free analytics approach?
Instead of default storage classes, I want to implement my own SQL storage mapping for a persistent class (e.g., denormalized or sparse matrix structures). How do I define and manage custom storage definitions?
What are the best practices for creating a multi-tenant app in IRIS? How can I isolate data per tenant using namespaces, control resource usage, and delegate access via roles securely?
We run mixed workloads in IRIS. For analytical queries, are bitmap indexes effective? What are the caveats for concurrent OLTP updates, and how should I maintain bitmap indexes efficiently?
For space optimization, we want to apply a domain-specific compression algorithm to binary stream data before writing to %Stream.GlobalBinary. Is it possible to override or extend stream classes to include compression/decompression?
We are using IRIS with a sharded architecture. Complex SQL queries (with joins, aggregates, and subqueries) are performing slowly. How can I design queries or indexes to optimize distributed execution across shards?