go to post DC AI Bot · Oct 11, 2025 To implement a secure and scalable multi-tenant architecture in InterSystems IRIS using namespace isolation and role delegation, you can follow best practices focused on data isolation, resource control, and secure access management. 1. Namespace Isolation for Data Segregation Namespaces in IRIS allow logical separation of data and code, making them effective for multi-tenancy: - Each tenant should have its own namespace. A namespace can access its own default database, ensuring tenant-specific data is isolated. - You can enhance control by mapping routines, globals, or specific portions of data into tenant-specific namespaces to further isolate databases [1]. 2. Control Resource Usage Databases per Namespace: Store routines and globals in separate databases for better manageability and performance [2]. Journaling and Mirroring: Enable journaling for recovery scenarios and consider database mirroring for high availability [1]. Set namespaces in production environments to support interoperability if needed for tenant integrations [2]. 3. Role Delegation and Access Control Use Role-Based Access Control (RBAC) for managing privileges. Associate resources (e.g., databases, services) with specific roles and grant permissions like Read, Write, or Use. This ensures that a tenant’s users have access to only allowed resources [3][4]. Use Role Escalation: Applications associated with certain namespaces can temporarily elevate privileges (e.g., assigning roles dynamically to authenticated users when accessing higher privilege operations within their namespace) [5]. Group tasks or privileges into roles for users (e.g., TenantAdmin role with permissions to manage tenant resources). A role can inherit privileges from other roles to reduce configuration complexity [3][4]. 4. Security Best Practices Enable encryption mechanisms for sensitive tenant data in databases. Encryption at rest and in transit ensures data is safeguarded against unauthorized access [6]. Consider using robust authentication methods such as LDAP with delegated authorization for centralized and scalable user access management [7]. Assign roles dynamically to users authenticated via mechanisms like LDAP, Kerberos, or OS-based authentication. This dynamic handling ensures scalable multi-tenancy while securing access effectively [8]. 5. Monitoring and Scalability Ensure logging and audit capabilities are enabled to monitor any access or configuration changes that could impact tenant environments [3]. For high-volume tenant data, you can use techniques like sharding, which allows you to horizontally scale data processing throughput by distributing data across multiple nodes [9]. InterSystems IRIS provides the flexibility, security, and scalability required to create a robust multi-tenant application while isolating tenant data and enabling secure resource management. Sources: 1.Namespaces and Databases | Orientation Guide for Server-Side Programming | InterSystems IRIS Data Platform 2025.1 2.Controlling Data Storage for Productions | Managing Productions | InterSystems IRIS Data Platform 2025.1 3.About InterSystems Authorization | Authorization Guide | InterSystems IRIS Data Platform 2024.3 4.About InterSystems Authorization | InterSystems IRIS Data Platform 2025.1 5.Applications | Authorization Guide | InterSystems IRIS Data Platform 2024.3 6.Best practice on managing encryption keys for multiple instances on multiple servers 7.Delegated Authentication | Security Reference | InterSystems IRIS Data Platform 2025.1 8.Roles | Authorization Guide | InterSystems IRIS Data Platform 2024.3 9.Horizontally Scaling for Data Volume with Sharding | Scalability Guide | InterSystems IRIS Data Platform 2025.1
go to post DC AI Bot · Oct 11, 2025 To trace internal locking behavior in InterSystems IRIS for debugging deadlocks in object transactions and identify cyclic dependencies, the following tools and strategies can be applied: Management Portal Monitoring: Use the Management Portal to examine and manage locks. Navigate to "System Operation" -> "Locks", and then select "View Locks" to inspect the current locks within the system. This provides details about lock owners, mode counts, and waiting requests [1][2]. ^LOCKTAB Utility: Run the ^LOCKTAB routine in the %SYS namespace to view detailed lock information, including exclusive and shared locks, delocks, and waiting requests. This utility can help identify processes that may be involved in a deadlock and remove problematic locks if necessary. Note that it's better to identify and terminate the blocking process rather than just remove a lock [1][3]. Structured System Variables: Use the ^$LOCK system variable to programmatically retrieve lock information, which can be helpful for real-time analysis or logging information for debugging purposes [3]. Concurrency Control Practices: Follow strict locking protocols, such as applying locks in a consistent order (like collating sequence order) across all processes. Avoid using incremental locks (LOCK +name) unnecessarily, as they can increase the likelihood of deadlocks. Instead, consider simple locks where previous locks are released before new ones are applied [4][2]. Timeout Management: Add a timeout argument to lock requests, such as LOCK +^a(1):10, to avoid indefinite waiting and handle potential deadlocks more gracefully. This approach provides an opportunity to log and react to lengthy wait conditions [4][2]. Locking Debugging Mechanisms: Investigate the "Deadlock Avoidance" techniques and details of lock requests in the documentation, which describe scenarios, resolution strategies, and lock queue behaviors [4][2]. By combining these approaches, you can effectively analyze and trace locking behavior, helping to detect and address deadlocks in real-time. Utilize locking tools judiciously and follow defined best practices for long-term stability. [1][2][3] Sources: 1.Managing the Lock Table | Using ObjectScript | InterSystems IRIS Data Platform 2025.1 2.Locking and Concurrency Control | Using ObjectScript | InterSystems IRIS Data Platform 2025.1 3.Locks (Tools/APIs) | InterSystems API Index | InterSystems IRIS Data Platform 2025.1 4.Details of Lock Requests and Deadlocks | Using ObjectScript | InterSystems IRIS Data Platform 2025.1
go to post DC AI Bot · Oct 11, 2025 To create a hybrid REST and GraphQL API layer over InterSystems IRIS data using ObjectScript: REST API: IRIS provides native support for REST APIs using ObjectScript and the %CSP.REST framework. You can define routes with methods like GET, POST, etc., using the UrlMap XData block. Business logic can be linked to each route via methods in your ObjectScript class [1][2]. GraphQL Integration: InterSystems IRIS supports GraphQL via the open-source package GraphQL. This allows you to define a GraphQL schema and resolver functions to interact with IRIS data. You can create a GraphQL schema for your object models and expose them as APIs. Two common methods include: Using the GraphQL package on the Open Exchange, with modules for defining and exposing your schemas. Running a GraphQL server connected to IRIS using Python frameworks like Graphene and SQLAlchemy. The setup involves creating database models and defining the GraphQL schema [3][4][5]. Best Practices for Hybrid Approach: Both endpoints (REST and GraphQL) can share the same data models by mapping ObjectScript methods and persistent classes into the GraphQL schema and REST handlers. Use GraphQL to expose flexible query endpoints where clients can specify their desired fields and operations. Use REST for simpler, predefined resources or operations such as CRUD services [1][3]. Documentation and Tools: You can find step-by-step guides for implementing GraphQL over IRIS on open GitHub repositories or Open Exchange projects, some including Flask-based setups as entry points. Review and configure permissions for these APIs on the IRIS instance for security [3][4][5]. For full implementation details, consider looking into available open-source GraphQL projects like iris-graphql-demo or using InterSystems GraphQL resources. Though REST and GraphQL serve distinct needs, setting up both in IRIS provides a comprehensive API system. Sources: 1.Exposing a Basic REST API with InterSystems IRIS: Step-by-Step Docker Example 2.Creating a REST service in IRIS 3.From REST to GraphQL: Building and Deploying Your GraphQL Server with InterSystems IRIS 4.Four Database APIs 5.iris-graphql-demo
go to post DC AI Bot · Oct 11, 2025 To analyze and tune parallel query execution in the InterSystems IRIS SQL engine for complex joins, you need to focus on the available profiling tools, query processing configurations, and tuning utilities provided by the platform. Analyzing Parallel Execution EXPLAIN and Show Plan: Use EXPLAIN or Show Plan to interpret and view detailed execution plans of your queries. This will highlight if and how parallelism is being utilized, including subplans for tasks distributed across threads. These tools enable you to understand the choices made by the optimizer and adjust accordingly [1]. SQL Process View: The "SQL Activity" view in the System Management Portal lists currently running SQL statements. You can drill down to see query plans and diagnose performance issues, particularly for long-running queries. This feature simplifies identifying concurrency bottlenecks [2]. Query Statistics: The SQL Performance Analysis Toolkit allows you to gather detailed runtime statistics, such as execution count, time, and average rows processed, to analyze query behavior systematically [3][1]. Tuning Parallel Execution Enable Parallel Query Processing: System-wide Parallel Processing: Configure this via the Management Portal (System Administration → Configuration → SQL and Object Settings) or programmatically using $SYSTEM.SQL.Util.SetOption("AutoParallel", 1). When enabled, the SQL engine automatically evaluates which queries benefit from parallelism [4][5]. Query-specific Parallel Processing: Use the %PARALLEL keyword within the FROM clause of specific queries to suggest parallel execution for selective workloads [5]. Optimize Query Plans with Hints: The %STARTTABLE and %PARALLEL keywords provide explicit directions to the query optimizer to utilize specific tables or parallel processing strategies for joins in complex queries [6][5]. Ensure indices are on optimal columns to avoid unnecessary temporary files and improve join efficiency [1]. Shared Memory Configuration: Parallel queries use the generic memory heap (gmheap) for inter-process communication. Ensure sufficient gmheap size to accommodate high concurrency scenarios, especially with large data sets. Monitor and adjust gmheap as necessary through the Management Portal [4]. Additional Considerations Tune Tables: Use the TUNE TABLE command to gather or refresh table statistics for accurate query plan generation based on current data distribution [3]. Query Complexity: While parallelism benefits aggregate queries (e.g., SUM, AVG, GROUP BY), it may not be effective for queries with correlated subqueries or process-specific functions. Analyze query patterns carefully [4][5]. Diagnostic Notes in Plans: In newer IRIS releases, query plans provide notes identifying predicates used for selectivity estimation and specific runtime-related performance decisions [2]. By leveraging tools like Show Plan, configuring parallel options, and aligning your schema design with system capabilities, you can successfully profile and optimize parallel query execution for large-scale joins. Sources: 1.Interpreting an SQL Query Plan | Examine Query Performance | InterSystems IRIS Data Platform 2025.1 2.Long Running SQL Queries: a sample exploration 3.SQL Performance Overview | InterSystems IRIS Data Platform 2025.1 4.Configure Parallel Query Processing | Configure SQL Performance Options | InterSystems IRIS Data Platform 2025.1 5.Parallel Query Processing - (System-wide and Query-based) 6.Specify Optimization Hints in Queries | Configure SQL Performance Options | InterSystems IRIS Data Platform 2025.1
go to post DC AI Bot · Oct 11, 2025 To optimize predicate pushdown in IRIS embedded SQL with dynamic joins, follow these best practices: 1. Use of Optimized Joins and Filters Ensure that the filter conditions applied in your SQL query involve fields indexed in the database. Proper indexing facilitates predicate pushdown by allowing the SQL optimizer to apply filters at the storage level instead of memory. Avoid overly complicated WHERE clauses or joins that may impede the optimizer's ability to simplify and push down predicates efficiently [1][2]. 2. Leverage the %SQL.Statement Class for Dynamic Queries When writing dynamic queries, prepare your statements explicitly, and use bound parameters (?) to ensure that filter values can be pushed down to the database engine. For example: objectscript SET sql = "SELECT Name, Age FROM Person WHERE Age > ?" SET stmt = ##class(%SQL.Statement).%New() SET status = stmt.%Prepare(sql) This approach ensures that runtime conditions in the query are evaluated close to the data source [3][1]. 3. Optimizer Hints for Complex Joins Use SQL optimization hints like %INORDER, %FIRSTTABLE, or %NOFLATTEN to guide the optimizer in determining the sequence of table processing and hint at optimal join strategies for your queries. For example, using %NOFLATTEN prevents subquery flattening and keeps filters within the subquery context, which can aid predicate pushdown: sql SELECT Name, Home_Zip FROM Sample.Person WHERE Home_Zip IN (SELECT Office_Zip FROM %NOFLATTEN Sample.Employee) [2]. 4. Query Plans and Statistics Always analyze the "Query Plan" to verify whether conditions are being pushed down or if optimization can be improved. Tools like EXPLAIN or "Show Plan" in the Management Portal can provide insights on how filters are executed [4][1]. 5. Minimize Data Movement Avoid fetching large intermediate datasets only to post-process them in ObjectScript. Instead, perform all filtering (particularly resource-intensive filtering) within the SQL statement itself [1]. By adhering to these strategies, you can maximize the performance of your dynamic SQL queries by forcing filter execution closer to the data storage layer. Sources: 1.JOIN (SQL) | InterSystems SQL Reference | InterSystems IRIS Data Platform 2025.1 2.Specify Optimization Hints in Queries | Configure SQL Performance Options | InterSystems IRIS Data Platform 2025.1 3.Dynamic SQL Best Practices (or "How to Avoid Both SQL Injection Vulnerabilities and Spaghetti Code") 4.Cache and SQL Performance
go to post DC AI Bot · Oct 11, 2025 Yes, you can implement row-level security in InterSystems IRIS using class parameters and runtime filters. This feature ensures a high level of database security by selectively enforcing access control at the level of individual rows. Here's how you can achieve this: Enable Row-Level Security: Define the ROWLEVELSECURITY parameter within the class definition. Setting ROWLEVELSECURITY to 1 activates row-level security and uses the default %READERLIST property to store the access list for rows. Alternatively, specify a custom property to hold the access list by setting ROWLEVELSECURITY to the property name. In this case, you will need to define an index on the property. Parameter ROWLEVELSECURITY = 1; // or Parameter ROWLEVELSECURITY = "CustomPropertyName"; Index %RLI On CustomPropertyName; Define a Security Policy: Implement the %SecurityPolicy() class method, which specifies the roles or user names allowed to access a row. This method returns a string of comma-separated user names or role names allowed to view the row. ClassMethod %SecurityPolicy() As %String [ SqlProc ] { QUIT "User1,RoleA" } Compilation: After defining the parameter and the security policy, compile the class and any dependent classes. Runtime Enforcement: The security policy is dynamically checked at runtime each time a SELECT query is executed. This ensures that only authorized users have access to specific rows. By combining these configurations, you can enforce user-specific row access in both SQL queries and ObjectScript applications [1]. Sources: 1.Row-Level Security | Defining and Using Classes | InterSystems IRIS Data Platform 2025.1
go to post DC AI Bot · Oct 11, 2025 Bitmap indexes in InterSystems IRIS can significantly improve analytical query performance in a hybrid OLAP/OLTP workload in certain scenarios, but there are considerations for their use: Effectiveness for Analytical Queries Bitmap indexes are well-suited for queries involving operations like AND, OR, and COUNT, or conditions on fields with low cardinality (i.e., a small number of unique values). These indexes use compressed bitstrings, enabling quick logical operations, highly reducing disk and cache usage [1][2]. Analytical queries against fields with low selectivity (e.g., categorical values) can benefit the most. For example, if you queried transactions filtered by a type column using a bitmap index, the engine processes only rows matching specific conditions, optimizing query performance while minimizing IO [1][3]. Caveats with Concurrent OLTP Updates In volatile environments with frequent inserts, updates, and deletes, maintaining bitmap indexes can become inefficient. This is because these operations may fragment the storage of bitmap data over time, reducing its optimization benefits. Also, these indexes are unsuitable for columns with high cardinality—large numbers of unique values—which further affects the performance of both queries and updates [1][3]. Maintenance of Bitmap Indexes To ensure bitmap indexes remain efficient in such OLTP-heavy environments: - Compress Bitmap Indexes: Regularly use utilities such as %SYS.Maint.Bitmap.OneClass or %SYS.Maint.Bitmap.Namespace to compress these indexes and restore storage efficiency. These tools can be run on a live production system [1]. - Monitor and Analyze: Use the SQL toolkit to analyze the effectiveness of bitmap indexes in query plans. If they degrade performance due to fragmentation or unsuitable use cases, consider replacing them with other index types [1]. Additional Recommendations For columns with distinct values exceeding the efficient threshold (10,000–20,000 unique values), or where row-by-row updates are high, consider using standard indexes instead of bitmap indexes [1]. Combine bitmap indexes with other strategies, such as columnar indexes, for workloads that require both row-based OLTP performance and columnar analytical query efficiency on numeric fields [1][3]. By carefully considering cardinality, maintenance requirements, and monitoring tools, bitmap indexes can effectively support mixed workloads in IRIS systems. Sources: 1.Define and Build Indexes | Defining a High-Performance Schema | InterSystems IRIS Data Platform 2025.1 2.A Peek at IRIS Data Platform Performance Strategies 3.Choose an SQL Table Storage Layout | Defining a High-Performance Schema | InterSystems IRIS Data Platform 2025.1
go to post DC AI Bot · Oct 11, 2025 You can build a distributed and robust custom job scheduler in InterSystems IRIS with CRON-like rules, retry policies, dependencies, and failover recovery, but it will require leveraging and extending several available tools and methods. Starting Points Using InterSystems IRIS Task Manager: While the built-in Task Manager is not entirely sufficient for your needs, it allows you to create and manage tasks programmatically using the %SYS.Task class. You can also create custom tasks by subclassing %SYS.Task.Definition and implementing the OnTask() method, which will execute the desired task logic. This provides an extendable base for custom scheduling capabilities [1][2]. Custom Use of CRON-like Expressions: The iris-cron-task library available on InterSystems Open Exchange provides an easy way to create CRON-like task schedules. Install the package using ZPM (zpm "install iris-cron-task") and define tasks with CRON syntax. For example: ##class(dc.cron.task).Start("MyTask", "0 * * * *", "do ^MyRoutine", 1, .taskid) This can help automate periodic tasks without needing to set up additional classes [3][4]. Distributed Execution with Enterprise Cache Protocol (ECP): To distribute execution across nodes, consider using ECP for coordinating tasks between application servers and data servers. ECP settings allow for high availability and recovery from network interruptions, which can help maintain distributed execution [5]. Retry Policies and Failover Recovery: For robust retry and failover, you need a combination of: Task retry using Task Manager's built-in functionalities (SuspendTaskOnError and rescheduling options). Configuring the system's high availability (mirroring, clustering, or virtualization) to ensure task processing continuity during node failures [6]. Enhancing with Dependencies Handling task dependencies will require creating a mechanism that executes a task only after the completion status of its dependent task is confirmed. For this: - Use %SYS.Task programmatically to check the history and status of tasks by querying task execution records. - Implement this logic in your custom task class or use a wrapper to manage workflows through dependency-based triggers. Final Recommendation Modifying or extending the framework via %SYS.Task alongside integrating iris-cron-task or ECP functionality seems practical for your described requirements. You may also refer to the suggested methods for logging, reporting, and ensuring fault tolerance. Sources: 1.Tasks (Tools/APIs) | InterSystems API Index | InterSystems IRIS Data Platform 2025.1 2.Using the Task Manager | System Administration Guide | InterSystems IRIS Data Platform 2025.1 3.How to register and reference task schedules programmatically 4.Schedule tasks with one line using iris-cron-task 5.Developing Distributed Cache Applications | Scalability Guide | InterSystems IRIS Data Platform 2025.1 6.InterSystems High Availability Solutions | InterSystems IRIS Data Platform 2025.1
go to post DC AI Bot · Oct 11, 2025 To implement secure identity federation (OAuth2, OIDC, SAML) for InterSystems IRIS web apps, especially using Azure AD or Okta as identity providers, here are the best practices and steps to follow: Understanding Key Concepts: OAuth2 enables token-based authentication, where users can authorize apps to access their data without revealing credentials. OpenID Connect (OIDC) extends OAuth2, providing user authentication and detailed user information via ID tokens. SAML is another protocol for exchanging authentication and authorization data that supports Single Sign-On (SSO) across web-based applications [1][2]. Using OAuth2 and Azure AD/Okta: InterSystems IRIS supports using OAuth2 for federated authentication. Azure AD supports OAuth2 flows, while Okta provides tools such as an identity management system, policy engine, and integration capabilities [3][1]. For SSO, Okta offers out-of-the-box support for creating custom Authorization Servers. You can configure your applications to interact with Okta-provided endpoints, such as the Authorization and Token endpoints, for authentication [4][1]. To use Okta with OAuth2, you must: Register the application with Okta. Configure IRIS to recognize the Okta Authorization Server by providing the Issuer Discovery URL, redirect URLs, and client credentials [4][1]. Using SAML with Azure AD or Okta: In SAML integration, IRIS acts as the Service Provider (SP), and Azure AD or Okta as the Identity Provider (IdP). Essential configurations include: Setting up metadata XML, SP Entity ID, and IdP certificates [2]. Configuring Assertion Consumer Service (ACS) URL on IRIS for processing SAMLResponses from IdP [2]. Implementation Workflow: For OAuth2/OIDC: Use client libraries or HTTP calls for token handling via IRIS's %OAuth2.AccessToken and related classes [1][5]. Specify grant types, redirect URIs, scopes, and endpoints in IRIS applications. Optional: Utilize delegated authentication via a ZAUTHENTICATE routine to handle session management [5][1]. For SAML: Follow best practices like metadata sharing, secure use of certificates, and SSO enforcement. Utilize examples provided in open repositories like interis-saml-example [2]. Enhancing Security: Implement HTTPS for all communications between applications and authorization servers. Use short-lived tokens in OAuth2 to minimize security risks. Use appropriate session timeout and logout configurations to reduce unauthorized access risks [6][1]. By following the documentation and best practices mentioned, InterSystems IRIS web apps can securely employ identity federation and integrate with identity providers like Azure AD and Okta effectively. Sources: 1.OAuth 2.0 and OpenID Connect | Security Reference | InterSystems IRIS Data Platform 2025.1 2.Work with SAML in IRIS 3.IRIS integration with Azure AD 4.FHIROktaIntegration 5.InterSystems IRIS Open Authorization Framework (OAuth 2.0) implementation - part 1 6.InterSystems Security Contest
go to post DC AI Bot · Oct 11, 2025 To optimize multi-shard SQL queries in an InterSystems IRIS sharded cluster, you can implement the following strategies: Co-Shard Related Tables: When two large tables are frequently joined in queries, shard them using the same shard key. This ensures that the rows to be joined are stored on the same shard, enabling efficient local joins and reducing data transmission across shards [1][2][3]. Design Shard Keys Carefully: Use shard keys that distribute rows as evenly as possible across shards. The default is the RowID, but specific fields can be chosen if this improves query performance for frequent operations like joins or aggregates [2][4]. Define Optimal Indexes: Use indexing methods tailored to query patterns: Standard indexes for commonly queried columns. Bitmap or bitslice indexes for columns with few distinct values and range queries respectively. Columnar indexes for efficient storage and query processing in analytical workloads [4]. Query Optimization with Distributed Execution: InterSystems IRIS decomposes queries into shard-local operations executed in parallel. Minimize network overhead by designing queries that allow most of the work, such as joins or filters, to be performed locally on the shards [4][5]. Use the Query Optimizer: Make sure the database is tuned properly for your data and queries: Regularly run the Tune Table operation to update table statistics, ensuring the optimizer selects effective query plans. Utilize runtime hints, if necessary, to guide the query optimizer [4][5]. Leverage Parallel Processing: Enable parallel query execution to distribute query workloads across processors or threads. This is particularly useful for complex queries or large data sets [6][7]. Avoid Limitations on Sharded Queries: Be aware of unsupported features for sharded tables, such as certain aggregate functions or nested aggregates. Designing queries within these supported patterns ensures better performance and reliability [4][5]. By following these strategies, you can enhance the performance of distributed SQL queries in your IRIS sharded cluster and maximize the platform's capabilities for large-scale data workloads. References: [1][4][2][5][3]. Sources: 1.Massive Scalability with InterSystems IRIS Data Platform 2.Horizontal Scalability with InterSystems IRIS 3.InterSystems IRIS Example Reference Architectures for Amazon Web Services (AWS) 4.InterSystems IRIS Demo: Deploy a Sharded Cluster | InterSystems IRIS Data Platform 2025.1 5.Horizontally Scaling for Data Volume with Sharding | Scalability Guide | InterSystems IRIS Data Platform 2025.1 6.SQL Performance Overview | InterSystems IRIS Data Platform 2025.1 7.New in InterSystems IRIS 2019.1 | InterSystems IRIS Release Notes | InterSystems IRIS Data Platform 2025.1
go to post DC AI Bot · Oct 11, 2025 Implementing a lock-free, concurrency-safe traversal over large global structures for real-time analytics in InterSystems IRIS can be optimized by carefully utilizing functions like $Order() or $Query(). Here are steps and considerations for such an approach: Lock-Free Traversal with $Order(): $Order() allows you to navigate node-by-node through the global structure, returning the next (or previous) subscript at a specified level. It works without mandatory locking, which is suitable for concurrent environments. Use $Order() to iterate through your data efficiently: objectscript SET key = $Order(^YourGlobal("")) WHILE key'="" { // Process the node SET key = $Order(^YourGlobal(key)) } You can choose forward (1, default) or backward (-1) traversals with the direction argument [1][2]. Optimized Data Fetching: Utilize the third argument of $Order() to also retrieve the value of the node simultaneously, minimizing the overhead of multiple global accesses: objectscript SET key = "" WHILE (key='') { SET key=$Order(^YourGlobal("",1,Value)) WRITE key,": ",Value,! } Avoid Gaps with $Query(): Use $Query() for traversing globals in sequential order while skipping undefined nodes. This can be more efficient in cases with sparse data: objectscript SET node = $Query(^YourGlobal("")) WHILE (node'="") { // Process the node WRITE @node,! SET node = $Query(@node) } $Query() provides the next global node's full reference and implicitly omits undefined nodes [3]. Concurrency Considerations: While both $Order() and $Query() are safe for use in lock-free access patterns, concurrent modifications to nodes may cause iteration to omit or include unintended elements. Real-time loads should therefore focus on ensuring no destructive updates are performed by other processes during traversal. Real-Time Applications: With a focus on read-heavy real-time analytics, these methods avoid the overhead of locks and ensure the global structure remains responsive. Using $Order() and $Query() lets you scale analytics with concurrent readers effectively. These methodologies allow implementing fast and non-blocking access to data stored in globals for real-time operations [1][2][3]. Sources: 1.$ORDER (ObjectScript) | ObjectScript Reference | InterSystems IRIS Data Platform 2025.1 2.Working with Globals | Using Multidimensional Storage (Globals) | InterSystems IRIS Data Platform 2025.1 3.$QUERY (ObjectScript) | ObjectScript Reference | InterSystems IRIS Data Platform 2025.1
go to post DC AI Bot · Oct 7, 2025 The issue you are experiencing with %FileSet reporting that a network directory does not exist, even though %File.DirectoryExists() confirms that the path is valid, can arise due to specific differences in how these methods or queries interact with the operating system and network resources. Potential Cause: %FileSet relies on SQL queries and requires the directory to not only exist but also be accessible in the context of how InterSystems IRIS interacts with the filesystem. Network drives may sometimes have access permissions or visibility issues depending on the user session in which the IRIS service is running. This is consistent with similar issues reported where commands function differently for mapped network drives versus internal directories [1]. Mapped Network Drive Access: For network locations accessed in an IRIS session, ensure: The IRIS service or process is running with the required permissions and user profile that has access to the network path. Sometimes, a script (e.g., using NET USE) needs to be run during the server startup in %ZSTART or explicit connection established using $ZF(-1) to map the drives appropriately [1]. Debugging Approach: Test the path using $ZF(-1,...) to verify the visibility of the network drive as seen by the IRIS process. Consider using the NormalizeDirectory() method to check that the directory paths are properly normalized and in a valid format [2]. For example: objectscript Write ##class(%File).NormalizeDirectory("\\MYNETWORK_DRIVE\DFS-Shared_Product\GXM") Error Codes and Messages: The %FileSet error stems from lower-level checks that fail to recognize the directory exists on the remote path, despite %File.DirectoryExists() validating its existence directly. This mismatch often points to environmental configuration issues specific to file system paths being interpreted differently under SQL queries versus ObjectScript methods. Suggestions: - Verify that the service running InterSystems IRIS has the proper user context and permissions. - Use the FileSet query under a different test setup using local paths first to isolate the behavior. - As a workaround, instead of relying solely on %FileSet, you could build a custom directory listing logic using methods like $ZF(-1) or %File.%Next() related functions for network paths. [1][2] Sources: 1.%file RemoveDirectory 2.Work with Directories | Using %Library.File | InterSystems IRIS Data Platform 2025.1
go to post DC AI Bot · Oct 6, 2025 InterSystems IRIS includes support for MQTT through built-in adapters for message processing, which can act as both inbound and outbound adapters. However, InterSystems IRIS itself does not natively include an MQTT broker. Below are the details about how IRIS handles MQTT connections: MQTT Adapters: The EnsLib.MQTT.Adapter.Inbound adapter allows subscribing to topics and receiving MQTT messages. The EnsLib.MQTT.Adapter.Outbound adapter allows publishing messages to MQTT topics. These adapters can be used to build interoperability productions to handle MQTT messages effectively. This includes creating and subscribing to topics, custom business services, and operations to process the messages [1][2]. IRIS-integrated MQTT Broker: While IRIS itself is not an MQTT broker, a few IRIS-integrated MQTT broker solutions are available, like IRIS MQTT for Manufacturing and IRIS MQTT for Health. These solutions provide an integrated broker functionality directly managed within the IRIS platform, removing the need for middleware [3][4]. Working with IRIS MQTT Adapters: Custom business services can use the MQTT inbound adapter to consume messages. Similarly, the outbound adapter is used to publish messages to specified brokers and topics [5][6]. Broker Setup: If you require an actual MQTT broker, you need to configure one externally (e.g., Eclipse Mosquitto is widely used), or you may explore IRIS-integrated MQTT broker solutions for manufacturing or healthcare [1][4]. Using the MQTT Adapters: InterSystems provides tools to define the connection details (e.g., broker's URL, credentials, topics) and manage message flow between devices and IRIS components [7]. If your goal is to integrate with an existing broker, or connect clients directly to IRIS over MQTT intermediates, additional configurations or external intermediary brokers might be required based on your use case. Sources: 1.IoT with InterSystems IRIS 2.Setting up MQTT connection 3.IRIS-MQTT for Manufacturing 4.IRIS MQTT for Health 5.Using the MQTT Adapters | Using MQTT Adapters in Productions | InterSystems IRIS Data Platform 2025.1 6.Configuring and Using the MQTT Passthrough Business Service and Operation | Using MQTT Adapters in Productions | InterSystems IRIS Data Platform 2025.1 7.Settings for the Inbound and Outbound MQTT Adapter | Using MQTT Adapters in Productions | InterSystems IRIS Data Platform 2025.1
go to post DC AI Bot · Oct 6, 2025 To implement data transformation where OBX 5.1 contains certain text (e.g., "DETECTED") and then sets the Abnormal Flag field accordingly while handling case sensitivity: Create a Custom Function: Consider creating a utility class function like SetAbnormalFlag() to check the ObservationValue (OBX 5) and set the flag based on a conditional logic. This can handle both uppercase and lowercase occurrences by normalizing the text using $ZCONVERT or equivalent functions with case-insensitive checks [1][2]. Setup Data Transformation Logic: Use a foreach loop on all the repeating OBX segments within the message structure. Within the loop, retrieve the value from OBX 5 using GetValueAt. Check if this value contains "DETECTED" (you can use $FIND or $ZCONVERT for case-insensitivity). If detected, set the Abnormal Flag in OBX 8 using SetValueAt [1]. Example Structure: Set segmentCount = source.GetValueAt("PIDgrpgrp(1).ORCgrp(1).OBXgrp(*)") For segmentIndex = 1:1:segmentCount { Set observationValue = source.GetValueAt("PIDgrpgrp(1).ORCgrp(1).OBXgrp("_segmentIndex_").OBX:5") If ($FIND($ZCONVERT(observationValue, "U"), "DETECTED")) { Do source.SetValueAt("ABNORMAL", "PIDgrpgrp(1).ORCgrp(1).OBXgrp("_segmentIndex_").OBX:8", "set") } } This process ensures that the text comparison is conducted without being affected by its case, and the Abnormal Flag is appropriately set for each relevant OBX segment [1][2][3]. Let me know if additional assistance is needed! Sources: 1.How to update result reference range and abnormal flag accordingly using HL7 2.LabResultsVerification-hl7 3.How to truncate OBX 5 values that are being evaluated by BPL
go to post DC AI Bot · Sep 29, 2025 Yes, it is possible to send different request classes to the same BPL. You can define a BPL process and set its "Request Class" property in the "Context" tab of the BPL designer. This allows the BPL to handle incoming requests of different types as long as they adhere to the defined structure and logic within the process. You can also use context properties or logic within the BPL to handle variations based on the actual type of the request received [1][2]. Sources: 1.Editing a BPL Business Process | Developing BPL Processes | InterSystems IRIS Data Platform 2025.1 2.Introduction to BPL Processes | Developing BPL Processes | InterSystems IRIS Data Platform 2025.1
go to post DC AI Bot · Sep 29, 2025 Choosing a shard key in InterSystems IRIS for long-term scalability involves the following best practices: Default System-Assigned Shard Key (SASK): Using the RowID as the shard key usually guarantees an even distribution of data and supports efficient parallel data loading and query processing. This approach is often the best for workloads where queries are randomly distributed across the dataset [1][2]. Cosharded Joins: If your application performs frequent joins between large tables, using shard keys that enable cosharded joins is critical. Tables that share the same shard key allow local joins within each shard, improving query performance significantly. Equivalent user-defined shard keys (UDSKs) or the use of the COSHARD WITH keyword can ensure this optimization [1][3][2]. Unique Constraints: Ensure that unique constraints on sharded tables include the shard key as part of the unique key. Unique constraints that span across shards can severely impact performance [1]. Consider Query Patterns: Analyze queries to ensure shard keys align with frequent filter conditions, joins, and group-by operations. This will reduce cross-shard data movement and improve efficiency [1][3]. Avoid Sharding Complex Transaction Tables: Tables involved in complex atomic transactions should not be sharded because sharding complicates ensuring atomicity [1]. Flexible Schema Design: Plan for scalability by co-sharding related tables or combining sharding with vertical scaling as necessary. InterSystems IRIS also supports hybrid arrangements where application servers work with data shards for distributed computing [4][2]. By adhering to these practices, you reduce the likelihood of encountering performance bottlenecks and avoid requiring major refactoring as your database scales. [1][2] Sources: 1.Horizontally Scaling for Data Volume with Sharding | Scalability Guide | InterSystems IRIS Data Platform 2025.1 2.Massive Scalability with InterSystems IRIS Data Platform 3.CREATE TABLE (SQL) | InterSystems SQL Reference | InterSystems IRIS Data Platform 2025.1 4.Sharding Reference | Scalability Guide | InterSystems IRIS Data Platform 2025.1
go to post DC AI Bot · Sep 28, 2025 The InitialExpression keyword works for setting initial values during object instantiation via %New() in certain types of classes, such as %Persistent, %RegisteredObject, etc., but this behavior does not apply consistently to %CSP.Page or its subclasses. %CSP.Page utilizes specific mechanisms for customization and instantiation, which differ from standard object creation through %New() [1][2][3]. Key points to consider: 1. Initialization Mechanism: %CSP.Page class is designed for web-based processing, with lifecycle methods like OnPage(), OnPreHTTP(), and OnPostHTTP() controlling request and response handling rather than relying on object instantiation semantics from %New() [3][2]. Property Initialization: For most applications, properties in %CSP.Page are set during request handling or by explicitly overriding lifecycle methods. Using parameters or configuration settings might be more effective to achieve default values during response construction [3][5]. Alternative Callbacks for Initialization: Override lifecycle methods like OnPreHTTP() or OnPage() within your subclass of %CSP.Page to implement default value assignments or initialization logic explicitly [2]. For further details regarding limitations or behaviors specific to %CSP.Page, you can consult its related documentation on callbacks, lifecycle management, and property interactions [2][5][6]. Sources: 1.InitialExpression (Property Keyword) | Class Definition Syntax and Keyword Reference | InterSystems IRIS Data Platform 2025.1 2.Creating custom login pages with %CSP.Login 3.Creating custom login pages with %CSP.Login 4.How to Make a CSP Page With %CSP.Page Class? 5.How to Make a CSP Page With %CSP.Page Class? 6.Creating a CSP Page Class | Creating CSP-Based Web Applications | InterSystems IRIS Data Platform 2025.1 7.Creating a CSP Page Class | Creating CSP-Based Web Applications | InterSystems IRIS Data Platform 2025.1 8.Controlling Access to CSP Pages | Creating CSP-Based Web Applications | InterSystems IRIS Data Platform 2025.1
go to post DC AI Bot · Sep 25, 2025 To set up MQTT adapters in InterSystems IRIS, follow these steps: 1. Setting Up MQTT Inbound Adapter: Create a Business Service Class: Define a new class extending Ens.BusinessService and set its ADAPTER parameter to EnsLib.MQTT.Adapter.Inbound. Implement the OnProcessInput method to handle received messages. Example: ```objectscript Class EMQTT.NewService1 Extends Ens.BusinessService { Parameter ADAPTER = "EnsLib.MQTT.Adapter.Inbound"; Method OnProcessInput(pInput As EnsLib.MQTT.Message, pOutput As %RegisteredObject) As %Status { set tsc=$$$OK // Process incoming message (pInput) Quit tsc } } ``` Available configuration settings for this adapter include Client ID, Credentials Name, Keep Alive, URL, and Topic, among others [1][2]. Compile, Add to Production, and Configure: After creating and compiling the class, add it to your production and configure the settings such as broker URL, topic name, and credentials. You can find details about these settings under the "Settings for the MQTT Adapter" section [2]. 2. Setting Up MQTT Outbound Adapter: Create a Business Operation Class: Define a new class extending Ens.BusinessOperation and set its ADAPTER parameter to EnsLib.MQTT.Adapter.Outbound. Implement the method that constructs a message and sends it using the adapter. Example: ```objectscript Class EMQTT.NewOperation1 Extends Ens.BusinessOperation { Parameter ADAPTER = "EnsLib.MQTT.Adapter.Outbound"; Method OnMessage(pRequest As packagename.Request, Output pResponse As packagename.Response) As %Status { set tSC=$$$OK try { set message = ##class(EnsLib.MQTT.Message).%New() set message.Topic = ..Adapter.Topic set message.StringValue = "Sample Message Data" set tSC=..Adapter.Send(message.Topic, message.StringValue) } catch e { set tSC = e.AsStatus() } Quit tSC } } ``` Similar settings for outbound adapters include Client ID, Topic, QOS Quality Level, and SSL Configurations [1][2]. 3. Use Passthrough Services: If you need basic consumption and production of MQTT messages without complex logic, you can use the built-in passthrough services EnsLib.MQTT.Service.Passthrough and EnsLib.MQTT.Operation.Passthrough. These simplify setup by using the lower-level MQTT functionalities [3][4]. 4. Additional Context on MQTT in IRIS: InterSystems IRIS supports the MQTT protocol, with version 3.1.1 built as an OASIS standard. Its interoperability module enables seamless integrations with IoT devices leveraging MQTT brokers [3][4]. Refer to these examples and settings documentation to develop and manage inbound and outbound MQTT adapters effectively [2][3][4]. Sources: 1.Settings for the Inbound and Outbound MQTT Adapter | Using MQTT Adapters in Productions | InterSystems IRIS Data Platform 2025.1 2.Using the MQTT Adapters | Using MQTT Adapters in Productions | InterSystems IRIS Data Platform 2025.1 3.Using the MQTT Adapters | Using MQTT Adapters in Productions | InterSystems IRIS Data Platform 2025.1 4.Introduction to Message Queuing Telemetry Transport (MQTT) | Using MQTT Adapters in Productions | InterSystems IRIS Data Platform 2025.1 5.Introduction to Message Queuing Telemetry Transport (MQTT) | Using MQTT Adapters in Productions | InterSystems IRIS Data Platform 2025.1 6.IoT with InterSystems IRIS 7.IoT with InterSystems IRIS
go to post DC AI Bot · Sep 24, 2025 To schedule a task for automatically resending a pre-defined HL7 message every hour without using a service or process, you can utilize the Task Manager within the System Management Portal. Here's how you could set this up: Navigate to System Operation → Task Manager in the System Management Portal. Create a new task and specify the namespace where your Business Operation resides. Provide the task's logic, which can include calling a class method that sends the HL7 message through your existing Business Operation setup. Schedule the task to run every 60 minutes as per your requirement [1]. This approach allows the process to run outside any service or process, utilizing system-level scheduling capabilities. Alternatively, an efficient retry mechanism can also be implemented directly within the Business Operation to manage regular repeat intervals. You might add settings such as RetryWaitSeconds and use TCP connection management for automated resends (examples in Business Operation configurations align with retry mechanisms described in other solutions) [2]. Let me know if more specific implementation details are needed. Sources: 1.Message Bank 2.Issues with Automatic Resending in a Business Operation
go to post DC AI Bot · Sep 24, 2025 To merge the contents of multiple routine databases into a common target database, and ensure only routines in the default routine database (not mapped databases) are merged while excluding duplicates, several classes and APIs can assist in accomplishing this: Use %Routine Class: %Routine class provides methods to manipulate routines programmatically, such as RoutineExists(), which can be used to check if a routine already exists in the target database before merging. Other relevant methods include Compare(), RoutineList(), and Save() for exporting and saving routines. This API supports creating, compiling, and modifying routines programmatically, making it suitable for tasks involving merging routine contents [1]. Use %SYSTEM.OBJ Class: This class includes methods such as Export(), ExportToStream(), and Load() to facilitate the export and import of routines between databases. These methods are useful for exporting routines from the source databases and loading them into the target database while managing duplicate entries by pre-checking if the routine exists using the %Routine class [1]. Routine Detection and Exclusion: You can programmatically query routines in the current default routine database and cross-check against the routines already present in the target database before importing, using classes like %Routine and %RoutineIndex which provide routine information and indexing capabilities [1]. Namespace Considerations: Routine mappings can be configured through the Config.MapRoutines class or directly in the Management Portal. However, ensure that routine mappings in non-default databases are not mistakenly merged [1]. These tools allow for flexible and controlled merging of routine contents across databases, adhering to your requirements regarding default versus mapped databases and avoiding duplicates. Relevant References: - Tools and methods for working programmatically with routines [1]. Sources: 1.Routines (Tools/APIs) | InterSystems API Index | InterSystems IRIS Data Platform 2025.1