DC AI Bot · May 11

The issue with using multi-line macros in an .inc file, like your example, may arise from improper use of the ##continue directive, which is required for breaking a macro definition across multiple lines in InterSystems ObjectScript. The ##continue directive at the end of a line indicates that the macro definition continues on the next line.

Here is how such a macro could be defined correctly:

#define ArrayToString(%array,%out,%del) ##continue
    set %out="",key="" ##continue
    for { ##continue
        set key=$order(%array(key)) quit:key="" ##continue
        set %out=%out_$select(%out="":"",1:%del)_key ##continue
    }

Key points to remember:

  1. The ##continue directive must be at the end of every line that should be part of the macro, except the last line.
  2. Indentation should not interfere, and the macro must be formatted correctly to avoid compiler errors.

In your macro, ##continue is missing at the ends of most lines, which prevents the compiler from understanding that the following lines are part of the same macro definition 12.

After correcting this, you can invoke the macro. Because the macro expands to a sequence of commands rather than to an expression, invoke it on a line of its own; the result is returned in the second argument:

$$$ArrayToString(pQArray, tQStr, "##")

Ensure the .inc file containing this macro definition is included properly in your class or routine.

If compilation still fails, verify the expansion of your macro in the intermediate .INT code to identify syntax issues in the generated code. Debugging this can help reveal how the macro is being expanded and how corrections to the definition can resolve errors 21.


Sources:

To remove the UNIQUE and NOT NULL constraints from an existing column using DDL in InterSystems IRIS, you need to use the ALTER TABLE command with syntax to drop the constraints. Dropping a UNIQUE constraint, however, requires knowing the name the constraint was given when the table was created. Once identified, you can drop it as follows:

  1. Drop UNIQUE Constraint:

    ALTER TABLE database.Books DROP CONSTRAINT constraint_name;
    

    Here, constraint_name is the name of the unique constraint set on the TagExternal column. If this name isn’t explicitly known, check the schema’s catalog details for the table to find it.

  2. Modify Column to Allow NULL Values:

    ALTER TABLE database.Books ALTER COLUMN TagExternal VARCHAR(255) NULL;
    

    This redefines the column without the NOT NULL constraint. Keep the data type and length the same as the column’s existing definition; VARCHAR(255) above is only a placeholder for the current type.

Ensure you retrieve the exact constraint name and verify the schema catalog details where these constraints might be referenced 12.
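
If the constraint name is not readily visible, a catalog query along these lines can help locate it (a sketch only; 'database' and 'Books' are the schema and table names from the example above, and the standard INFORMATION_SCHEMA layout is assumed):

SELECT CONSTRAINT_NAME, CONSTRAINT_TYPE
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE TABLE_SCHEMA = 'database' AND TABLE_NAME = 'Books'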

For further assistance, determine the full index definition for your column by querying the catalog or inspecting the table metadata.


Sources:

To disable all business services concurrently in InterSystems IRIS, you can take advantage of the Production Configuration page to perform bulk operations:

  1. Select Multiple Business Hosts:

    • Navigate to the Production Configuration page.
    • Use the Ctrl or Shift keys to select multiple business hosts.
      • Ctrl key: Maintains the existing selection and allows you to add or remove items individually.
      • Shift key: Allows you to select a contiguous set of business hosts.
  2. Switch to the Actions Tab:

    • Once you have selected multiple business hosts, the right panel automatically switches to the Actions tab.
    • The Actions tab includes buttons for enabling, disabling, and restarting the selected business hosts.
  3. Disable All Concurrently:

    • Click the Disable button in the Actions tab to disable all selected business hosts simultaneously.

This approach should save time compared to disabling each business service one at a time 123.
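
If you prefer to script the same operation, a rough ObjectScript sketch along the following lines should work. It assumes the Ens.Director.EnableConfigItem() and Ens.Config.Production APIs available in current interoperability versions; the class and method names are illustrative, so try it in a non-production namespace first:

Class Sample.Util.ProductionHelper
{

/// Disable every business service in the named production.
ClassMethod DisableAllServices(pProductionName As %String) As %Status
{
    set tSC = $$$OK
    set tProd = ##class(Ens.Config.Production).%OpenId(pProductionName,,.tSC)
    quit:$$$ISERR(tSC) tSC
    for i=1:1:tProd.Items.Count() {
        set tItem = tProd.Items.GetAt(i)
        // only touch items whose class derives from Ens.BusinessService
        continue:'$classmethod(tItem.ClassName,"%Extends","Ens.BusinessService")
        // third argument 0 defers applying the change to the running production
        set tSC = ##class(Ens.Director).EnableConfigItem(tItem.Name,0,0)
        quit:$$$ISERR(tSC)
    }
    // push all accumulated changes to the running production in one update
    if $$$ISOK(tSC) { set tSC = ##class(Ens.Director).UpdateProduction() }
    quit tSC
}

}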


Sources:

To convert a JSONStream to a JSON string, you can use one of the following options depending on the specific InterSystems capabilities:

  1. If the JSONStream holds JSON data and you need to convert it to a JSON string, you can parse the stream into a dynamic entity with %FromJSON() and then serialize it back with %ToJSON() (these methods belong to dynamic entities, not to stream classes):

    set jsonObject = {}.%FromJSON(jsonStream)
    set jsonString = jsonObject.%ToJSON()
    

    This returns the JSON as a string and also validates that the stream contains well-formed JSON. If the output needs to be written to another stream instead, you can pass an output stream as a parameter to %ToJSON() 12.

  2. If the JSON data is contained within a character stream and you’d like to convert it into a string to process further, you can read the stream to aggregate its content:

    set jsonString = ""
    while 'jsonStream.AtEnd {
       set jsonString = jsonString _ jsonStream.Read()
    }
    

    This approach aggregates each chunk read from the stream into one complete JSON string 1.

These processes ensure the JSONStream data is correctly formatted and available as a JSON string, facilitating its use in subsequent operations.


Sources:

To delete messages associated with specific components like “Wardview” and “WV Out” in the scenario described, and ensure proper cleanup:

  1. SQL Deletion Approach:
    The provided SQL statements could work to remove both message bodies and headers:

    DELETE FROM EnsLib_HL7.Message 
    WHERE ID IN (
        SELECT MessageBodyId FROM Ens.MessageHeader
        WHERE TargetConfigName = 'Wardview' OR TargetConfigName = 'WV Out'
    );
    DELETE FROM Ens.MessageHeader
    WHERE TargetConfigName = 'Wardview' OR TargetConfigName = 'WV Out';
    

    However, additional steps might be needed to properly maintain table integrity.

  2. Programmatic Approach for Purging Messages:

    • Orphaned messages (message bodies without headers) can result if the headers are deleted without corresponding cleanup in the body table. If this happens, orphan messages would need to be purged separately, as they won’t be cleaned up automatically.

    • You can programmatically delete messages by identifying orphaned messages. For this, use a query like the following:

      SELECT HL7.ID
      FROM EnsLib_HL7.Message HL7
      LEFT JOIN Ens.MessageHeader hdr
      ON HL7.ID = hdr.MessageBodyId
      WHERE hdr.MessageBodyId IS NULL;
      

      After identifying orphaned bodies, delete them programmatically or via SQL (see the sketch after this list).

  3. Utilizing Built-in Purge Task:
    If configured correctly, the built-in purge functionality can potentially handle this with some customization. The purge task can be tailored to delete only messages older than a certain age or matching specific filters, but note that the default purge task does not target specific components. You can examine and adjust the purge settings of the standard purge task class (Ens.Util.Tasks.Purge) described in the system documentation 12.

  4. Precautions:

    • Perform any deletion or purging operations in a test environment first to confirm the effects, as deleting headers or bodies improperly may lead to orphaned data.
    • Always back up your data before running potentially destructive operations 213.
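
As mentioned in step 2, orphaned bodies can also be removed programmatically. A rough sketch (the class and method names are illustrative; back up and test before running it against production data, and note that any associated SearchTable entries may still need separate cleanup):

Class Sample.Util.HL7Cleanup
{

/// Delete EnsLib.HL7.Message bodies that no longer have a corresponding message header.
/// Returns the number of bodies deleted.
ClassMethod PurgeOrphanedBodies() As %Integer
{
    set tDeleted = 0
    set tSQL = "SELECT HL7.ID FROM EnsLib_HL7.Message HL7 "
    set tSQL = tSQL_"LEFT JOIN Ens.MessageHeader hdr ON HL7.ID = hdr.MessageBodyId "
    set tSQL = tSQL_"WHERE hdr.MessageBodyId IS NULL"
    set tRS = ##class(%SQL.Statement).%ExecDirect(, tSQL)
    while tRS.%Next() {
        // delete each orphaned body by ID; errors are ignored here for brevity
        if $$$ISOK(##class(EnsLib.HL7.Message).%DeleteId(tRS.%Get("ID"))) {
            set tDeleted = tDeleted + 1
        }
    }
    quit tDeleted
}

}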

By following these steps, you can target messages for deletion without impacting overall system integrity.


Sources:

DC AI Bot · Apr 21

To separate the storage of extended classes such as Customers and Suppliers from the tempPersistentClass while keeping their functionality derived from the parent class, you need to define distinct storage locations for each subclass.

By default, a subclass inherits the storage structure of its parent class, with the subclass-specific fields stored as subnodes within the same global structure. If you want each subclass to use entirely independent storage, you can override the storage definition for the subclass by explicitly specifying separate globals.

For example:

Class Customers Extends tempPersistentClass
{
  Storage Default
  {
    <DataLocation>^CustomersD</DataLocation>
    <IdLocation>^CustomersD</IdLocation>
    <IndexLocation>^CustomersI</IndexLocation>
    <StreamLocation>^CustomersS</StreamLocation>
    <Type>%Storage.Persistent</Type>
  }
}

Class Suppliers Extends tempPersistentClass
{
  Storage Default
  {
    <DataLocation>^SuppliersD</DataLocation>
    <IdLocation>^SuppliersD</IdLocation>
    <IndexLocation>^SuppliersI</IndexLocation>
    <StreamLocation>^SuppliersS</StreamLocation>
    <Type>%Storage.Persistent</Type>
  }
}

This modification defines unique globals for each subclass (^CustomersD, ^SuppliersD, etc.). By doing this, the data for tempPersistentClass, Customers, and Suppliers will be stored separately, while the subclasses still retain their functional relationship to tempPersistentClass.

Be cautious about any modifications to the storage structure since changing the storage can affect data compatibility. Make sure to handle schema evolution carefully to avoid loss or corruption of data when transitioning existing records 123.


Sources:

DC AI Bot · Apr 21

There appears to be no direct SQL command in InterSystems Caché for listing all namespaces, since namespace enumeration falls outside the scope of ordinary SQL user privileges. Additionally, privilege errors such as “user system is not privileged for the operation” suggest that the user in question lacks sufficient authorization.

Ensure the user account has the appropriate system-level permissions for this kind of operation. Tools like Caché Monitor can also help visualize the instance and list namespaces without requiring extensive SQL commands or manual navigation 1.
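
As an alternative to SQL, namespaces can be enumerated programmatically. A minimal sketch, assuming the %SYS.Namespace class is available and the user has sufficient privileges to switch to the %SYS namespace:

zn "%SYS"
do ##class(%SYS.Namespace).ListAll(.ns)
set name = ""
for {
    set name = $order(ns(name))  quit:name=""
    write name, !
}

Here the ns array comes back subscripted by namespace name, so the loop simply walks and prints each one.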

Detailed guidance on overcoming privilege violations or alternative approaches is not covered in the retrieved documents. For direct solutions, consult the Developer Community for further assistance.


Sources:

DC AI Bot · Apr 16

To implement a custom HTTPS Business Operation supporting OAuth 2.0 Client Credentials for receiving HL7 data, a streamlined approach can leverage InterSystems IRIS’s built-in functionalities.

Key Steps:

  1. Configuration of OAuth 2.0 Settings:
    The recent version, InterSystems IRIS 2024.3, introduces simplified settings for OAuth 2.0 authentication with Client Credentials. In your Business Operation (based on the HTTP Outbound Adapter), new OAuth settings allow you to specify:

    • Access Token Placement (Header | Body | Query)
    • Grant Type (Password | JWT Authorization | Client Credentials) 1.
  2. OAuth 2.0 Client Definition:
    You must configure the OAuth Client within the System Management Portal:

    • Navigate to System Administration -> Security -> OAuth 2.0 -> Client.
    • Define the Issuer Endpoint of the OAuth server, SSL/TLS settings, and supported grant type “Client Credentials.”
    • Enter the client-specific configuration (Client ID, Client Secret), which must align with the requirements of the remote authentication server 2.
  3. Authorization and Token Handling in Custom Code:
    In the custom Business Operation:

    • Utilize the HTTP Outbound Adapter to handle requests.
    • Use the %SYS.OAuth2 classes to manage tokens. For example:
      Method AuthorizeMe(Output AccessToken As %String) As %Status {
          Set isAuth = ##class(%SYS.OAuth2.AccessToken).IsAuthorized("TestClientA",,,.accessToken,.responseProperties,.error)
          If 'isAuth {
              $$$QuitOnError(##class(%SYS.OAuth2.Authorization).GetAccessTokenClient("TestClientA",,,.error))
              $$$QuitOnError(##class(%SYS.OAuth2.AccessToken).IsAuthorized("TestClientA",,,.accessToken,.responseProperties,.error))
          }
          Set AccessToken = accessToken
          Quit $$$OK
      }
      
      • Replace "TestClientA" with the Client Name configured in the Management Portal 2.
  4. Payload Transmission:
    Place the access token in the request’s HTTP headers for authentication:

    Set tSC = pRequest.HTTPHeaders.SetAt("Bearer "_AccessToken, "Authorization")
    

    This lets the remote server authenticate your application using OAuth Client Credentials 2.

Benefits:

  • You don’t need to write complex custom scripts for OAuth handling; you can rely on InterSystems IRIS built-in features and toolkits.
  • These configurations simplify the exchange of HL7 data securely using HTTPS and OAuth 2.0.

Additional References:

If issues arise during implementation or additional customizations are needed, consult the OAuth Integration documentation and the System Administration settings for advanced configurations 12.


Sources:

DC AI Bot · Apr 12

To base a target table on a complex query without predefining its structure, you can use the CREATE FOREIGN TABLE command with the QUERY option. This enables the table to mirror the schema of the query results dynamically, as well as project the data directly from the query on an external server.

Here’s an example syntax for this approach:

CREATE FOREIGN TABLE [IF NOT EXISTS] table-name
  [ ( column type, column2 type2, ... ) ]
  SERVER server-name QUERY query [NOVALIDATE];

  • In this command:
    • table-name is the name of the foreign table you are creating.
    • server-name is the name of the defined foreign server.
    • query is the actual SQL query to retrieve data from the external source.
  • The QUERY option allows you to specify the external SQL query and dynamically fetch its schema and data.

If the external server supports such functionality, column definitions (names and types) are copied directly from the query projection, removing the need to define them manually 123.
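
A hypothetical example (the table, server, and query here are placeholders; the foreign server is assumed to have been created beforehand with CREATE FOREIGN SERVER, and the exact quoting rules for the QUERY clause should be checked against your version’s documentation):

CREATE FOREIGN TABLE Sales.TopCustomers
  SERVER Sales.ExternalDB
  QUERY 'SELECT customer_id, SUM(total) AS revenue FROM orders GROUP BY customer_id'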


Sources:

DC AI Bot · Apr 12
  1. Setting EOTOPTIONAL=1 enables InterSystems Ensemble products to receive ASTM E1394 messages not separated by ENQ/EOT control characters, allowing consecutive messages to run together on a TCP connection. It can impact how Ensemble interprets EOT after ACK is sent during link tests. This parameter should only be enabled if the device supports this mode; otherwise, it might cause miscommunication issues such as timeouts or incorrect connection status being displayed by the instrument 123.

  2. To ensure timely acknowledgment processing, the Response Timeout setting in ASTM Business Services specifies the maximum wait time for replies before considering responses incomplete. Setting this parameter to -1 makes the system wait indefinitely, removing the risk of timeouts due to delays. Additionally, a subclass modification can be made with custom logic for response speed adjustments 14.

  3. No specific known issues with the EnsLib.EDI.ASTM.Service.TCPService element involving NAT/public IP connections were found. However, since the communication involves a serial-to-TCP converter, ensure proper configurations of both the DIGI device and services. Ensemble prohibits business services from initiating transmissions outside Ensemble, meaning that components should be carefully paired and tested 5.

  4. Recommended configurations include:

    • Configuring Response Timeout to avoid undue timeouts.
    • Using Get Reply settings based on instrument behavior—clear this if ACK replies are not provided by the external ASTM system.
    • Other settings like FRAMEVALIDATION, ACKMODE, and SESSIONTIMEOUT can be set based on the format requirements and communication stability 41.
  5. You may consider custom subclassing or parameter adjustments to improve ACK response speed. For instance, ensuring the proper setup and procedure in generating and replying to link test ACK messages through custom logic or faster processing logic in subclasses is essential. Network-level optimization (e.g., TCP ACK frequency settings at the OS level) should also be considered in the troubleshooting process for reducing acknowledgment delays 412.

If issues persist, it may be beneficial to review the customizations and communicate further with the Developer Community for targeted support.


Sources:

DC AI Bot · Apr 10

The InterSystems VS Code extensions do not directly support importing, in client-side mode, all of the files referenced by a server-side project (.prj) file. You need to manually select and export files from the server to your client-side workspace as follows:

  1. Navigate to the InterSystems view container by selecting the InterSystems button.
  2. Within the ObjectScript Explorer view, locate and select the documents (files) to export from the server to your client workspace.
  3. Right-click on your selection and choose “Export” from the context menu. Alternatively, use the Command Palette to run the “ObjectScript: Export Code from Server” command and select the desired files for export.
  4. Files are exported to the /src/ sub-folder within your workspace root by default. This can be configured using the “objectscript.export.folder” setting 12.

While this process allows exporting multiple files at once, automatic synchronization of a project’s entire scope using its .prj file is not mentioned. Synchronization of individual files or managing dependencies across a project still requires careful configuration and manual verification 13.


Sources:

InterSystems provides official IRIS Docker images that are production-ready. These images are based on the Open Container Initiative (OCI) specification, making them deployable on any OCI-compliant runtime engine and suitable for both private and public cloud environments. Durable %SYS volumes and various best practices are supported to ensure data persistence, scalability, and reliability in production environments 123.

For InterSystems HealthShare products, there isn’t specific information about ready-to-use Docker images included in the retrieved documents.

This is beyond my current knowledge regarding HealthShare. Please ask the Developer Community for further assistance.


Sources:

When you use EnsLib.SQL.Snapshot in your production, there are significant differences between extending EnsLib.SQL.Snapshot directly and using it as a property within another class. Here are the distinctions:

EnsLib.SQL.Snapshot as a Property:

  • If EnsLib.SQL.Snapshot is referenced directly in the response message (for example, a property of a response class), its data is stored in the global ^Ens.AppData and will not be cleaned up by the standard purging process for messages. This can lead to growth in the global’s data over time, which creates storage concerns in production systems 1.
  • Avoiding direct use of EnsLib.SQL.Snapshot as a property in return messages is recommended. Instead, map snapshot data to other types (like a custom class or JSON strings) 1.
  • Alternatively, you can implement the %OnDelete method within the class to handle purging of snapshot data yourself by explicitly deleting the referenced snapshot data 1.

Extending EnsLib.SQL.Snapshot:

  • Extending EnsLib.SQL.Snapshot means creating a new class derived from it. This approach may seem convenient, but it does not inherently solve the problem of snapshot data accumulating in ^Ens.AppData. The purge service won’t automatically remove this snapshot data if it is passed as part of the response message 1.
  • EnsLib.SQL.Snapshot is designed to persist result set data and is primarily used with SQL adapters for production.

Recommendation for Your Use Case:

  • Considering that a substantial rewrite is otherwise involved, the fastest resolution may be to implement specialized cleanup methods, such as %OnDelete, within the relevant classes. This does not eliminate rework entirely, but it provides control over snapshot lifecycle management without completely redesigning your processes 1.


Sources:

To identify the exact cause of errors during analysis in the FHIR SQL Builder, you can follow these steps:

  1. Check the Status Column: On the Builder home page, locate the analysis and review its status column. Possible values include Running, Stopping, Stopped, Completed, or Errored. If the analysis is in Errored status, there may be relevant information indicating the specific issues encountered during the task 1.

  2. Review Browser Debugger: Use your browser’s debugging tools to gather additional information on the error. Go to the Network tab and review HTTP requests for interesting errors related to the analysis setup 2.

  3. Verify Configuration Details:

    • Ensure the FHIR repository configuration is accurate (e.g., correct DNS host, port, credentials, and SSL settings).
    • Verify that the Administrator and FSB_Admin roles are assigned correctly to the user configuring the analysis 12.
  4. Connection Issues (Cloud): If working in a cloud environment:

    • Confirm that external connections and rules are enabled.
    • Check the TLS setup for the connection if using xDBC 2.

By investigating these areas, you can pinpoint the underlying cause of the error and address configuration or setup-related issues. 21


Sources:

To disable the Management Portal on a deployed InterSystems IRIS container, you can add a specific configuration in the merge.cpf file. Set the internal web server port to 0, which effectively disables the Management Portal:

[Startup]
WebServerPort=0

With WebServerPort set to 0, the instance’s internal web server does not listen, so the Management Portal cannot be reached. This change is typically supplied in a configuration merge file (merge.cpf) when setting up IRIS instances 1.


Sources:

DC AI Bot · Mar 30

To handle NULL values in a DeepSee pivot table and replace them with zero, you can use the ISNULL function in your MDX query. The ISNULL function allows you to check for NULLs and replace them with a specified value, such as zero. For example, you could modify your calculated measure as follows:

ISNULL(<YourMeasure>, 0)

Here, <YourMeasure> is the measure in the pivot table that might contain NULL values. If the value is NULL, this function ensures it is replaced with 0 1.
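
For example, a calculated measure defined roughly as follows (the cube and measure names are placeholders) substitutes 0 wherever the underlying measure is null:

WITH MEMBER [Measures].[SafeAmount] AS 'ISNULL([Measures].[Amount], 0)'
SELECT [Measures].[SafeAmount] ON 0 FROM [MyCube]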


Sources:

DC AI Bot · Mar 29

For debugging a classmethod in VS Code and resolving potential issues, consider the following steps based on the provided information:

  1. Ensure Configurations and Environment Are Properly Set:

    • Install necessary extensions for debugging ObjectScript, such as the InterSystems ObjectScript Extension Pack. Proper configurations in VS Code such as launch.json and server settings are needed for smooth debugging 1.
    • Set up the right namespace and connect to the correct instance while testing the classmethod. If you’re unsure of the namespace, verify it in the terminal or configuration files 2.
  2. Compilation and Connection Issues:

    • Before debugging, compile the classes with no errors. You can observe compilation details in the Output tab of VS Code under the ObjectScript section 2.
    • Check that you are connected to the right namespace using the zn command in the terminal to switch namespaces 2.
  3. Handling Debugging Errors Such as ‘Breakpoint Cannot Be Mapped’:

    • Ensure the class is recompiled after changes. If modifying just a comment, force a recompile by making minor changes in the code, as the cuk flag may optimize out insignificant updates 3.
    • Verify the debugging setup in your launch.json, including host and connection configurations 4.
  4. Debugging Process and Common Steps:

    • Place breakpoints and test the classmethod by invoking it directly through debug commands.
    • Utilize helpful debugging commands and tools like inspecting variables, using the Debug Console for expressions, and ensuring communication issues with WebSockets are resolved if using REST APIs or web-based functionalities 56.
  5. Troubleshooting Debugger Issues:

    • If issues persist, enable API debug logging using the commands:
      Kill ^IRIS.Temp.Atelier("debug")
      Set ^IRIS.Temp.Atelier("debug") = 1
      

      Then observe log outputs from the ^IRIS.Temp.Atelier("debug") global for further diagnosis and include logs in problem reports 76.

Following these practices should help resolve debugging challenges in VS Code related to classmethods in InterSystems IRIS. If the specific error persists, check related documentation or consider consulting the wider developer community. 132


Sources:

DC AI Bot · Mar 29

The issue of receiving ASTM messages in an unexpected format while using EnsLib.EDI.ASTM.Service.TCPService usually comes down to how the sending system formats the messages, specifically the line terminators or control characters it uses. Here are possible considerations and actions:

  1. To handle cases where line terminators are inconsistent (for example, a bare <CR> instead of <CR><LF>), you might need to extend the base class and customize the processing logic, for instance by overriding methods like OnProcessInput() to normalize the $C(13) terminators before the message is parsed 12.

  2. If you are receiving consecutive ASTM E1394 messages without the standard ENQ/EOT control characters and they appear as merged, you can use an extended business service with the EOTOPTIONAL parameter. Adding this parameter to allow reception of consecutive messages not separated by control characters may help:

    Parameter EOTOPTIONAL = 1;
    

    After this adjustment, use the custom subclass within your production 34.
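
A minimal sketch of such a subclass (the class name is illustrative):

Class Custom.ASTM.TCPService Extends EnsLib.EDI.ASTM.Service.TCPService
{

/// Accept consecutive ASTM E1394 messages that arrive without ENQ/EOT separators
Parameter EOTOPTIONAL = 1;

}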

If these options don’t resolve the issue, you might need specific customization aligned with the format provided by the lab machine or to examine logs for further detail.


Sources:

DC AI Bot · Mar 26

To address your questions:

  1. Missing Classes Tab in VS Code:
    To browse all classes in a connected namespace while using VS Code with InterSystems IRIS, you need to connect the ObjectScript Explorer to the server. Click the InterSystems icon at the bottom of the Activity Bar in VS Code to access the ObjectScript Explorer and choose your server and namespace. Ensure your workspace is set up properly by following the steps for configuring the server and namespace in the VS Code ObjectScript extension 1.

  2. Inspector-like Features in VS Code:
    While VS Code does not replicate the exact Inspector functionality of Studio, the InterSystems ObjectScript Extension provides support for autocomplete and exploration of properties, methods, and other elements. Ensure that the InterSystems Language Server and related extensions are properly installed and configured. This provides features like viewing methods or properties while typing and navigating easily between classes and packages 2.

  3. Automatic Reflection of Changes in IRIS Namespace:
    When you make changes or create new classes using server-side editing in VS Code, these will automatically reflect in your IRIS namespace. Proper configuration ensures that changes are synced with the server. Note that you can configure sync settings within the VS Code extensions for seamless server updates when saving or modifying files 1.

For better usage, verify that you have installed the necessary VS Code extensions such as the InterSystems ObjectScript Extension Pack, and ensure your server connection is properly configured.


Sources:

DC AI Bot · Mar 25

To obtain the actual query syntax or use the defined ClassQuery, such as %Library.File:FileSet, you can use the %SQL.Statement or the %Library.ResultSet interface to prepare and execute the query programmatically. Here are the detailed steps:

  1. Using the %SQL.Statement Interface:

    • Prepare the class query using the %PrepareClassQuery method.
    • Execute the prepared query using %Execute, specifying the parameters needed by the query, such as the directory, file pattern, etc.
    • Iterate over and handle the results using methods like %Next() and %Get(field).

    Example:

    ClassMethod ShowDir(dir As %String = "", wildcard As %String = "", sort As %String = "Name") {
        set stmt = ##class(%SQL.Statement).%New()
        set status = stmt.%PrepareClassQuery("%File", "FileSet")
        if $$$ISERR(status) { write "%Prepare failed:" do $SYSTEM.Status.DisplayError(status) quit }
    
        set rset = stmt.%Execute(dir, wildcard, sort)
        if (rset.%SQLCODE '= 0) { write "%Execute failed:", !, "SQLCODE ", rset.%SQLCODE, ": ", rset.%Message quit }
    
        while rset.%Next() {
            write !, rset.%Get("Name"), " ", rset.%Get("Type"), " ", rset.%Get("Size")
        }
        if (rset.%SQLCODE < 0) { write "%Next failed:", !, "SQLCODE ", rset.%SQLCODE, ": ", rset.%Message quit }
    }
    

    This method prepares and runs the FileSet query, allowing you to fetch and process its results 1.

  2. Using the %Library.ResultSet Interface:

    • It is considered the older interface but is still functional.
    • You prepare and execute the query similarly by specifying the query name.
    • Access the column values through the Data multidimensional property (for example, rs.Data("Name")).

    Example:

    set rs = ##class(%Library.ResultSet).%New("%Library.File:FileSet")
    set sc = rs.Execute("c:\", "*.txt")
    while rs.Next() { write !, rs.Data("Name") }
    

Both methods enable obtaining and executing the “real” query syntax of the FileSet query, providing flexibility depending on the use case and preferred interface 23.

If you prefer a more direct approach, you can use the auto-generated method FileSetFunc, which acts as a simplified wrapper around the query and directly returns the result set 2.
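
For example (assuming the generated FileSetFunc wrapper is projected for this query in your version):

set rset = ##class(%Library.File).FileSetFunc("c:\", "*.txt")
while rset.%Next() { write !, rset.%Get("Name") }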


Sources:

DC AI Bot · Mar 24

The issue with the system state being “Hung” in an InterSystems IRIS environment on Kubernetes might be related to monitoring services or errors caused by resource configurations.

  1. Explanation of Error Messages:

    • Error <PROTECT>KillAlive+1^%SYS.CDIRECT in SERVERS could be indicative of resource locking or monitoring-related issues on the system.
  2. Stopped System Monitoring Manager (^%SYSMONMGR):

    • Activating ^%SYSMONMGR resolves the “Hung” state and returns the system to an operational mode. ^%SYSMONMGR monitors resources and takes actions to ensure stability.
  3. Possible Causes for Hung State:

    • Deadlocks often happen in IRIS due to overutilization of shared memory or disk being full, leading to the system being stuck. Issues like setting insufficient values for locksiz or running out of disk space for journals are common contributing factors 1.
    • Overloaded memory configuration (e.g., bbsiz, gmheap, etc.) can also cause unexpected memory usage spikes leading to system freezing 2.
  4. Mitigation and Recommendations:

    • Activate ^%SYSMONMGR to maintain resource monitoring and use proper configurations for memory allocation (gmheap, buffer sizes).
    • Regularly reviewing and adjusting system settings like shared memory related parameters can prevent such issues.
    • Utilize K8s health metrics (e.g., liveness probes) to automatically restart unresponsive pods 2.
    • For troubleshooting further, analyzing messages.log for system events and consulting support could be beneficial 1.

If further clarification or investigation is required, reaching out to the Developer Community for more advanced help would be advised 12.


Sources:

DC AI Bot · Mar 23

To obtain a token from the token endpoint using OAuth 2.0, the correct method involves making a POST request. The body of the request should use the required grant_type along with other relevant parameters. Here are the details:

  1. To use the password grant type (grant_type=password), the request body should typically include:
    • client_id: This is the client application identifier.
    • client_secret: The secret associated with the client application.
    • grant_type: This should be set to password.
    • username: The resource owner’s username.
    • password: The corresponding password.
    • scope: Optionally specify desired scope(s) delimited by spaces.

For example, a form-encoded request body might look like this (all values are placeholders):

grant_type=password&client_id=clientid&client_secret=clientsecret&username=resourceowner&password=resourceownerpassword&scope=scope1

This body is sent in a POST request to the /oauth2/token endpoint 1.

  2. If you’re using the authorization code grant type (grant_type=authorization_code), then the body should contain:
    • grant_type: Set to authorization_code.
    • code: The authorization code obtained from the initial authorization step.
    • redirect_uri: The same redirect URI used when requesting the authorization code.
    • client_id and client_secret: Identifiers for your application.

Ensure the Content-Type header of the request is set to application/x-www-form-urlencoded as the body is usually URL-encoded 23.

Additionally, ensure that the authorization server configuration supports the requested grant type. Details such as provisioning keys or additional fields may depend on the specific implementation and need to be verified in your setup 1. If errors persist because of unknown grant types, confirm that the relevant settings in the authorization server are correct 2.
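
As a concrete illustration of the form-encoded POST, here is a sketch only: the host, SSL configuration name, credentials, and token path are placeholders to adapt to your authorization server.

Class Sample.OAuth.TokenClient
{

/// Request a token with the password grant using a form-encoded POST body.
ClassMethod GetTokenByPassword() As %DynamicObject
{
    set req = ##class(%Net.HttpRequest).%New()
    set req.Server = "auth.example.com"
    set req.Https = 1
    set req.SSLConfiguration = "MySSLConfig"
    // InsertFormData makes the request body application/x-www-form-urlencoded
    do req.InsertFormData("grant_type", "password")
    do req.InsertFormData("client_id", "clientid")
    do req.InsertFormData("client_secret", "clientsecret")
    do req.InsertFormData("username", "resourceowner")
    do req.InsertFormData("password", "resourceownerpassword")
    do req.Post("/oauth2/token")
    // the token endpoint returns a JSON document; parse it into a dynamic object
    quit {}.%FromJSON(req.HttpResponse.Data)
}

}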


Sources:

DC AI Bot · Mar 18

The issue originates from how numeric and special values like “NaN” are serialized: they are emitted without double quotes. Here are key insights and a possible resolution:

  1. JSON Special Values:

    • JSON natively represents the special values null, true, and false without double quotes, and they are stored in lowercase in JSON structures. NaN, however, is not a standard JSON token, so a serializer that emits it unquoted produces output many consumers will reject; forcing double quotes around NaN or similar values therefore requires additional processing 1.
  2. Serialization Approach:

    • Some serialization methods (like %ZEN.Auxiliary.jsonProvider) have parameters to control how values are serialized. In older methods, numeric values might always be treated as numbers regardless of their original property type unless additional adjustments are made to override this behavior 2.
  3. Custom Handling Approach:

    • You can manage this serialization issue by explicitly treating properties like strings in your custom classes. This ensures that numbers or special values are always serialized enclosed in double quotes. For example:

      Class dc.Test Extends %RegisteredObject {
        Property Value As %String;
      }
      

      When outputting the JSON stream:

      set object = ##class(dc.Test).%New()
      set object.Value = "NaN"
      set status = ##class(%ZEN.Auxiliary.jsonProvider).%WriteJSONStreamFromObject(.stream, object)
      

This ensures the value is serialized as a quoted string ("NaN") rather than as an unquoted token, while standard JSON behavior is preserved for genuinely numeric values 2.

Let me know if you need a deeper explanation or examples to refine your handling further!


Sources:

DC AI Bot · Mar 18

To track down what is creating orphaned HL7 messages in your system:

  1. Identify Active Orphan Generation:
    Use a SQL query to locate orphaned HL7 messages. An orphaned message is one that exists but is missing from Ens.MessageHeader or Ens_Util.IOLogObj. The suggested query is:

    SELECT HL7.ID, HL7.DocType, HL7.Envelope, HL7.Identifier, HL7.MessageTypeCategory, HL7.Name, HL7.OriginalDocId, HL7.ParentId, HL7.TimeCreated
    FROM EnsLib_HL7.Message HL7
    LEFT JOIN Ens.MessageHeader hdr ON HL7.Id=hdr.MessageBodyId
    LEFT JOIN Ens_Util.IOLogObj ack ON HL7.Id = ack.InObjectId
    WHERE hdr.MessageBodyId IS NULL AND ack.InObjectId IS NULL
    

    This will identify message bodies with no headers or IO log entries 12.

  2. Common Causes:

    • Negative ACKs: If Operations’ Reply Action Codes process Negative ACKs as “Completed,” they can create orphaned messages. Review these configurations 1.
    • Schema Mismatches: Incoming ACKs that do not adhere to your defined schema will be stored as orphaned messages. Correct the schema to match the ACKs received 12.
    • Other Programming Errors: Look for code that creates and saves EnsLib.HL7.Message objects but doesn’t send them anywhere, e.g., via %New, ImportfromString, or ImportfromStream methods. Analyze your code flow for situations where a message may not be forwarded or processed entirely 345.
  3. Debugging Process:

    • Use the creation timestamp (TimeCreated) of the orphaned messages to narrow down potential sources. Investigate operations, processes, or services active during this timeframe 35.
    • Check configurations such as ArchiveIO or message routing rules, especially for conditions using “WHEN” statements without “OTHERWISE” fallbacks 5.

By addressing issues like schema mismatches, reviewing operation settings, and ensuring complete processing of messages, you can prevent the generation of orphan HL7 messages 14235.


Sources:

DC AI Bot · Mar 12

To address the significant growth in your namespace caused by Ens.AppData related to the usage of EnsLib.SQL.Snapshot in Business Process Language (BPL):

  1. Issue with cleaning EnsLib.SQL.Snapshot:
    The snapshot data is not automatically cleaned during purge operations if it is used as a property within response messages. This results in the global ^Ens.AppData growing indefinitely. You can clean this manually in two steps:

    • Stop the production.
    • Run do ##class(Ens.Adapter).ClearStaticAppData("EnsLib.SQL.Snapshot") in the terminal of the relevant namespace to clear EnsLib.SQL.Snapshot data from ^Ens.AppData.
    • Execute the SQL command TRUNCATE TABLE EnsLib_SQL.Snapshot to clean up EnsLib.SQL.Snapshot entries from the database 1.
  2. Defining %OnDelete Method:
    Implementing the %OnDelete method in the class holding the EnsLib.SQL.Snapshot property ensures proper cleanup of snapshot data during message deletion. Here is an example:

    Class ResponseMessage Extends Ens.Response  
    {
        Property SnapshotProp As EnsLib.SQL.Snapshot;
        ClassMethod %OnDelete(oid As %ObjectIdentity) As %Status [ Private, ServerOnly = 1 ]
        {
            set tSC = $$$OK
            set id = $$$oidPrimary(oid)    
            set obj = ..%OpenId(id)
            if (obj.SnapshotProp '= "") 
            {
                set tSC = ##class(EnsLib.SQL.Snapshot).%DeleteId(obj.SnapshotProp.%Id())
            }
            return tSC
        }
    }
    

    This ensures that associated snapshot data is deleted alongside its parent message 1.

  3. Alternative Approaches:

    • Avoid using EnsLib.SQL.Snapshot as a direct property. Instead, map the data to another object type (e.g., a JSON string or custom class objects). This prevents EnsLib.SQL.Snapshot data from being stranded in the global ^Ens.AppData 1.
    • Periodically review and manage orphaned data or snapshots using cleanup tasks implemented in your production environment, ensuring your database remains manageable 2.

By implementing these methods, you can reduce the growth in your namespace and ensure that the cleanup of EnsLib.SQL.Snapshot aligns with your application’s lifecycle. 12


Sources:

DC AI Bot · Mar 12

To count suppressed alert occurrences within the suppression window and obtain additional details such as the source component and total trigger count, you might need to implement customizations, as there is no direct, built-in method available.

  1. Tracking Suppressed Alerts:

    • You can customize your alert-handling logic by integrating additional measures to count suppressed occurrences. For instance, you can use a custom global or table in your solution to track the count of these suppressed occurrences against each unique alert [[1]](https://community.intersystems.com/post/ensemble-suppressing-multiple-alerts-eg-naks-validation-etc-using-ensalert-process).
  2. Using the IsRecentManagedAlert() Function:

    • The Ens.Alerting.Rule.FunctionSet.IsRecentManagedAlert() function can be used in your alert rules to suppress repeated alerts. Additionally, it can automatically add a reoccurs action to the associated managed alert that encapsulates the repeated occurrence history within the suppression window [[1]](https://community.intersystems.com/post/ensemble-suppressing-multiple-alerts-eg-naks-validation-etc-using-ensalert-process)[[2]](https://docs.intersystems.com/supplychain20251/csp/docbook/DocBook.UI.Page.cls?KEY=EMONITOR_alerts&ext=supplychain2025.1).
  3. Custom Functions and Persistent Storage:

    • You may enhance alert handling by creating a business rule or a business process to extend logging and manipulation related to the alerts.
    • A custom function within your routing or handling logic could log every suppression event into a database table or global with fields for the component source, alert text, timestamp, and reoccurrence counts [[2]](https://docs.intersystems.com/supplychain20251/csp/docbook/DocBook.UI.Page.cls?KEY=EMONITOR_alerts&ext=supplychain2025.1). A minimal sketch of such a helper appears after this list.
  4. Audit through the Managed Alerts Viewer:

    • The Managed Alert Viewer provides access to alert histories. The table displays detailed fields including reoccurrence actions recorded against alerts using the IsRecentManagedAlert() function [[3]](https://docs.intersystems.com/irisforhealth20251/csp/docbook/DocBook.UI.Page.cls?KEY=EMONITOR_alerts&ext=irisforhealth2025.1)[[2]](https://docs.intersystems.com/supplychain20251/csp/docbook/DocBook.UI.Page.cls?KEY=EMONITOR_alerts&ext=supplychain2025.1).
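
A rough sketch of such a logging helper (the global name, key scheme, and class/method names are all illustrative):

Class Sample.Alerting.SuppressionLog
{

/// Record one suppressed occurrence of an alert so it can be counted and reported later.
ClassMethod RecordSuppressed(pSource As %String, pAlertText As %String)
{
    // one counter node per source + alert text, plus a timestamped detail node
    set key = pSource_"|"_$zcrc(pAlertText, 7)
    set count = $increment(^MyApp.SuppressedAlerts(key))
    set ^MyApp.SuppressedAlerts(key, count) = $listbuild($zdatetime($ztimestamp, 3), pSource, pAlertText)
}

}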

By combining the built-in capabilities with the customization above, you’ll be able to achieve greater observability and aggregated metrics for alerts during their suppression period.


Sources:

DC AI Bot · Mar 10

To create a task that triggers the execution of a Ens.BusinessService using the EnsLib.SQL.InboundAdapter, the general approach involves:

  1. Task Definition: Using %SYS.Task.Definition to define task logic.
  2. Business Service: Ensure proper configuration of the business service to use the SQL adapter.
  3. Trigger Mechanism: Integrating the task and service for seamless execution.

Task Implementation:

Here’s an example approach that can address your specific use case:

  • Extend %SYS.Task.Definition for custom task logic.
  • Use the OnTask() method to instantiate and invoke the business service.
Class Sample.Task.TriggerSQLServiceTask Extends %SYS.Task.Definition
{
    /// The name of the Business Service to be executed
    Property BusinessServiceName As %String [Required];

    Method OnTask() As %Status
    {
        // Create BusinessService object
        #dim businessServiceInstance As Ens.BusinessService
        set tSC = ##class(Ens.Director).CreateBusinessService(..BusinessServiceName, .businessServiceInstance)
        $$$ThrowOnError(tSC)

        // Invoke the service's OnTask or similar method
        quit businessServiceInstance.OnTask()
    }
}

You define a BusinessServiceName property in the task class to indicate which service should be triggered. When the task runs, it dynamically creates the specified business service and invokes it.

Scheduling the Task:

The task can be registered with the system’s Task Manager by defining its scheduling parameters, which allow daily or specific-time execution of the query.

Key Considerations for Your Business Service:

  • Configure the EnsLib.SQL.InboundAdapter to specify the SQL query and other relevant settings (e.g., DSN or database connectivity details).

  • Use the OnProcessInput() method in the service to handle any incoming snapshots or inputs from the SQL query results. For example:

    Method OnProcessInput(pInput As EnsLib.SQL.Snapshot, pOutput As %RegisteredObject) As %Status
    {
        set req = ##class(OSU.Workday.Messages.WorkdayTerminations).%New()
        set req.EMPLID = pInput.Get("EMPLID")
        // Add additional field mappings
        ...
        quit ..SendRequestSync("TargetConfigName", req)
    }
    

Other Options:

If you do not want to handle SQL execution directly in the task but let the Business Service control it:

  • Ensure the service is appropriately configured to handle its SQL logic independently.
  • The task simply activates the service for its execution, relying on the internal adapter setup for query execution.

For more tailored implementation details, please refer to the examples provided in the source where tasks and services interact:

  • Custom SQL adapters and Business Service connections 1.
  • Methods for defining task-triggered services 21.


Sources:

DC AI Bot · Mar 10

To handle the dependencies when importing FHIR packages effectively, you can use the following tools and methods supported by InterSystems:

  1. Use HS.FHIRMeta.Load.NpmLoader API:

    • The HS.FHIRMeta.Load.NpmLoader.importPackages() API method can be utilized to import FHIR packages programmatically, including those with dependencies. This method supports importing JSON-based conformance resources from directories containing FHIR packages.
    • Example:
      do ##class(HS.FHIRMeta.Load.NpmLoader).importPackages($lb("C:\fhir-packages\mypackage"))
      

      Here, you can include the path to the packages and list dependencies directly within the package.json file in the proper format 123.

  2. Edit and Manage FHIR Server Endpoints:

    • Using the InterSystems Management Portal, you can navigate to the FHIR Server Management section to edit an existing endpoint or create a new one. This administration allows you to add or manage custom packages through a user-friendly interface, and ensures that dependencies are handled appropriately. For instance:
      • Go to Home > Health > [FHIRNamespace] > FHIR Server Management.
      • Select a server endpoint, and choose “Edit.”
      • Use the Custom Packages dropdown to add dependent packages 12.
  3. Programmatic Dependency Management:

    • When creating or modifying a FHIR endpoint, you can use APIs like HS.FHIRServer.Installer.AddPackagesToInstance() to add specific packages dynamically. This facilitates targeting an endpoint with the exact packages and dependencies required for operation 2.
  4. Package Dependency Resolution in package.json:

    • When working with package dependencies, ensure that the dependencies field in package.json includes all required dependencies for the package. This ensures compatibility and reduces manual efforts during import and setup 12.

These approaches streamline the process of importing FHIR packages along with handling dependencies effectively across different FHIR IGs and implementations 123.


Sources:

The following points are relevant to your questions:

  1. Cache/WIJ Settings and WIJ Recovery:
    InterSystems IRIS automatically checks the Write Image Journal (WIJ) file during startup to determine if recovery is needed. Abnormal shutdown conditions can cause databases to enter recovery if the WIJ indicates pending updates. The WIJ recovery process ensures data integrity by restoring or comparing blocks depending on when the crash occurred during the write protocol phases 12.

    You can adjust the WIJ directory and set a target size for the WIJ. Modifying these settings is possible via the Management Portal under System Administration > Configuration > System Configuration > Journal Settings. Restarting InterSystems IRIS is required only for clustered instances and not standalone instances. Additionally, entering a non-zero target size for the WIJ reserves space on the storage device, which can improve stability 34.

  2. Database, WIJ, and Journal Settings:
    Placing the database, WIJ, and journal files in the same snapshot group might impact performance and recovery. Keeping these files together risks consistency due to snapshot delays or concurrent operations. Splitting these onto separate groups is often recommended for optimized recovery and performance 43.

  3. WIJ Mismatch Errors:
    A mismatch error with the WIJ could originate from improper write completions before crashes. Using the same snapshot group might exacerbate such issues due to delayed writes or inconsistencies in file updates during snapshots. Adjusting configurations as recommended above could mitigate these errors 23.

Based on this, consider separating WIJ and journal files from the database storage to reduce the risk of mismatch errors and improve recovery outcomes. 1234


Sources: