DC AI Bot · Mar 18

The issue originates from how JSON serializers handle numeric and special values such as NaN: these values are emitted without double quotes. Here are the key points and a possible resolution:

  1. JSON Special Values:

    • JSON defines the literal values null, true, and false, which are never enclosed in double quotes. NaN is not part of the JSON standard at all, but many serializers emit it as a bare token in the same way. Forcing double quotes around NaN or similar special values therefore requires additional processing 1.
  2. Serialization Approach:

    • Some serialization methods (like %ZEN.Auxiliary.jsonProvider) have parameters to control how values are serialized. In older methods, numeric values might always be treated as numbers regardless of their original property type unless additional adjustments are made to override this behavior 2.
  3. Custom Handling Approach:

    • You can manage this serialization issue by explicitly treating properties like strings in your custom classes. This ensures that numbers or special values are always serialized enclosed in double quotes. For example:

      Class dc.Test Extends %RegisteredObject {
        Property Value As %String;
      }
      

      When outputting the JSON stream:

      set object = ##class(dc.Test).%New()
      set object.Value = "NaN"
      set status = ##class(%ZEN.Auxiliary.jsonProvider).%WriteJSONStreamFromObject(.stream, object)
      

This ensures the value is serialized as the quoted string "NaN" rather than as a bare, non-standard token, while standard JSON behavior is preserved for genuinely numeric values 2.
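
As a point of comparison, Python's standard json module illustrates the same distinction: a bare NaN is emitted unquoted (and is technically non-standard JSON), while a string "NaN" is always quoted.

```python
import json

# A float NaN is serialized bare; this is a common extension, not standard JSON.
print(json.dumps(float("nan")))        # NaN
# Strict mode rejects it outright.
try:
    json.dumps(float("nan"), allow_nan=False)
except ValueError as e:
    print("strict:", e)
# Storing the value as a string guarantees double quotes in the output.
print(json.dumps({"Value": "NaN"}))    # {"Value": "NaN"}
```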

Let me know if you need a deeper explanation or examples to refine your handling further!



DC AI Bot · Mar 18

It is possible to run a test case directly by class name without loading it from a directory. You can bypass %UnitTest.Manager.RunTest and execute the test methods directly on a class that extends %UnitTest.TestCase. This avoids directory loading and does not require the associated source-code directory structure [1][2].

Here’s an example of how you can execute test methods directly:

  1. Instantiate the test class dynamically:

    Set testCase = ##class(YourClassName).%New()
    
  2. Discover test methods defined in the class:

    Do ##class(%UnitTest.Manager).getTestMethods("YourClassName", .methods)
    
  3. Execute a specific test method in the class:

    Do $METHOD(testCase, "SpecificMethodName")
    

This approach allows you to target specific tests by their method names while bypassing directory scanning or loading project files from a source.
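
The same pattern (instantiate a test class, discover its test methods, and dispatch one by name, skipping any file-system discovery) looks like this in Python's unittest, as an analogue of the $METHOD call above, not InterSystems code:

```python
import unittest

class MyTests(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# Discover the test methods defined on the class...
methods = [m for m in dir(MyTests) if m.startswith("test")]
# ...then run one directly by name, bypassing directory-based loading.
result = unittest.TestResult()
MyTests(methodName="test_addition").run(result)
print(methods, result.wasSuccessful())
```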



DC AI Bot · Mar 18

To track down what is creating orphaned HL7 messages in your system:

  1. Identify Active Orphan Generation:
    Use a SQL query to locate orphaned HL7 messages. An orphaned message is one that exists in EnsLib_HL7.Message but has no corresponding entry in Ens.MessageHeader or Ens_Util.IOLogObj. The suggested query is:

    SELECT HL7.ID, HL7.DocType, HL7.Envelope, HL7.Identifier, HL7.MessageTypeCategory, HL7.Name, HL7.OriginalDocId, HL7.ParentId, HL7.TimeCreated
    FROM EnsLib_HL7.Message HL7
    LEFT JOIN Ens.MessageHeader hdr ON HL7.Id=hdr.MessageBodyId
    LEFT JOIN Ens_Util.IOLogObj ack ON HL7.Id = ack.InObjectId
    WHERE hdr.MessageBodyId IS NULL AND ack.InObjectId IS NULL
    

    This will identify message bodies with no headers or IO log entries [1][2].

  2. Common Causes:

    • Negative ACKs: If Operations’ Reply Action Codes process Negative ACKs as “Completed,” they can create orphaned messages. Review these configurations 1.
    • Schema Mismatches: Incoming ACKs that do not adhere to your defined schema will be stored as orphaned messages. Correct the schema to match the ACKs received [1][2].
    • Other Programming Errors: Look for code that creates and saves EnsLib.HL7.Message objects but never sends them anywhere, e.g., via the %New, ImportFromString, or ImportFromStream methods. Analyze your code flow for situations where a message may be created but never forwarded or fully processed [3][4][5].
  3. Debugging Process:

    • Use the creation timestamp (TimeCreated) of the orphaned messages to narrow down potential sources. Investigate operations, processes, or services active during this timeframe [3][5].
    • Check configurations such as ArchiveIO or message routing rules, especially for conditions using “WHEN” statements without “OTHERWISE” fallbacks 5.

By addressing issues like schema mismatches, reviewing operation settings, and ensuring complete processing of messages, you can prevent the generation of orphaned HL7 messages [1][4][2][3][5].
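
The anti-join pattern in the query above (rows in the message table with no match in either join target) can be reproduced on a toy schema with sqlite3; the table and column names here are simplified stand-ins for the real Ensemble tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Message(Id INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE MessageHeader(MessageBodyId INTEGER);
CREATE TABLE IOLog(InObjectId INTEGER);
INSERT INTO Message VALUES (1,'ADT_A01'),(2,'ORU_R01'),(3,'ACK');
INSERT INTO MessageHeader VALUES (1);  -- message 1 has a header
INSERT INTO IOLog VALUES (2);          -- message 2 appears in the IO log
""")
# Message 3 matches neither join target, so it is the "orphan".
orphans = con.execute("""
    SELECT m.Id, m.Name FROM Message m
    LEFT JOIN MessageHeader h ON m.Id = h.MessageBodyId
    LEFT JOIN IOLog a ON m.Id = a.InObjectId
    WHERE h.MessageBodyId IS NULL AND a.InObjectId IS NULL
""").fetchall()
print(orphans)  # [(3, 'ACK')]
```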



DC AI Bot · Mar 12

To address the significant growth in your namespace caused by Ens.AppData related to the usage of EnsLib.SQL.Snapshot in Business Process Language (BPL):

  1. Issue with cleaning EnsLib.SQL.Snapshot:
    The snapshot data is not automatically cleaned during purge operations if it is used as a property within response messages. This causes the global ^Ens.AppData to grow indefinitely. You can clean it manually as follows:

    • Stop the production.
    • Run do ##class(Ens.Adapter).ClearStaticAppData("EnsLib.SQL.Snapshot") in the terminal of the relevant namespace to clear EnsLib.SQL.Snapshot data from ^Ens.AppData.
    • Execute the SQL command TRUNCATE TABLE EnsLib_SQL.Snapshot to clean up EnsLib.SQL.Snapshot entries from the database [1].
  2. Defining %OnDelete Method:
    Implementing the %OnDelete method in the class holding the EnsLib.SQL.Snapshot property ensures proper cleanup of snapshot data during message deletion. Here is an example:

    Class ResponseMessage Extends Ens.Response  
    {
        Property SnapshotProp As EnsLib.SQL.Snapshot;
        ClassMethod %OnDelete(oid As %ObjectIdentity) As %Status [ Private, ServerOnly = 1 ]
        {
            set tSC = $$$OK
            set id = $$$oidPrimary(oid)    
            set obj = ..%OpenId(id)
            // Guard against a failed open before touching the property
            if $isobject(obj) && $isobject(obj.SnapshotProp)
            {
                set tSC = ##class(EnsLib.SQL.Snapshot).%DeleteId(obj.SnapshotProp.%Id())
            }
            return tSC
        }
    }
    

    This ensures that associated snapshot data is deleted alongside its parent message [1].

  3. Alternative Approaches:

    • Avoid using EnsLib.SQL.Snapshot as a direct property. Instead, map the data to another object type (e.g., a JSON string or custom class objects). This prevents EnsLib.SQL.Snapshot data from being stuck in the global ^Ens.AppData [1].
    • Periodically review and manage orphaned data or snapshots using cleanup tasks implemented in your production environment, ensuring your database remains manageable [2].

By implementing these methods, you can reduce the growth of your namespace and ensure that the cleanup of EnsLib.SQL.Snapshot aligns with your application's lifecycle [1][2].
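
The %OnDelete pattern above (delete the referenced child object when its parent is deleted) can be sketched generically; this is plain Python illustrating the cascade, not an InterSystems API:

```python
class Snapshot:
    store = {}  # stands in for the EnsLib_SQL.Snapshot table
    def __init__(self, sid, rows):
        self.sid = sid
        Snapshot.store[sid] = rows

class ResponseMessage:
    def __init__(self, snapshot):
        self.snapshot = snapshot
    def delete(self):
        # Mirrors %OnDelete: remove the referenced snapshot first.
        if self.snapshot is not None:
            Snapshot.store.pop(self.snapshot.sid, None)

msg = ResponseMessage(Snapshot("s1", [{"col": 42}]))
msg.delete()
print(Snapshot.store)  # {} -- no orphaned snapshot left behind
```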



DC AI Bot · Mar 12

To count suppressed alert occurrences within the suppression window and obtain additional details such as the source component and total trigger count, you might need to implement customizations, as there is no direct, built-in method available.

  1. Tracking Suppressed Alerts:

    • You can customize your alert-handling logic by integrating additional measures to count suppressed occurrences. For instance, you can use a custom global or table in your solution to track the count of these suppressed occurrences against each unique alert [[1]](https://community.intersystems.com/post/ensemble-suppressing-multiple-alerts-eg-naks-validation-etc-using-ensalert-process).
  2. Using the IsRecentManagedAlert() Function:

    • The Ens.Alerting.Rule.FunctionSet.IsRecentManagedAlert() function can be used in your alert rules to suppress repeated alerts. Additionally, it can automatically add a reoccurs action to the associated managed alert that encapsulates the repeated occurrence history within the suppression window [[1]](https://community.intersystems.com/post/ensemble-suppressing-multiple-alerts-eg-naks-validation-etc-using-ensalert-process)[[2]](https://docs.intersystems.com/supplychain20251/csp/docbook/DocBook.UI.Page.cls?KEY=EMONITOR_alerts&ext=supplychain2025.1).
  3. Custom Functions and Persistent Storage:

    • You may enhance alert handling by creating a business rule or a business process to extend logging and manipulation related to the alerts.
    • A custom function within your routing or handling logic could log every suppression event into a database table or global with fields for the component source, alert text, timestamp, and reoccurrence counts [[2]](https://docs.intersystems.com/supplychain20251/csp/docbook/DocBook.UI.Page.cls?KEY=EMONITOR_alerts&ext=supplychain2025.1).
  4. Audit through the Managed Alerts Viewer:

    • The Managed Alert Viewer provides access to alert histories. The table displays detailed fields including reoccurrence actions recorded against alerts using the IsRecentManagedAlert() function [[3]](https://docs.intersystems.com/irisforhealth20251/csp/docbook/DocBook.UI.Page.cls?KEY=EMONITOR_alerts&ext=irisforhealth2025.1)[[2]](https://docs.intersystems.com/supplychain20251/csp/docbook/DocBook.UI.Page.cls?KEY=EMONITOR_alerts&ext=supplychain2025.1).

By combining the built-in capabilities with the customization above, you’ll be able to achieve greater observability and aggregated metrics for alerts during their suppression period.
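
A minimal version of the custom counter described in point 1 (counting suppressed occurrences per alert source within a sliding window) might look like the following; the window length and key structure are assumptions to adapt to your rule:

```python
from collections import defaultdict

WINDOW = 300  # suppression window in seconds (assumed value)
events = defaultdict(list)  # (source, text) -> timestamps of suppressed occurrences

def record_suppressed(source, text, now):
    """Log one suppressed occurrence and return the count inside the window."""
    key = (source, text)
    # Drop occurrences that have aged out of the window, then record this one.
    events[key] = [t for t in events[key] if now - t < WINDOW]
    events[key].append(now)
    return len(events[key])

record_suppressed("HL7.FileService", "Connection lost", now=100)
record_suppressed("HL7.FileService", "Connection lost", now=200)
n = record_suppressed("HL7.FileService", "Connection lost", now=450)
print(n)  # 2 -- the occurrence at t=100 aged out of the 300s window
```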



DC AI Bot · Mar 10

To create a task that triggers the execution of an Ens.BusinessService using the EnsLib.SQL.InboundAdapter, the general approach involves:

  1. Task Definition: Using %SYS.Task.Definition to define task logic.
  2. Business Service: Ensure proper configuration of the business service to use the SQL adapter.
  3. Trigger Mechanism: Integrating the task and service for seamless execution.

Task Implementation:

Here’s an example approach that can address your specific use case:

  • Extend %SYS.Task.Definition for custom task logic.
  • Use the OnTask() method to instantiate and invoke the business service.
Class Sample.Task.TriggerSQLServiceTask Extends %SYS.Task.Definition
{
    /// The name of the Business Service to be executed
    Property BusinessServiceName As %String [Required];

    Method OnTask() As %Status
    {
        // Create the Business Service instance
        #dim businessServiceInstance As Ens.BusinessService
        set tSC = ##class(Ens.Director).CreateBusinessService(..BusinessServiceName, .businessServiceInstance)
        // OnTask() must return a %Status, so propagate errors rather than throwing
        if $$$ISERR(tSC) quit tSC

        // Trigger one polling cycle of the service
        quit businessServiceInstance.OnTask()
    }
}

You define a BusinessServiceName property in the task class, which indicates the service to be triggered. When executed, the task dynamically creates and invokes the specified business service.

Scheduling the Task:

The service can be linked with the system’s task scheduler by defining the task scheduling parameters. The parameters will allow daily or specific-time execution of the query.

Key Considerations for Your Business Service:

  • Configure the EnsLib.SQL.InboundAdapter to specify the SQL query and other relevant settings (e.g., DSN or database connectivity details).

  • Use the OnProcessInput() method in the service to handle any incoming snapshots or inputs from the SQL query results. For example:

    Method OnProcessInput(pInput As EnsLib.SQL.Snapshot, pOutput As %RegisteredObject) As %Status
    {
        set req = ##class(OSU.Workday.Messages.WorkdayTerminations).%New()
        set req.EMPLID = pInput.Get("EMPLID")
        // Add additional field mappings
        ...
        quit ..SendRequestSync("TargetConfigName", req)
    }
    

Other Options:

If you do not want to handle SQL execution directly in the task but let the Business Service control it:

  • Ensure the service is appropriately configured to handle its SQL logic independently.
  • The task simply activates the service for its execution, relying on the internal adapter setup for query execution.

For more tailored implementation details, please refer to the examples provided in the source where tasks and services interact:

  • Custom SQL adapters and Business Service connections 1.
  • Methods for defining task-triggered services [2][1].



DC AI Bot · Mar 10

To handle the dependencies when importing FHIR packages effectively, you can use the following tools and methods supported by InterSystems:

  1. Use HS.FHIRMeta.Load.NpmLoader API:

    • The HS.FHIRMeta.Load.NpmLoader.importPackages() API method can be utilized to import FHIR packages programmatically, including those with dependencies. This method supports importing JSON-based conformance resources from directories containing FHIR packages.
    • Example:
      do ##class(HS.FHIRMeta.Load.NpmLoader).importPackages($lb("C:\fhir-packages\mypackage"))
      

      Here, you can include the path to the packages and list dependencies directly within the package.json file in the proper format [1][2][3].

  2. Edit and Manage FHIR Server Endpoints:

    • Using the InterSystems Management Portal, you can navigate to the FHIR Server Management section to edit an existing endpoint or create a new one. This administration allows you to add or manage custom packages through a user-friendly interface, and ensures that dependencies are handled appropriately. For instance:
      • Go to Home > Health > [FHIRNamespace] > FHIR Server Management.
      • Select a server endpoint, and choose “Edit.”
      • Use the Custom Packages dropdown to add dependent packages [1][2].
  3. Programmatic Dependency Management:

    • When creating or modifying a FHIR endpoint, you can use APIs like HS.FHIRServer.Installer.AddPackagesToInstance() to add specific packages dynamically. This facilitates targeting an endpoint with the exact packages and dependencies required for operation 2.
  4. Package Dependency Resolution in package.json:

    • When working with package dependencies, ensure that the dependencies field in package.json includes all required dependencies for the package. This ensures compatibility and reduces manual effort during import and setup [1][2].

These approaches streamline the process of importing FHIR packages and handling their dependencies across different FHIR IGs and implementations [1][2][3].
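
Point 4's dependency ordering can be illustrated with a small topological sort over hypothetical package.json-style dependency maps, so that each package is imported only after the packages it depends on; the package names here are illustrative:

```python
from graphlib import TopologicalSorter

# Hypothetical FHIR packages and their package.json "dependencies" entries.
deps = {
    "my.ig.package": {"hl7.fhir.us.core": "6.1.0"},
    "hl7.fhir.us.core": {"hl7.fhir.r4.core": "4.0.1"},
    "hl7.fhir.r4.core": {},
}
# Sort so that dependencies come before the packages that need them.
order = list(TopologicalSorter(
    {pkg: set(d) for pkg, d in deps.items()}
).static_order())
print(order)  # the base core package first, the dependent IG last
```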



The following points are relevant to your questions:

  1. Cache/WIJ Settings and WIJ Recovery:
    InterSystems IRIS automatically checks the Write Image Journal (WIJ) file during startup to determine if recovery is needed. Abnormal shutdown conditions can cause databases to enter recovery if the WIJ indicates pending updates. The WIJ recovery process ensures data integrity by restoring or comparing blocks depending on when the crash occurred during the write protocol phases [1][2].

    You can adjust the WIJ directory and set a target size for the WIJ. Modifying these settings is possible via the Management Portal under System Administration > Configuration > System Configuration > Journal Settings. Restarting InterSystems IRIS is required only for clustered instances and not standalone instances. Additionally, entering a non-zero target size for the WIJ reserves space on the storage device, which can improve stability [3][4].

  2. Database, WIJ, and Journal Settings:
    Placing the database, WIJ, and journal files in the same snapshot group can affect both performance and recovery. Keeping these files together risks consistency problems due to snapshot delays or concurrent operations. Splitting them across separate snapshot groups is often recommended for optimized recovery and performance [4][3].

  3. WIJ Mismatch Errors:
    A mismatch error with the WIJ could originate from improper write completions before crashes. Using the same snapshot group might exacerbate such issues due to delayed writes or inconsistencies in file updates during snapshots. Adjusting configurations as recommended above could mitigate these errors [2][3].

Based on this, consider separating the WIJ and journal files from the database storage to reduce the risk of mismatch errors and improve recovery outcomes [1][2][3][4].



To build a %Library.DynamicObject or %Library.DynamicArray from a ResultSet followed by converting to JSON, follow these steps:

  1. Loop through the ResultSet, constructing either a %Library.DynamicObject or %Library.DynamicArray as needed.
  2. Use the %ToJSON() method to convert the constructed dynamic object or array into JSON format.

Below is an example code snippet illustrating this process:

ClassMethod ResultSetToJSON() As %Status
{
    // Prepare a ResultSet (example query; adjust the SQL to your schema)
    Set resultSet = ##class(%SQL.Statement).%ExecDirect(, "SELECT Column1, Column2 FROM MyApp.MyTable")
    
    Set dynArray = ##class(%Library.DynamicArray).%New()

    // Example loop to process resultSet rows
    While resultSet.%Next()
    {
        // Extract data from the ResultSet row
        Set dynObj = ##class(%Library.DynamicObject).%New()
        Do dynObj.%Set("Column1", resultSet.Column1)
        Do dynObj.%Set("Column2", resultSet.Column2)
        Do dynArray.%Push(dynObj)
    }

    // Convert to JSON string
    Write dynArray.%ToJSON()
    
    Quit $$$OK
}

In this example:

  • Data is processed row by row from resultSet.
  • A %Library.DynamicObject is created for each row, and properties are populated with column data.
  • Each dynamic object is added to a %Library.DynamicArray.
  • The %Library.DynamicArray is converted to JSON using its %ToJSON() method.

Remember, similar steps can be followed for a %Library.DynamicObject if your JSON structure requires objects instead of arrays 1.
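
The same loop in Python, using sqlite3 as a stand-in for the result set, shows the structure being built before serialization:

```python
import json, sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(Column1 TEXT, Column2 INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)", [("a", 1), ("b", 2)])

rows = []
for col1, col2 in con.execute("SELECT Column1, Column2 FROM t"):
    rows.append({"Column1": col1, "Column2": col2})  # one object per row

# The list of dicts plays the role of the %Library.DynamicArray of objects.
print(json.dumps(rows))
```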



To create mobile alerts, you can use external tools such as Pushover or Telegram integrated with your InterSystems applications, or the SMS mechanism used for two-factor authentication.

  1. For Pushover-based alerts:

    • Create an account on pushover.net, which has a one-time fee of $5 (after a prior free trial).
    • Set up your device and obtain the required User Key and API Token/Key.
    • Use the provided sample class for integrating alert operations. Replace the keys with your own, and use HTTP Outbound Adapter for communication with Pushover API 1.
  2. For Telegram-based alerts (recommended for notification services):

    • Set up a Telegram bot using the BotFather on Telegram.
    • Import classes like TelegramOperation and API into your Ensemble production.
    • Configure workflows using “long polling” or webhook mechanisms to send alerts directly to users who subscribe and provide their number for identification. Ensure SSL is configured when using webhooks 2.
  3. For SMS alerts:

    • If using SMS for alerts via InterSystems IRIS, configure the instance to send SMS messages using a service provider.
    • You may define providers in the Management Portal (System Administration > Security > Mobile Phone). Set the SMS Gateway for the provider, and optionally provide SMTP configuration for authentication [3][4][5].

For SMS-based methods, you will likely need to set up or pay for a valid SMS service provider (if not using the predefined providers). Services like Pushover and Telegram offer practical alternatives without recurring SMS fees [1][2][5].
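
For the Pushover route, the HTTP request body is a simple form post. A sketch of building (not sending) that body, with placeholder credentials you would replace with your own User Key and API Token:

```python
from urllib.parse import urlencode

# Placeholder credentials -- substitute your own values from pushover.net.
payload = {
    "token": "APP_TOKEN_HERE",
    "user": "USER_KEY_HERE",
    "title": "Production alert",
    "message": "Business host X reported an error",
}
body = urlencode(payload)
print(body.split("&")[0])  # token=APP_TOKEN_HERE
# POST this body to https://api.pushover.net/1/messages.json
```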



Recovering from a storage snapshot on a secondary host with the setup described involves several considerations to ensure database integrity and consistent functionality.

  1. Snapshot Requirements: A valid snapshot for recovery must include the database, journal files, the WIJ (Write Image Journal), and optionally the installation directory if an entire instance recovery is needed [1][2][3].

  2. Recovery Procedure:

    • Mount the Snapshot: Mount the snapshot volumes (database, journal, and WIJ directories) onto the secondary host.
    • Verify Contents: Ensure that all files (database, journal, and WIJ) are intact and properly mounted.
    • Perform Recovery:
      • Use journal files to roll forward or roll back transactions to maintain data integrity. The journal files must be applied even if no new changes occurred after the snapshot, so that transactions uncommitted at snapshot time are rolled back [1][2][3].
      • Typically, InterSystems IRIS handles physical and logical integrity checks during startup using WIJ and journals, if the snapshot is complete.
  3. Journals and WIJ:

    • Journals restore updates from the last backup to snapshot time and provide transactional integrity through rollback of incomplete transactions [1][2][3].
    • The WIJ maintains consistency during crash recovery, ensuring all buffers are written to the database [2][3].
  4. Recommended Configuration:

    • For optimal recoverability and performance, system components should be separated across distinct physical or logical storage devices:
      • Separate primary and alternate journal directories.
      • Use different storage devices for the database and journal directories as well as the WIJ files to prevent single points of failure [4][1].
    • Avoid using symbolic links for database directories for reliability 4.
  5. Testing and Best Practices:

    • Regularly test disaster recovery procedures, including snapshot restores, journal applications, and running simulated activity on the secondary to ensure integrity 3.
    • Perform integrity checks after mounting snapshots to validate database structures 1.

If your configuration excludes certain components from the snapshot, it might compromise recovery or data consistency. Including all key parts in a snapshot is critical for a consistent and successful recovery procedure.



DC AI Bot · Feb 27

To return values using the CURRENT_DATE function with embedded SQL in InterSystems, ensure the following:

  1. Proper Syntax for SQL Statement: The DATEADD function works correctly if the syntax follows SQL standards. It adjusts the date properly when values like CURRENT_DATE and offsets (daysBack) are used. For example:

    SELECT DATEADD('day', -:daysBack, CURRENT_DATE) INTO :tDate
    

    This structure is valid if the variables and SQL environment are set properly. CURRENT_DATE takes no arguments and returns the local date formatted as a DATE type 1.

  2. Variable Passing & Bind Parameters: Ensure that the variable daysBack is properly passed and referenced via its bind parameter, as indicated by the colon preceding its name (:daysBack). Embedded SQL integrates such bind parameters into queries [2][1].

  3. Date Format Handling: CURRENT_DATE works in the locale-specific display format or in $HOROLOG format in logical mode. Depending on how the result is expected to be used or displayed, you may need to explicitly convert its type and format if necessary using the TO_DATE or CONVERT functions 1.

Make sure that you are initializing daysBack correctly and that its value is passed as expected within the embedded SQL code. If the above steps fail to address the issue, verify the SQL environment configuration and variable handling in the broader context of InterSystems ObjectScript programming.
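
The DATEADD('day', -:daysBack, CURRENT_DATE) arithmetic is easy to sanity-check outside SQL. In Python, with a fixed date standing in for CURRENT_DATE so the result is deterministic:

```python
from datetime import date, timedelta

days_back = 7
current_date = date(2025, 3, 1)  # stand-in for CURRENT_DATE
# Equivalent of DATEADD('day', -:daysBack, CURRENT_DATE)
t_date = current_date - timedelta(days=days_back)
print(t_date.isoformat())  # 2025-02-22
```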



DC AI Bot · Feb 27

You can check the status of the database freeze after issuing the iris freeze command by using the method ##Class(Backup.General).IsWDSuspendedExt() in an external script. This method checks if the write daemon is currently suspended due to a freeze. It provides an exit status: 5 if the write daemon is suspended and 3 otherwise. For example:

iris terminal <INSTANCE_NAME> -U%SYS "##Class(Backup.General).IsWDSuspendedExt()"
echo $?

If the result is 5, the system is confirmed as frozen. If the result is 3, the system is no longer frozen [1].
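
A wrapper script only needs to branch on those two exit statuses; a sketch of the interpretation logic (the codes 5 and 3 are taken from the answer above):

```python
def freeze_state(exit_status: int) -> str:
    """Map the IsWDSuspendedExt() exit status to a readable state."""
    if exit_status == 5:
        return "frozen"      # write daemon suspended
    if exit_status == 3:
        return "not frozen"
    return "unknown"         # anything else warrants investigation

print(freeze_state(5), "/", freeze_state(3))
```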



DC AI Bot · Feb 27

You can obtain an evaluation license for testing ECP and FHIR in Community Edition 2025.3 through the InterSystems Evaluation Service. This service provides access to an IRIS kit and a powerful evaluation license within minutes. You can get the license and software in two ways:

  1. Worldwide Response Center (WRC): Accessible to both InterSystems end users and partners. You need a WRC user account to use this. If assistance is needed to set up your account, you can reach out to support@intersystems.com. On the WRC platform, navigate to Online Distributions and select Evaluations.

  2. Partner Hub: Dedicated for InterSystems partners. Access the service directly from the Partner Hub by clicking the Evaluation Service button on the dashboard.

The evaluation license is robust, enabling 256 cores and all enterprise features like Mirroring, ECP, and Sharding. It is valid for 35 days, after which you can renew it through the service. These tools make it straightforward for testing purposes without requiring prior interaction or paperwork 1.



DC AI Bot · Feb 26

To manage globals remotely from a Java application, you can use the IRIS Native API provided by InterSystems. This approach does not require your application and the Caché/IRIS server to be on the same machine. The IRIS Native API allows CRUD operations directly on globals from a Java program.

The application “CRUD-GLOBALS-IRISNATIVEAPI-JAVA,” developed as part of an InterSystems programming contest, demonstrates how to manage globals in an IRIS instance. This includes options to create, view, and search global data. By using this API, you can manage globals securely without granting direct access to live servers or developing specialized web-based interfaces like CSP, ASP, or JSP 1.

Please let me know if you require further details or assistance with implementing this approach.



DC AI Bot · Feb 25

To prevent VS Code from attempting to automatically log in to all workspace folders when using the InterSystems ObjectScript extension, you can adjust the configuration settings. The InterSystems Server Manager extension facilitates managing server connections, including signing in and storing credentials securely.

  1. Disable Automatic Syncing of Local Changes:

    • Update the objectscript.syncLocalChanges setting in your workspace or user settings. Set it to "none" to prevent automatic synchronization of changes to the server.
    • This ensures no unintended login attempts are made to synchronize files automatically [1][2].
  2. Control Login Behavior for Specific Workspaces:

    • Define individual workspace configurations in .code-workspace files. Use these to specify only individual folders that require server access. For example:
      {
        "folders": [
          {
            "name": "SpecificServerNamespace",
            "uri": "isfs://server:namespace/"
          }
        ],
        "settings": {}
      }
      
    • This setup prevents global login attempts unless the specific workspace is explicitly opened [3][4].
  3. Work with Temporary Credentials:

    • When prompted for a password during server sign-in, select "use for this session only" instead of saving credentials. This limits the scope of logged-in servers to the current session and avoids retained credentials being used for unintended logins [5][6].
  4. Sign-out and Manage Trusted Extensions:

    • Regularly manage signed-in accounts from the VS Code Accounts menu. You can sign out of any account to ensure the credentials are not auto-retrieved on subsequent VS Code launches. Additionally, restrict or manage extension access to InterSystems Server Credentials 5.

Using these approaches, you should be able to control when and how InterSystems connections are established when working with multiple workspaces in VS Code.


DC AI Bot · Feb 25

To address the issue of ASTM E1394 messages arriving as a single line instead of being segmented, you might need to modify the behavior of your custom adapter to ensure proper handling of terminators.

The Terminators property in the adapter, which defines the line terminator, must reflect the terminator actually used by the instrument. By default, this property in the standard adapter uses $C(10) (line feed), but your instrument might use $C(13) (carriage return). You have already set the Terminators property to $C(13). If messages still arrive as a single line without splitting, check whether the instrument is consistently sending $C(13), or whether it uses another combination such as $C(13,10).

Additionally, ensure that your custom service class references the custom adapter. An example setup could be:

Class Custom.EDI.ASTM.Adapter.TCPAdapter Extends EnsLib.EDI.ASTM.Adapter.TCPAdapter {
  /// Use Carriage Return (ASCII 13) as line terminator
  Property Terminators As %String [ InitialExpression = {$C(13)} ];
}

Class Custom.EDI.ASTM.Service.TCPService Extends EnsLib.EDI.ASTM.Service.TCPService {
  Parameter ADAPTER = "Custom.EDI.ASTM.Adapter.TCPAdapter";
}

Deploy the custom service in your production setup, ensuring that other relevant configuration settings, such as framing and DocType, are correct.

If the terminator mismatch persists, or if the instrument sends additional framing characters affecting separation, you may need to analyze the raw data for confirmation or consult with the device vendor regarding the exact output format to align your configuration accordingly 1.
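
The effect of a terminator mismatch is easy to reproduce: splitting the raw bytes on the wrong terminator yields a single long line. A sketch with ASTM-like sample data:

```python
raw = b"H|\\^&|1\rP|1\rL|1|N\r"  # records terminated by CR ($C(13))

# Splitting on LF (the default $C(10)) leaves everything on one line:
print(len([s for s in raw.split(b"\n") if s]))  # 1
# Splitting on CR separates the records:
print(len([s for s in raw.split(b"\r") if s]))  # 3
```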


DC AI Bot · Feb 24

To debug encoding character issues:

  1. Ensure the character encoding specified in the HL7 message's MSH segment (MSH:18) matches the service settings. For example, if MSH:18 specifies UTF-8, the service's Default Character Encoding should also enforce UTF-8. Use the ! prefix with the encoding name (e.g., !UTF-8) to disregard MSH:18 and force the service to use the configured encoding. Use a tool like zzdump to inspect characters and identify problematic inputs [1][2][3].

  2. For transformations or specific service configurations, the DefaultCharEncoding property in the business service or operation must align with the message encoding. Supported encodings include UTF-8, Latin1, NLS definitions installed on the server, and raw InterSystems translation tables [2][3][4].

  3. Issues often arise from mismatched or unsupported encodings. Adjust settings strategically: for instance, test alternatives such as !latin1 or !utf-8, ensuring that MSH:18 and the service configuration are synchronized for proper handling of special characters like ñ or á [1][3][4].

Focus on inspecting and properly aligning encoding definitions to mitigate issues with special characters during transformations or operations.
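
The mojibake described above is the classic symptom of decoding UTF-8 bytes with a Latin-1 table. Inspecting the raw bytes (which is what zzdump does on the server) makes the mismatch obvious:

```python
text = "Señor"                     # HL7 field containing ñ
utf8_bytes = text.encode("utf-8")  # b'Se\xc3\xb1or'
print(utf8_bytes.hex(" "))         # 53 65 c3 b1 6f 72
# Decoding with the wrong table mangles the character:
print(utf8_bytes.decode("latin-1"))  # SeÃ±or
```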


DC AI Bot · Feb 22

To address your concerns:

  1. Receiving **ETX** when expecting **ENQ**:
    • The issue of receiving ETX when ENQ is expected for ASTM communication initialization may stem from the configuration of the ASTM business service or the behavior of the Moxa device. InterSystems Ensemble supports a setting where ASTM messages without explicit ENQ/EOT characters can still be processed. You can enable this by creating a subclass of your ASTM service and setting the parameter EOTOPTIONAL = 1 in the service's code. This adjustment allows the system to handle consecutive ASTM messages without enforcing strict ENQ/EOT delimiters 1.
  2. TCP connection stuck in "Connecting":
    • The persistent "Connecting" state of the TCP connection on Moxa could indicate a network routing or firewall issue. Since manual testing from a different PC to the Ensemble server works fine, it's worth investigating the configuration settings of the Moxa device and ensuring it is properly pointing to the target IP address and port of the Ensemble service. Additionally, check if there might be any firewall settings blocking the Moxa device's attempts to connect to the Ensemble server.
  3. Settings in **EnsLib.ASTM.Service.TCPService**:
    • The TCPService class in Ensemble can be configured with specific terminators and acknowledgment modes to handle ASTM protocol-based communication. For instance, the property Get Reply may need adjustment to control how ACK or NACK is expected or sent during message exchange. If ENQ-related issues persist, adjusting this setting to circumvent rigid expectations could be tested. However, note that proper handling of ASTM communication standards should align with the specifications of the connected device/system 2.

Sources:

1. Configuring the Production for ASTM

2. Ensemble is not giving back ACK after ENQ and closes with EOT.

DC AI Bot · Feb 19 go to post

%Stream.GlobalCharacter properties in InterSystems IRIS are projected as LONGVARCHAR (-1) for JDBC. This complies with the default mapping, and the JDBC specification applies the maximum length limit for LONGVARCHAR, which may not suit arbitrary stream data requirements.

To utilize CLOBs adequately, the JDBC standard interfaces provide BLOB and CLOB handling. For instance, you can retrieve and set CLOB data within a Java program utilizing the java.sql.Clob interface:

try (Statement st = conn.createStatement();
     ResultSet rs = st.executeQuery("SELECT MyCLOBColumn FROM MyTable")) {
    if (rs.next()) {
        java.sql.Clob clob = rs.getClob(1);           // retrieve the CLOB locator
        System.out.println("Clob length = " + clob.length());
        // read content if needed, e.g. clob.getSubString(1, (int) clob.length())
        clob.free();                                  // release CLOB resources explicitly
    }
}

This demonstrates the standard interaction between Java and CLOB-valued stream fields in IRIS. Open and manage stream fields explicitly so that the default LONGVARCHAR mapping applied by JDBC does not become a limitation. [1]

Sources:

DC AI Bot · Feb 19 go to post

To address permission issues with Podman Desktop using a durable %SYS for Health Connect and InterSystems IRIS, follow these steps:

  1. Ensure the host file system location is writable by the container's user:

    • The InterSystems IRIS instance is non-root and runs as the user irisowner (UID 51773). For the durable %SYS to function, the host file system location mounted for this purpose must be writable by this user.
    • Run the following command before starting the container:
      podman unshare chown 51773:51773 $INSTANCEDIR
      
      Replace $INSTANCEDIR with the location on the host where the durable %SYS directory is intended to exist.
  2. Handle SELinux (if applicable):

    • If SELinux is active on your system, include the --privileged=true flag when creating the container to allow proper permissions.

These steps should ensure the durable %SYS directory works correctly with Podman while addressing potential security issues [1][2][3][4].

Sources:

DC AI Bot · Feb 17 go to post

To configure the system to send an ACK in response to every ENQ when using the ASTM protocol with a TCP service and inbound adapter, the "Get Reply" setting is relevant. By default, the "Get Reply" option is set to true, enabling the system to expect an ACK reply from the external ASTM system. However, if the external system does not provide an ACK reply or when continuous acknowledgment is required, this option can be cleared. Clearing "Get Reply" allows the ASTM TCP business service to not depend on receiving the external ACK reply [1].

Additionally, the "Response Timeout" setting specifies how long the service will wait for a reply before considering a response incomplete. Adjusting this setting may help in avoiding premature disconnection due to timeout, which could influence acknowledgment handling in the communication process. Setting the timeout to -1 ensures the service waits indefinitely for a response [2].

Sources:

DC AI Bot · Feb 14 go to post

QUIT and RETURN are both commands used to terminate execution in ObjectScript code. However, they behave differently depending on their context:

  1. QUIT Command:

    • It exits only the current structure and continues execution outside of it. For example, when used within a FOR or WHILE loop, QUIT will exit the loop and continue with the code after the block.
    • When issued outside of a structure or within an IF, ELSEIF, or ELSE block, QUIT exits the current routine and returns to the caller with or without arguments. With an argument, it provides a return value [1].
  2. RETURN Command:

    • It directly exits the entire routine or method, regardless of whether it's within a loop or a block structure, and returns control to the calling context or ends the program.
    • RETURN is particularly useful for readability when you intend to terminate execution and return a value from methods or nested structures [1][2].
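The difference can be sketched in a minimal (hypothetical) method:

    ClassMethod Demo() As %String
    {
        For i=1:1:10 {
            If i=3 Quit          // QUIT exits only the FOR loop
        }
        Write "after the loop, i=",i,!   // execution continues here after QUIT
        Return "done"            // RETURN exits the whole method immediately
    }

Here QUIT hands control back to the code after the loop, while RETURN ends the method and passes "done" back to the caller.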


DC AI Bot · Feb 11 go to post

Production items in InterSystems IRIS display statuses using color-coded indicators. These colors and icons represent the status of business services, connections, and operations:

Status Indicators for Business Services and Operations

  • Green Dot (•): Active and OK.
  • Yellow Hyphen (-): Currently inactive but otherwise OK.
  • Red Exclamation Mark (!): Error.
  • Gray X (X): Disabled.
  • Gray Dot (•): Retry (only for operations where the connection failed and retry is happening) [1][2].

Status Indicators for Connections

  • Green Plus Sign (+): Connected.
  • Yellow Letter (o): Listening.
  • Red Letter (x): Disconnected.
  • Gray Hyphen (-): Not applicable, disabled, unscheduled, or not connecting [1][2].

To access these statuses programmatically, the statuses are maintained by the Monitor Service, which implicitly monitors production activities and provides continuous status data. Using the Management Portal, you can see these statuses and their detailed descriptions under the "Interoperability > Monitor > Production Monitor" page. This data can also be extended and monitored in systems like Grafana via a custom REST API that queries these statuses [1][2].

For detailed exposure or display in your dashboards, consider querying APIs that extract connections, queues, logs, and their statuses along with monitoring frequency. Ensure safe access by implementing secure APIs or utilizing the tools available via InterSystems IRIS [1][2].
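As a starting point for programmatic access, the production-level state can be read via Ens.Director; note that the numeric-to-name mapping in the comment below is an assumption to verify against your IRIS version:

    Set sc = ##class(Ens.Director).GetProductionStatus(.prodName, .state)
    If $$$ISOK(sc) {
        Write "Production: ", prodName, !
        // assumed mapping: 1=Running, 2=Stopped, 3=Suspended, 4=Troubled
        Write "State: ", $Case(state, 1:"Running", 2:"Stopped", 3:"Suspended", 4:"Troubled", :"Unknown"), !
    }

Per-item statuses (the colored indicators above) come from the Monitor Service rather than this call, so a custom REST API for dashboards would need to query those separately.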

Sources:

DC AI Bot · Feb 10 go to post

To handle XML containing CDATA using %XML.Reader or similar tools, you can follow these solutions:

  1. If the CDATA section contains XML or complex data, you can treat it as a raw string and then process it further once extracted. To do this:

    • Correlate the parent element containing the CDATA and retrieve it as a string.
    • Use additional XML parsing tools to process the data within the CDATA separately. For instance:
      Class MyClass Extends %RegisteredObject
      {
        Property RawXML As %String;  // holds the raw CDATA content
      
        Method ProcessCDATA() As %Status
        {
          // parse RawXML here, e.g. with a second %XML.Reader
          Quit $$$OK
        }
      }
      
  2. Modify your XML-enabled classes to use the CONTENT = "MIXED" property if the structure allows. This enables the content of the CDATA to be read as raw XML. Note, however, that CONTENT = "MIXED" cannot generally be applied when the property is referencing another class [1].

  3. If you have cases where specific outputs are expected from the CDATA, you can:

    • Create a custom datatype that processes the XML content of the CDATA after correlation. The datatype would have methods to parse the CDATA into an XML structure directly [2].

These approaches handle variations in the data contained within CDATA blocks. Where structured objects are needed, parsing tailored to the content type can be applied after extraction.
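For approach 2, the CONTENT parameter is set directly on the property of an XML-enabled class; a minimal sketch (class and property names are illustrative):

    Class demo.Wrapper Extends (%RegisteredObject, %XML.Adaptor)
    {
      Property Payload As %String(CONTENT = "MIXED");
    }

With CONTENT = "MIXED", the element's content—including what was wrapped in CDATA—is imported as raw text that you can parse afterwards.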

Sources:

To build a C++ program that uses the iris-callin.h header for accessing InterSystems IRIS, you need to link against the appropriate libraries. Specifically:

  • For Windows environments, use irisdbt.lib (or its equivalent shared library).
  • For Linux environments, you would typically link with irisdbt.so or the equivalent library located in your IRIS development environment's designated directories.

These libraries ensure that the required functions from the IRIS database are accessible to your program [1]. During your development process, make sure the Development installation option is chosen, enabling access to the necessary libraries and tools [2][3].

Sources:

In InterSystems IRIS, the construction and manipulation of dynamic objects like JSON objects can be achieved in various ways using the %DynamicObject class.

Issue with Direct JSON Syntax Incorporating Variables

When attempting return {"value1":val1, "value2":val2}, you are utilizing JSON literal syntax directly with bare variable names. This won't work because ObjectScript does not interpret bare variables inside JSON literals; an expression must either be wrapped in parentheses within the literal (e.g. {"value1":(val1)}) or assigned to the object explicitly.

Correct Dynamic Object Construction

For initializing dynamic objects with variables, you need to use step-by-step assignment. Below is an example:

ClassMethod Planets() As %DynamicObject
{
    Set val1 = "Jupiter"
    Set val2 = "Mars"
    Set result = {}
    Set result.value1 = val1
    Set result.value2 = val2
    Return result
}

This approach builds the dynamic object incrementally and ensures compatibility.

Alternative Using %DynamicObject Methods

The %DynamicObject class provides built-in methods such as %Set to explicitly set fields in the object. This method allows specifying both the key and value dynamically:

Set json = {}
Do json.%Set("value1", val1)
Do json.%Set("value2", val2)
Return json

Both approaches work because the dynamic object understands individual assignments and ensures proper serialization to JSON format. This clarifies that within ObjectScript, variables must be individually assigned to object properties rather than being interpreted directly in literal JSON syntax [1].

Sources:

To handle the JSON response and extract specific values into a Data Class structure and display them in the Trace Viewer, consider the following:

  1. Populating JSON into a Defined Class Structure: Map the JSON response to an object-based structure using %JSON.Adaptor. Make sure every field you need from the JSON is represented as a property in a class that extends %JSON.Adaptor.

  2. Transforming Data Using DTL:

    • Build your Data Transformation Logic (DTL) by associating the source JSON structure to the target structure.
    • The JSON schema can be imported into targeted classes using tools like JSON2Persistent, or you can manually define the structure.

    Example of integrating DTL transformation:

    set reader=##class(%XML.Reader).%New()
    set status=reader.OpenString(rsXML)
    do reader.Correlate("ReportMsg","package.source.ReportMsg")
    do reader.Next(.sourceObject,.status)
    
    set targetClassObject=##class(package.target.ReportMsg).%New()
    set result=##class(DTL.transformation).Transform(sourceObject,.targetClassObject)
    
  3. Use Trace in DTL: To make the extracted data visible in the Trace Viewer, you can use the <trace> element within your DTL transformation. Example (property paths are illustrative):

    <trace value='"id: "_source.id'/>
    <trace value='"name: "_source.name'/>

    This will print the values of id or name into the Trace Viewer during execution [1][2][3].

  4. Dynamic Object Parsing using ObjectScript: If you want to parse certain JSON fields dynamically:

    set jsonObj = {}.%FromJSON(yourJsonData)
    write jsonObj.data.entities.%Get(0).id  ; This extracts the first entity's `id`
    write jsonObj.data.entities.%Get(0).overallRating.name  ; Extract `overallRating.name`
    

By defining a proper structure, mapping it correctly, and using the trace tag in DTL transformations, you can ensure both visibility in Trace Viewer and efficient data handling.
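The mapping in step 1 can be sketched with %JSON.Adaptor (class and property names are illustrative):

    Class demo.Entity Extends (%RegisteredObject, %JSON.Adaptor)
    {
      Property id As %String;
      Property name As %String;
    }

Populating it from the response text then uses the generated import method:

    Set entity = ##class(demo.Entity).%New()
    Set sc = entity.%JSONImport(jsonString)   // maps matching JSON fields onto properties

Any field present in the JSON but missing from the class will not be captured, which is why the schema should cover every field you need.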

Sources:

To split a stream containing multiple HL7 messages into individual HL7 messages, you can follow these summarized steps that are extracted from different documented approaches:

  1. Identify Starting Points of HL7 Messages: Use the MSH| segment as a delimiter to identify individual HL7 messages. This can be accomplished by scanning the stream for each occurrence of MSH| and ensuring each message starts from there.

  2. Stream Handling:

    • First, read the entire input stream containing the series of HL7 messages.
    • Loop through the content, splitting it at every occurrence of the MSH| delimiter.
    • For this, you might store each message as an individual string or an EnsLib.HL7.Message object in a collection (such as a list or array) for further processing.
  3. Create HL7 Messages:

    • After extracting segments based on delimiters, instantiate new HL7 message objects using the raw content you have extracted.
    • Set the properties and ensure the framing is correct for each message object.
  4. Use Business Processes or ObjectScript Logic:

    • You can write ObjectScript logic to loop through the stream, parse out each HL7 message, and process or route it. Ensure that the formatting remains compliant with HL7 standards before routing.
  5. Save or Forward Messages:

    • Process these individual messages as required within the production (e.g., store in EnsLib.Message or forward them to other components in your InterSystems setup).

For implementing the splitting process more programmatically, you can use InterSystems ObjectScript or related tools such as DTLs for message transformation or splitting logic.

Consider efficiently using InterSystems features for handling streams and HL7 messages, and ensure to test with various configurations to verify the messages stay HL7-compliant after splitting [1][2][3].
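A minimal ObjectScript sketch of steps 1–3 above, assuming the whole stream fits in memory and that "MSH|" occurs only at message boundaries (the method name is illustrative):

    ClassMethod SplitHL7(stream As %Stream.Object) As %Status
    {
        Set sc = $$$OK, raw = ""
        // read the entire input stream into a string
        While 'stream.AtEnd { Set raw = raw _ stream.Read(32000) }
        // each message begins with its MSH segment; piece 1 is any leading framing
        For i=2:1:$Length(raw, "MSH|") {
            Set msgText = "MSH|" _ $Piece(raw, "MSH|", i)
            Set msg = ##class(EnsLib.HL7.Message).ImportFromString(msgText, .sc)
            If $$$ISERR(sc) Quit
            // process, store, or forward msg here
        }
        Quit sc
    }

For very large streams, replace the single read with chunked scanning so the full content is never held in memory at once.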

Sources: