The subroutine ^routine is not executed while the queue is being processed in WorkMgr; however, it works when defined as a function. Is it mandatory to define the entry point in ^routine as a function for it to execute properly?
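For anyone hitting the same thing: %SYSTEM.WorkMgr's Queue() expects an entry point it can call as a function, because it inspects the returned %Status to decide whether the work item succeeded, and a plain subroutine has nothing to return. A minimal sketch of the pattern (routine and label names are hypothetical):

```
    // Caller: queue the work as a function call ($$ form), not a subroutine
    Set queue = $SYSTEM.WorkMgr.%New()
    Set sc = queue.Queue("$$DoWork^MyRoutine", 42)
    If sc { Set sc = queue.WaitForComplete() }  ; surfaces any item that failed

DoWork(arg) ; entry point in MyRoutine.mac, callable as a function
    // ... perform the unit of work ...
    Quit 1  ; a %Status value ($$$OK); a subroutine has no value for WorkMgr to check
```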
I'm currently looking at a process where we're utilising the Class Ens.StreamContainer, and was looking to do some deletions outside of any purge routines.
Having been burned before, I wanted to make sure that deleting the container also deletes the contents within.
From looking in the class, the %OnDelete ClassMethod appears to remove the associated search table entries and nothing more.
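A defensive pattern worth testing first, given that uncertainty: explicitly clear the wrapped stream before deleting the container, then confirm the stream global is empty afterwards. A sketch, with a hypothetical id:

```
    Set id = 12345  ; hypothetical Ens.StreamContainer id
    Set container = ##class(Ens.StreamContainer).%OpenId(id)
    If $IsObject(container) {
        // Belt and braces: wipe the stream data itself, not just the container row
        If $IsObject(container.Stream) Do container.Stream.Clear()
        Kill container
        Set sc = ##class(Ens.StreamContainer).%DeleteId(id)
        If 'sc Do $SYSTEM.Status.DisplayError(sc)
    }
```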
One of our clients has a 'Notes' class with over 3 million records. We have a report that pulls data from this table that was taking about an hour to run. Our test environment (which has a copy of the production database) runs the same report query in 1 second.
We attempted to purge and rebuild the indices, which made an improvement (down to 15 minutes), but that's still not great.
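In case others land here with the same symptom: a production table that is slow while a restored copy is fast often points at table statistics or stale cached query plans rather than the indices themselves, since the copy carries whatever statistics it was restored with. A sketch of one thing to try, with a hypothetical table name:

```
    // Regather statistics for the table the report reads (name is hypothetical)
    Do $SYSTEM.SQL.TuneTable("MyApp.Notes", 1)  ; 1 = write the new stats into the class

    // Drop cached query plans for that table so they recompile against the new stats
    Do $SYSTEM.SQL.PurgeForTable("MyApp.Notes")
```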
I'm currently testing out IRIS 2024.3 for a new project, and it's been running smoothly for the most part. However, I noticed that when running some heavier analytical queries, the memory usage spikes more than I expected, even when the result sets aren’t that large.
I’ve gone through the basics (buffer sizes, query plans, etc.), but I’m wondering if there are any new tweaks or recommended settings in 2024.3 specifically for managing memory better during these peak loads.
Anyone else run into something similar or have tips to fine-tune this?
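One way to see where the memory is going while such a query runs is to snapshot per-process usage from %SYS.ProcessQuery, alongside reviewing the per-process memory limit (bbsiz) in the configuration. A sketch, assuming the MemoryUsed/MemoryPeak columns (reported in KB) are present on your version:

```
    // Top memory consumers at this moment, largest first
    Set rs = ##class(%SQL.Statement).%ExecDirect(,
        "SELECT TOP 10 Pid, Namespace, MemoryUsed, MemoryPeak "_
        "FROM %SYS.ProcessQuery ORDER BY MemoryUsed DESC")
    While rs.%Next() {
        Write rs.Pid, ?12, rs.Namespace, ?26, rs.MemoryUsed, "KB", ?40, rs.MemoryPeak, "KB", !
    }
```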
Does anyone know if iris merge can be called from Ansible? I have tried a couple of ways, but it doesn't seem to actually run the command on the target, even though Ansible reports that it was successful.
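A sketch of a task shape that tends to matter for this kind of CLI: the instance name, install path, and OS user below are assumptions, and the full path is deliberate because non-login shells often don't have iris on PATH (a likely cause of "reported success, nothing ran"):

```
- name: Apply a CPF merge file to the IRIS instance
  become: true
  become_user: irisowner            # assumption: the instance owner account
  ansible.builtin.command:
    argv:
      - /usr/irissys/bin/iris       # assumption: full path to the iris launcher
      - merge
      - IRIS                        # instance name
      - /tmp/merge.cpf              # merge file already copied to the target
  register: iris_merge
  changed_when: iris_merge.rc == 0
```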
I am receiving garbled text due to incorrect encoding or decoding. I tried to use the $ZCONVERT function to convert it into normal text, but that failed. Can anybody suggest what I should use to convert it into normal text?
Example: Garbled text that I am getting is "canââ¬â¢t , theyââ¬â¢re".
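That pattern looks like UTF-8 text that was decoded with an 8-bit table, probably twice. The general fix is to re-encode the characters back to their original bytes and then decode those bytes as UTF-8, repeating once more if the first pass still looks wrong. A sketch, where the choice of Latin1 as the 8-bit table is an assumption:

```
    Set garbled = input  ; the string containing e.g. "canââ¬â¢t"
    // Characters back to bytes, then those bytes decoded as UTF-8
    Set pass1 = $ZCONVERT($ZCONVERT(garbled, "O", "Latin1"), "I", "UTF8")
    // Double-encoded input needs the same round trip a second time
    Set fixed = $ZCONVERT($ZCONVERT(pass1, "O", "Latin1"), "I", "UTF8")
    Write fixed  ; expected: can't , they're (with curly apostrophes)
```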
Our team is transitioning to Git in the foreseeable future, and I'm trying to figure out how to design the best development workflow. Being new to IRIS, I am having trouble wrapping my head around a few concepts.
I'm exploring this right now: given a bunch of types defined as Pydantic models, how can I come up with an equivalent %RegisteredObject/%SerialObject and convert to/from (e.g., to support persistence and match validation as much as possible)?
People who know Python better than I do (e.g., your average undergraduate from this decade): is this a stupid idea or a cool idea? Has anyone else done this before?
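One bridge worth sketching: since Pydantic v2 models round-trip JSON natively (model_dump_json() / model_validate_json()), an IRIS class that also extends %JSON.Adaptor can exchange instances with them, and property parameters carry over some of the validation. Class and property names below are hypothetical:

```
Class Demo.Person Extends (%RegisteredObject, %JSON.Adaptor)
{

/// Mirrors: class Person(BaseModel): name: str; age: int = 0
Property name As %String(MAXLEN = 120) [ Required ];

Property age As %Integer(MINVAL = 0) [ InitialExpression = 0 ];

}
```

Round-tripping is then p.%JSONImport(json) on the way in and p.%JSONExportToString(.json) on the way out.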
Is there a way to remote connect to IRIS terminal from my local machine?
I can remote connect to Studio & the SMP from the IRIS cube on my local Windows installation to another IRIS installation on a Linux server, but that's not the case when trying to connect to the Terminal. Is there a way to do so? I'm currently using an SSH client, but that one times out quickly.
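Conceptually there is no separate terminal listener to connect to; the usual route is to SSH to the server with keep-alives so the session doesn't idle out, then attach to the instance's terminal. A sketch (user, host, and instance name are assumptions):

```
# keep-alive packets stop the SSH session from timing out
ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=5 user@linux-server

# on the server, attach to the instance terminal
iris terminal IRIS
```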
I deployed an IRIS REST application using an Installer class. I think I created the namespace's FEEDER database with the %DB_Default resource, and I used the same resource in the web application roles. I allowed the Unauthenticated authentication method. I turned on ^%ISCLOG and reviewed ^ISCLOG. I do not understand why I get a 403 Forbidden response.
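A sketch of how to dump what the Installer actually deployed for the web application, which often explains a 403 (the application path is an assumption; run in %SYS):

```
    Set $Namespace = "%SYS"
    Set app = ##class(Security.Applications).%OpenId("/feeder/api")
    If $IsObject(app) {
        Write "AutheEnabled:  ", app.AutheEnabled, !   ; bit mask; 64 is commonly the unauthenticated bit
        Write "MatchRoles:    ", app.MatchRoles, !
        Write "DispatchClass: ", app.DispatchClass, !
    }
```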
Does %OSCertificateStore only check the Trusted Root folder on Windows?
Can it be used for the Personal store on servers, or is there another option that can be used?
I used it for the first time while writing a function to check specific certificates for expiry. This week, though, one certificate needed to be installed in the Personal store rather than Trusted Root, and I didn't know whether %OSCertificateStore (or some other value, such as a path) could be pointed at the Personal store's certs on the server, so I stuck with the original approach (which can get confusing).
Trying to start investigating an error we are seeing where multiple copies of the same message are getting sent to the same vendor. We receive an HL7 message with an embedded RTF from our EMR, send it through a DTL that just updates the Patient Class, and then send it on to the Operation, which is TCP.
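One way to start narrowing this down is to check whether the duplicates are distinct message headers for the same body (created upstream) or a single header the Operation resent (e.g. TCP retries after a missed ACK). A sketch, with a hypothetical operation name:

```
    Set rs = ##class(%SQL.Statement).%ExecDirect(,
        "SELECT MessageBodyId, COUNT(*) AS Sends "_
        "FROM Ens.MessageHeader "_
        "WHERE TargetConfigName = 'ToVendorTCP' "_
        "GROUP BY MessageBodyId HAVING COUNT(*) > 1")
    While rs.%Next() { Write rs.MessageBodyId, ": ", rs.Sends, ! }
```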
I want to create a scheduler to monitor the status of a list of backend jobs (say the limit is 10). There is going to be a job queue, and I need to pick a job from the queue whenever one of the currently processing jobs finishes. What is the best way to implement this?
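%SYSTEM.WorkMgr may fit here, assuming the worker-count argument to %New() behaves as documented on your version: cap the workers at 10, queue everything, and the manager starts the next queued item as soon as an active one finishes. Job class and method below are hypothetical:

```
    Set queue = $SYSTEM.WorkMgr.%New(, 10)   ; at most 10 jobs in flight
    For i = 1:1:jobCount {
        Set sc = queue.Queue("##class(My.BackendJob).Run", i)  ; Run() must return a %Status
        If 'sc Quit
    }
    Set sc = queue.WaitForComplete()  ; blocks until every queued job has finished
```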
Not sure what I'm missing here. I'm trying to download files that start with MTC_88 from an S3 bucket using the AmazonS3 inbound adapter as below, but I'm not getting anything, and I'm sure there are thousands of files that start with MTC_88.
I have a business service which is responsible for some batch operations on an SQL table. The process is generally slow, but it is possible to scale the performance using multithreading and/or parallel processing and logical partitioning (Postgres):
When batch inserting data into a table via SQL, I can use the %NOCHECK keyword to avoid checking foreign key integrity. This would be appropriate for cases when the inserted data has been verified by some external process.
However, when inserting via objects, I don't see a way to get the same behavior. Are there any workarounds that use objects for this?
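One workaround, if the data really is always pre-validated: declare the foreign key itself with the NoCheck keyword, which disables the referential check for object saves and SQL alike. Note it is class-wide rather than per-statement, so it trades away the safety net everywhere. A sketch with hypothetical classes:

```
Class Demo.OrderLine Extends %Persistent
{

Property OrderId As %Integer [ Required ];

/// NoCheck: referential integrity is not verified on INSERT/UPDATE or object %Save()
ForeignKey OrderFK(OrderId) References Demo.Order() [ NoCheck ];

}
```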
I have a list of running scheduled tasks in the Task Manager and would like to create a task that monitors whether any of my tasks has stopped running. Is there a function to check task status?
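The Task Manager's state is queryable through the %SYS.Task table (run in %SYS), so a monitoring task can look for anything suspended; which columns to lean on is an assumption, and %SYS.Task.History holds per-run outcomes for deeper checks. A sketch:

```
    Set rs = ##class(%SQL.Statement).%ExecDirect(,
        "SELECT ID, Name, Suspended FROM %SYS.Task WHERE Suspended > 0")
    While rs.%Next() {
        Write rs.ID, ?10, rs.Name, ?45, "Suspended=", rs.Suspended, !
    }
```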