Does anyone have a query I could run to show a vendor the time difference between when a message was sent out of a BO and when we received the HL7 ACK associated with that message?
I am trying to demonstrate to this vendor the delay we are seeing in getting the ACK back because of a timeout.
I know how to pull Ens.MessageHeader and EnsLib.HL7.Message, but I'm not sure how to match up the message with the HL7 acknowledgement received.
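Something along these lines is what I have in mind as a starting point: my understanding is that the reply's CorrespondingMessageId in Ens.MessageHeader points back at the request header, so a self-join should work. The BO name is a placeholder, and if the linkage runs the other way in your instance, flip the join:

SELECT req.ID AS RequestId,
       req.TimeCreated AS SentAt,
       ack.TimeCreated AS AckReceivedAt,
       DATEDIFF('ms', req.TimeCreated, ack.TimeCreated) AS AckDelayMs
FROM Ens.MessageHeader req
JOIN Ens.MessageHeader ack
  ON ack.CorrespondingMessageId = req.ID
WHERE req.TargetConfigName = 'My.HL7.Out'   -- hypothetical BO name
  AND req.MessageBodyClassName = 'EnsLib.HL7.Message'
ORDER BY req.TimeCreated DESC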
I was wondering if it was possible to use something like EnsLib.SQL.InboundAdapter with tables in IRIS.
This adapter monitors for records inserted into a table in an external database, so it requires a DSN to connect to that database.
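As far as I can tell it should work, but EnsLib.SQL.InboundAdapter always talks through ODBC/JDBC, so pointing it at IRIS's own tables would mean configuring a loopback DSN to the instance. For purely local tables, a plain polling service looks like a lighter alternative; a minimal sketch with hypothetical table and column names:

Class Demo.LocalTablePoller Extends Ens.BusinessService
{

Parameter ADAPTER = "Ens.InboundAdapter";  // gives CallInterval polling with no DSN

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    // look for unprocessed rows in a local table on each poll cycle
    set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT ID FROM MyApp.Queue WHERE Processed = 0")
    while rs.%Next() {
        // build a request from the row and send it into the production here
    }
    quit $$$OK
}

}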
My goal is to make a call to an external API that takes a long time (it could spend an hour or more completing its processing), but I don't want to block the main process.
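If this lives in an interoperability production, one way that seems to fit is letting a business process make the call asynchronously: SendRequestAsync() returns immediately and the eventual reply is delivered to OnResponse(). A sketch, with the target operation name hypothetical:

Class Demo.LongCallProcess Extends Ens.BusinessProcess
{

Method OnRequest(pRequest As Ens.Request, Output pResponse As Ens.Response) As %Status
{
    // fire the slow call and return; this process does not block while
    // the operation spends an hour (or more) with the external API
    quit ..SendRequestAsync("External.API.Operation", pRequest, 1, "slowCall")
}

Method OnResponse(request As Ens.Request, ByRef response As Ens.Response, callrequest As Ens.Request, callresponse As Ens.Response, pCompletionKey As %String) As %Status
{
    // invoked whenever the external API finally answers
    quit $$$OK
}

}

The caller gets control back immediately; the target operation's own timeout settings still need to tolerate the long round trip.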
I am testing vector search. While doing so, I am trying to paginate my result set for a "next page" function that returns the first, second, and third sets of 15 entries in a table.
For this I have two embedding classes: one with an HNSW index (vectornomicembedtextlatest) and one without (vectornomicembedtexttest).
Calling SELECT ID,PRIMKEY FROM SQLUser.vectornomicembedtexttest LIMIT 5 OFFSET 1 works fine, with the first entry having a row ID of 486448 (I deleted old entries at the beginning and reused the table).
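For the paging itself, what I have in mind is something like the following, assuming a VECTOR column named embedding and a build that accepts LIMIT/OFFSET; without an ORDER BY the pages would not be stable between calls, so the similarity score doubles as the sort key:

SELECT ID, PRIMKEY
FROM SQLUser.vectornomicembedtexttest
ORDER BY VECTOR_DOT_PRODUCT(embedding, TO_VECTOR(:queryVec, FLOAT)) DESC
LIMIT 15 OFFSET 30   -- third page of 15 (pages start at OFFSET 0)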
Hi, I am unsure how to remove this restriction when performing dynamic SQL using ##class(%SQL.Statement).%ExecDirectNoPriv(, query, args...).
It works fine, but the moment I add specific properties of the persistent class I am selecting from into the WHERE clause, I get: ERROR #5540: SQLCODE: -99 Message: User UnknownUser is not privileged for the operation. This happens despite using %ExecDirectNoPriv; I've tried a prepared statement as well, with exactly the same result.
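For reference, the call shape I am using (table and column hypothetical):

set rs = ##class(%SQL.Statement).%ExecDirectNoPriv(, "SELECT Name FROM MyApp.Person WHERE Name %STARTSWITH ?", "Sm")
if rs.%SQLCODE < 0 { write rs.%SQLCODE, ": ", rs.%Message, ! }
while rs.%Next() { write rs.%Get("Name"), ! }

The first argument of %ExecDirectNoPriv is an optional ByRef statement, so the statement text comes second. I also wonder whether the properties in my WHERE clause being computed, with compute code that touches something privileged, could explain the -99, but that is only a guess.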
Hello. Currently, we are developing on Caché 2018. Our team is working on improving an existing legacy program so that it can also be used on the web.
Before asking my question, here is the development environment.
I'd like to know whether there is an alternative or better way to paginate through a dataset using dynamic SQL than what I am using below. The problem is that as the potential pool of data grows, this code slows down to the point of being unusable. In analyzing each line of code, the slowdown appears to be in the initial rset.%Next() iteration. Is there anything available that does not require a subquery/%VID, such as a simple LIMIT/OFFSET?
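One alternative that avoids %VID entirely is keyset (seek) pagination: order by a unique column, remember the last value returned, and start the next page after it, which stays fast because it is an index seek rather than a scan-and-skip. A sketch with hypothetical names:

set stmt = ##class(%SQL.Statement).%New()
set sc = stmt.%Prepare("SELECT TOP 100 ID, Name FROM MyApp.Person WHERE ID > ? ORDER BY ID")
if $$$ISERR(sc) { do $system.Status.DisplayError(sc) quit }
set rs = stmt.%Execute(lastSeenId)   // pass 0 for the first page
while rs.%Next() {
    set lastSeenId = rs.%Get("ID")
    // render the row...
}

The trade-off is that this only steps forward one page at a time; random access to page N still needs an offset-style approach.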
I have an IRIS persistent class with a %Stream property whose value is a JSON object. I'd like to use a SQL trigger to pull some value out of the JSON and persist it in another property, for easy querying and indexing. See below for a minimal example:
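All names here are hypothetical, and the stream handling inside the trigger is the part I'm least sure of (flagged in the comments):

Class Demo.JsonDoc Extends %Persistent
{

/// JSON payload, e.g. {"mrn":"12345", ...}
Property Payload As %Stream.GlobalCharacter;

/// Extracted copy of $.mrn, persisted so it can be queried and indexed
Property Mrn As %String(MAXLEN = 64);

Index MrnIdx On Mrn;

/// Row-level trigger so it fires for SQL and object filing alike.
/// Assumption: {Payload*N} yields something %Open() can use for a
/// stream column; check how your version surfaces streams in triggers.
Trigger ExtractMrn [ Event = INSERT/UPDATE, Foreach = row/object, Time = BEFORE, Language = objectscript ]
{
    set stream = ##class(%Stream.GlobalCharacter).%Open({Payload*N})
    if $isobject(stream) {
        // %FromJSON accepts a stream directly
        set json = ##class(%DynamicAbstractObject).%FromJSON(stream)
        set {Mrn*N} = json.mrn
    }
}

}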
I have a general question about HealthShare Provider Directory using Code Tables on disk vs. Caché SQL tables. Why is Provider Directory not using the Caché SQL tables within the IRIS platform?
I was wondering if someone could help me. In the past I have been able to call external Stored Procedures through a SQL Outbound Connection and have them return an EnsLib.SQL.Snapshot to use within a BPL to extract data.
But this time instead of using a SQL Outbound BO to make the Stored Procedure call, I decided to create a Linked Stored Procedure through the %JDBC_Server to point to the Stored Procedure out on MS SQL.
However, I am struggling to get the code just right to return the Column value from the Linked Stored Procedure.
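Here is roughly what I have so far; the procedure name, parameter, and column are placeholders. My understanding is that row-returning procedures surface their rows as child results via %NextResult() on the %SQL.StatementResult, but I may be holding it wrong:

set stmt = ##class(%SQL.Statement).%New()
set sc = stmt.%Prepare("CALL SQLUser.MyLinkedProc(?)")
if $$$ISERR(sc) { do $system.Status.DisplayError(sc) quit }
set result = stmt.%Execute(inputValue)
if result.%SQLCODE < 0 { write result.%Message, ! quit }
// row-returning procedures expose their rows as child result sets
set rs = result.%NextResult()
if $isobject(rs) {
    while rs.%Next() { write rs.%Get("MyColumn"), ! }
}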
Our software commonly returns a full result set to the client, and we use the DataTables plugin to display table data. This has worked well, but as datasets grow larger, we are trying to move some of these requests server-side so the server handles the bulk of the work rather than the client. This has had me scratching my head in so many ways.
I'm hoping I can get a mix of general best practice advice but also maybe some IRIS specific ideas.
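For the paging piece specifically, the classic IRIS idiom for a server-side window is TOP ALL in a subquery plus %VID in the outer query; DataTables' start/length map straight onto the window bounds. Table and column names hypothetical:

SELECT Name, DOB FROM
    (SELECT TOP ALL Name, DOB
     FROM MyApp.Patient
     WHERE Name %STARTSWITH :search
     ORDER BY Name)
WHERE %VID BETWEEN :first AND :last

Bind :first/:last (or ? parameters in dynamic SQL) as start+1 and start+length. DataTables' server-side protocol also wants recordsTotal/recordsFiltered, so a separate COUNT(*) with the same WHERE clause is needed alongside the page query.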
I have a business service which is responsible for some batch operations with an SQL table. The process is generally slow, but it is possible to scale the performance using multithreading and/or parallel processing and logical partitioning (Postgres).
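On the IRIS side, one way to get the same fan-out is the Work Queue Manager, which runs chunks on a pool of worker processes. A sketch, where Demo.Batch.ProcessChunk is a hypothetical classmethod that handles one logical partition and returns a %Status:

set queue = $SYSTEM.WorkMgr.%New()
set sc = $$$OK
for part = 1:1:8 {
    // each queued call runs in its own worker process
    set sc = queue.Queue("##class(Demo.Batch).ProcessChunk", part, 8)
    quit:$$$ISERR(sc)
}
// block until every chunk reports back, collecting any errors into sc
set:$$$ISOK(sc) sc = queue.WaitForComplete()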
I need to generate a DDL file from a .cls class that already exists; the idea is to create a mirror table in SQL. Is it possible to do this export, or do I need to write the CREATE TABLE manually?
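If your version has it, the ExportDDL() method on %SYSTEM.SQL.Schema is worth a look; the argument order below is an assumption, so confirm it first with Do $SYSTEM.SQL.Schema.Help("ExportDDL"):

// assumed argument order: item(s) to export, filter, output file
do $SYSTEM.SQL.Schema.ExportDDL("MyApp.MyClass", , "/tmp/myapp_myclass.sql")

The generated file should contain CREATE TABLE (and index) statements you can replay on the other SQL side.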
I need to query an external database and write the result set/snapshot to an internal %Persistent [ DdlAllowed ] table that I built. I have built inbound SQL Services before and used them to write data externally to replace SSIS jobs, but how would querying a database via a Service and writing the data to an internal table work?
Can I just take the inbound query structure and write it to the class file of the internal table in a DTL? If so, what would be the Target? Or does this need to be done within a BPL as a Code block?
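My impression is that neither a DTL nor a BPL is strictly required: the service's OnProcessInput gets each row as an EnsLib.SQL.Snapshot, and you can file it into the persistent class right there. A sketch with hypothetical class and column names:

Class Demo.ExtLoadService Extends Ens.BusinessService
{

Parameter ADAPTER = "EnsLib.SQL.InboundAdapter";

Method OnProcessInput(pInput As EnsLib.SQL.Snapshot, Output pOutput As %RegisteredObject) As %Status
{
    // map the external row straight onto the internal persistent table
    set row = ##class(MyApp.Staging.Person).%New()
    set row.Mrn  = pInput.Get("MRN")
    set row.Name = pInput.Get("NAME")
    quit row.%Save()
}

}

A DTL only becomes worthwhile once the mapping should be maintained outside code; in that case the target would be a persistent message class rather than the table itself.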
My query that I am running on my custom SQL Inbound Service has columns that are larger than the typical string length. How do I enlarge the SQL Snapshot column size limits?
What is the SQL table name for it? How can I obtain it via ObjectScript?
A quick search doesn't show any relevant methods or properties. The documentation is a bit "wrong" here in saying that the SQL table name is the same as the class name: dots in the package name are converted, so for a class x.y.z it will be at least 'x_y.z'.
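The compiled-class dictionary exposes it; a sketch for a class named x.y.z, run in that class's namespace:

set def = ##class(%Dictionary.CompiledClass).%OpenId("x.y.z")
if $isobject(def) {
    write def.SqlSchemaName, ".", def.SqlTableName, !
    // where available, SqlQualifiedNameQ gives the quoted, fully qualified name
    write def.SqlQualifiedNameQ, !
}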
I've created a method in my File Service to do a cleanup on every file load. Currently, I've set it to delete data whose LastUpdated date is greater than maxdate. However, I want the cleanup to run once for every new file load. Any suggestions or advice on how to do this? Thanks!
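One option is to run the purge at the top of OnProcessInput, so it executes exactly once per incoming file rather than on a date window; if each file fully replaces the prior load, the purge can simply clear the staging table, otherwise keep the date predicate. Table name hypothetical:

Method OnProcessInput(pInput As %Stream.Object, Output pOutput As %RegisteredObject) As %Status
{
    // clear out the previous load before this file's rows go in
    &sql(DELETE FROM MyApp.Staging)
    if SQLCODE < 0 quit $$$ERROR($$$GeneralError, "purge failed: " _ SQLCODE)
    // ... then process the new file ...
    quit $$$OK
}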
So I want to use the INSERT OR UPDATE command so I can update a COUNTER for a given name:
INSERT OR UPDATE myTable SET name='Omer', counter = counter + 1;
As you can see with the above code, if the row doesn't exist yet we get an error because COUNTER is NULL. Everything I tried to fix this has failed.
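One variant I'm considering, assuming the counter reference resolves to NULL on the insert path so COALESCE can supply the starting value; note too that INSERT OR UPDATE decides between inserting and updating via a unique constraint, so name needs a unique index for this to count up rather than insert duplicates:

INSERT OR UPDATE myTable SET name = 'Omer', counter = COALESCE(counter, 0) + 1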
Hello! My question is quite simple: do the different data models of InterSystems all support the ACID properties? I assume the SQL data model implementation does, but does it also hold for globals (i.e., the hierarchical data model)?
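My current understanding, which I'd like confirmed: at the global level the ACID machinery is explicit rather than automatic. TSTART/TCOMMIT/TROLLBACK give atomicity plus journaled durability, and isolation is whatever you enforce with LOCK. A transfer sketch over a hypothetical ^Account global:

ClassMethod Transfer(from As %String, to As %String, amount As %Numeric) As %Status
{
    // isolation: take both locks up front (5-second timeout sets $TEST)
    lock +(^Account(from), ^Account(to)):5
    if '$TEST quit $$$ERROR($$$GeneralError, "could not acquire locks")
    TSTART
    try {
        set ^Account(from) = ^Account(from) - amount
        set ^Account(to) = ^Account(to) + amount
        TCOMMIT   // atomic and journaled: both updates land, or neither
        set sc = $$$OK
    } catch ex {
        TROLLBACK
        set sc = ex.AsStatus()
    }
    lock -(^Account(from), ^Account(to))
    quit sc
}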