A CDC vendor provided a WSDL whose endpoint URL uses a custom port (not 443). The client classes generated fine, but calls to their https:// URL with the custom port in it get no response at all. My assumption is that the request never even reaches their server, since Postman succeeds against the same custom https://path.to.server:NNNN URL.

As the title says, I've noticed that files saved to the stream directory on the disk where the database (.DAT file) lies never get purged. Is this expected, and do we need to create our own scheduled task to clean this folder up?

I could only find old answers saying that we do, but I find that a bit odd, since these are supposed to be temporary files. Perhaps I am not handling the streams correctly in my code?
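If a scheduled task really is required, something like this is what I have in mind: a minimal sketch of an OS-level cleanup script, where the directory path and the seven-day retention window are placeholders, and which assumes the leftover files truly are orphaned temporaries:

import os
import time

STREAM_DIR = "/opt/iris/mgr/mydb/stream"  # placeholder: the database's stream directory
MAX_AGE_SECONDS = 7 * 24 * 3600           # placeholder retention window: 7 days

now = time.time()
for name in os.listdir(STREAM_DIR):
    path = os.path.join(STREAM_DIR, name)
    # Delete only plain files untouched for longer than the retention window
    if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE_SECONDS:
        os.remove(path)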

For background, I've developed code that relies on %JSON.Adaptor functionality across an entire package of classes in our codebase. When regenerating these classes with %JSON.Adaptor as a superclass, I've encountered compilation errors from the JSON adaptor's method generators in certain classes, triggered by property types or parameters that are incompatible with the adaptor.

Is anyone using DICOM Interoperability in IRIS for Health configured in Mirror?

I'm asking because I'm not sure how to handle where the DICOM messages are stored.

For some reason DICOM uses the filesystem to store raw messages; the directory can be configured in the StorageLocation production setting. Obviously this is a big issue if/when a mirror failover occurs.

Unfortunately in IRIS it's not possible to change the DICOM storage from file stream to global stream.

Has anyone come across this issue?

Embedded Python query

Hi Guys,

How can I do the equivalent of this ObjectScript pattern in embedded Python, getting the value of each field inside the loop?

Set sql="SELECT ID, Name, Age FROM Sample.Person WHERE Age="_Myage
Set RS=##class(%ResultSet).%New()
Set Ret=RS.Prepare(sql)
Set Ret=RS.Execute()
While RS.Next() {
    Set name=RS.GetData(2)
}
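
Here is my rough attempt based on the docs, using iris.sql.exec with a parameterized query (a sketch; myage stands in for the Myage variable above). Is this the right idea?

import iris

myage = 30  # stands in for the ObjectScript Myage variable

# Parameterized query instead of the string concatenation used above
rs = iris.sql.exec("SELECT ID, Name, Age FROM Sample.Person WHERE Age = ?", myage)
for row in rs:
    # Rows are indexable by column position: 0 = ID, 1 = Name, 2 = Age
    name = row[1]
    print(name)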

Setting Python

Hi Guys,

I'm a newbie running IRIS in a container (IRIS for UNIX (Ubuntu Server LTS for x86-64 Containers) 2024.3 (Build 217U) Thu Nov 14 2024 17:30:43 EST) and trying to set up Python so I can start working on ML and auto-sklearn. My understanding is that IRIS already comes with embedded Python, but I'm unable to do something like "import pandas as pd" in VSCode, which suggests I need to install a more complete Python distribution or additional packages. What am I missing?
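
The closest thing I've found in the docs is that embedded Python ships only the interpreter, and third-party packages have to be installed into the instance's own package directory before they can be imported. Is this sketch the right idea? (The /usr/irissys path is my assumption of the default container install directory.)

# Assumed one-time step from the container's shell, installing into the
# embedded Python package directory (default path for IRIS containers):
#   pip3 install --target /usr/irissys/mgr/python pandas
import pandas as pd

# Quick check that the package is importable inside IRIS
df = pd.DataFrame({"Name": ["Alice", "Bob"], "Age": [34, 29]})
print(df.describe())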

IntegratedML

Hi Guys,

I'm a newbie who doesn't know much about IntegratedML and I'm looking for a first push into it. I've set up VSCode with my IRIS 2024.3 running on Linux, and my understanding is that we can create models using SQL. So first: do I need to set up a specific environment where I can run the SQL commands to create and train models, or can I just use the SMP? And do I need to install or enable Python or anything else to prepare the environment?

Second, are there easy samples or training materials on how to create, train, and deploy a model?
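
For concreteness, this CREATE/TRAIN/PREDICT flow is what I understand the IntegratedML SQL to look like, here run through embedded Python as a sketch (AgeModel is a made-up name and Sample.Person is just a placeholder table; I assume the same statements could equally be run from the SMP's SQL page):

import iris

# Define a model that predicts Age from the other columns of the table
iris.sql.exec("CREATE MODEL AgeModel PREDICTING (Age) FROM Sample.Person")

# Train the model on the rows currently in the table
iris.sql.exec("TRAIN MODEL AgeModel")

# Apply the trained model with the PREDICT() SQL function
rs = iris.sql.exec("SELECT Name, Age, PREDICT(AgeModel) AS PredictedAge FROM Sample.Person")
for row in rs:
    print(row[0], row[1], row[2])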

Thanks

I have a line in my *.scr file like this:

send: s rlt=##class(%SYS.Namespace).ListAll(.rlt)<CR>
wait for:USER>
send: S A="" F S A=$O(rlt(A)) q:A="" I A["USER"!(A["CLIE") F I=1:1:$L(L,",") S G="^["""_A_"""]"_$P(L,",",I) S B="" F S B=$O(@G@(B)) Q:B="" F J=1:1:$L(B) S V=$A($E(B,J,J)) I (V<48!(V>57))&&(V<65!(V>90))&&(V<97!(V>122)) W A,?10,G,?30,B,!<CR>
wait for:USER>

It errors out with a <SYNTAX> error where the "<" or ">" characters are not accepted.

Hello Team,

I get an "IRIS xDBC protocol is not compatible" error while executing a Python script. How do I fix this error?

C:\Users\ak\Desktop\lpyth\iris>C:/Users/ak/AppData/Local/Programs/Python/Python312/python.exe c:/Users/ak/Desktop/lpyth/iris/irisconn.py
An error occurred: connection failed: IRIS xDBC protocol is not compatible

py -m pip list
Package Version
------------------ ---------
intersystems-iris 3.9.2
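
For reference, irisconn.py is essentially the standard driver connect pattern below (host, port, namespace, and credentials are placeholders). My understanding is that this error usually means the client driver and the IRIS server are speaking different wire-protocol versions, so the first thing I'm checking is whether the installed intersystems-iris package version matches the server version:

import iris

# Placeholders: host, SuperServer port, namespace, username, password
conn = iris.connect("127.0.0.1", 1972, "USER", "_SYSTEM", "SYS")
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
conn.close()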

Hi everyone,


I'm currently testing out IRIS 2024.3 for a new project, and it's been running smoothly for the most part. However, I noticed that when running some heavier analytical queries, memory usage spikes more than I expected, even when the result sets aren't that large.

I’ve gone through the basics (buffer sizes, query plans, etc.), but I’m wondering if there are any new tweaks or recommended settings in 2024.3 specifically for managing memory better during these peak loads.

Anyone else run into something similar or have tips to fine-tune this?
