Dmitry Maslennikov · Apr 2

When you start a background job, the master process can read $ZCHILD to get the PID of the started process and collect those PIDs in the logs. Then, in a loop, you can check $DATA(^$JOB(pid)) to see whether each child is still running. That's the simplest approach.
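ObjectScript specifics aside, the same master/child pattern can be sketched in plain Python for illustration (the child command and timings here are made up, and subprocess is standing in for the IRIS JOB mechanism): the master records each child's PID and polls until it exits.

```python
import subprocess
import sys
import time

# Master process: start a background child and remember its PID,
# analogous to reading $ZCHILD after JOB in ObjectScript.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])
pid = child.pid
print(f"started child with pid {pid}")

# Poll in a loop, analogous to checking $DATA(^$JOB(pid)).
while child.poll() is None:  # None means the child is still running
    time.sleep(0.05)

print(f"child {pid} finished with exit code {child.returncode}")
```

The polling interval is arbitrary; in a real supervisor you would usually also log the PIDs somewhere durable so a restarted master can pick up the check.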
Dmitry Maslennikov · Mar 25

Implemented for Interoperability: it can check the status of a production, including the items in it, restart, update, and recover it, and check for queues and errors. There is also SQL query execution.

https://www.youtube.com/embed/EVzZnkjIvoM
Dmitry Maslennikov · Mar 21

Looks like there is no single specific use case for a server, and there are many variants of how it could be implemented. Do you have something in mind, how would you use it? I'm just thinking about the list of tools to add to a server implementation.
Dmitry Maslennikov · Mar 16

The value should be static at the time the record is created in the table. For dates it uses the range feature, so you can split the data by month or year (I suppose) and move a whole bunch of records, including indexes, to another database.
Dmitry Maslennikov · Mar 16

We moved our application to AWS, and we have some data which we need to keep for a while. With this feature, we can move old data to cheaper storage. I believe moving data to cheaper storage is the main use case. Another one is when a table is too big and someone would like to split it across multiple databases, together with its indexes.
Dmitry Maslennikov · Mar 13

I don't see any difference

Time taken: 23.218561416957527 seconds to insert 100000 at 4306.899045302852 records per second.
Time taken: 23.179011167027056 seconds to insert 100000 at 4314.247889152987 records per second.

from sqlalchemy import create_engine
import numpy as np
import time
import iris

hostname = "localhost"
port = 1972
namespace = "USER"
username = "_SYSTEM"
password = "SYS"

# Create the SQLAlchemy engine
DATABASE_URL = (
    f"iris+intersystems://{username}:{password}@{hostname}:{port}/{namespace}"
)
engine = create_engine(DATABASE_URL, echo=True)

args = {
    "hostname": hostname,
    "port": port,
    "namespace": namespace,
    "username": username,
    "password": password,
}
connection = iris.connect(**args)
# connection = engine.raw_connection()

# Generate data for each row (50 fields)
columns_count = 50
drop_table_sql = "DROP TABLE IF EXISTS test_table"
columns = [f"field_{i + 1}" for i in range(columns_count)]
create_table_sql = f"CREATE TABLE test_table ({', '.join([f'{col} DOUBLE' for col in columns])})"
num_records = 100000

# Define SQL insert statement
sql_insert = f"INSERT INTO SQLUser.test_table ({', '.join(columns)}) VALUES ({', '.join(['?'] * columns_count)})"
record_values = []

# Execute SQL insert
try:
    start_time = time.perf_counter()  # Capture start time
    batch = 0
    cursor = connection.cursor()
    cursor.execute(drop_table_sql)
    cursor.execute(create_table_sql)
    connection.commit()
    for _ in range(num_records):
        record_values = [np.random.rand() for _ in range(columns_count)]
        cursor.execute(sql_insert, record_values)
        batch = batch + 1
        if batch >= 10000:
            connection.commit()
            print("Batch inserted successfully!")
            batch = 0
    connection.commit()
    end_time = time.perf_counter()  # Capture end time
    elapsed_time = end_time - start_time
    print(
        f"Time taken: {elapsed_time} seconds to insert {num_records} at ",
        num_records / elapsed_time,
        " records per second.",
    )
except Exception as e:
    print("Error inserting record:", e)
finally:
    cursor.close()
    connection.close()
    engine.dispose()
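For readers without an IRIS instance handy, the batch-commit pattern used above (commit once every N rows rather than per row) can be illustrated with the standard-library sqlite3 module; the table, column names, and sizes here are made up for the sketch:

```python
import sqlite3
import random

# In-memory database stands in for the real connection.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE test_table (a REAL, b REAL, c REAL)")

num_records = 1000
batch_size = 100  # commit every 100 rows, mirroring the 10000-row batches above
batch = 0
for _ in range(num_records):
    row = [random.random() for _ in range(3)]
    cur.execute("INSERT INTO test_table (a, b, c) VALUES (?, ?, ?)", row)
    batch += 1
    if batch >= batch_size:
        conn.commit()  # flush the batch in one transaction
        batch = 0
conn.commit()  # commit any remainder

count = cur.execute("SELECT COUNT(*) FROM test_table").fetchone()[0]
print(count)
```

Grouping inserts into transactions like this amortizes the per-commit cost; whether it helps against IRIS depends on the driver, as the measurements above suggest.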
Dmitry Maslennikov · Mar 4

I think you're just confusing $LISTBUILD with a list in the BPL context. In BPL, when you define it as a List Collection, it will use the class %Collection.ListOfDT (or %Collection.ArrayOfDT in the case of an array). That means you should use:

if 'context.Facilities.Find(##class(Ens.Rule.FunctionSet).SubString(context.EpicDepartmentID,1,4)) {
    do context.Facilities.Insert(##class(Ens.Rule.FunctionSet).SubString(context.EpicDepartmentID,1,4))
}
Dmitry Maslennikov · Mar 4

The support for Node.js in IRIS is quite primitive and limited to the native functions only: globals and methods, no SQL. Check in the IRIS folder what you have installed with IRIS for Node.js. I don't have Windows, only Docker, and in my case:

/usr/irissys/bin/iris1200.node
/usr/irissys/bin/iris800.node
/usr/irissys/bin/iris1600.node
/usr/irissys/bin/iris1400.node
/usr/irissys/bin/iris1000.node
/usr/irissys/dev/nodejs/intersystems-iris-native/bin/lnxubuntuarm64/irisnative.node

If you want to use IRIS SQL from Node.js, you can try my package, which you can install with npm:

npm install intersystems-iris

const { IRIS } = require("intersystems-iris");

async function main() {
    const db = new IRIS('localhost', 1972, 'USER', '_SYSTEM', 'SYS')
    console.log('connected')
    let res = await db.sql("select 1 one, 2 two")
    console.log(res.rows);
    await db.close()
}

main()

It's quite simple at the moment and only supports SQL without parameters, but it should work, I believe.
Dmitry Maslennikov · Mar 4

I'm not familiar with HL7 processing, but I suppose it should be something like this:

foreach (<field>) {
    set $list(list, *+1) = <field>
}

If you need to build a list, put $LIST on the left side with *+1 as the position, to append to the list, and on the right side any value you would like to add.
Dmitry Maslennikov · Mar 4

USER>kill list  for i=1:1:10 { set $list(list, *+1) = i }
USER>zw list
list=$lb(1,2,3,4,5,6,7,8,9,10)

Is this what you're after?
Dmitry Maslennikov · Mar 3

Or GenerateEmbedded:

do $system.OBJ.GenerateEmbedded("*")

which can help discover hidden issues, such as when some old SQL no longer compiles (a real case).
Dmitry Maslennikov · Mar 3

If you happened to install the official Python driver, it could interfere here.
Dmitry Maslennikov · Feb 27

Awesome, thanks for using sqlalchemy-iris. By the way, when you run it inside IRIS, you can use Embedded Python mode, this way:

engine = create_engine('iris+emb:///USER')
Dmitry Maslennikov · Feb 26

Studio is already deprecated and works only on Windows. Access to the Management Portal depends on how you installed IRIS and which version you used. Your installation may not have the private web server installed, in which case you would need to install a web server yourself. Run the command:

iris list

It will output something like this:

Configuration 'IRIS'   (default)
    directory:    /usr/irissys
    versionid:    2025.1.0L.172.0com
    datadir:      /usr/irissys
    conf file:    iris.cpf  (WebServer port = 52773)
    status:       running, since Sat Feb 15 06:34:51 2025
    SuperServers: 1972 state: ok
    product:      InterSystems IRIS

Here the WebServer port is what you are looking for, and the full URL will be http://localhost:52773/csp/sys/UtilHome.csp. If you don't have a private web server, check the documentation on how to configure a separate web server.
Dmitry Maslennikov · Feb 24

Yes, it looks like it's solved now; I pulled again and got 2025.1 this time.
Dmitry Maslennikov · Feb 23

Tried to check 2025.1, but did not find it.

$ docker inspect --format '{{ index .Config.Labels "com.intersystems.platform-version" }}' containers.intersystems.com/intersystems/iris:latest-{em,cd,preview}
2024.1.3.456.0
2024.3.0.217.0
2024.1.3.456.0

For some reason, the preview tag points to the em version. Why so? For the community edition:

$ docker inspect --format '{{ index .Config.Labels "com.intersystems.platform-version" }}' containers.intersystems.com/intersystems/iris-community:latest-{em,cd,preview}
2024.1.2.398.0com
2024.3.0.217.0com
2025.1.0.204.0com

At least here the 2025.1 version is present, but I've noticed that the em version is now not the same as for the Enterprise edition. Anyway, where is 2025.1 for the Enterprise edition? The Community edition is too limited to check everything on. The post is two months old at this point, but it's still not available.
Dmitry Maslennikov · Feb 20

If you are using the Community edition, this is most probably related to the number of connections used. Try it on the Enterprise version.