Thanks for this valuable feedback.

A few years ago (around 2020), I had to do a project based on DocDB.
We encountered the same issues:

  • API first, not code first
    • Workaround: we relied heavily on scripts to generate the "Databases", "Properties", "Indexes" and so on (see the sketch after this list).
  • When you create a property, it's not automatically indexed
    • Workaround: we created a wrapper around the SDK to ensure that every property was indexed.
  • No way to enforce a schema
    • Workaround: none; we didn't really care about that at the time.
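
For context, here is roughly what those generation scripts looked like, as an embedded Python sketch against the %DocDB.Database API (the database and property names are made up, the exact %CreateProperty signature may vary by IRIS version, and percent signs become underscores in Python):

import iris

# (re)declare a DocDB database and the properties we query on
db = iris.cls('%DocDB.Database')._CreateDatabase('Person')

# each declared property gets indexed, which is the whole point of the script
db._CreateProperty('Name', '%String', '$.Name')
db._CreateProperty('City', '%String', '$.Address.City')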

Issues we encountered that you didn't mention:

  • Composite indexes are not supported
    • Workaround: we solved this with the "wrapper" we created (see the sketch after this list).
  • No support for nested objects
    • Workaround: we didn't solve this; we had to flatten all the objects :(
  • Some operators were not supported or did not work as expected
    • Workaround: we opened WRC tickets, and most of them were fixed :) or we built our own SQL statements based on the indexed properties.
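
The composite-index trick in the wrapper was simple: maintain a synthetic property that concatenates the fields you want indexed together, declare it like any other property so DocDB indexes it, then query on it. A sketch with made-up names:

SEPARATOR = '|'

def declare_composite(db, name):
    # the synthetic property is indexed by DocDB like any other declared property
    db._CreateProperty(name, '%String', '$.' + name)

def with_composite_keys(document, composites):
    # composites: {'NameCity': ['Name', 'City']} -> document['NameCity'] = 'John|Paris'
    for name, fields in composites.items():
        document[name] = SEPARATOR.join(str(document.get(f, '')) for f in fields)
    return document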

What's great is that we never got blocked by those issues; we always found a workaround.

I'm glad to see that DocDB is still alive and the team is working on it.

Supporting "json" databases is a step in the right direction. I can't wait to see the next steps: maybe a client-side library, support for nested objects, composite indexes, great SQL functions for JSON objects, etc.

Can you give https://github.com/grongierisc/iris-embedded-python-wrapper a try?

Follow the README; it gives you the instructions to work with a venv and a chosen version of Python, and to bind it to IRIS.

Behind the scenes, this module helps set up PythonPath, PythonRuntimeLibrary and PythonRuntimeLibraryVersion.
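
Once it's set up, a quick sanity check from inside the venv (any call into IRIS will do, this is just the one I use):

# if the import and the call both succeed, the Python <-> IRIS binding works
import iris

print(iris.cls('%SYSTEM.Version').GetVersion())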

Let me know if you find any issues.

Btw, it will not solve the Python 3.13 issue; you need to upgrade to 2025.1 to get support for it.

I just built it; I had no issue on my side.

In your case, it seems that you don't have permission to access /tmp while building the image.

That's odd, because /tmp is normally world-writable, so you should have access to it.

Make sure you haven't mounted a volume on /tmp.

Otherwise, you can try modifying the Dockerfile to use another directory, like /home/your_user/tmp (for example by setting the TMPDIR environment variable, which pip and Python's tempfile honor).
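
To see where temporary files will actually go during the build, you can add a quick probe to a Python build step (the path below is just an example):

import os
import tempfile

# tempfile honors TMPDIR; set it before the first gettempdir() call
os.environ.setdefault('TMPDIR', '/home/your_user/tmp')
print(tempfile.gettempdir())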

# run IRIS, execute the initialization script, then shut IRIS down
RUN iris start IRIS \
    && iris session IRIS < /opt/irisapp/iris.script \
    && iris stop IRIS quietly
import iris

# direct reference to the ^Ens.Queue global
GLOBAL_QUEUE = iris.gref("Ens.Queue")

def get_list_host_queue() -> dict:
    """Map each business host to the size of its queue."""
    dict_queue = {}
    for composed_key, value in GLOBAL_QUEUE.query():
        host = composed_key[0]
        dict_queue.setdefault(host, None)
        try:
            # the queue size is stored under the 'count' subscript
            if composed_key[2] == 'count':
                dict_queue[host] = value
        except IndexError:
            # node with fewer subscripts: no count here
            pass
    return dict_queue

if __name__ == "__main__":
    print(get_list_host_queue())
    
# {'Ens.Actor': 0, 'Ens.Alarm': 0, 'Ens.ScheduleHandler': 0, 'EnsLib.Testing.Process': 0, 'Python.MyAsyncNGBO': 10, 'Python.MyAsyncNGBP': 0, 'SystemSignal:29404': 0, '_SyncCall:29420': 0}

Try something like this.

As Robert said, it's because of the $LISTBUILD serialization.

You can give this a try:

https://pypi.org/project/iris-dollar-list/

which is a $LISTBUILD parser in Python:

from iris_dollar_list import DollarList

# raw $LISTBUILD bytes, e.g. as returned for a mirror member status
dollar_list_str = b'\x1B\x01SERVERA.FOO.BAR.ORG/STAGE\x1A\x01SERVERA.foo.bar.org|2188\t\x01Primary\x08\x01Active\x13\x01172.31.33.69|1972\x1A\x01SERVERA.foo.bar.org|1972'
dollar_list = DollarList.from_bytes(dollar_list_str)
print(dollar_list)

## $lb("SERVERA.FOO.BAR.ORG/STAGE","SERVERA.foo.bar.org|2188","Primary","Active","172.31.33.69|1972","SERVERA.foo.bar.org|1972")

Thanks, but I can't find the Python Adapter. Do you mean EnsLib.PEX.BusinessOperation or IOP.BusinessOperation?

Next, if Embedded Python runs natively on IRIS, why do I have to declare a connection as you mention in your example?

Does this work better?

from iop import BusinessOperation

class HelloWorld(BusinessOperation):

    def on_message(self, request):
        # default handler, called for any inbound message
        self.log_info("Hello World")

Hi,

If I remember correctly, the default behavior of the to_sql method is to use a transaction to insert the data.

What I do is use a with statement to ensure that the transaction is committed and the connection closed after the insert:

with engine.connect() as conn:
    train_df.to_sql(name='table1', con=conn, if_exists='replace', index=False)

Otherwise, you can commit the transaction manually:

conn = engine.connect()
train_df.to_sql(name='table1', con=conn, if_exists='replace', index=False)
conn.commit()
conn.close()
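
On SQLAlchemy 2.x you can also use engine.begin(), whose context manager commits automatically on a clean exit:

# begin() opens a transaction and commits it when the block exits without error
with engine.begin() as conn:
    train_df.to_sql(name='table1', con=conn, if_exists='replace', index=False)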

That's what I do, hope it helps.