Hi,

If I remember correctly, the default behavior of the to_sql method is to use a transaction to insert the data.

What I do is use a with statement to ensure that the transaction is committed and the connection closed after the insert:

with engine.connect() as conn:
    train_df.to_sql(name='table1', con=conn, if_exists='replace', index=False)

Otherwise, you can commit the transaction manually:

conn = engine.connect()
train_df.to_sql(name='table1', con=conn, if_exists='replace', index=False)
conn.commit()
conn.close()
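
One more option, if you are on SQLAlchemy 1.4/2.x: engine.begin() opens a transaction that is committed automatically when the block exits without an error. A minimal sketch, assuming the same engine and DataFrame as above:

# engine.begin() commits on clean exit and rolls back on exception
with engine.begin() as conn:
    train_df.to_sql(name='table1', con=conn, if_exists='replace', index=False)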

That's what I do, I hope it helps.

Here, those scripts are embedded Python scripts, which are different from regular Python scripts.

For whatever reason, the embedded Python module is called iris, and the official driver from the wheel intersystems_irispython-3.2.0-py3-none-any.whl is also called iris.

This creates confusion and a package collision. A community edition exists that solves this issue and lets you use both; you can install it from here: https://github.com/intersystems-community/intersystems-irispython
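
If you are not sure which iris module Python is actually picking up, here is a minimal check (standard library only, nothing IRIS-specific assumed):

import importlib.util

# Show where the 'iris' module is resolved from: the driver wheel lives in
# site-packages, while the embedded interpreter exposes its own module.
spec = importlib.util.find_spec("iris")
print(spec.origin if spec else "no iris module found")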

Another option is to use the iris wrapper: https://pypi.org/project/iris-embedded-python-wrapper/

It works well on macOS, and I'm looking for feedback on Windows machines.

Hi,

My bet is that you are using the web server port 52773 instead of the superserver port 1972.

Try this:

import iris

# Open a connection to the server
args = {'hostname':'127.0.0.1', 'port':1972,
    'namespace':'USER', 'username':'_SYSTEM', 'password':'SYS'
}

try:
    conn = iris.connect(**args)
    # Create an iris object
    irispy = iris.createIRIS(conn)
except Exception as e:
    # Handling the exception and printing the error
    print(f"An error occurred: {e}")
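
If the connection succeeds, a quick sanity check (a sketch; it just calls an ObjectScript class method through the Native API object created above) is:

# Inside the try block, after iris.createIRIS(conn):
# calls ##class(%SYSTEM.Version).GetVersion() and prints the version string
print(irispy.classMethodValue("%SYSTEM.Version", "GetVersion"))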

Hi @ala zaalouni,

When you encounter this error:

The error appears: "An error has occurred: iris.cls: error finding class"

This usually means that the IoP framework is not loaded into IRIS.

pip install iris-pex-embedded-python

Once it is installed, you must initialize it in IRIS (install the associated classes into IRIS). To do that, run:

iop --init

or

from iop import Utils
Utils.setup()

Please refer to the documentation:

https://github.com/grongierisc/interoperability-embedded-python?tab=read...

Furthermore, make sure you are in the right namespace by setting the environment variable $IRISNAMESPACE.

For this given Python function (in a file named demo.py):

def return_tuple():
    return 1, 2, 3

To retrieve the values in ObjectScript, you can use the following code; basically, you use the dunder methods to access the values.

set return = ##class(%SYS.Python).Import("demo")."return_tuple"()
write return."__len__"() // 3
write return."__getitem__"(0) // 1
write return."__getitem__"(1) // 2
write return."__getitem__"(2) // 3

Hi,

As you are in async mode, it's a bit tricky to get the end time; in sync mode, you just have to wait for the answer to come back.

In this case I would use the session id from the message that goes out of the business service to the BO.

So in the BS, try something like this:

Method OnProcessInput(
    pDocIn As %RegisteredObject,
    Output pDocOut As %RegisteredObject) As %Status
{
    // here you need to pass the jobid from the %CSP.REST that is invoking the BS
    set tJobId = pDocIn.JobID
    // your code here
    set ^CallApi($CLASSNAME(),tJobId,"sessionid") = ..%SessionId
    set pDocOut = pDocIn
    quit $$$OK
}

Then in the BO, you can use the session id to get the information from the global.

Method MyAsyncMethod(
    pRequest As %RegisteredObject,
    Output pResponse As %RegisteredObject) As %Status
{
    set tSessionId = ..%SessionId
    // look up the matching session id in the ^CallApi global
    set tClassname = ""
    for {
        set tClassname = $ORDER(^CallApi(tClassname))
        quit:tClassname=""
        set tJobId = ""
        for {
            set tJobId = $ORDER(^CallApi(tClassname,tJobId))
            quit:tJobId=""
            if $GET(^CallApi(tClassname,tJobId,"sessionid")) = tSessionId {
                // record the end time for this call
                set ^CallApi(tClassname,tJobId,"End") = $HOROLOG
                quit
            }
        }
    }
    // your code here
    quit $$$OK
}

I hope this helps.

Hello Cyril,

I have finished this exercise, so I will share my experience with you:

Have you checked that the snmpd daemon is installed and configured on your Docker instance?

By default, it is not installed, so you have to install and configure it.

Example of a Dockerfile to install snmpd:

ARG IMAGE=intersystemsdc/iris-community:latest
FROM $IMAGE

WORKDIR /irisdev/app

USER root
RUN apt-get update && apt-get install -y \
    nano \
    snmpd \
    snmp \
    sudo && \
    /bin/echo -e ${ISC_PACKAGE_MGRUSER}\\tALL=\(ALL\)\\tNOPASSWD: ALL >> /etc/sudoers && \
    sudo -u ${ISC_PACKAGE_MGRUSER} sudo echo enabled passwordless sudo-ing for ${ISC_PACKAGE_MGRUSER}

COPY snmpd.conf /etc/snmp/snmpd.conf
USER ${ISC_PACKAGE_MGRUSER}

Example of a snmpd.conf:

###############################################################################
#
# snmpd.conf:
#   An example configuration file for configuring the NET-SNMP agent with Cache.
#
#   This has been used successfully on Red Hat Enterprise Linux and running
#   the snmpd daemon in the foreground with the following command:
#
#   /usr/sbin/snmpd -f -L -x TCP:localhost:705 -c./snmpd.conf
#
#   You may want/need to change some of the information, especially the
#   IP address of the trap receiver if you expect to get traps. I've also seen
#   one case (on AIX) where we had to use  the "-C" option on the snmpd command
#   line, to make sure we were getting the correct snmpd.conf file. 
#
###############################################################################

###########################################################################
# SECTION: System Information Setup
#
#   This section defines some of the information reported in
#   the "system" mib group in the mibII tree.

# syslocation: The [typically physical] location of the system.
#   Note that setting this value here means that when trying to
#   perform an snmp SET operation to the sysLocation.0 variable will make
#   the agent return the "notWritable" error code.  IE, including
#   this token in the snmpd.conf file will disable write access to
#   the variable.
#   arguments:  location_string

syslocation  "System Location"

# syscontact: The contact information for the administrator
#   Note that setting this value here means that when trying to
#   perform an snmp SET operation to the sysContact.0 variable will make
#   the agent return the "notWritable" error code.  IE, including
#   this token in the snmpd.conf file will disable write access to
#   the variable.
#   arguments:  contact_string

syscontact  "Your Name"

# sysservices: The proper value for the sysServices object.
#   arguments:  sysservices_number

sysservices 76

###########################################################################
# SECTION: Agent Operating Mode
#
#   This section defines how the agent will operate when it
#   is running.

# master: Should the agent operate as a master agent or not.
#   Currently, the only supported master agent type for this token
#   is "agentx".
#   
#   arguments: (on|yes|agentx|all|off|no)

master agentx
agentXSocket tcp:localhost:705

###########################################################################
# SECTION: Trap Destinations
#
#   Here we define who the agent will send traps to.

# trapsink: A SNMPv1 trap receiver
#   arguments: host [community] [portnum]

trapsink  localhost public   

###############################################################################
# Access Control
###############################################################################

# As shipped, the snmpd demon will only respond to queries on the
# system mib group until this file is replaced or modified for
# security purposes.  Examples are shown below about how to increase the
# level of access.
#
# By far, the most common question I get about the agent is "why won't
# it work?", when really it should be "how do I configure the agent to
# allow me to access it?"
#
# By default, the agent responds to the "public" community for read
# only access, if run out of the box without any configuration file in 
# place.  The following examples show you other ways of configuring
# the agent so that you can change the community names, and give
# yourself write access to the mib tree as well.
#
# For more information, read the FAQ as well as the snmpd.conf(5)
# manual page.
#
####
# First, map the community name "public" into a "security name"

#       sec.name  source          community
com2sec notConfigUser  default       public

####
# Second, map the security name into a group name:

#       groupName      securityModel securityName
group   notConfigGroup v1           notConfigUser
group   notConfigGroup v2c           notConfigUser

####
# Third, create a view for us to let the group have rights to:

# Make at least  snmpwalk -v 1 localhost -c public system fast again.
#       name           incl/excl     subtree         mask(optional)
# access to 'internet' subtree
view    systemview    included   .1.3.6.1

# access to the Caché and Ensemble MIBs
view    systemview    included   .1.3.6.1.4.1.16563.1
view    systemview    included   .1.3.6.1.4.1.16563.2
####
# Finally, grant the group read-only access to the systemview view.

#       group          context sec.model sec.level prefix read   write  notif
access  notConfigGroup ""      any       noauth    exact  systemview none none

From this neat article : https://community.intersystems.com/post/intersystems-data-platforms-and-...

Then, you have to start the snmpd daemon:

sudo service snmpd start

On IRIS, you have to start the SNMP agent:

%SYS> w $$start^SNMP()

With all these steps, you should be able to retrieve information via snmp.

snmpwalk -m ALL -v 2c -c public localhost .iso.3.6.1.4.1.16563.4.1.1.1.5.4.73.82.73.83

Result :

iso.3.6.1.4.1.16563.4.1.1.1.5.4.73.82.73.83 = STRING: "IRIS for UNIX (Ubuntu Server LTS for x86-64 Containers) 2023.1 (Build 229U) Fri Apr 14 2023 17:37:52 EDT"
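
If you want to script that check from Python, here is a sketch that simply shells out to the same snmpwalk command as above:

import subprocess

# Run the same snmpwalk query and print the raw output
result = subprocess.run(
    ["snmpwalk", "-m", "ALL", "-v", "2c", "-c", "public", "localhost",
     ".iso.3.6.1.4.1.16563.4.1.1.1.5.4.73.82.73.83"],
    capture_output=True, text=True,
)
print(result.stdout)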

Now that the snmpd service and the IRIS SNMP agent are both running and functional, I encourage you to have a look at OpenTelemetry, a new standard for monitoring and tracing.

https://opentelemetry.io/

And our implementation of OpenTelemetry for IRIS :

https://docs.intersystems.com/iris20233/csp/docbook/Doc.View.cls?KEY=GCM...

Hi, FHIR Profile Validation will be supported with IRIS 2023.3.

https://community.intersystems.com/post/intersystems-announces-its-secon...

Highlights

There are several exciting new features in this release, such as support for base HL7® FHIR® version R5 and integration with the IBM FHIR® Validator. There are also runtime performance improvements for ObjectScript. Note that these features are expected, but not assured, to appear in the final release.

Meanwhile, you can use the community edition of Profile Validation:

In theory you are all set. Just take the Dockerfile as an example.

You can also read the pom.xml file for some inspiration.

For example, take a look at how the JDBC driver is added to the project:

        <dependency>
            <groupId>intersystems</groupId>
            <artifactId>jdbc</artifactId>
            <version>3.3.0</version>
            <scope>system</scope>
            <systemPath>${pom.basedir}/lib/intersystems-jdbc-3.3.0.jar</systemPath>
        </dependency>

To sum up, you obviously need the JDBC driver (there is one in the repo) and the Hibernate dialect (also in the repo). For the Hibernate dialect, you can also have a look at Yuri's article: https://community.intersystems.com/post/using-new-intersystems-iris-hibe....

Have fun with IRIS and Quarkus.

That's how I read and write streams with Embedded Python:

Read Stream :

def stream_to_string(stream) -> str:
    # Rewind, then read the stream in 1024-character chunks until the end
    string = ""
    stream.Rewind()
    while not stream.AtEnd:
        string += stream.Read(1024)
    return string

Write Stream :

import iris

def string_to_stream(string: str):
    # Create a %Stream.GlobalCharacter and write the string in 1024-character chunks
    stream = iris.cls('%Stream.GlobalCharacter')._New()
    n = 1024
    chunks = [string[i:i+n] for i in range(0, len(string), n)]
    for chunk in chunks:
        stream.Write(chunk)
    return stream
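
A quick round-trip check (a sketch, assuming both helpers above live in the same module):

# Write a Python string into a %Stream.GlobalCharacter, then read it back
stream = string_to_stream("hello world")
print(stream_to_string(stream))  # -> hello world
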
SELECT COUNT(*)
from Ens.MessageHeader
WHERE SourceConfigName = 'EPIC_SIU_IN'
AND TO_NUMBER(SessionId) = %ID
AND %ID >= (SELECT TOP 1 %ID FROM Ens.MessageHeader a WHERE TimeCreated >= '2016-07-27 05:00:00.000' ORDER BY TimeCreated ASC)
AND %ID <= (SELECT TOP 1 %ID FROM Ens.MessageHeader a WHERE TimeCreated < '2016-07-28 05:00:00.000' ORDER BY TimeCreated DESC)

Just change the dates here and see how fast it is.

The idea is to use an index, and the fastest index on Ens.MessageHeader is %ID.

Hi,

Do you know that every component in a production has an OnInit() method that is called when the component starts?
You can use this method to set the value of the parameter (a setting, i.e. a property on the component).

For example:

Method OnInit() As %Status
{
    // Token is a setting (property) on this component, populated from the environment
    Set ..Token = $System.Util.GetEnviron("Token")
    Quit $$$OK
}

Maybe it's more elegant to do this than to update the setting in the production file.

FYI, that's what I do in IOP (Interoperability On Python) to set the value of a setting on any component in a production:

import os

def on_init(self):
    # Read the setting from an environment variable, with a fallback default
    self.my_param = os.environ.get("MY_PARAM", "default_value")

Hi Evgeny,

You can do it in the following ways:
* in the production settings, with DefaultSettings (docs here)
* but DefaultSettings are a kind of replacement for environment variables
* you can do it in the code:
* ObjectScript: $system.Util.GetEnviron("MY_ENV_VAR")
* Python: os.environ['MY_ENV_VAR']
* you can use IOP (Interoperability On Python) to define the production in a Python way:

settings.py

import os

PRODUCTIONS = [
        {
            'shvarov.i14y.Production': {
                "Item": [
                    {
                        "@Name": "shvarov.i14y.ChatOperation",
                        "@ClassName": "Telegram.BusinessOperation",
                        "@Category": "Reddit",
                        "@PoolSize": "1",
                        "@Enabled": "true",
                        "@Foreground": "false",
                        "@Comment": "",
                        "@LogTraceEvents": "false",
                        "@Schedule": "",
                        "Setting": [
                            {
                                "@Target": "Adapter",
                                "@Name": "SSLConfig",
                                "#text": "default"
                            },
                            {
                                "@Target": "Adapter",
                                "@Name": "Token",
                                "#text": os.environ['TELEGRAM_TOKEN']
                            }
                        ]
                    }
                ]
            }
        } 
    ]

and then in the terminal:

$ iop -M settings.py

More information about iop and settings.py here