Adel Elsayed · Feb 8, 2020

Executing stored procedure through Python NativeAPI

I'm looking into ways to fetch data from Caché efficiently and work with it in Python, especially pandas.

I tried the following, but I'm wondering if this is the most efficient way:

run the stored procedure as a %ResultSet inside Caché in a classmethod and serialize the data to JSON,

and then call that classmethod from Python.

Class User.RegObj Extends %RegisteredObject
{

ClassMethod runAlpha() As %String
{
    // dynamic object keyed by row index
    set QHi = ##class(%DynamicAbstractObject).%FromJSON("{}")
    set rs = ##class(%ResultSet).%New("User.Person:alpha")
    set sc = rs.Execute("","")
    set idx = 0
    while rs.Next() {
        set idx = $Increment(idx)
        set miniList = ##class(%DynamicAbstractObject).%FromJSON("{}")
        do miniList.%Set(1, rs.Data("name"))
        do miniList.%Set(2, rs.Data("DB"))
        do QHi.%Set(idx, miniList)
    }
    // return after the loop, once all rows have been collected
    return QHi.%ToJSON()
}

}


Call from Python:

>>> import irisnative as iris
>>> conn=iris.createConnection("", 51773, "USER", "_SYSTEM","SYS")
>>> wrk=iris.createIris(conn)
>>> print(wrk.classMethodValue("User.RegObj","runAlpha"))


The JSON can then be loaded into a pandas DataFrame.
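For example, the final step could look like the sketch below. The payload string stands in for what runAlpha() returns (the names and values here are hypothetical); the outer keys are row indexes and the inner keys are the column positions set by %Set.

```python
import json

import pandas as pd

# Hypothetical payload shaped like runAlpha()'s output:
# row index -> {column position -> value}.
payload = '{"1":{"1":"Alice","2":"HR"},"2":{"1":"Bob","2":"IT"}}'

rows = json.loads(payload)
# Rebuild the rows in numeric index order and name the columns explicitly.
df = pd.DataFrame(
    [[rows[k]["1"], rows[k]["2"]] for k in sorted(rows, key=int)],
    columns=["name", "DB"],
)
print(df)
```

In practice the payload would come from wrk.classMethodValue("User.RegObj", "runAlpha") instead of a literal string.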




Hi Adel!

This may not answer how to call the procedure from Python, but since you already have some ObjectScript code, you may want to try the opposite direction: call Python (pandas) from IRIS using ObjectScript, and transfer the data from your query to the Python libraries there.

Python Gateway does exactly that; here is the set of articles, and I'm also tagging @Eduard Lebedyuk and @Sergey Lukyanchikov to provide more details if needed.


With Python Gateway you "drive" your Python context by sending it portions of your script and retrieving the Python objects you need back into your IRIS context.

Once a Python object has been retrieved into the IRIS context, you are free to decide how to process it.

As other commenters have stated, you can use Python Gateway to drive the Python process; in that case, check this article, specifically the Data Transfer section. Also join our MLToolkit@intersystems.com user group - it's focused on AI/ML scenarios.

I'm a user of the Python Native API, so I can offer the following advice. There are two cases here:

  • You need to iterate over the result set - in that case you may want to call %Next() and retrieve values as you go.
  • You just need the complete dataset for downstream processing (your case). In that case, yes, serialization is required. I see two ways to do that: JSON and CSV. If the dataset is small, JSON is fine, but (I haven't run any tests, though I'm fairly sure) CSV serialization can be faster and less CPU/memory intensive.
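To illustrate the CSV option on the Python side: if the classmethod emitted a header row plus one line per result-set row (a sketch with hypothetical data - the source doesn't show a CSV-emitting method), pandas can parse the returned string directly from an in-memory buffer, with no temporary file needed.

```python
import io

import pandas as pd

# Hypothetical CSV payload as a classmethod might emit it:
# a header row followed by one line per result-set row.
csv_payload = "name,DB\nAlice,HR\nBob,IT\n"

# read_csv accepts any file-like object, so wrap the string in StringIO.
df = pd.read_csv(io.StringIO(csv_payload))
print(df)
```

This skips the per-row %Set calls on the server and the per-key JSON parsing on the client, which is where the CPU/memory savings would come from.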