Hi Enrico, what I thought I had seen somewhere worked like this: you are writing code to call a previously defined method, and as you work through the arguments in your IDE of choice (mine = VSCode), you are presented with a help string for the specific argument you are currently working on, as opposed to a general help text that explains the method and all of its input arguments. I went looking for an example and couldn't find one, so perhaps I was imagining it or had my wires crossed between IDEs / languages 🤷‍♂️

Thanks John. Your guess is correct: client-side editing paradigm and %Persistent. The initial save works exactly as planned: it updates the storage definition and reloads the updated definition. Where it gets weird (new info from my sleuthing attempts) is if I run a SQL query in the Management Portal; that's when the extent info gets injected. When I subsequently try an inline debug (no edits to the code at all), I am presented with the "Target source is not compiled (Open launch.json)" dialog box. Then when I try to save, I get the popup that the source on the server is newer, and when I do a compare, the results show that the extent info has been injected. I went ahead and opened a WRC ticket, so I'll refrain from opening an issue on GitHub for the moment unless they can't get anywhere. I will be sure to update the thread when the answer gets tracked down.

Best regards,

Sinon

Hi Kurro,  Thanks for the reply. I'm running the latest ObjectScript extensions.

Autoadjust & Compile on Save are ticked.
I run the ck flags to fix a separate problem (if I make an edit only to a comment, the debugger doesn't rebuild and can't start interactive debugging, so I rebuild always until that is fixed).

My .export settings all match.

So I'm still trying to track it down...

Figured it out.

From pyodbc, call in the standard ODBC format (duh). The projected procedure name was causing the issue. Using the example code above:
cursor.execute('{CALL exp.MD5NoMatch(?,?,?)}', ('exchange', 'cplx', 'channel'))
(I was previously trying to call it with the class name included: exp.DayFiles_MD5NoMatch.)

The correct projected SP name is shown in the Management Portal: Home -> System Explorer -> SQL -> Procedures.

There I could see that the class name is not included in this case, presumably because the SP name is unique in my "exp" schema. (FYI: "exp" is just an example schema name I used when creating the class; you won't find it in your listings.) Also note that if more than one "." is used in defining the procedure name, only the leftmost remains a "." and the others are replaced by an underscore "_", which is why I was originally trying to call the procedure as schema.classname_spName.
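For anyone landing here later, the whole call can be sketched end to end. This is a minimal sketch, not the exact code I run: the DSN name "CacheDSN" and the parameter values are placeholder assumptions, and the small helper that builds the ODBC escape-sequence CALL string is mine, not part of pyodbc.

```python
def call_syntax(schema, sp_name, n_params):
    """Build the ODBC escape-sequence CALL string for a projected SP,
    e.g. '{CALL exp.MD5NoMatch(?,?,?)}' for 3 parameters."""
    marks = ','.join('?' * n_params)
    return '{CALL %s.%s(%s)}' % (schema, sp_name, marks)

def md5_nomatch_rows(conn, exchange, cplx, channel):
    """Run the projected stored procedure over an open pyodbc
    connection and return all result rows."""
    cur = conn.cursor()
    cur.execute(call_syntax('exp', 'MD5NoMatch', 3), (exchange, cplx, channel))
    return cur.fetchall()

# Usage (requires pyodbc and a configured DSN -- both assumed here):
# import pyodbc
# conn = pyodbc.connect('DSN=CacheDSN')
# rows = md5_nomatch_rows(conn, 'exchange', 'cplx', 'channel')
```

The key point is just the first line: the procedure name in the CALL string is the projected name from the Procedures listing, not the classname_queryname form.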

Thanks, but perhaps I wasn't clear, Eduard, or I am still too dense (likely). I have a class that I defined with properties, methods and, most importantly for this example, a query. It expects to receive Exchange, Complex and Channel as input parameters and then returns a result set. With the old binding I would call it as shown below. I'm just missing the way to do that via pyodbc or Direct.

import intersys.pythonbind3 as ip
...
database = ip.database(conn)
...
cq2 = ip.query(database)
sqlcode = 0
cq2.prepare_class("exp.DayFiles", "MD5NoMatch")  # class query, not dynamic SQL
cq2.set_par(1, exchange)
cq2.set_par(2, cplex)
cq2.set_par(3, channel)
cq2.execute()

and then begin fetching rows from the recordset.

The particular query definition from my exp.DayFiles class:

/// MD5 discrepancy w/ Amazon, not necessarily a busted file
Query MD5NoMatch(Exchange As %String, Complex As %String, Channel As %String) As %SQLQuery(ROWSPEC = "Exchange:%String,Cplx:%String,Channel:%String,DateStd:%String,FileName:%String,FPathRemote:%String,FPathLocal:%String,MD5Cloud:%String,MD5Local:%String") [ SqlName = MD5NoMatch, SqlProc ]
{
SELECT Exchange, Complex, Channel, DateStd, ZipFileName,
       {fn CONCAT(ZipPathCloud, ZipFileName)} AS CloudPath,
       {fn CONCAT(ZipPathLocal, ZipFileName)} AS LocalPath,
       ZipMD5Cloud, ZipMD5Local
FROM exp.DayFiles
WHERE Exchange = :Exchange AND Complex = :Complex AND Channel = :Channel
  AND ZipMD5Agree = 'False'
ORDER BY DateStd
}

Hi Kevin,

Python 3.6.7 is the latest version that I have been able to successfully target for Cache. Listed below are my personal doc notes from the last time I reinstalled Cache and built the python binding (almost a year ago). All paths are specific to my cache install and my personal choices in how I configure / name Anaconda environments.

  • Add cache and cache/bin to path
  • Set up python binding
    • This was a struggle… complicated by placing the path additions for CACHE after the Conda environments were set in .bashrc. They looked like they were there, but they weren't in the notebook. Place them before.
    • When rebuilding the Python binding for Cache, be in the /cache/dev/python directory and run setup3.py via sudo with the full Anaconda path to python: sudo /home/xxx/anaconda3/envs/py367/bin/python setup3.py install

Note: no higher than Python 3.6.7 at the moment, so make sure the Anaconda env py367 is created first: conda create -n py367 python=3.6.7 anaconda

TESTED SUCCESSFULLY!!!!! 

Awesome reply, Alexander, and thanks for dropping some knowledge on me that greatly simplifies the coding workaround I was envisioning. While it resolves my immediate issue, the underlying logic still seems questionable to me.

$ZTH is supposed to return Cache's internal format: "Validates a time and converts it from display format to Caché internal format"

Shouldn't $ZTH be consistent in its return? All numbers in canonical form, not just those greater than or equal to 1?

Thanks Again

Studio is a Windows-only tool. You could Boot Camp or VMware / Parallels a Windows version. InterSystems had sunset new development on Studio and was migrating to an Eclipse-based environment using a plugin (Atelier) that would let you compile classes and the like, but that may have also been sidelined after the just-released 1.3 version for Eclipse Photon. (I am using Atelier on my Mac with Photon.)

For what it's worth: Cache is a great blend of database and programming language if you stick with it, but they don't make it easy, that's for sure. COS (Cache ObjectScript) is beyond fast and flexible (perhaps too much so) for achieving things that would take forever in a standard RDBMS. In other words: don't give up ;-)