You can easily add your own compile-time checks via method generators. Here's an example that checks that all abstract methods are implemented.

Class Test.AbstractChecker
{

ClassMethod Check() As %Status [ CodeMode = objectgenerator, ForceGenerate ]
{
    #Dim sc As %Status = $$$OK
    
    // Get the class name from the %compiledclass object, which describes the class currently being compiled
    Set class = %compiledclass.Name

    // Iterate over the class methods.
    // You can also use the %class object to iterate
    Set method=$$$comMemberNext(class, $$$cCLASSmethod, "")
    While method'="" {
        
        // Get the method's abstract flag
        Set abstract = $$$comMemberKeyGet(class, $$$cCLASSmethod, method, $$$cMETHabstract)
        
        // Stop iterating as soon as we find an abstract compiled method
        If abstract {
            Set origin = $$$comMemberKeyGet(class, $$$cCLASSmethod, method, $$$cMETHorigin)
            Set sc = $$$ERROR($$$GeneralError, $$$FormatText("Abstract method %1 in class %2 not implemented (originates from %3)", method, class, origin))
            Quit
        }
        
        // Get next method
        Set method=$$$comMemberNext(class, $$$cCLASSmethod, method)        
    }
    Quit sc
}

}

After adding this class to a class's list of superclasses, I get an error on compilation whenever an abstract method is left unimplemented.
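
For example, with a (hypothetical) abstract class and a subclass that never implements the abstract method, compilation of the subclass fails:

Class Test.AbstractPerson [ Abstract ]
{

/// Subclasses are expected to implement this
Method GetName() As %String [ Abstract ]
{
}

}

Class Test.Person Extends (Test.AbstractPerson, Test.AbstractChecker)
{
}

Since Test.Person does not override GetName(), compiling it should stop with the message built by Check(), something like "Abstract method GetName in class Test.Person not implemented (originates from Test.AbstractPerson)".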

You can create your own snippets in VSCode.

For example, here's the snippet body for a Business Service (BS):

Class ${1:class} Extends Ens.BusinessService
{

Parameter ADAPTER = "${2:adapter}";

Property Adapter As ${2:adapter};


Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject, ByRef pHint As %String) As %Status
{
    Quit $$$ERROR($$$NotImplemented)
}
}

I'm also using ODBC 3.5, and I was unable to reproduce your issue.

Here's what I tried:

import pyodbc
import pandas as pd

cnxn = pyodbc.connect('DSN=MYDSN', autocommit=True)
Data = pd.read_sql('SELECT TOP 10 DOB, RandomTime, TS FROM isc_py_test.Person', cnxn)
Data
Data.info()

And here's the output I got:

          DOB RandomTime                  TS
0  1977-07-18   07:49:03 1993-10-31 17:23:25
1  2001-11-08   07:45:05 2005-12-25 04:11:22
2  2004-02-20   23:17:49 1981-08-31 02:08:10
3  1995-11-22   01:46:31 2010-05-20 11:25:31
4  1974-01-09   15:20:03 1974-12-22 13:49:00
5  1987-10-19   23:14:52 1974-10-02 17:48:37
6  1985-03-29   17:47:12 1978-02-24 06:40:51
7  2015-10-21   23:09:15 2006-08-29 16:30:29
8  1972-12-26   15:53:23 1996-12-06 03:13:26
9  1990-09-25   05:53:25 2000-03-22 05:54:57

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 3 columns):
DOB           10 non-null object
RandomTime    10 non-null object
TS            10 non-null datetime64[ns]
dtypes: datetime64[ns](1), object(2)
memory usage: 320.0+ bytes

Here's the source data I used (import and run Populate).

I recommend PythonGateway for Python workloads with InterSystems IRIS.

Can you share your dataset?

The idea of document DBs is as follows. You always have constraints on your data: which fields exist, what their datatypes are, and so on. In the case of a relational DB the constraints are checked and enforced on the database side - you define a required timestamp field once, and when you request the data back, a hundred times out of a hundred you get back a valid timestamp field.

On the one hand this is good - separation of concerns and all that. On the other hand, what if your application's data requirements change often? Or what if you need to store data for several versions of your application at once? In that case maintaining the constraints on the database side becomes cumbersome.

Document databases were created for exactly this type of situation. On the DBMS side you now only enforce the existence of the collection, the id key, and maybe a few required properties if you're 100% sure they are always available. Everything else is optional and entirely the concern of the application. A document, after all, can have any number of fields, or none at all, and it is the application's job to make sense of them.
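
As a minimal sketch (using the %DocDB API in InterSystems IRIS; the database name, the promoted property, and the document contents below are made up), two documents with completely different shapes can live side by side in the same collection:

// create a document database (collection); "Demo.Feed" is a hypothetical name
Set db = ##class(%DocDB.Database).%CreateDatabase("Demo.Feed")

// optionally promote a JSON path to a queryable property
Do db.%CreateProperty("Title", "%String", "$.title", 0)

// the documents themselves do not have to share a schema
Do db.%SaveDocument({"title":"First post", "tags":["news","tech"]})
Do db.%SaveDocument({"title":"Second post", "author":{"name":"Ed"}})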

In your case the schema is extremely stable - the standard defining RSS was approved in 2005 and, frankly, is not likely to change, ever. It is, as one might say, feature complete.

Using the XSD schema you can easily generate the classes in InterSystems IRIS and import your RSS data into them.

Any particular reason you want to use DocDB and not relational tables?

Send an HTTP request to get the feed and use %XML.Reader to parse it into individual objects.

Use a Business Service/Business Operation (BS/BO) to automate sending the HTTP requests.
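
Here's a minimal sketch of the fetch-and-parse part (the host, the path, and the My.RSS.Item class are assumptions; the latter stands in for one of the classes generated from the XSD):

ClassMethod LoadFeed() As %Status
{
    Set sc = $$$OK

    // fetch the feed over HTTP
    Set req = ##class(%Net.HttpRequest).%New()
    Set req.Server = "example.com"      // hypothetical host
    Set sc = req.Get("/feed.rss")       // hypothetical path
    Quit:$$$ISERR(sc) sc

    // correlate <item> elements with the generated class and save each object
    Set reader = ##class(%XML.Reader).%New()
    Set sc = reader.OpenStream(req.HttpResponse.Data)
    Quit:$$$ISERR(sc) sc
    Do reader.Correlate("item", "My.RSS.Item")
    While reader.Next(.item, .sc) {
        Set sc = item.%Save()
        Quit:$$$ISERR(sc)
    }
    Quit sc
}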

  1. Shut down InterSystems IRIS on Machine 1.
  2. Copy IRIS.DAT from Machine 1 to Machine 2.
  3. Mount IRIS.DAT to InterSystems IRIS on Machine 2.

Get a property parameter via direct global access:

/// Get property param
/// w ##class().GetPropertyParam("Form.Test.Simple", "Name", "MAXLEN")
ClassMethod GetPropertyParam(class As %Dictionary.CacheClassname = "", property As %String = "", param As %String = "") As %String [ CodeMode = expression ]
{
$$$comMemberArrayGet(class, $$$cCLASSproperty, property, $$$cPROPparameter, param)
}

You can send the request directly from a service to an operation.

Your BP must implement the following methods to work:

  • OnRequest
  • OnResponse

As both methods are missing from your BP, it does not work.
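
Here's a minimal sketch of such a BP (the class and target names are made up; adjust the message classes to your production):

Class Test.MyProcess Extends Ens.BusinessProcess
{

Method OnRequest(pRequest As Ens.Request, Output pResponse As Ens.Response) As %Status
{
    // forward the incoming request to a downstream operation (hypothetical target name)
    Quit ..SendRequestAsync("My.Operation", pRequest)
}

Method OnResponse(pRequest As Ens.Request, ByRef pResponse As Ens.Response, pCallRequest As Ens.Request, pCallResponse As Ens.Response, pCompletionKey As %String) As %Status
{
    // hand the downstream answer back to the original caller
    Set pResponse = pCallResponse
    Quit $$$OK
}

}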

If it's a file stream, you can set its TranslateTable property to your charset before reading.
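
For example (a sketch; the file path and charset are assumptions):

Set stream = ##class(%Stream.FileCharacter).%New()
Do stream.LinkToFile("/tmp/data.txt")     // hypothetical path
Set stream.TranslateTable = "CP1251"      // assumed source charset
While 'stream.AtEnd {
    Write stream.ReadLine(), !
}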

Otherwise you can use the $zcvt function to convert strings.

Here's an example of iterating over candidate encodings with $zcvt to determine the correct one.
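
A minimal sketch, assuming the raw bytes are already in a local variable (the list of translate tables below is just a starting point - adjust it to the tables available on your system):

ClassMethod TryEncodings(raw As %String)
{
    Set encodings = $ListBuild("UTF8", "CP1251", "CP1252", "Latin1", "Latin2")
    For i=1:1:$ListLength(encodings) {
        Set enc = $ListGet(encodings, i)
        Try {
            // interpret the raw bytes using this table and print the result for inspection
            Write enc, ": ", $ZConvert(raw, "I", enc), !
        } Catch ex {
            Write enc, ": <conversion failed>", !
        }
    }
}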

If you are interested in encoding internals, use zzdump to inspect the hex dumps.
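
For instance, $Char(208, 159) holds the two-byte UTF-8 encoding of "П"; dumping it shows the raw bytes:

Set s = $Char(208, 159)
zzdump s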

If your encoding is region-specific, don't forget to set your locale.