It depends. Essentially, Interoperability productions take care of:

  • Parallelization
  • Queues / Async
  • Error management, mitigation, and recovery
  • After-error investigation (Visual Trace / Audit)
  • Unified overview of integration flows

For each integration (or part of one) you need to decide whether you need these features, and usually you do. In that case, all you need to do is develop one or more Business Hosts containing the business logic; as long as they conform to the Interoperability production structure, you automatically get all of the benefits mentioned above.

You pay for the convenience with the per-message and queueing overhead.

In cases where some (ideally most) of these conditions are true:

  • the external system (whatever it is) is reliable - downtime is a scheduled rarity
  • the external system rarely returns errors
  • response time is stable and in the synchronous realm
  • interaction with the system is one simple flow
  • the integration is under extremely high load

You can choose to interface more directly.

Furthermore, it's not an all-or-nothing Interoperability/no-Interoperability choice, but rather a scale of how much you expose as Interoperability hosts. In your example, maybe having only a BO is enough and you can forego the BP?
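For example, a minimal Business Operation conforming to the production structure could look like this. This is only a sketch: the class name, message types, and adapter are illustrative assumptions, not from the original question:

```objectscript
/// Hypothetical minimal Business Operation wrapping an external HTTP service.
Class com.xyz.prod.UserOperation Extends Ens.BusinessOperation
{

Parameter ADAPTER = "EnsLib.HTTP.OutboundAdapter";

Method GetRole(pRequest As Ens.StringRequest, Output pResponse As Ens.StringResponse) As %Status
{
    set sc = $$$OK
    // Business logic calling the external system goes here.
    // Because this is a production host, queuing, retries, error
    // handling, and Visual Trace come for free.
    set pResponse = ##class(Ens.StringResponse).%New()
    quit sc
}

XData MessageMap
{
<MapItems>
  <MapItem MessageType="Ens.StringRequest">
    <Method>GetRole</Method>
  </MapItem>
</MapItems>
}

}
```

Adding this class as a Business Operation to a production is then all that is needed to get queues and tracing around the call.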

I run tests programmatically like this:

ClassMethod runtest(basedir As %String, testdir As %String) As %Status
{
    // Point the unit-test framework at the directory containing the tests
    set ^UnitTestRoot = basedir
    // Run the tests; /nodelete keeps the test classes after the run
    set sc = ##class(%UnitTest.Manager).RunTest(testdir, "/nodelete")
    quit sc
}

ClassMethod isLastTestOk() As %Boolean
{
    // ^UnitTest.Result holds the id of the most recent test run
    set in = ##class(%UnitTest.Result.TestInstance).%OpenId(^UnitTest.Result)
    for i=1:1:in.TestSuites.Count() {
        #dim suite As %UnitTest.Result.TestSuite
        set suite = in.TestSuites.GetAt(i)
        // Status 0 means the suite failed
        return:suite.Status=0 $$$NO
    }
    quit $$$YES
}

To be honest, despite my 7+ years of experience exposing class methods as SQL procedures, I have yet to write the name of a resulting SQL procedure correctly on the first try.

The trick I use is to open the list of procedures in SMP:

In this list I search for an already existing procedure with the same nesting level, copy it and change identifiers to my values.

That's how I wrote your query.

First I copied: %Atelier_v1_Utils.Extension_AfterUserAction

Then replaced %Atelier_v1_Utils with com_xyz_utils

Finally replaced Extension_AfterUserAction with Users_getRole

I encounter this issue fairly often, but I don't need complete documentation of everything, rather just documentation of the Interoperability production.

As all the class info is also available via the %Dictionary package, I just query it and generate an XLSX.

Here are some queries to get started (though they usually need to be adjusted on a per-project basis). The queries should also be rewritten to use the SubclassOf proc instead of the current name matching. Also, I'm not sure why I don't pass the filestream directly; that needs to be fixed too.

Queries
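As a sketch of the kind of query I mean (the package name, column choice, and simple `%STARTSWITH` matching are assumptions; a real version would use the SubclassOf proc as noted above), one can list all properties of the classes in a package via %Dictionary:

```objectscript
/// Hypothetical helper: print class/property/type triples for one package
ClassMethod ListProperties(package As %String = "com.xyz") As %Status
{
    set sql = "SELECT parent, Name, Type FROM %Dictionary.CompiledProperty "_
              "WHERE parent %STARTSWITH ?"
    // %ExecDirect prepares and executes the statement in one call
    set rs = ##class(%SQL.Statement).%ExecDirect(, sql, package _ ".")
    while rs.%Next() {
        write rs.%Get("parent"), " | ", rs.%Get("Name"), " | ", rs.%Get("Type"), !
    }
    quit $$$OK
}
```

The same result set can be fed into a report writer to produce the XLSX instead of being written to the terminal.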

$ZF functions ($ZF(-3) through $ZF(-6), to be precise) cannot call arbitrary libraries, only InterSystems IRIS callout libraries.

You need to write a C library which is an InterSystems IRIS callout library and which calls the DLL you need. Here's documentation on that. And here's a sample callout library.
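Once you have such a callout library, invoking it from ObjectScript looks roughly like this (the library path, function name, and argument are placeholders):

```objectscript
// $ZF(-3) loads the callout library (replacing any previously loaded one)
// and calls the named function from it with the given arguments
set result = $ZF(-3, "/usr/lib/mycallout.so", "MyFunction", "some-argument")
write result, !
// Passing an empty library name unloads the currently loaded library
do $ZF(-3, "")
```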

Another approach would be using the CNA community project to call your library. CNA provides an interface for using native C-compatible shared libraries.

Looks like a user permissions issue.

Question: which user does the Caché terminal log in as, and is this different from the Windows 10 service's cache.exe settings?

Yes, the terminal works under your OS user, while Caché (and InterSystems IRIS) background jobs work under the system account user (check Services → Caché to see which user the service runs as).

You need to grant that system account user permission to access the share.