In Embedded Python, exceptions flow back and forth between ObjectScript and Python as you'd expect.

If your Python code raises an exception but does not handle it, the exception is passed to the ObjectScript side, and vice versa.  In your example, I'm not sure why you're handling the exception in Python and then returning an exception object to the caller.  Why not skip the try/except and let the exception propagate to the ObjectScript side?
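For illustration, something like this (the function and names here are just placeholders, not your actual code):

def get_widget(widget_id, widgets):
    # No try/except here.  If widget_id is missing, the KeyError propagates
    # out of the Python code, and the ObjectScript caller can trap it with an
    # ordinary Try/Catch block, just like any other exception.
    return widgets[widget_id]

That keeps the error handling in one place (the caller) instead of passing exception objects around as return values.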

I like to bake the timezone into the container when I build it.  That way it'll have the correct timezone no matter where I run it.  Here's an example Dockerfile based on the latest IRIS container that sets the container's timezone to Pacific (aka America/Los_Angeles).
 

FROM intersystemsdc/iris-community
# Switch to root to install packages; drop back to irisowner at the end
USER root
ENV DEBIAN_FRONTEND=noninteractive
# Install tzdata and point /etc/localtime at the desired zone
RUN apt-get update && \
    apt-get install -yq tzdata && \
    ln -fs /usr/share/zoneinfo/America/Los_Angeles /etc/localtime && \
    dpkg-reconfigure -f noninteractive tzdata
USER irisowner

You can then run docker build . to build the image, docker image ls to confirm it's there, and docker run to start a container from it.
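Assuming you give the image a tag (the name iris-pacific below is just one I made up), that looks roughly like:

docker build -t iris-pacific .
docker image ls iris-pacific
docker run --rm -d --name iris-pacific -p 52773:52773 iris-pacific

Port 52773 is the IRIS web server port, and once the container is up you can confirm the time zone with docker exec iris-pacific date.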

Yes, IKO provides support for a simple IRIS topology where all compute nodes (ECP clients) are the same, as are all data nodes (shards) and all web gateways.

IKO is a tool that simplifies common deployments of IRIS in Kubernetes.  It takes your description of your IrisCluster and creates the base Kubernetes objects needed to support that deployment (a real lifesaver for sharding/mirroring deployments).  You can always inspect what IKO created (see the kubectl example below) and use that as inspiration for creating your own YAML to deploy IRIS in your Kubernetes environment.
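For example, to see what got created for an IrisCluster (the names and namespace below are placeholders):

kubectl get statefulsets,services,configmaps -n my-namespace
kubectl get statefulset my-cluster-data -n my-namespace -o yaml

The second command dumps one of the generated objects as YAML, which is a handy starting point for hand-rolled manifests.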

The IKO container is distributed both as part of the IKO distribution and via the InterSystems Container Registry (ICR).  For most users, the right answer is to just point to the container in ICR.  We mention this in the documentation here:  https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

If you have your own container registry, you can push the container there.  The error message you provided looks like you're trying to push it to Docker Hub and getting permission denied.
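If you do want to host it in your own registry, the usual pattern is to re-tag the image for that registry and push it there.  The registry, repository, and version below are made up; substitute the IKO image and tag you actually have:

docker tag intersystems/iris-operator:3.8 registry.example.com/myteam/iris-operator:3.8
docker push registry.example.com/myteam/iris-operator:3.8

A permission-denied error from Docker Hub usually means you're pushing to a namespace your account doesn't own; tagging the image under your own namespace or registry avoids that.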

We've tried to package up our best practices for deploying IRIS in K8s into the InterSystems Kubernetes Operator.  In other words, instead of documenting everything, we built a tool that automates what can be automated.

That said, it doesn't cover everything.  Your example question is a good one: should IRIS clusters be deployed into the same Kubernetes cluster as your other workloads?  The right answer depends on your exact needs.

In general, you shouldn't deploy IRIS into a different Kubernetes cluster from the rest of your Kubernetes-managed application.  However, I'd consider doing so if there were strong architectural or compliance reasons to keep the IRIS cluster separated from other workloads.

Feel free to reach out if you want someone to go over your situation and give more prescriptive advice.

Hi Stefan - 

I'll try to answer your questions in order.  If you want to know more, let me know.

  • IKO does not currently configure any Kubernetes ingress, but it does create a service that you can choose to configure for external access (a load balancer, for example).  You can set this via the spec.serviceTemplate.spec section of your IrisCluster YAML, which gives details about the service that we create.  I like to use ClusterIP for the service type and then create an ingress to give me fine control over what is exposed to the internet (see the first sketch after this list).  A bit more info can be found here:  https://kubernetes.io/docs/concepts/services-networking/service/#publish...
  • Regarding routing to individual hosts... From within the cluster (say, another pod in the namespace), each pod is accessible via its hostname (for example, myCluster-data-0-1), and you can use kubectl port forwarding for whatever administrative access you need.  For non-administrative workflows, I suggest creating Kubernetes services that point to the subset(s) of pods you want to use for each type of production traffic (see the second sketch after this list).
  • We are currently working on adding native support for IAM and SAM in IKO, which will greatly simplify installing these products in Kubernetes.  In the meantime, most use cases would want to run IAM inside the cluster by creating a series of Kubernetes deployments, services, and ingresses.  I might be able to gin up an example if you're going this way.
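To make the ingress/service point concrete, here's a rough sketch of the serviceTemplate idea.  The cluster name is made up, only the relevant fields are shown, and you should double-check the apiVersion and field layout against the IKO documentation for the version you're running:

apiVersion: intersystems.com/v1alpha1   # verify against your IKO version
kind: IrisCluster
metadata:
  name: my-cluster
spec:
  # ... the rest of your IrisCluster spec goes here ...
  serviceTemplate:
    spec:
      type: ClusterIP   # keep the service internal, then expose it through an Ingress you define yourself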
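And for routing production traffic to a subset of pods, a plain Kubernetes Service might look roughly like this.  The name and selector label are hypothetical; check which labels IKO actually puts on your pods (kubectl get pods --show-labels) and select the subset you want:

apiVersion: v1
kind: Service
metadata:
  name: my-cluster-compute-traffic
spec:
  type: ClusterIP
  selector:
    app: my-cluster-compute   # hypothetical label; use the labels on the pods you want to target
  ports:
    - port: 52773
      targetPort: 52773   # default IRIS web server port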