Glad to hear the interest in Python.

When you use Embedded Python, IRIS loads Python (CPython) into memory inside the IRIS process.  The specific version of Python depends on your OS.  That's documented here:  https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI...

Loading Python into the IRIS process does just what you think it does: Python is not a separate process, and references from Python to ObjectScript objects are in-memory.  So there's no performance difference between using Embedded Python from IRIS and starting with a Python program that does `import iris`.
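As a minimal sketch of the second approach (this assumes the IRIS-provided `iris` module is on your Python path; the `%SYSTEM.Version` call is just an illustration), attaching to IRIS from an external Python program looks like this:

```python
# Sketch: attach to IRIS from an external Python program.
# Assumes you're running the Python that ships with (or is configured for)
# your IRIS instance; outside that environment the import fails.
try:
    import iris  # provided by the IRIS installation; loads IRIS in-process
    # This call works on an in-memory ObjectScript object -- no IPC involved:
    print(iris.cls("%SYSTEM.Version").GetVersion())
    attached = True
except ImportError:
    attached = False
    print("Not running inside an IRIS-provided Python environment")
```

Either way you end up in the same place: one process, with Python and ObjectScript sharing memory.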

The start.sh script just calls docker-compose up, so you should be able to run podman-compose up to start SAM.  The first time it runs, it needs to download the containers (you can see which ones in the docker-compose.yaml file) if they aren't already in your local podman cache.

During normal runtime, SAM doesn't require access to the internet.  That said, I haven't personally tested SAM without internet access.

Good luck!  Let us know how it goes.

It sounds like you're looking for persistent storage, which means you want to use the Durable %SYS feature.  Here's the info to get you started:  https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls....

To answer your specific questions:

1. Yes, IRIS works with any StorageClass that your Kubernetes cluster has installed, including any storage class you might have created for EBS or EFS. If you haven't yet tried it, the InterSystems Kubernetes Operator can simplify deployments in Kubernetes.  Even if you don't use IKO in production, it's a good way to see how everything should be configured.  https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

2. IRIS 2022.1 also includes first-class support for AWS SQS.  You can use it in your Productions or use ObjectScript to send messages to it.  https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic....
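On point 1, a hedged sketch of what requesting persistent storage for IRIS might look like (the StorageClass name `gp3`, the claim name, and the size are assumptions; substitute whatever your cluster provides):

```yaml
# Hypothetical PersistentVolumeClaim backed by an EBS StorageClass.
# IRIS (with Durable %SYS) would mount the resulting volume for its data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iris-durable-sys
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3   # assumed EBS-backed class; use your own
  resources:
    requests:
      storage: 20Gi
```

If you use IKO, it generates claims like this for you from your IrisCluster definition.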

Community images are published to DockerHub, as is our longstanding practice. Docker is currently semi-retiring the old "store" portion of DockerHub, which leaves a confusing interface for finding our containers.  We expect Docker to clear this up in the next few months.

In the meanwhile, you can easily find InterSystems community containers via the InterSystems Organization on DockerHub:

https://hub.docker.com/u/intersystems
 

The OpenTelemetry collector should be able to collect metrics from the IRIS metrics API as well as the structured logs from the logdmn.  I don't have a working example configuration file to show you yet but I expect to have one by June 20th.  
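In the meantime, here's a rough sketch of the shape such a collector configuration could take for the metrics side (the target host/port and the logging exporter are assumptions for illustration, not a tested config):

```yaml
# Hedged sketch: OpenTelemetry Collector scraping the IRIS metrics API.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: iris
          metrics_path: /api/monitor/metrics
          static_configs:
            - targets: ['iris-host:52773']  # assumed host:webserver-port
processors:
  batch:
exporters:
  logging:   # stand-in; swap for your backend's exporter
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch]
      exporters: [logging]
```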

As for the completely shameless plug: in my session on Observability in IRIS at Global Summit, I'm planning to cover the pillars of modern observability stacks and how IRIS fits into them.  At the moment, I'm still deciding which commercial stack to demo (DataDog, perhaps?).  I'll also cover SAM, its recent & future updates, and how it fits into the bigger observability picture.

In Embedded Python, exceptions flow back-and-forth between ObjectScript and Python as you'd expect. 

If your Python code raises an exception but does not handle it, the exception is passed to the ObjectScript side, and vice versa.  In your example, I'm not sure why you're catching the exception in Python and then returning an exception object to the caller.  Why not skip the try/except and let the exception propagate to the ObjectScript side?
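For instance (a hedged sketch; `divide` is a made-up example function, not from your code): an unhandled Python exception like this one surfaces on the ObjectScript side, where an ordinary Try/Catch can handle it:

```python
# Hypothetical example: let the exception propagate instead of returning it.
def divide(a, b):
    # No try/except here: if b == 0, the ZeroDivisionError is raised to the
    # caller -- including an ObjectScript caller, which can Try/Catch it.
    return a / b
```

This keeps the Python code simpler and lets each side handle errors with its own native mechanism.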

I like to bake the timezone into the container when I build it.  That way it'll have the correct timezone no matter where I run it.  Here's an example Dockerfile based on the latest IRIS container that sets the container's timezone to Pacific (aka America/Los_Angeles).
 

FROM intersystemsdc/iris-community
USER root
ENV DEBIAN_FRONTEND=noninteractive
# Install tzdata and point /etc/localtime at the desired zone
RUN apt-get update && \
    apt-get install -yq tzdata && \
    ln -fs /usr/share/zoneinfo/America/Los_Angeles /etc/localtime && \
    dpkg-reconfigure -f noninteractive tzdata
# Drop back to the normal IRIS runtime user
USER irisowner

You can then run `docker build .` to build the image, `docker image ls` to find the new image's ID, and `docker run` to start it.

Yes, IKO provides support for a simple IRIS topology where all compute (ECP client) nodes are the same, as are all data nodes (shards) and all web gateways.

IKO is a tool that simplifies common deployments of IRIS in Kubernetes.  It takes your description of your IrisCluster and creates the base Kubernetes objects needed to support that deployment (a real lifesaver for sharding/mirroring deployments).  You can always see what IKO created and use that as inspiration for creating your own YAML to deploy IRIS in your Kubernetes environment.

The IKO container is distributed as both part of the IKO distribution and via the InterSystems Container Registry.  For most users, the right answer is to just point to the container in ICR.  We mention this in the documentation here:  https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

If you have your own container registry, you can push the container there.  The error message you provided looks like you're trying to push it to Docker Hub and getting permission denied.