I read the note, but somebody will not read it and will use it in production anyway.

Again, if you need root access in real time inside a running container, you are doing something wrong. If you need to install anything, do it in the Dockerfile; if you need to change some system settings, do that in the Dockerfile as well. The container exists just to run a particular application inside, and it gets the permissions that fit its needs.
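As a minimal sketch (assuming an Ubuntu-based IRIS image and the irisowner user; the base tag and package here are placeholders, not taken from your setup):

FROM intersystems/iris-community:latest
USER root
# install packages at build time, not inside a running container
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*
# drop back to the unprivileged user so IRIS does not run as root
USER irisowner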

Docker containers are completely different from virtual machines, and it's important to remember that. One container, one application; in our case it's IRIS. The state of the container does not matter at all; what matters is what's in the image. A container may die at any time for any reason, and its content has to be restorable from the image, so your image has to be prepared for that case. If deleting the container and starting a new one breaks your application, then something is wrong with it and has to be fixed in the Dockerfile.

Moreover, any Dockerfile can be configured with a HEALTHCHECK, which periodically checks the state of the application running inside; with control from outside, such as Kubernetes, a container that is not in the expected state will be sacrificed and started again from the image. The base IRIS image produced by InterSystems has a HEALTHCHECK as well, so if you just stop IRIS inside, the container will be destroyed after a while.
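As an illustration only (this is not the exact check shipped in the official image), a HEALTHCHECK could look roughly like this, assuming the iris command is available inside the container:

# mark the container unhealthy when the IRIS instance is not reported as running
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
  CMD iris qlist IRIS | grep -q running || exit 1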

Again, a container is not a virtual machine, and in an organization of any size there should be no requirement for direct access inside it.

This is a very bad idea. You have several issues with your Dockerfile, for instance:

  • A hardcoded password for irisowner and even for root
  • IRIS starting as root instead of as the irisowner user (a sketch of a fix follows this list)
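A minimal sketch of how both points could be addressed (the base tag and path are placeholders; supply real credentials at deploy time, not at build time):

FROM intersystems/iris-community:latest
# do any root-only setup at build time, then drop privileges again,
# so IRIS itself never runs as root
USER root
RUN mkdir -p /opt/app && chown irisowner /opt/app
USER irisowner
# no ARG/ENV with passwords here: pass credentials at deploy time,
# for example as a Kubernetes secret mounted into the container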

Administrators of the running instance should always have access from the Docker host, or through Kubernetes.

Only what is supposed to be in a production environment should appear in the container: no backdoors of any kind.

Pinging @Luca Ravazzolo for comments 

To be able to restart IRIS from inside the container, you will in any case need some independent OS-level process outside of IRIS that controls stopping and starting it.

As for doing it in the container: most of the changes that would require a restart have to be prepared during the Docker build process, so they should be part of the base image.

Docker supports restarting a container with a single command (docker restart), and it does not delete the container's state during the restart. So I would say that's the preferable way.

So, in this case, if you are not going to migrate that data to IRIS in the end, I see no reason to use IRIS for such data at all; microservices written in some other languages are really a better way.

Would it be possible to synchronize the data from the other services with IRIS? That way your patient data would still be there, backed up in IRIS, with a FHIR endpoint. In that case, you can use an IRIS production to do this particular task.

Microservices are supposed to run separately, with the ability to scale, recover from failures, and stay independent of the running environment. And a major side effect of this is that microservices are language-independent.

And an IRIS production is not a platform for microservices at all. There is only one production, limited to one server, and if you need to add more items to it, you have to update the entire production. Yes, it is possible to scale services when needed, but it will still be limited by the resources of the server where the production is running. And there is no easy way to automatically recover a production after a failure.

So, nowadays, I would recommend trying Kubernetes as a platform for microservices, where each microservice could be written in any language supported by IRIS (Node.js, .NET, Python, Java), with IRIS itself running in a mirror for high availability.

Speaking about a healthcare application, why would you even decide to implement your own way when FHIR already covers most of the things you would need?

The issue is probably in the export settings, which are used to find the real file for a resource on the server. You can check it by opening ObjectScript Explorer: when you drill down to your classes/routines, it should show an icon next to the name, the same as in the File Explorer, which means it is linked correctly. If not, you have to look at the "objectscript.export" settings; you may find some useful info about this here.

The class Security.Users is available only from %SYS. You can map it to your namespace, but I would not recommend doing that. I think you can just have a method for this exact task, where you switch to the %SYS namespace, gather what you need, and that's it. In this method you will need to use the New $Namespace command, so that you return back to your original namespace when the method returns. As a ClassMethod it would look like this:

ClassMethod GetUserInfo(username As %String, ByRef props) As %Status
{
  // New makes the change to $Namespace local to this method,
  // so the caller's namespace is restored automatically on return
  New $Namespace
  Set $Namespace = "%SYS"
  Return ##class(Security.Users).Get(username, .props)
}