Background tasks are an internal feature used mostly by the System Management Portal, and in most cases they are not meant to be called by you.

JOB is a completely different story. With this command, you have control over many aspects of how the process runs in the background.

You can check $TEST (be careful to do it right after JOB, and note it is only set when you specify a timeout), which tells you whether your process actually started in the background.

$ZCHILD returns that job ID; you can check in the SMP whether it is running.

$Data(^$JOB(child)) will tell you whether your child process is still alive.

You may have up to 25 background jobs per process (possibly fewer). So store the child process ID after each call to the JOB command.

You can redirect that process's output to a file by passing the principal-output parameter.

With ^$JOB and $ZCHILD, you can at least wait until the process has finished its work, using a loop with a reasonable HANG.
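Putting those pieces together, a minimal sketch could look like this (Run^MyTask is a hypothetical entry point; the timeout is what makes JOB set $TEST):

```objectscript
    ; start a background job, giving it 10 seconds to launch;
    ; without a timeout, JOB does not set $TEST
    JOB Run^MyTask::10
    If '$TEST {
        Write "child did not start",!
        Quit
    }
    ; $ZCHILD holds the job ID of the child we just started
    Set child = $ZCHILD
    ; poll ^$JOB until the child's node disappears, i.e. it finished
    While $Data(^$JOB(child)) {
        Hang 1
    }
    Write "child ",child," finished",!
```

To redirect the child's output, the principal-output device goes in the process-parameters list, something like JOB Run^MyTask:(:"/tmp/child.log"):10 (path here is just for illustration).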

So, it looks like you are doing it the wrong way.

If you need to build your application, you have to do it in the Dockerfile during the build stage, not when the container is already up and running. You are building the container beforehand anyway.

In any case, you can use this to wait until it has started:

sleep 5; docker exec $CONTAINER /usr/irissys/dev/Cloud/ICM/waitISC.sh

Yeah, it's an undocumented, internally used script, but nothing else is offered instead.

Contact me directly; I can help review the build process and give specific recommendations.

Nowadays I would not recommend plain CSP files at all, and definitely not ZEN.

I see that you use Python in some cases; I would recommend Django then. I wrote a few articles about it recently.

Or just build a REST API with the %CSP.REST class and a plain web application in HTML and JS, or use ReactDOM (my application as an example).
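A minimal %CSP.REST dispatch class could be sketched like this (class name and endpoint are made up for illustration):

```objectscript
Class MyApp.REST Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/ping" Method="GET" Call="Ping"/>
</Routes>
}

/// Respond with a small JSON payload; the frontend
/// (plain JS, React, etc.) just calls this with fetch()
ClassMethod Ping() As %Status
{
    Set %response.ContentType = "application/json"
    Write {"status":"ok"}.%ToJSON()
    Quit $$$OK
}

}
```

You then create a web application in the SMP with this class as its dispatch class, and the frontend talks to it over HTTP.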

To be honest, I have no idea why SAM appeared in the form it did. SAM is a bundle of tools:

  • Grafana, AlertManager - visualization and alerting
  • Prometheus - a time-series database, which in fact makes requests to IRIS to collect metrics
  • SAM itself, from what I can tell, is just a UI tool that helps organize your cluster and configure Prometheus+Grafana. And it uses a whole IRIS instance just for that.

And in fact, SAM is not a requirement for any of this. What matters is on your servers, where you have a couple of API endpoints for Prometheus. As far as I remember, the code there is closed and you can't extend it. But you can easily add your own endpoints with your custom metrics, in a format Prometheus understands. Look at this article as an example.
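As a sketch, a custom endpoint serving the Prometheus exposition format could look like this (class, metric, and global names are hypothetical):

```objectscript
Class MyApp.Metrics Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/metrics" Method="GET" Call="Metrics"/>
</Routes>
}

/// Plain-text output in the format Prometheus scrapes:
/// "# HELP" and "# TYPE" comments, then one "name value" line per metric
ClassMethod Metrics() As %Status
{
    Set %response.ContentType = "text/plain"
    Write "# HELP myapp_orders_total Total orders processed",!
    Write "# TYPE myapp_orders_total counter",!
    Write "myapp_orders_total ",+$Get(^MyApp("orders")),!
    Quit $$$OK
}

}
```

Expose it as a web application and point a Prometheus scrape job at its URL, the same way the built-in endpoints are scraped.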

So, any new metrics are supposed to be added on your servers, not in SAM.

And there is another way. Last year I developed a plugin for Grafana itself, which can connect directly to IRIS through its superserver port and collect data any way you like. So you could even skip Prometheus entirely: just Grafana and IRIS. It's possible to move some metrics logic outside of IRIS, something like using an SQL query as a metric, or reading some global, and visualizing it in Grafana. That plugin is just a proof of concept and can't be used in production. It requires some work, but I have not seen much interest in it yet. And I would really like to improve it and make it useful.

And in fact, I asked about this particular task at Global Summit 2022, and I'm really interested whether any other companies have this request too. Nowadays there are a lot of such tools for many programming languages, but not for ObjectScript. And in some cases it can become a requirement to scan your application, no matter what language it's written in.

I hope someone from InterSystems may add something. Pinging @Andreas Dieckow 

Not sure why you would need to connect to the SAM container directly, but OK.

As Robert already mentioned, to open a terminal inside the container you can use this command:

docker exec -it {container_id} bash

-i is for interactive, -t for a tty

Add -u root if you need root access; it's not needed in most cases.

bash is the command you want to execute, so it can instead be:

docker exec -it {container_id} iris session IRIS

where the first iris is the command, session is its subcommand, and the last IRIS is the instance name (the default in a Docker container).

The next question is how to connect with VSCode. First of all, it should connect over the internal web server, or whatever other way you have configured for web access. So the container has to be started with port 52773 mapped.

You may get authorization issues with a freshly started container, since it may require you to change the password. VSCode requires authorization. So a simple config like this should work:

{
  "objectscript.conn": {
    "active": true,
    "port": 52773,
    "host":"localhost",
    "username": "_system",
    "password": "SYS",
    "ns": "%SYS",
  }
}

Hi. Yeah, at this time no security scanner supports ObjectScript. There are a few reasons for that.

At this time, the closest tool is ObjectScriptQuality, which can already scan for possible bugs, and it could be extended for security scans as well. With proper funding it's possible to do that there, but only as a scanner for code alone.

Another way is to implement a brand-new tool built specifically for security scanning, a complete scanner for the environment.

If your company or other companies would like to invest in such a project, I can implement such a tool.

Well, it looks so, but there are no methods that will return the available connections.

Anyway, another point of my post is that it's not enough at all. Only 5 connections, and you have almost no control over usage; you can use up all the connections too fast. For instance, I'm using DBeaver, and when it connects it uses 3 workers and takes 3 connections. When I develop a Django application, I have to make sure it runs in single-threaded mode, or it will use up all the connections too fast. Development in any language will hit its own limits as well. And sometimes you may get unexpected issues: with LOAD DATA, you may think it's just a SQL query executed in your process, so no extra connections will be used, but that's wrong. And you always have to keep in mind that you can quickly hit the limit. You're connected with DBeaver, have one terminal, the SMP, something else, and then you decide to compile your code in VSCode: nope, no room for it. You have to disconnect DBeaver or find another way to free connections.

Yes, I get it. I just don't get why I spend my connections on something I don't expect to spend them on. I'm too limited. How many connections can any given process consume? I have no idea how to find out.

Well, yes, I see it's the Community Edition. But how can I be sure that the licensed version will work differently? There is a limit of 25 connections, and if somehow, in the background, it creates more connections than are left, it will consume 25 license units at once, and that's exactly what I would not want to get in production. I need predictable behavior. And this is not predictable at all.