I found this way, using IDENTITY and ALLOWIDENTITYINSERT=1:

CREATE TABLE users (
	id identity NOT NULL,
	name VARCHAR(30) NOT NULL,
	PRIMARY KEY (id)
)
WITH %CLASSPARAMETER ALLOWIDENTITYINSERT = 1;

INSERT INTO users (id, name) VALUES (2, 'fred');
SELECT LAST_IDENTITY();

INSERT INTO users (name) VALUES ('ed');
SELECT LAST_IDENTITY();

Not sure if it's actually a good way to solve the issue, though.
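
In case you want to drive the same check from ObjectScript, here is a minimal sketch using %SQL.Statement against the table created above (error handling is kept to a minimum; nothing here is specific to the original question beyond the table name):

    // insert without an explicit id, then read the generated identity back
    set rs = ##class(%SQL.Statement).%ExecDirect(, "INSERT INTO users (name) VALUES ('ed')")
    if rs.%SQLCODE < 0 { write "SQLCODE ", rs.%SQLCODE, ": ", rs.%Message, ! quit }
    set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT LAST_IDENTITY()")
    do rs.%Next()
    write "last identity: ", rs.%GetData(1), !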

I see the problem in how difficult it is to explain how to start using IRIS with NodeJS (actually with any supported language). It's not any simpler for NodeJS developers who are already familiar with IRIS, given the complexity of getting the drivers.

If the answer to "how do I start developing with IRIS in NodeJS" were simply "install the driver with npm and you are ready to go", there would probably be many more NodeJS-based projects.

It has not been updated for a few years, so, yeah, I'm sure it's synchronous.

But I think most of the operations available through that API should be synchronous anyway. SQL could be asynchronous, but the API does not support it.

I think I could build an asynchronous adapter. I have a driver which supports SQL, not async yet, but I have not had such a task so far.

Background tasks are an internal feature used mostly by the System Management Portal, and in most cases they are not supposed to be called by you.

JOB is a completely different story. With this command, you have control over many aspects of how the process is run in the background.

You can check $Test (do it right after JOB, and note that JOB only sets $Test when it is given a timeout), which tells you whether your process even started in the background.

$ZChild returns the ID of that job; you can check in the SMP whether it is running.

$Data(^$JOB(child)) will tell you whether your child process is still alive.

You may have up to 25 (possibly fewer) background jobs per process, so store the child process ID after each call to the JOB command.

You may redirect output from that process to a file by passing the principal-output parameter.

With ^$JOB and $ZChild you can at least wait until the process has finished its work, using a loop with a reasonable HANG.
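
Putting the pieces together, a minimal sketch (My.Worker and its Run method are hypothetical; the 5-second timeout is only there because JOB sets $Test only when a timeout is given):

    ; start any class method (or routine) in the background
    job ##class(My.Worker).Run("input.csv")::5   ; timeout so that $TEST gets set
    if '$test {
        write "child process did not start", !
        quit
    }
    set child = $zchild                          ; PID of the job we just started
    write "started background job ", child, !

    ; wait until the child disappears from ^$JOB, checking once per second
    while $data(^$JOB(child)) {
        hang 1
    }
    write "job ", child, " has finished", !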

To be honest, I have no idea why SAM appeared in the form it did. SAM is a bunch of tools:

  • Grafana, AlertManager - visualization and alerting
  • Prometheus - a time-series database, which is what actually makes the requests to IRIS to collect metrics
  • SAM itself, from what I understood, is just a UI tool which helps to organize your cluster and configure Prometheus+Grafana. And it uses a whole IRIS instance just for that.

And in fact SAM is not a requirement for any of this. The important part is on your servers, where you have a couple of API endpoints for Prometheus. As far as I remember, the code there is closed and you can't extend it. But you can easily add your own endpoints with your custom metrics, in a format Prometheus understands. Look at this article as an example.

So, any new metrics are supposed to be added on your servers, not in SAM. A rough sketch of such an endpoint is below.
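
A minimal sketch of a custom metrics endpoint as a %CSP.REST dispatch class (the class name, metric names, and the ^MyApp global are assumptions; you would also need a web application configured with this class as its dispatch class):

Class My.Metrics Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/metrics" Method="GET" Call="Metrics"/>
</Routes>
}

/// Expose application counters in the Prometheus text format: one "name value" per line
ClassMethod Metrics() As %Status
{
    set %response.ContentType = "text/plain"
    write "myapp_orders_total ", +$get(^MyApp("orders"), 0), !
    write "myapp_queue_depth ", +$get(^MyApp("queue"), 0), !
    quit $$$OK
}

}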

And there is another way. Last year I developed a plugin for Grafana itself, which can connect directly to IRIS through its superserver port and collect data in any way you like. So it is even possible to skip Prometheus entirely: just Grafana and IRIS. It also makes it possible to move some metrics logic outside of IRIS, for example using an SQL query as a metric, or reading some global and visualizing it in Grafana. That plugin is just a proof of concept and can't be used in production. It still requires some work, but I have not seen much interest in it yet, and I would really like to improve it and make it useful.

Not sure why you would need to connect to the SAM container directly, but OK.

As Robert already mentioned, to open a terminal inside the container, you can use this command:

docker exec -it {container_id} bash

-i for interactive, -t for tty

Add -u root if you need root access; it is not needed in most cases.

bash is the command to execute, so it can just as well be

docker exec -it {container_id} iris session IRIS

where the first iris is the command, session is its subcommand, and the last IRIS is the instance name (the default in a Docker container).

The next question is how to connect with VSCode. First of all, the connection goes over the internal web server (or whatever other way you managed to configure web access), so the container has to be started with port 52773 mapped.

You may get authorization issues with a freshly started container, as it may require you to change the password, and VSCode requires authorization. A simple config like this should work:

{
  "objectscript.conn": {
    "active": true,
    "port": 52773,
    "host":"localhost",
    "username": "_system",
    "password": "SYS",
    "ns": "%SYS",
  }
}

Hi. Yeah, at this time no security scanner supports ObjectScript. There are a few reasons for that.

At this time, the closest tool is ObjectScriptQuality, which can scan for possible bugs right now but could be extended to security scans as well. With proper funding, it would be possible to do that there, but only as a scanner for the code itself.

Another way is to implement a brand new tool specifically for security scanning, a complete scanner for the whole environment.

If your company or other companies would like to invest in such a project, I could implement such a tool.

docker-compose is just a wrapper around Docker. 

But anyway, I'm sure you can still use Docker no matter what environment you have. As long as you can run some virtual machine hypervisor, you can even use a machine separate from your own. Yeah, there are probably some limitations, but I'm not sure any of them are unsolvable. I'm not a Windows user at all.

Since the early days of Docker there was a tool named docker-machine, but I believe it is no longer in development.

It is possible to install only the client on Windows and connect it to any external Docker server by setting the DOCKER_HOST environment variable.

Short answer: yes.

During docker build, just execute it the same way you do anything else when configuring IRIS.

When the docker container starts, you can execute a script which will be called after IRIS starts.

Just add it in the Dockerfile; trainmodel.sh should contain an iris session script that starts the model training.

CMD ["-a", "/trainmodel.sh"]
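
For reference, the ObjectScript side that trainmodel.sh would invoke through iris session might look roughly like this; the class name, model name, and the use of IntegratedML's TRAIN MODEL here are assumptions, not something from the original question:

Class My.ML Extends %RegisteredObject
{

/// Entry point that trainmodel.sh calls via "iris session" after IRIS has started
ClassMethod Train() As %Status
{
    // hypothetical IntegratedML-style training through SQL
    set rs = ##class(%SQL.Statement).%ExecDirect(, "TRAIN MODEL ReadmissionModel")
    if rs.%SQLCODE < 0 {
        write "training failed, SQLCODE ", rs.%SQLCODE, ": ", rs.%Message, !
        quit $$$ERROR($$$GeneralError, rs.%Message)
    }
    write "model trained", !
    quit $$$OK
}

}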