@Jeffrey Drumm were you able to solve this?

We are having the same problem.
We have the IRIS internal Apache web server configured for HTTPS access with a self-signed certificate.
Connecting to the SMP portal works fine with both HTTP and HTTPS.
When we try to "force" HTTPS only (by doing an HTTP->HTTPS redirect at the Apache level) we are not able to connect with VS Code: we get the same error: "unable to verify the first certificate".
We tried unchecking the "Http: System Certificates" and "Http: Proxy Strict SSL" settings, but that did not solve the problem.

Currently, the only workaround I see is to disable the redirection, but with this solution we will still have HTTP traffic to the server (even by mistake), which we want to avoid.

Any ideas?

On a given machine, any single process can only run "as fast" as the CPU clock rate allows (higher clock = faster = more operations/sec.).

It is true that a single process can do approx. 15-20 MB/sec. (depending on the CPU clock rate and the disk type: SSD, Premium SSD, NVMe).

The best way to overcome this limitation is to do the "heavy I/O" work in parallel processes using the work queue manager.
On machines with 16/32 cores you can easily reach your "infrastructure limits" (160 MB/sec) and even more (we managed to reach 1000 MB/sec on NVMe disks).
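
For example, here is a minimal sketch of splitting such a job across worker processes with the work queue manager ($system.WorkMgr); MyApp.Loader and DoChunk are placeholder names for your own class and chunk-processing method:

// create a worker queue and hand each "chunk" of the heavy job to a separate worker process
Set queue = $system.WorkMgr.%New()
For chunk=1:1:16 {
   // DoChunk is a placeholder ClassMethod that processes one chunk and returns a %Status
   Set sc = queue.Queue("##class(MyApp.Loader).DoChunk", chunk)
   If 'sc { Quit }
}
// block until all workers have finished
Set sc = queue.WaitForComplete()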

Yes, you also have journal files... they keep all the changes (sets, kills, start/end transactions) made to the databases (written after the actual writes to the DBs), and they also make it possible to roll back transactions.

The write daemon and the WIJ file are more about keeping the physical "integrity" of the databases in case of a failure; the WIJ is written before the actual data is written to the DBs.
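
For example, to see which journal file is currently being written to (a small sketch; I am assuming the %SYS.Journal.System API, which should be callable from any namespace):

// print the path of the journal file currently in use
Write ##class(%SYS.Journal.System).GetCurrentFileName()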

I see you are using Windows, so just look in the Windows Task Manager at the "active time" of the D:\ disk. If you see times when you hit 100% "active time", move the WIJ to a different disk. This will improve performance.

The "write daemon" is a process that is responsible to write all new/changed data to the disk where the WIJ (write image journal) file is located. Only then, actual databases are being updated.

Sometimes, when the system is very busy with a lot of pending writes, this error appears and then clears automatically after a few minutes (e.g. when you rebuild a huge index or run some data migration process).

I would monitor the disk activity for the disk that the WIJ file is located on (by default it's on the same disk where you installed Caché).

One solution is to move the WIJ to a different, less busy disk. This will give the "write daemon" more write throughput (you will have to restart Caché).
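
Besides the Management Portal, this can also be done from the terminal. A hedged sketch, assuming the WIJ directory is exposed as the wijdir setting of Config.config (please verify against your version's documentation); the target path is a placeholder:

ZN "%SYS"
Set sc = ##class(Config.config).Get(.props)
Write !,"Current WIJ directory: ",props("wijdir")
// point the WIJ to a less busy disk (placeholder path); takes effect after a restart
Set props("wijdir") = "E:\iriswij\"
Set sc = ##class(Config.config).Modify(.props)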

As you probably know, InterSystems IRIS is capable of doing integer calculations on numbers of up to 19 digits, because they are stored as signed 64-bit integers.

If you are using one of the latest versions of IRIS, which have "Embedded Python", then you will have Python's support for arbitrary-precision integers ("bignum"/"long").
So you may have a ClassMethod like this:

ClassMethod BigNumbers1() [ Language = python ]
{
a = 9223372036854775807
b = 9223372036854775808
c = 9223372036854775807123456789
print(a)
print(b)
print(c)
print(c%2)
}
 

Which will give you the output:

9223372036854775807
9223372036854775808
9223372036854775807123456789
1

SAM executes a REST API call to http://[your-server]/api/monitor/metrics for every server included in your cluster. I'm not sure where the interval is configured.

If your own metric only needs to run once a day, you can schedule it with the Task Manager, have the result stored in a global, and let the "user defined" sensor read this global, which will not cause any performance issue.
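
A minimal sketch of that approach, assuming a custom task class; the class, global, and value here are placeholders:

Class MyApp.Task.DailyMetric Extends %SYS.Task.Definition
{

Method OnTask() As %Status
{
    // heavy calculation done once a day by the Task Manager (placeholder logic)
    Set value = 42
    // store the result in a global so the "user defined" sensor can read it instantly
    Set ^MyMetrics("daily") = value
    Quit $$$OK
}

}

The "user defined" sensor then only has to Quit $Get(^MyMetrics("daily"),0), so the SAM metrics call stays fast.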

BTW - one thing I forgot to mention in the previous post is:
In order to have SAM run your "user defined" metrics, you need to add your class to SAM:

%SYS>Set sc = ##class(SYS.Monitor.SAM.Config).AddApplicationClass("MyClass", "namespace")

First create a class:

Class MyClass Extends %SYS.Monitor.SAM.Abstract

Add a parameter that will indicate the prefix name for all your user-defined metrics.

Parameter PRODUCT = "Prefix";

Create a wrapper method GetSensors() for all your user-defined sensors (which can be ClassMethods):

Method GetSensors() As %Status
{
Try {
   D ..SetSensor("sensor1", ..Sensor1())
   D ..SetSensor("sensor2", ..Sensor2())
   }
Catch e { ;call your store-error function, e.g. ##class(anyClass).StoreError($classname(),e.DisplayString()) }
Quit $$$OK
}
ClassMethod Sensor1() As %Integer
{
   ; do any calculation that sets Value
   Quit Value
}
ClassMethod Sensor2() As %Integer
{
   ; do any calculation that sets Value
   Quit Value
}
}
 

Now the REST API call to /api/monitor/metrics will return your "user defined" sensors under the names:
Prefix_sensor1 and Prefix_sensor2

Remarks:
- Make sure that your GetSensors() and all your "user defined" sensors (ClassMethods) have proper error handling so they are fail-safe (you may use a try/catch or any other error trap like $ZT="something").
- Make sure all your "user defined" sensors perform fast. This will enable the SAM metrics REST API call to get the data quickly, without delays. In case some calculations are "heavy", it is better to have a separate process (Task Manager) do those calculations and store the results in a global for fast retrieval by the sensor.

When you install SAM, it is usually installed as a container (we use Docker, so I don't have experience with Podman).

We have this on a separate machine (Linux) while our IRIS servers are Windows, but I don't see any limitation (except memory & CPU = performance) to running the SAM container on the same IRIS machine.

Grafana and Prometheus are part of the SAM "bundle" (container), so you do not need to install them separately.

To check the mirror member status, first get the mirror name:

ZN "%sys"
Set mirrorName=$system.Mirror.GetMirrorNames()
Set result = ##class(%ResultSet).%New("SYS.Mirror:MemberStatusList")
Set sc = result.Execute(mirrorName)
while result.Next() {
   Set transfer=result.GetData(6)
   // you may filer the check for a specific machine on GetData(1)
   // Do any check on "transfer" to see if behind and calculate the threshold time e.g.
  // For i=1:1:$l(transfer," ") {  
  //    If $f($p(transfer," ",i),"hour") { W !,"hour(s) behind" }
  //      Elseif $f($p(transfer," ",i),"minute") { Set minutes=$p(transfer," ",i-1) W !,minutes_" minutes behind" }
  //   }
}

To get any component status, you may use:

SELECT Name, Enabled FROM Ens_Config.Item WHERE Name [ 'Yourname'
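
If you prefer to read this from ObjectScript, here is a small sketch with %SQL.Statement ('Yourname' is a placeholder):

Set stmt = ##class(%SQL.Statement).%New()
Set sc = stmt.%Prepare("SELECT Name, Enabled FROM Ens_Config.Item WHERE Name [ ?")
Set rs = stmt.%Execute("Yourname")
While rs.%Next() {
   Write !, rs.%Get("Name"), " enabled: ", rs.%Get("Enabled")
}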

To check queues, you may use the following SQL:

SELECT Name, PoolSize FROM Ens_Config.Item WHERE Production='YourProductionName'

Then iterate over the result set and get the queue depth with:

Set QueueCount=##class(Ens.Queue).GetCount(Name)
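
Putting the two together, here is a small sketch that prints the queue depth of every item in a production ('YourProductionName' is a placeholder; run it in the production's namespace):

Set stmt = ##class(%SQL.Statement).%New()
Set sc = stmt.%Prepare("SELECT Name FROM Ens_Config.Item WHERE Production = ?")
Set rs = stmt.%Execute("YourProductionName")
While rs.%Next() {
   Set name = rs.%Get("Name")
   Write !, name, " queue depth: ", ##class(Ens.Queue).GetCount(name)
}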

To check the latest activity on a component, I would query:

SELECT * FROM Ens.MessageHeader where TargetQueueName='yourComponentName'

and then check the TimeProcessed column.
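
For example, to get just the most recent entry for a component (a small variation of the query above; the component name is a placeholder):

SELECT TOP 1 TimeProcessed
FROM Ens.MessageHeader
WHERE TargetQueueName = 'yourComponentName'
ORDER BY TimeProcessed DESC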