go to post Yaron Munz · Nov 30, 2022 I see that all your TLS/SSL configurations are of the Client type. Usually there is no need to point to a certificate, unless "This client will be asked to authenticate itself" is enabled. You should go into each of your configurations and check whether that option is in use. If so, you will have to update "File containing this client's certificate" with the new .pem file.
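If you have many configurations, a quick way to review them is to query the Security.SSLConfigs class in the %SYS namespace. This is only a minimal sketch, assuming the Name, Type and CertificateFile columns are exposed in your version (verify in the Security.SSLConfigs class reference first):

    ZN "%SYS"
    Set stmt = ##class(%SQL.Statement).%New()
    ; column names are an assumption - check the Security.SSLConfigs class reference for your version
    Do stmt.%Prepare("SELECT Name, Type, CertificateFile FROM Security.SSLConfigs")
    Set rs = stmt.%Execute()
    While rs.%Next() {
        ; Type 0 is typically the Client configuration - verify against the class reference
        Write !, rs.%Get("Name"), "  type: ", rs.%Get("Type"), "  client cert: ", rs.%Get("CertificateFile")
    }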
go to post Yaron Munz · Nov 8, 2022 Hello Paul, The compression uses the "zstd" algorithm, coming from %occStream.inc. The function that is used is: $System.Util.Compress(%data,"zstd")
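For reference, a minimal round-trip sketch. Only the Compress() call above is confirmed; the matching $System.Util.Decompress() name and arguments are an assumption here, so verify them in the %SYSTEM.Util class reference for your version:

    Set data = "some repetitive payload, some repetitive payload"
    Set compressed = $System.Util.Compress(data, "zstd")        ; call cited above
    Set restored = $System.Util.Decompress(compressed, "zstd")  ; assumption - verify name/signature in your version
    Write restored = data                                        ; should print 1 if the round trip works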
go to post Yaron Munz · Oct 28, 2022 I would open a WRC case on that, since it looks like a pure bug in the SQL compiler.
go to post Yaron Munz · Oct 26, 2022 I assume that you want to skip one (or more) indices for specific values. You may use an "after" trigger to manually delete the index entry based on some of the properties, as in the sketch below. With this approach, you will need to change the trigger every time you add or change a relevant property or index.
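A minimal sketch of the idea; the class, property and index names are hypothetical, and the index global node you kill must match your class's actual storage definition (check the storage map before relying on this):

Class Demo.Person Extends %Persistent
{
Property Status As %String;

Index StatusIdx On Status;

/// After-insert trigger: remove the index entry for values we do not want indexed
Trigger SkipStatusIdx [ Event = INSERT, Time = AFTER, Foreach = row/object ]
{
    If {Status} = "TEMP" {
        ; ^Demo.PersonI is the default index global for this class, and with default SQLUPPER
        ; collation the value subscript is " "_uppercased value - verify both in the storage definition
        Kill ^Demo.PersonI("StatusIdx", " TEMP", {%%ID})
    }
}
}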
go to post Yaron Munz · Oct 11, 2022 @Jeffrey Drumm were you able to solve this? We are having the same problem. We have the IRIS internal Apache web server configured for HTTPS access with a self-signed certificate. Connection to the SMP portal works fine with both HTTP and HTTPS. When we tried to "force" HTTPS only (by doing an HTTP->HTTPS redirect at the Apache level) we are not able to connect with VS Code: we get the same error: "unable to verify the first certificate". We tried to uncheck "Http: System Certificates" and "Http: Proxy Strict SSL", but this did not solve the problem. Currently, the only workaround I see is to disable the redirection, but with this solution we would still have (even by mistake) HTTP traffic to the server, which we want to avoid. Any idea?
go to post Yaron Munz · Oct 7, 2022 Usually, when using old VT100 sessions (CHUI), there is a "main" function that handles all special characters (e.g. PF1-4, ESC, HOME, CTRL-F) in one central place and is called from all screens. I would recommend the following technique: Read *a Set zb=$ZB,key=$KEY Then you may examine the variables zb and key to capture special keys, as in the sketch below.
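A minimal sketch of the dispatch idea; the escape sequences and the MYMENU routine labels are placeholders, since the exact sequences depend on the terminal emulation you use:

    ; read one character; $ZB / $KEY hold the terminating input
    Read *a
    Set zb = $ZB, key = $KEY
    If a = 27 {
        ; ESC received: collect the rest of the escape sequence with short timed reads
        Set seq = ""
        For {
            Read *c:0
            Quit:'$Test
            Set seq = seq_$Char(c)
        }
        ; sequence values below are emulator-dependent placeholders - verify for your terminal
        If seq = "OP" { Do PF1^MYMENU }        ; VT100 PF1 is often ESC O P
        ElseIf seq = "[H" { Do HOME^MYMENU }   ; HOME key on some emulators
    }
    ElseIf a = 6 { Do CTRLF^MYMENU }           ; CTRL-F = $CHAR(6)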
go to post Yaron Munz · Sep 28, 2022 On a given machine, any single process can only run "as fast" as the CPU clock rate allows (higher = faster = more operations/sec.). It is true that a single process can do approx. 15-20 MB/sec. (depending on the CPU clock rate and the disk type: SSD, Premium SSD, NVMe). The best way to overcome this limitation is to run the "heavy I/O" work in parallel processes using the work queue manager, as sketched below. On machines with 16/32 cores you may reach your "infrastructure limit" (160 MB/sec) easily, and even more (we managed to get to 1000 MB/sec on NVMe disks).
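A minimal sketch of spreading the work across worker jobs with %SYSTEM.WorkMgr; the Demo.Loader class and its ProcessChunk(from,to) method are hypothetical placeholders for your own heavy-I/O routine:

    Set queue = $SYSTEM.WorkMgr.%New()
    For from = 1:100000:1000000 {
        ; each chunk runs in its own worker process
        Set sc = queue.Queue("##class(Demo.Loader).ProcessChunk", from, from+99999)
    }
    ; block until every queued chunk has completed; sc reports any worker error
    Set sc = queue.WaitForComplete()
    If 'sc { Do $System.Status.DisplayError(sc) }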
go to post Yaron Munz · Sep 13, 2022
ClassMethod MinLength(s As %String) As %Integer
{
    ; returns the length of the shortest space-delimited word in s
    For i=1:1:$Length(s," ") { Set v($Length($Piece(s," ",i)))=i }
    Quit $Order(v(""))
}
go to post Yaron Munz · Sep 5, 2022 Yes, you also have journal files... they record all the changes (sets, kills, start/end transactions) made to the databases (after the actual write to the DBs) and also make it possible to roll back transactions. The write daemon and the WIJ file are more about keeping the databases physically consistent in case of a failure, and the WIJ is written before the actual data is written to the DBs. I see you are using Windows, so just look in the Windows Task Manager at the "active time" of disk D:\. If you see that there are times when you hit 100% "active time", then move the WIJ to a different disk. This will improve performance.
go to post Yaron Munz · Sep 5, 2022 The "write daemon" is the process responsible for writing all new/changed data to the disk where the WIJ (write image journal) file is located. Only then are the actual databases updated. Sometimes, when the system is very busy with a lot of pending writes, this error appears and then clears automatically after a few minutes (e.g. when you rebuild a huge index or run a data-migration process). I would monitor the disk activity for the disk the WIJ file is located on (by default it is on the same disk where you installed Cache). One solution is to move the WIJ to a different, less occupied disk, as sketched below. This will give the "write daemon" more write capacity (you will have to restart Cache).
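For reference, a minimal sketch of changing the WIJ location from the terminal. This assumes the wijdir setting is exposed through the Config.config class (verify the property name in your version's class reference; alternatively change the "Write image journal directory" in the Management Portal journal settings), and the target path is a placeholder:

    ZN "%SYS"
    Kill p
    Set sc = ##class(Config.config).Get(.p)
    ; "wijdir" is the CPF [config] keyword for the WIJ directory - an assumption to verify for your version
    Set p("wijdir") = "E:\iris-wij\"
    Set sc = ##class(Config.config).Modify(.p)
    ; the change takes effect only after a restart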
go to post Yaron Munz · Aug 26, 2022 As you probably know, InterSystems IRIS is capable of doing calculations on numbers of up to 19 digits, because they are stored as signed 64-bit integers. If you are using one of the latest versions of IRIS, which have embedded Python, then you get the Python support for "bignum" or "long" (arbitrary-precision integers). So you may have a ClassMethod like this:

ClassMethod BigNumbers1() [ Language = python ]
{
    a = 9223372036854775807
    b = 9223372036854775808
    c = 9223372036854775807123456789
    print(a)
    print(b)
    print(c)
    print(c%2)
}

Which will give you the output:

9223372036854775807
9223372036854775808
9223372036854775807123456789
1
go to post Yaron Munz · Aug 22, 2022 SAM executes a REST API call to http://[your-server]/api/monitor/metrics for every server included in your cluster. I'm not sure where the interval is configured. If your own metric only needs to run once a day, you can schedule it with the Task Manager, have the result stored in a global, and let the "user defined" sensor read that global, which will not cause any performance issue (see the sketch below). BTW - one thing I forgot to mention in the previous post: in order to have SAM run your "user defined" metrics, you need to register the class with SAM:

%SYS>Set sc = ##class(SYS.Monitor.SAM.Config).AddApplicationClass("MyClass", "namespace")
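A minimal sketch of that split; the task class, global name and sensor method below are hypothetical placeholders:

/// Task Manager task: runs once a day and stores the heavy result in a global
Class Demo.DailyMetricTask Extends %SYS.Task.Definition
{
Method OnTask() As %Status
{
    ; ExpensiveCalculation() is a placeholder for your own heavy computation
    Set ^MyMetrics("daily") = ##class(Demo.Metrics).ExpensiveCalculation()
    Quit $$$OK
}
}

    ; inside your %SYS.Monitor.SAM.Abstract subclass the sensor just reads the global - cheap and fast
    ClassMethod DailySensor() As %Integer
    {
        Quit +$Get(^MyMetrics("daily"))
    }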
go to post Yaron Munz · Aug 22, 2022 First create a class:

Class MyClass Extends %SYS.Monitor.SAM.Abstract

Add a parameter that will indicate the prefix name for all your user-defined metrics:

Parameter PRODUCT = "Prefix";

Create a wrapper method GetSensors() for all your user-defined sensors (which can be ClassMethods):

Method GetSensors() As %Status
{
    Try {
        Do ..SetSensor("sensor1", ..Sensor1())
        Do ..SetSensor("sensor2", ..Sensor2())
    } Catch e {
        ; call your error-logging function, e.g. ##class(anyClass).StoreError($classname(), e.DisplayString())
    }
    Quit $$$OK
}

ClassMethod Sensor1() As %Integer
{
    ; do any calculation
    Quit Value
}

ClassMethod Sensor2() As %Integer
{
    ; do any calculation
    Quit Value
}

Now the REST API call to /api/monitor/metrics will return your "user defined" sensors under the names Prefix_sensor1 and Prefix_sensor2.

Remarks:
- Make sure that GetSensors() and all your "user defined" sensor ClassMethods have proper error handling so they are fail-safe (you may use try/catch or any other error trap such as $ZT="something").
- Make sure all your "user defined" sensors perform fast. This enables the SAM metrics REST API call to collect the data quickly, without delays. If some calculations are "heavy", it is better to have a separate process (Task Manager) do those calculations and store the results in a global for fast retrieval by the sensor.
go to post Yaron Munz · Aug 19, 2022 When you install SAM, it is usually installed as a container. (We use Docker, so I don't have experience with Podman.) We have this on a separate machine (Linux) while our IRIS servers are Windows, but I don't see any limitation (except memory & CPU = performance) to running the SAM container on the same IRIS machine. Grafana and Prometheus are part of the "bundle" (container set) for SAM, so you do not need to install them separately.
go to post Yaron Munz · Aug 17, 2022 Hello, I have done a similar thing in the past with Cache. As long as, at the end of the process, IRIS is on a new partition with the same drive letter, all the registry keys that were created during the install will still be valid. The risk with the procedure you mentioned is minimal, and everything should work as expected.
go to post Yaron Munz · Aug 15, 2022 Get the mirror name and check each member's transfer status:

ZN "%SYS"
Set mirrorName = $system.Mirror.GetMirrorNames()
Set result = ##class(%ResultSet).%New("SYS.Mirror:MemberStatusList")
Set sc = result.Execute(mirrorName)
While result.Next() {
    Set transfer = result.GetData(6)
    ; you may filter the check for a specific machine on GetData(1)
    ; do any check on "transfer" to see if it is behind and compare against your threshold, e.g.
    ; For i=1:1:$Length(transfer," ") {
    ;     If $Find($Piece(transfer," ",i),"hour") { Write !,"hour(s) behind" }
    ;     ElseIf $Find($Piece(transfer," ",i),"minute") { Set minutes=$Piece(transfer," ",i-1) Write !,minutes_" minutes behind" }
    ; }
}
go to post Yaron Munz · Aug 11, 2022 To get any component's status, you may use:

SELECT Name, Enabled FROM Ens_Config.Item WHERE Name [ 'Yourname'

To check queues, you may use the following SQL:

SELECT Name, PoolSize FROM Ens_Config.Item WHERE Production = 'YourProductionName'

Then iterate over the result set and get the queue depth with (see the sketch below):

Set QueueCount = ##class(Ens.Queue).GetCount(Name)

To check the latest activity of a component, I would go to:

SELECT * FROM Ens.MessageHeader WHERE TargetQueueName = 'yourComponentName'

and then check the TimeProcessed column.
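Putting the queue check together, a minimal sketch (the production name is a placeholder):

    Set stmt = ##class(%SQL.Statement).%New()
    Set sc = stmt.%Prepare("SELECT Name FROM Ens_Config.Item WHERE Production = ?")
    Set rs = stmt.%Execute("YourProductionName")
    While rs.%Next() {
        Set name = rs.%Get("Name")
        ; current queue depth for each configured item
        Write !, name, ": ", ##class(Ens.Queue).GetCount(name)
    }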
go to post Yaron Munz · Aug 9, 2022 There is an option to get a long-lived token (refresh token); if tokens are "cached" locally, you may have a scheduled task refresh them. Another approach I would try here is to use embedded Python with this library: https://pypi.org/project/O365/#authentication