Yaron Munz · Oct 2, 2024 go to post

Hello,
When several instances are connected to a remote DB, all their locks are managed on the IRIS instance where this DB is local (let's call this "the source").
I recommend increasing the "gmheap" parameter on "the source" to give the lock table more space.

There are utilities to check how much of the lock table is in use and how much is free.
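For example, a quick check from the terminal (a sketch; SYS.Lock lives in the %SYS namespace, and as far as I recall GetLockSpaceInfo() returns available/usable/used byte counts as a comma-delimited string - check the class reference on your version):

```objectscript
ZN "%SYS"
// returns something like "AvailableSpace,UsableSpace,UsedSpace" (in bytes)
Write ##class(SYS.Lock).GetLockSpaceInfo()
```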
 

Yaron Munz · Oct 2, 2024 go to post

The global IRIS.WorkQueue is located in (mapped to) the IRISLOCALDATA DB, which stores internal IRIS temporary data, including queue manager data.
This DB is cleaned/purged at IRIS startup, but you may also compact and truncate it while IRIS is running (either manually or programmatically).
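Programmatically, that might look like the sketch below. The SYS.Database method names and argument lists are from memory, and the database path is just an example - verify both against the SYS.Database class reference for your version:

```objectscript
ZN "%SYS"
Set dir = "/iris/mgr/irislocaldata/"   ; path to the IRISLOCALDATA DB (adjust)
// move free space to the end of the file, then truncate the file
Set sc = ##class(SYS.Database).FileCompact(dir, 0)
If $SYSTEM.Status.IsOK(sc) Set sc = ##class(SYS.Database).ReturnUnusedSpace(dir, 0)
```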

Yaron Munz · Sep 26, 2024 go to post

Hi,

You said the issue was on only 1 server, and that you could fail over to the mirror backup server, which could connect to LDAP from within IRIS. I assume you ran the d TEST^%SYS.LDAP function to check connectivity.

If only 1 server can't connect, I would ask myself (investigate): "what was changed?"
Using REDEBUG could help you see more information about the issue.

In any case, if you cannot find the root cause, I recommend opening a WRC issue.

Yaron Munz · Sep 16, 2024 go to post

Thanks Robert. I have been working with InterSystems (and other M technologies) since 1991...

Yaron Munz · Sep 10, 2024 go to post

Hello Jignesh,

I guess you are referring to a sync mirror (i.e. a failover pair: primary---backup). People here mentioned unplanned downtime (hardware and OS failures), but there is another advantage: better HA for planned downtime:
- IRIS maintenance => moving a DB from disk to disk, splitting/moving data from one DB to another
- Hardware maintenance => VM re-size, adding/changing data disks
- O/S maintenance => O/S patches and updates
Benefits:
1. All those activities are possible with 0 downtime (since the pair members are 100% identical)
2. There is no data loss on a switch, whether automatic (due to failure) or manual (for "planned" maintenance)
3. RTO is usually just a few seconds, depending on the complexity of your application/interoperability
4. A manual switch lets you do the necessary work on the "backup", switch manually (i.e. make the "backup" the "primary"), and then do the same work on the other member.

Yaron Munz · Sep 4, 2024 go to post

I would go with an (old) approach for pagination:
1. Store only the IDs/page in a temporary table
2. For any specific page, get the IDs and query the data from the main table

The pagination class:

Class DB.TempSQL Extends %Library.Persistent [ ClassType = persistent, Not ProcedureBlock ]
{

Index Main On (Session, QueryName, PageNo) [ IdKey, Unique ];

Property IDs As list Of %String(TRUNCATE = 1);

Property PageNo As %Integer;

Property QueryName As %String(TRUNCATE = 1);

Property Session As %String(TRUNCATE = 1);

Query PageNumbers(session As %String, queryname As %String) As %SQLQuery(CONTAINID = 0, ROWSPEC = "PageNo:%Integer")
{
SELECT PageNo FROM DB.TempSQL WHERE (Session = :session) AND (QueryName = :queryname)
}

}
 

The function to populate the pagination class:

ClassMethod BuildTempSql(SQL As %String, session As %String, queryname As %String, PageSize As %Integer, Special As %Boolean = 0) [ ProcedureBlock = 0 ]
{
    New (SQL,session,queryname,PageSize,Special,%session)
    &sql(DELETE FROM DB.TempSQL WHERE Session = :session AND QueryName = :queryname)
    Set rs=##class(%ResultSet).%New()
    Do rs.Prepare(SQL),rs.Execute()
    Set (count,page,SQLCODE)=0,TimeStart=$P($ZTS,",",2)
    Set entry=##class(DB.TempSQL).%New()
    Set entry.Session=session,entry.QueryName=queryname
    For  {
        If (rs.Next()'=1) {
            ; flush the last (partial) page
            If (entry.IDs.Count()>0) {
                Set page=$I(page),entry.PageNo=page Do entry.%Save()
                Kill entry
            }
            Quit   ; last one !
        }
        Else {
            If queryname'="Search"||'##class(Utils.Lists).CheckIBlack(rs.GetData(1)) {
                Set count=$I(count) Do entry.IDs.Insert(rs.GetData(1))
            }
            ; page is full: save it and start a new one
            If (count=PageSize) {
                Set page=$I(page),entry.PageNo=page
                Do entry.%Save() Kill entry
                Set count=0,entry=##class(DB.TempSQL).%New()
                Set entry.Session=session,entry.QueryName=queryname
            }
        }
    }
    Set TimeEnd=$P($ZTS,",",2)
    Set %session.Data("MaxPageNo"_$S(Special:queryname,1:""),1)=page
    Set %session.Data("Page",1)=$S(page>0:1,1:0)
    Set %session.Data("SqlSearchTime",1)=$FN(TimeEnd-TimeStart,",",3)
    Do rs.Close() Kill rs
}

Code for retrieving the data of a specific page:

If ##class(DB.TempSQL).%ExistsId(session_"||"_queryname_"||"_page) {
    Set entry = ##class(DB.TempSQL).%OpenId(session_"||"_queryname_"||"_page)
    For i=1:1:entry.IDs.Count() {
        Set id = entry.IDs.GetAt(i)
        &sql(SELECT ID, prop1, prop2, prop3 INTO :ID, :p1, :p2, :p3 FROM any.Table WHERE ID = :id)
    }
}
Yaron Munz · Sep 3, 2024 go to post

Hi Scott,

My remarks:

1. As already mentioned by @David.Satorres6134, global mapping of specific globals (or subscript-level mapping) to different databases (located on different disks) may solve your space problem and also increase your overall performance.
2. Using ^GBLOCKCOPY is a good idea when there are many small globals. For very big globals it will be very slow (since it uses 1 process per global), so I recommend writing your own code and using the "queue manager" to run the merges between databases for a single global in parallel.
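A minimal sketch of that parallel approach with the Work Queue Manager (%SYSTEM.WorkMgr). The class name My.Copier, the CopyRange worker, and the globals ^Source/^TargetCopy are all hypothetical - the point is one work unit per top-level subscript:

```objectscript
/// hypothetical worker: merge one top-level subscript of ^Source
/// into ^TargetCopy (assumed mapped to the destination database)
ClassMethod CopyRange(sub As %String) As %Status
{
    Merge ^TargetCopy(sub) = ^Source(sub)
    Quit $$$OK
}

ClassMethod ParallelCopy() As %Status
{
    Set queue = $SYSTEM.WorkMgr.%New()
    // queue one work unit per top-level subscript
    Set sub = ""
    For {
        Set sub = $Order(^Source(sub))  Quit:sub=""
        Set sc = queue.Queue("##class(My.Copier).CopyRange", sub)
    }
    // block until all workers finish
    Quit queue.WaitForComplete()
}
```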

Yaron Munz · Aug 27, 2024 go to post

For simple questions, it works fine. If after the answer I want to refine my question, I need to add to the original question, and I am not sure whether sessions are persistent (like in ChatGPT, where I can have a "conversation").
When I enter a question that is too long and complex, I get:
"This is beyond my current knowledge. Please ask the Developer Community for further assistance."
On the plus side, some (simple) questions get good referrals to the documentation or community pages.

Yaron Munz · Jul 16, 2024 go to post

You have these options:
1. Use $ZF(-1) or $ZF(-100) to execute a command line in the OS
2. Use embedded Python (if your version of IRIS has it), where you can use os.getpid():
>Write $system.Python.Shell()
Python 3.9.5 (default, Jul  8 2023, 00:24:17) [MSC v.1927 64 bit (AMD64)] on win32
Type quit() or Ctrl-D to exit this shell.
>>> import os
>>> print(os.getpid())
16232
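The same thing without entering the Python shell (a sketch; %SYS.Python.Import is available on IRIS versions that ship embedded Python):

```objectscript
Set os = ##class(%SYS.Python).Import("os")
Write os.getpid()
```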

Yaron Munz · Jul 16, 2024 go to post

Both MAC routines and class methods are compiled to INT routines, which are then compiled to OBJ (binary) code that is executed. To achieve better performance, try to make your code compact and efficient.

Yaron Munz · Jun 26, 2024 go to post

I would try to:
1. Check whether there is a "Content-Length" header (the client sets this)
2. As the %CSP.Request content is a stream, you might check its Size property
3. Find the global(s) that store this request (not sure which; maybe ^%csp.session or some CacheTemp.csp* global with the session ID) - this would be a bit complex since it is not documented.
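For points 1 and 2, a quick check inside a CSP/REST handler might look like this (a sketch; %request is the standard %CSP.Request instance available in that context):

```objectscript
// the Content-Length header as sent by the client
Write %request.GetCgiEnv("CONTENT_LENGTH"),!
// size of the request body stream (POST/PUT bodies)
If $IsObject(%request.Content) Write %request.Content.Size,!
```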

Yaron Munz · Jun 26, 2024 go to post

The @ (indirection) operator is not used only to write to a device (the spool is a special type of device, number 2).
You may use @ (indirection) with variables, arrays, and object properties, while keeping everything in memory:

S %A="""HI THERE"",!,#,33.33,"" "",$ZTIMESTAMP"
W @%A                 ; argument indirection: WRITE evaluates the whole argument list
S %B="%C(""key1"",""key2"")"
S @%B=$ZTIMESTAMP     ; name indirection: sets %C("key1","key2")
 
Yaron Munz · Jun 26, 2024 go to post

To increase your performance there are many factors and tests to work through in order to choose the best action (and approach). Some of them are:

1. It is highly recommended to identify the bottleneck before taking any action.
2. Hardware and infrastructure: start by monitoring your infrastructure and hardware (network, memory, CPU) to check for bottlenecks. Use Task Manager (or a similar tool on Linux) to see whether one or more disks are exhausted (100% active) - splitting the databases and/or other Cache components (e.g. journals, WIJ, IRISTEMP) across different disks might solve that issue. Check whether the server has enough memory. How many processes are there? Is the O/S using swap files when memory is low?
3. Check IRIS (or Cache) related issues: memory usage (are you allocating enough global and routine buffers?), heap, etc. - are there any errors in console.log that might point to potential issues?
4. Some production-related things:
a. Can you run your process in parallel (pool size > 1)? Maybe the bottleneck is there
b. Code - is your code optimized? Use MONLBL to find the most "overwhelmed" places in your code
c. Journaling & mirroring might slow things down if a lot of "temporary data" is journaled and mirrored

This is just the tip of the iceberg... there are many more things that can be done.
If you feel lost, I recommend that you open a WRC to get specific (your system related) help.  
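A few of the standard utilities that cover the checks above (run from the %SYS namespace; routine names from memory, so verify against your version's documentation):

```objectscript
ZN "%SYS"
Do ^GLOSTAT              ; global activity: references, physical reads/writes
Do ^%SYS.MONLBL          ; interactive line-by-line routine monitor
Do ^SystemPerformance    ; full performance snapshot (formerly ^pButtons)
```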

Yaron Munz · Jun 21, 2024 go to post

The best practice is to put a token (safely acquired by the sender) in the header, rather than a user/password. The token gives you authentication, authorization, and validity (an expiration date/time or retention period), and the recipient can verify all of these.
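On the sender side that might look like this (a sketch using %Net.HttpRequest; the server name, the "MySSL" TLS configuration, and the token variable are assumptions - the token is whatever OAuth/JWT token you acquired earlier):

```objectscript
Set req = ##class(%Net.HttpRequest).%New()
Set req.Server = "api.example.com", req.Https = 1, req.SSLConfiguration = "MySSL"
// pass the previously acquired token instead of user/password
Do req.SetHeader("Authorization", "Bearer "_token)
Set sc = req.Get("/resource")
```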

Yaron Munz · Nov 29, 2023 go to post

As you probably know, a stream is (under the hood) a "collection" of strings. Since $ZCRC does not support incremental hashing, you need to choose a hash that does: SHA-256, or MD5 (not recommended due to security vulnerabilities).
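For example (a sketch; I recall %SYSTEM.Encryption exposes stream variants of its hash functions that consume the stream chunk by chunk, but check the class reference for the exact name and signature on your version):

```objectscript
Set stream = ##class(%Stream.FileBinary).%New()
Do stream.LinkToFile("/tmp/big.bin")
// SHA-256 digest of the whole stream, computed incrementally
Set hash = $SYSTEM.Encryption.SHAHashStream(256, stream, .sc)
Write $SYSTEM.Encryption.Base64Encode(hash)
```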

Yaron Munz · Nov 28, 2023 go to post

We will continue to use SAM for a while. Next year we plan to migrate to the "DataDog" monitoring tool, which is already used in our company.
The /api/monitor/metrics endpoint can still be used :-)

Yaron Munz · Nov 28, 2023 go to post

I have tested the metrics REST API, and it seems that the terminator being used is $C(10).

Yaron Munz · Nov 24, 2023 go to post

Using /LOGCMD is very useful to log the resulting command line into messages.log, so you can easily review it from the SMP.

Also, I/O redirection can be useful for sending input, output, and errors to files (on both Linux and Windows).
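Both together might look like this (a sketch of a $ZF(-100) call; the command and file paths are just examples):

```objectscript
// run "ls -l" through the shell, logging the command line to
// messages.log (/LOGCMD) and redirecting stdout/stderr to files
Set rc = $ZF(-100, "/SHELL /LOGCMD /STDOUT=""/tmp/out.txt"" /STDERR=""/tmp/err.txt""", "ls", "-l")
```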

Yaron Munz · Nov 16, 2023 go to post

Could you please give additional information on how the data is being pulled?
You say "tables", so I assume you run SQL: is this done with a SELECT, or do you run a stored procedure?
Is this a local task on the server (running COS code), or external via an ODBC/JDBC connection?

Yaron Munz · Nov 14, 2023 go to post

I hope that the APIs (e.g. /api/monitor/metrics) will still work, so the metrics can still be embedded into whatever monitoring tool we are using, like DataDog.

Is this a correct assumption?

Yaron Munz · Oct 26, 2023 go to post

Robert, 

The WIJ is used when writing data to the DB. It holds a copy of DB blocks before they are written to the databases, to preserve database integrity (at startup, Cache checks whether there are "dirty" blocks in the WIJ and, if so, writes them to the databases).

The "cache buffers" store blocks that were READ from the DB, to avoid disk access for concurrent reads.

Yaron Munz · Sep 12, 2023 go to post

For async mirror members, you could use the query:

Set result = ##class(%ResultSet).%New("SYS.Mirror:MemberStatusList")

and then iterate over the result and do the necessary checks.
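For example (a sketch; the exact column names returned by MemberStatusList may vary by version, so treat "MemberName" and "Status" as assumptions and check the query's ROWSPEC in the class reference):

```objectscript
Set result = ##class(%ResultSet).%New("SYS.Mirror:MemberStatusList")
Do result.Execute()
While result.Next() {
    Write result.Get("MemberName"), ": ", result.Get("Status"), !
}
```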

Yaron Munz · Sep 7, 2023 go to post

Great workaround. I remember that a few years ago I had a problem where WebTerminal was unable to do:
ZLOAD routine ZPRINT
I made some (ugly, I must say) changes in the broker to make this work.
Is your wrapper able to handle this as well?

Yaron Munz · Aug 24, 2023 go to post

On systems where the audit database is big (or huge), I would recommend that you stop IRIS and move that database to another disk, then change the location of the audit database in IRIS.CPF (the Databases section).

Yaron Munz · Aug 22, 2023 go to post

Check the "task2" user's permissions; maybe this user does not have permission on the audit database?

Yaron Munz · Jun 19, 2023 go to post

Which browser do you use? I noticed that in Edge the credentials box does not pop up, resulting in a 401 error, so you need to work in "IE mode" (in Chrome, it does pop up).