The SMP portal "About" page has an option to choose the language. However, this persists for the current session only (in the %session object). I would try the solution proposed by @Raj Singh: use a browser add-on that can modify HTTP headers (e.g. the HTTP_ACCEPT_LANGUAGE CGI variable).

InterSystems could consider adding a user-defined language, but not on the user profile, since non-local users (e.g. LDAP) are not persistent; a global like ^ISC.someName(user)=Language could be the "best" way.
We don't want to (or can't) modify the portal classes (some of them don't have sources).
This is a good candidate for InterSystems Ideas.
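To illustrate the idea (purely hypothetical: the ^ISC.UserLanguage global and this class are my invention, not an existing InterSystems API):

Class Demo.UserLanguage [ Abstract ]
{

/// Return the stored language for a user, defaulting to "en".
ClassMethod GetLanguage(user As %String) As %String
{
	Quit $Get(^ISC.UserLanguage(user),"en")
}

/// Persist the preferred language for a user (unlike the %session
/// object, a global survives logout and works for LDAP users too).
ClassMethod SetLanguage(user As %String, lang As %String)
{
	Set ^ISC.UserLanguage(user)=lang
}

}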

The CSP Gateway has a "mirror aware" function that will always point you to the primary in a failover pair. This works most of the time, but in rare cases it keeps a connection disabled after a primary switch.

Another option is to use an external load balancer that has some kind of "health probe". Then you could have a simple REST API endpoint (called by that health probe) that returns 200 for the primary and 404 (or 500) for the backup. This way, going through that LB will always point you to the primary.
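A minimal sketch of such a probe endpoint (the class name and its web-application mapping are assumptions you would configure yourself; $SYSTEM.Mirror.IsPrimary() performs the actual check):

Class Demo.MirrorProbe Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/status" Method="GET" Call="Status"/>
</Routes>
}

/// Return 200 on the mirror primary and 404 elsewhere, so the LB
/// health probe marks only the primary as healthy.
ClassMethod Status() As %Status
{
	If $SYSTEM.Mirror.IsPrimary() {
		Set %response.Status=200
	}
	Else {
		Set %response.Status=404
	}
	Quit $$$OK
}

}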

According to the Snowflake documentation (https://docs.snowflake.com/en/user-guide/intro-key-concepts), it seems that you may use ODBC and JDBC, so the SQL Gateway can be used (see "SQL Gateway Connections" in the InterSystems Programming Tools Index for InterSystems IRIS Data Platform 2019.1).

There are also native connectors (e.g. Python). Embedded Python is not available on IRIS 2019.2, so you may consider an upgrade to IRIS 2021.2.
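For the ODBC route, a minimal sketch (assuming an ODBC DSN named "Snowflake" is already configured on the server; user and password are placeholders):

	// Connect through the SQL Gateway using the ODBC DSN
	Set gc=##class(%SQLGatewayConnection).%New()
	Set sc=gc.Connect("Snowflake","myuser","mypassword",0)
	If $System.Status.IsError(sc) Do $System.Status.DisplayError(sc) Quit
	// Prepare and run a trivial query, then read column 1 (type 1 = SQL_CHAR)
	Do gc.AllocateStatement(.hstmt)
	Do gc.Prepare(hstmt,"SELECT CURRENT_VERSION()")
	Do gc.Execute(hstmt)
	Do gc.Fetch(hstmt)
	Do gc.GetData(hstmt,1,1,.version)
	Write "Snowflake version: ",version,!
	Do gc.Disconnect()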

Hello,
When several instances are connected to a remote DB, all their locks are managed in the IRIS instance where this DB is local (let's call it "the source").
I recommend increasing the "gmheap" parameter on "the source" to have more space for the lock table.

There are utilities to check the lock table's free space and whether it is full (e.g. the ^LOCKTAB utility in the %SYS namespace).
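For example, SYS.Lock in the %SYS namespace can report the lock table space (to the best of my knowledge, GetLockSpaceInfo() returns available, usable, and used space):

	%SYS>Write ##class(SYS.Lock).GetLockSpaceInfo()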
 

Hi,

You said the issue was only on 1 server, and that you could fail over to the mirror backup server, which could connect to LDAP from within IRIS. I assume you ran d TEST^%SYS.LDAP to check connectivity.

If only 1 server can't connect, I would ask myself (investigate): "what was changed?"
Using REDEBUG could help you see more information about the issue.

In any case, I recommend opening a WRC case if you cannot find the root cause.

Hello Jignesh,

I guess that you are referring to a sync mirror (e.g. a failover pair: primary/backup). People here mentioned unplanned downtime (hardware and OS failures), but there is another advantage: better HA for planned downtime, such as:
- IRIS maintenance => move a DB from disk to disk, split/move data from one DB to another
- Hardware maintenance => VM re-size, add/change data disks
- O/S maintenance => O/S patches, updates
Benefits:
1. All those activities are possible with zero downtime (since the pair members are 100% identical)
2. There is no data loss on a switch, whether automatic (due to failure) or manual (for planned maintenance)
3. RTO is usually just a few seconds, depending on the complexity of your application/interoperability
4. A manual switch lets you do the necessary work on the "backup", switch manually (i.e. make the "backup" the "primary"), and then do the same work on the other member.

I would go with an (old) approach for pagination:
1. Store only the IDs of each page in a temporary table
2. For any specific page, get the IDs and query the data from the main table

The pagination class:

Class DB.TempSQL Extends %Library.Persistent [ ClassType = persistent, Not ProcedureBlock ]
{

Index Main On (Session, QueryName, PageNo) [ IdKey, Unique ];

Property IDs As list Of %String(TRUNCATE = 1);

Property PageNo As %Integer;

Property QueryName As %String(TRUNCATE = 1);

Property Session As %String(TRUNCATE = 1);

Query PageNumbers(session As %String, queryname As %String) As %SQLQuery(CONTAINID = 0, ROWSPEC = "PageNo:%Integer")
{
SELECT PageNo FROM DB.TempSQL WHERE (Session = :session) AND (QueryName = :queryname)
}

}

The function to populate the pagination class:

ClassMethod BuildTempSql(SQL As %String, session As %String, queryname As %String, PageSize As %Integer, Special As %Boolean = 0) [ ProcedureBlock = 0 ]
{
	// Protect everything except the arguments and %session (non-procedure-block code)
	New (SQL,session,queryname,PageSize,Special,%session)
	&sql(DELETE FROM DB.TempSQL WHERE Session = :session AND QueryName = :queryname)
	Set rs=##class(%ResultSet).%New()
	Do rs.Prepare(SQL),rs.Execute()
	Set (count,page,SQLCODE)=0,TimeStart=$P($ZTS,",",2)
	Set entry=##class(DB.TempSQL).%New()
	Set entry.Session=session,entry.QueryName=queryname
	For  {
		If (rs.Next()'=1) {
			// Save the last, partially filled page
			If (entry.IDs.Count()>0) {
				Set page=$I(page),entry.PageNo=page Do entry.%Save()
				Kill entry
			}
			Quit   ; last one!
		}
		Else {
			If (queryname'="Search")||('##class(Utils.Lists).CheckIBlack(rs.GetData(1))) {
				Set count=$I(count) Do entry.IDs.Insert(rs.GetData(1))
			}
			// A page is full: save it and start a new one
			If (count=PageSize) {
				Set page=$I(page),entry.PageNo=page
				Do entry.%Save() Kill entry
				Set count=0,entry=##class(DB.TempSQL).%New()
				Set entry.Session=session,entry.QueryName=queryname
			}
		}
	}
	Set TimeEnd=$P($ZTS,",",2)
	Set %session.Data("MaxPageNo"_$S(Special:queryname,1:""),1)=page
	Set %session.Data("Page",1)=$S(page>0:1,1:0)
	Set %session.Data("SqlSearchTime",1)=$FN(TimeEnd-TimeStart,",",3)
	Do rs.Close() Kill rs
}

Code for specific page data retrieval:

If ##class(DB.TempSQL).%ExistsId(session_"||"_queryname_"||"_page) {
	// The IdKey is (Session, QueryName, PageNo), so the ID is the "||" join
	Set entry=##class(DB.TempSQL).%OpenId(session_"||"_queryname_"||"_page)
	For i=1:1:entry.IDs.Count() {
		Set id=entry.IDs.GetAt(i)
		&sql(SELECT ID, prop1, prop2, prop3
		     INTO :ID, :p1, :p2, :p3
		     FROM any.Table
		     WHERE ID = :id)
	}
}
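Putting it together, a hypothetical call sequence (any.Table, the query text, and the page size of 50 are placeholders; %session is assumed to be available, e.g. in a CSP page, and BuildTempSql is assumed to be compiled into DB.TempSQL):

	// Build the pages of IDs once per search
	Do ##class(DB.TempSQL).BuildTempSql("SELECT ID FROM any.Table ORDER BY ID",%session.SessionId,"Search",50)
	// The page count is now available for rendering the pager
	Write "Pages: ",%session.Data("MaxPageNo",1),!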

Hi Scott,

My remarks:

1. As already mentioned by @David Satorres, global mapping of specific globals (or subscript-level mapping) to different databases (located on different disks) may solve your space problem and also increase your overall performance.
2. Using ^GBLOCKCOPY is a good idea when there are many small globals. For a very big global it will be very slow (since it uses 1 process per global), so I recommend writing your own code and using the work queue manager to merge ranges of a single global between databases in parallel, as sketched below.
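A minimal sketch of that idea (assuming the first-level subscript splits the data reasonably; ^Source and ^Target are placeholders, and for cross-database copies you would address them through mappings or extended global references):

Class Demo.GlobalCopy [ Abstract ]
{

/// Worker: copy one first-level subtree (queued methods must return a %Status).
ClassMethod CopySub(sub As %String) As %Status
{
	Merge ^Target(sub)=^Source(sub)
	Quit $$$OK
}

/// Queue one work item per first-level subscript, then wait for all workers.
ClassMethod CopyAll() As %Status
{
	Set queue=$SYSTEM.WorkMgr.%New()
	Set sub=$Order(^Source(""))
	While sub'="" {
		Set sc=queue.Queue("##class(Demo.GlobalCopy).CopySub",sub)
		Quit:$$$ISERR(sc)
		Set sub=$Order(^Source(sub))
	}
	Quit queue.WaitForComplete()
}

}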

For simple questions, it works fine. If, after the answer, I want to refine my question, I need to add to the original question, and I am not sure whether sessions are persistent (like in ChatGPT, where I can have a "conversation").
When a too-long and complex question is entered, I get:
"This is beyond my current knowledge. Please ask the Developer Community for further assistance."
There is an advantage: some (simple) questions get good referrals to the documentation or community pages.