Yaron Munz · Dec 16 Hi @Ashok Kumar, you are correct, I was mistaken. The correct one is: "%System/%Login/Terminate" (Auditing | InterSystems IRIS Data Platform 2024.3)
Yaron Munz · Dec 16 Hello, if you have auditing enabled, and the item "%System/%Login/JobEnd" is enabled, you can find out which user killed a process.
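As a sketch of how to search the audit database for those events (run in the %SYS namespace; verify the column names against the %SYS.Audit class on your version, as they are quoted here from memory):

```
// Sketch: list JobEnd audit events with the user who triggered them (%SYS namespace)
Set rs = ##class(%SQL.Statement).%ExecDirect(,
    "SELECT UTCTimeStamp, Username, Description "_
    "FROM %SYS.Audit WHERE Event = 'JobEnd' ORDER BY UTCTimeStamp DESC")
While rs.%Next() {
    Write rs.UTCTimeStamp, "  ", rs.Username, "  ", rs.Description, !
}
```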
Yaron Munz · Dec 13 Hello, you may use an existing library for SSO with an MS or Google account: the Microsoft Authentication Library (MSAL). See "Overview of the Microsoft Authentication Library (MSAL) - Microsoft identity platform | Microsoft Learn". For CSP pages, we used the JavaScript library MSAL.js.
Yaron Munz · Oct 29 Does your class have a relationship property to another class? In that case you might want to consider using the "CompileAfter" or "DependsOn" keywords (those might help the compiler find the correct order when compiling).
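For illustration (the class and property names here are hypothetical), a DependsOn declaration looks like this:

```
/// Hypothetical example: ensure Demo.Customer is compiled before Demo.Order
Class Demo.Order Extends %Persistent [ DependsOn = Demo.Customer ]
{
Relationship Customer As Demo.Customer [ Cardinality = one, Inverse = Orders ];
}
```

CompileAfter uses the same syntax but only influences compile order, while DependsOn also requires the named class to be compiled and runnable first.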
Yaron Munz · Oct 28 Maybe there are dependencies for that class (e.g. the class depends on other classes that need to be compiled before it). Try adding the "r" (recursive) flag, so your flags will look like: "ckr"
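For example, from a terminal (the class name is a placeholder):

```
// c = compile, k = keep generated source, r = compile dependencies recursively
Do $SYSTEM.OBJ.Compile("My.Package.MyClass", "ckr")
```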
Yaron Munz · Oct 28 You have 2 other options:
1. With SQL against the %SYS.Task class/table: DELETE FROM %SYS.Task WHERE ID = :taskID
2. Set sc = ##class(%SYS.Task).%DeleteId(taskID)
Yaron Munz · Oct 28 You are correct, but what I suggested is also a way to check that interoperability is running on that specific server.
Yaron Munz · Oct 28 The SMP portal "about" page has an option to choose the language. However, this persists for the current session only (in the %session object). I would try to go with the solution proposed by @Raj Singh: use a browser add-on that can modify HTTP headers (e.g. the HTTP_ACCEPT_LANGUAGE CGI variable).
InterSystems could think of adding a user-defined language, but not on the user profile, since non-local users (e.g. LDAP) are not persistent; a global like ^ISC.someName(user)=Language could be the "best" way. We don't want to (or can't) modify classes for the portal (some of them don't have sources). This is a good candidate for "InterSystems Ideas".
Yaron Munz · Oct 28 If those 3000 classes are divided into different packages, I would try to do the load in "segments" (using the queue manager, to have this done in parallel). This might speed things up a bit.
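A minimal sketch of that idea with %SYSTEM.WorkMgr, assuming the sources sit in per-package directories (the paths and flags here are hypothetical):

```
// Load each package directory in a parallel worker job via the Work Queue Manager
Set queue = $SYSTEM.WorkMgr.%New()
For dir = "/src/pkgA", "/src/pkgB", "/src/pkgC" {
    Do queue.Queue("##class(%SYSTEM.OBJ).LoadDir", dir, "ck")
}
// Block until all workers finish; sc reports any load/compile errors
Set sc = queue.WaitForComplete()
```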
Yaron Munz · Oct 17 You can't directly do that in a BPL since it doesn't have persistence methods. You may convert your %DynamicArray into an ObjectScript array, or serialize your data into JSON that can be passed to the BPL as a string.
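For example, the round trip through a JSON string looks like this:

```
// Serialize a %DynamicArray to a JSON string (a plain %String the BPL can carry)
Set arr = ["a", "b", "c"]
Set json = arr.%ToJSON()
// ...later, inside the BPL context, rebuild the dynamic array:
Set arr2 = ##class(%DynamicArray).%FromJSON(json)
Write arr2.%Get(1)   // -> "b" (dynamic arrays are 0-based)
```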
Yaron Munz · Oct 17 The CSP gateway has a "mirror aware" function that will always point you to the primary in a failover pair. This works most of the time, but in rare cases it keeps a connection disabled after a primary switch. Another option is to use an external load balancer that has some kind of "health probe". Then, you could have a simple REST/API call (called by that health probe) that returns 200 for the primary and 404 (or 500) for the backup. This way, going through that LB will always point you to the primary.
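A sketch of such a probe endpoint (the class name and URL are hypothetical; $SYSTEM.Mirror.IsPrimary() reports whether this instance is the current mirror primary):

```
/// Hypothetical health-probe endpoint: 200 on the mirror primary, 404 otherwise
Class Demo.HealthProbe Extends %CSP.REST
{

XData UrlMap
{
<Routes>
<Route Url="/health" Method="GET" Call="Health"/>
</Routes>
}

ClassMethod Health() As %Status
{
    If $SYSTEM.Mirror.IsPrimary() {
        Set %response.Status = "200 OK"
    } Else {
        Set %response.Status = "404 Not Found"
    }
    Quit $$$OK
}

}
```

Point the load balancer's health probe at /health, and only the primary will be marked as up.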
Yaron Munz · Oct 15 According to the Snowflake documentation (https://docs.snowflake.com/en/user-guide/intro-key-concepts), it seems that you may use ODBC and JDBC, so the SQL Gateway can be used (SQL Gateway Connections | InterSystems Programming Tools Index | InterSystems IRIS Data Platform 2019.1). There are also native connectors (e.g. Python). Embedded Python is not available on IRIS 2019.2; you may consider an upgrade to IRIS 2021.2.
Yaron Munz · Oct 2 Hello, when several instances are connected to a remote DB, all their locks are managed in the IRIS instance where this DB is local (let's call this "the source"). I recommend increasing the "gmheap" parameter at "the source" to have more space for the lock table. There are utilities to check the lock table's free space, or whether it is full.
Yaron Munz · Oct 2 The global IRIS.WorkQueue is located in (mapped to) the IRISLOCALDATA DB, which stores internal IRIS temporary data, including queue manager data. This DB is cleaned/purged at IRIS startup, but you may compact and truncate it while IRIS is running (either manually or programmatically).
Yaron Munz · Sep 26 Hi, you said the issue was on only 1 server, and you could fail over to the mirror backup server, which could connect to LDAP from within IRIS. I assume you ran the d TEST^%SYS.LDAP function to check connectivity. If only 1 server can't connect, I would ask myself (investigate) "what was changed?". Using REDEBUG could help to see more information regarding the issue. In any case, I recommend opening a WRC case if you cannot find the root cause.
Yaron Munz · Sep 16 Thanks Robert. Working with InterSystems (and other M technologies) since 1991...
Yaron Munz · Sep 10 Hello Jignesh, I guess that you refer to a sync mirror (e.g. a failover pair: primary---backup). People here were mentioning unplanned downtime (hardware and OS failures), but there is another advantage: better HA for planned downtime, such as:
- IRIS maintenance => move a DB from disk to disk, split/move data from one DB to another
- Hardware maintenance => VM re-size, add/change data disks
- O/S maintenance => O/S patches, updates
Benefits:
1. All those activities are possible with 0 downtime (since the pair are 100% identical)
2. There is no data loss when there is a switch, whether automatic (due to failure) or manual (due to "planned" maintenance)
3. RTO - usually just a few seconds, depending on the complexity of your application/interoperability
4. A manual switch lets you do the necessary work on the "backup", switch manually (i.e. make the "backup" the "primary"), and then do the same work on the other member.
go to post Yaron Munz · Sep 4 I would go with an (old) approach for pagination:1. Store only the IDs/page in a temporary table2. For any specific page, get the IDs and query the data from the main table The pagination class: Class DB.TempSQL Extends %Library.Persistent [ ClassType = persistent, Not ProcedureBlock ]{Index Main On (Session, QueryName, PageNo) [ IdKey, Unique ];Property IDs As list Of %String(TRUNCATE = 1); Property PageNo As %Integer; Property QueryName As %String(TRUNCATE = 1); Property Session As %String(TRUNCATE = 1); Query PageNumbers(session As %String, queryname As %String) As %SQLQuery(CONTAINID = 0, ROWSPEC = "PageNo:%Integer"){ SELECT PageNo FROM TempSQL WHERE (Session = :session) AND (QueryName = :queryname) } The function to populate the pagination class: ClassMethod BuildTempSql(SQL As %Integer, session As %String, queryname As %String, PageSize As %Integer, Special As %Boolean = 0) [ ProcedureBlock = 0 ]{N (SQL,session,queryname,PageSize,Special,%session)&sql(delete from DB.TempSQL Where Session = :session and QueryName = :queryname)S rs=##class(%ResultSet).%New()D rs.Prepare(SQL), rs.Execute()S (count,page,SQLCODE)=0,TimeStart=$P($ZTS,",",2)S entry=##class(DB.TempSQL).%New()S entry.Session=session,entry.QueryName=querynameF { I (rs.Next()'=1) { I (entry.IDs.Count()>0) { S page=$I(page),entry.PageNo=page D entry.%Save() K entry } Quit ; last one ! 
} Else { I queryname'="Search"||'##class(Utils.Lists).CheckIBlack(rs.GetData(1)) { S count=$I(count) D entry.IDs.Insert(rs.GetData(1)) } I (count=PageSize) { S page=$I(page),entry.PageNo=page D entry.%Save() K entry S count=0,entry=##class(DB.TempSQL).%New() S entry.Session=session,entry.QueryName=queryname } }}S TimeEnd=$p($zts,",",2)S %session.Data("MaxPageNo"_$S(Special:queryname,1:""),1)=pageS %session.Data("Page",1)=$S(page>0:1,1:0)S %session.Data("SqlSearchTime",1)=$FN(TimeEnd-TimeStart,",",3)D rs.Close() K rs} Code for specific page data retrival: I ##class(DB.TempSQL).%ExistsId(session_"||"_queryname_"||"_page) { S entry = ##class(DB.TempSQL).%OpenId(session_"||"_queryname_"||"_page) F i=1:1:entry.IDs.Count() { S id = entry.IDs.GetAt(i) &sql( select ID, prop1, prop2, prop3 into :ID, :p1, :p2, :p3 from any.Table where ID = :id)) } }
Yaron Munz · Sep 3 Hi Scott, my remarks:
1. As already mentioned by @David.Satorres6134, global mapping for specific globals (or subscript-level mapping) to different databases (located on different disks) may give you a solution for space, and also increase your overall performance.
2. Using ^GBLOCKCOPY is a good idea when there are many small globals. For a very big global it will be very slow (since it uses 1 process/global), so I recommend writing your own code + using the "queue manager" to do merges between databases for 1 global in parallel.
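The idea in point 2 can be sketched like this (the directories, global name, and subscript range are hypothetical; each range could be queued to a separate %SYSTEM.WorkMgr worker to run them in parallel):

```
// Copy one top-level subscript range of a big global between databases
// using extended global references; run several ranges in parallel workers
Set src = "^^/iris/mgr/olddb/", dst = "^^/iris/mgr/newdb/"
For sub = 1:1:10 {
    Merge ^[dst]BigGlobal(sub) = ^[src]BigGlobal(sub)
}
```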
Yaron Munz · Aug 27 For simple questions, it's working fine. If after the answer I want to refine my question, I need to add to the original question, and I'm not sure if sessions are persistent (like in ChatGPT, where I can have a "conversation"). When a too long and complex question is entered, I got: "This is beyond my current knowledge. Please ask the Developer Community for further assistance." There is an advantage that some (simple) questions get good referrals to the documentation or community pages.