Eduard Lebedyuk · Oct 27, 2021
I'd just like to add that the SQL Gateway is an abstraction over both JDBC and ODBC connections.
Eduard Lebedyuk · Oct 23, 2021
I believe so, but it depends on how large a session you have in mind. The Interoperability Visual Trace starts slowing down on sessions above 100 000 messages and slows down considerably at 300 000 messages per session. I'd be interested in a tool that would let me see these large sessions graphically; currently I work with them via SQL.
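To illustrate what I mean by working with a large session via SQL, a minimal sketch that queries the message headers of one session directly (the session ID 12345 is a placeholder):

SELECT ID, TimeCreated, SourceConfigName, TargetConfigName, MessageBodyClassName, Status
FROM Ens.MessageHeader
WHERE SessionId = 12345
ORDER BY TimeCreated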
Eduard Lebedyuk · Oct 18, 2021
You can't bypass the challenge. Neither $classmethod() nor any code generator will help, as this is all static code, frozen and inflexible at runtime. Why? A case where the user must be able to:
- create a code snippet with a predetermined interface
- choose it for execution among all the code snippets with the same interface
can be solved in many ways, without indirection (i.e. executing code stored as a string). I usually provide a base class, which the user must extend and populate with their own methods. After that, in the source app, just call the SubclassOf query from %Dictionary.ClassDefinition to get all implementations and show them to the user to pick from (a sketch of that step follows below). Works well enough.
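A minimal sketch of the discovery step, assuming a hypothetical base class My.Snippet.Base:

// list every subclass of the (assumed) base class so the user can pick an implementation
set stmt = ##class(%SQL.Statement).%New()
do stmt.%PrepareClassQuery("%Dictionary.ClassDefinition", "SubclassOf")
set rs = stmt.%Execute("My.Snippet.Base")
while rs.%Next() {
    write rs.%Get("Name"), !
}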
Eduard Lebedyuk · Oct 18, 2021
Might as well invoke the method directly:

<Invoke Class="%Library.EnsembleMgr" Method="SetAutoStart" CheckStatus="true">
  <Arg Value="${MyNamespace}"/>
  <Arg Value="myProd.Production"/>
</Invoke>

Or is there a reason to use a proxy method?
Eduard Lebedyuk · Oct 8, 2021
Yeah, the zeroes make the $tr($j()) trick less applicable to the current challenge.
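For reference, the $tr($j()) idiom pads a value by justifying it to a width and then translating the padding spaces, e.g. into zeroes:

write $tr($j(42, 5), " ", 0)   ; prints 00042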
Eduard Lebedyuk · Sep 13, 2021
Great investigation, Robert! And it looks like we have a winner with this pull request.
Eduard Lebedyuk · Sep 9, 2021
Check this article. Set in your REST broker:

Parameter CONTENTTYPE = {..#CONTENTTYPEJSON};

Parameter CHARSET = "UTF-8";
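In context, that looks roughly like this (My.REST.Broker and the /ping route are placeholder names):

Class My.REST.Broker Extends %CSP.REST
{

/// Serve responses as application/json
Parameter CONTENTTYPE = {..#CONTENTTYPEJSON};

/// Encode output as UTF-8
Parameter CHARSET = "UTF-8";

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/ping" Method="GET" Call="Ping"/>
</Routes>
}

ClassMethod Ping() As %Status
{
    write {"status": "ok"}.%ToJSON()
    quit $$$OK
}

}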
Eduard Lebedyuk · Sep 8, 2021
The idea of a scrollable result set is to call Save/OpenId, and the result set will continue from the next row automatically, so you don't need to manage to/from indices. Here's an example:

Class User.Pagination
{

/// do ##class(User.Pagination).Time("NoSave")
/// do ##class(User.Pagination).Time("Save")
ClassMethod Time(method = "Save")
{
    set start = $zh
    do $classmethod(, method)
    set end = $zh
    write $$$FormatText("%1 took %2 sec", method, $fnumber(end - start, "", 4))
}

/// do ##class(User.Pagination).NoSave()
ClassMethod NoSave()
{
    do {
        do $i(i)
        set obj = ..getPersonsPage(20, i)
        //w obj.%ToJSON(),!,!
    } while (obj.toIndex < obj.resultSetTotal)
}

/// do ##class(User.Pagination).Save()
ClassMethod Save()
{
    do {
        set obj = ..getPersonsPageSave(20, .id)
        //w obj.%ToJSON(),!,!
    } while (id '= -1)
}

ClassMethod getPersonsPage(pageSize As %String = 20, pageIndex As %String = 1) As %DynamicObject
{
    #dim sc As %Status = $$$OK
    #dim rs As %ScrollableResultSet
    set sc = ..getRS(, .rs)
    quit:$$$ISERR(sc) {"msg": ($System.Status.GetErrorText(sc))}

    set vFrom = ((pageIndex - 1) * pageSize)
    set vTo = vFrom + (pageSize - 1)
    do rs.CurrRowSet(vFrom)

    set results = []
    set:(pageSize >= rs.Count()) pageSize = rs.Count()
    set i = 0
    while rs.%Next() && $i(i) {
        quit:(i > pageSize)
        do results.%Push({"index": (i), "pid": (rs.%Get("ID")), "ssn": (rs.%Get("SSN")), "age": (rs.%Get("Age"))})
        //do results.%Push(+rs.%Get("ID"))
    }

    set out = {
        "pageSize": (pageSize),
        "pageIndex": (pageIndex),
        "fromIndex": (vFrom + 1),
        "toIndex": (vFrom + i),
        "resultSetTotal": (rs.Count()),
        "pageRecords": (i),
        "pages": ($NORMALIZE((rs.Count() / pageSize), 0)),
        "resultSet": (results)
    }
    return out
}

ClassMethod getRS(id As %Integer, Output rs As %ScrollableResultSet) As %Status
{
    #dim sc As %Status = $$$OK
    if '$d(id) {
        set sql = "SELECT ID, SSN, Age FROM Sample.Person"
        set rs = ##class(%ScrollableResultSet).%New("%DynamicQuery:SQL")
        set sc = rs.Prepare(sql)
        quit:$$$ISERR(sc) sc
        set sc = rs.Execute()
        quit:$$$ISERR(sc) sc
        quit:(rs.Count() = 0) $$$ERROR($$$GeneralError, "No results")
    } else {
        set rs = ##class(%ScrollableResultSet).%OpenId(id)
    }
    quit sc
}

ClassMethod getPersonsPageSave(pageSize As %String = 20, ByRef id As %Integer) As %DynamicObject
{
    #dim sc As %Status = $$$OK
    #dim rs As %ScrollableResultSet
    set sc = ..getRS(.id, .rs)
    quit:$$$ISERR(sc) {"msg": ($System.Status.GetErrorText(sc))}

    set results = []
    set:(pageSize >= rs.Count()) pageSize = rs.Count()
    set i = 0
    set notAtEnd = rs.%Next()
    while notAtEnd && $i(i) {
        do results.%Push({"index": (i), "pid": (rs.%Get("ID")), "ssn": (rs.%Get("SSN")), "age": (rs.%Get("Age"))})
        //do results.%Push(+rs.%Get("ID"))
        quit:(i >= pageSize)
        set notAtEnd = rs.%Next()
    }

    if notAtEnd {
        do rs.%Save()
        set id = rs.%Id()
    } else {
        do rs.%DeleteId(id)
        set id = -1
    }

    set out = {
        "pageSize": (pageSize),
        "resultSetTotal": (rs.Count()),
        "pageRecords": (i),
        "pages": ($NORMALIZE((rs.Count() / pageSize), 0)),
        "resultSet": (results)
    }
    kill rs
    return out
}

}

It's also about 3 times faster, since the query is only executed once:

do ##class(User.Pagination).Time("Save")
Save took 0,0048 sec
do ##class(User.Pagination).Time("NoSave")
NoSave took 0,0143 sec
Eduard Lebedyuk · Sep 5, 2021
To reclaim disk space from the WSL Docker data disk, execute:

wsl --shutdown
diskpart
select vdisk file="C:\Users\<USER>\AppData\Local\Docker\wsl\data\ext4.vhdx"
attach vdisk readonly
compact vdisk
detach vdisk
exit

Alternatively (requires Windows Pro and Hyper-V being enabled):

wsl --shutdown
optimize-vhd -Path "C:\Users\<USER>\AppData\Local\Docker\wsl\data\ext4.vhdx" -Mode full
Eduard Lebedyuk · Sep 2, 2021
Go to System > Journals, choose any 1 GB journal file that was just created, and press Profile. Then recalculate by size and you'll see which globals produced the most journal records.
Eduard Lebedyuk · Aug 27, 2021
"is there a way for me to execute in SQL for the above?"
Of course! Queries are TVFs, so you can CALL them or SELECT from them:

SELECT * FROM %SYSTEM.License_ConnectionAppList()

Summary:

SELECT * FROM %SYSTEM.License_Counts()
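And the CALL form, for completeness:

CALL %SYSTEM.License_ConnectionAppList()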
Eduard Lebedyuk · Aug 27, 2021
You need a Reply Code Action: E=S, or something more specific.
Eduard Lebedyuk · Aug 26, 2021
Check the %SYSTEM.License queries. There are queries which provide summary and detailed information on license consumption. You can then create a task which runs every minute and, if license consumption exceeds, say, 80%, stores the current license users into a separate table with a timestamp. Later you can use this table to analyze usage patterns.
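A rough sketch of such a task; the class name, global name, and the 80% threshold are placeholders. The general pattern is to extend %SYS.Task.Definition and implement OnTask():

Class My.LicenseCheckTask Extends %SYS.Task.Definition
{

/// Runs on the task schedule (e.g. every minute).
Method OnTask() As %Status
{
    set consumed = $System.License.LUConsumed()
    set total = $System.License.KeyLicenseUnits()
    if (total > 0) && ((consumed / total) > 0.8) {
        // store a snapshot keyed by timestamp; a persistent table populated
        // from %SYSTEM.License_ConnectionAppList() works just as well
        set ^MyLicenseSnapshot($zdatetime($ztimestamp, 3, 1)) = consumed _ "/" _ total
    }
    quit $$$OK
}

}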