David Satorres · Jun 18, 2020 While my VSCode is not fully migrated, I still need my Atelier :)
David Satorres · Jun 16, 2020 Hi Matthew, Thanks for the answer. Can it be downloaded from the VSCode Marketplace? I actually don't work with Docker :'(
David Satorres · Feb 20, 2020 Thanks Alex. But this other class also returns the number of bytes sent and received ($p19 and $p20), plus another bunch of numbers that I'd like to understand :-)
David Satorres · Feb 20, 2020 Hi Mikhail, Where did you get the information for the meaning of the values returned by $system.ECP.GetProperty("ClientStats")? Nice job, anyway ;-)
David Satorres · Sep 25, 2019 Hi Peter, Did you finally solve it? It's happening to me as well...
David Satorres · Sep 13, 2019 I agree that all the other processes' time is included. But how did you decide on the [16] value? Anyway, analytics is now showing 500, so I'd like to know how the calculations are done. I guess it's best if I go to the WRC.
David Satorres · Sep 13, 2019 Pool size is 5 :-) I've been looking for an example of one item that takes a little long, as most of them are completed in less than 0.3 seconds. This is a full trace: I can see the component that takes long, but not as much as the average shown in the analytics :)
David Satorres · Sep 12, 2019 5. I've seen weird numbers that make no sense. For example, for the last hour three components have high values. That's why I'd like to know how it is calculated :)
David Satorres · Jul 3, 2019 My mistake... the ODBC driver name was wrong :'( I can connect now :) But I see that I have to rewrite the functions, because the prepare, execute, fetch, etc. attributes don't exist in pyODBC :'( UPDATE: changing just a few lines allowed me to work with pyODBC.
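For anyone hitting the same migration: pyODBC has no separate prepare/execute/fetch attributes because it follows the Python DB-API 2.0 (PEP 249) — `cursor.execute()` prepares and runs the statement, and `fetchone()`/`fetchall()` retrieve rows. Below is a minimal sketch of that pattern; it uses `sqlite3` as a stand-in driver so it runs anywhere (both libraries share the same DB-API cursor interface), and the pyODBC connection string shown in the comment is a hypothetical example, not our real DSN.

```python
# pyODBC follows the Python DB-API 2.0 (PEP 249): there is no separate
# "prepare" step -- cursor.execute() prepares and runs the statement,
# and fetchone()/fetchall() replace the old fetch attribute.
# sqlite3 is used as a stand-in driver here so the sketch is runnable;
# with pyODBC you would instead connect like:
#   import pyodbc
#   conn = pyodbc.connect("DSN=MyServer;UID=user;PWD=pwd")  # hypothetical DSN
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE patient (id INTEGER, name TEXT)")
cur.execute("INSERT INTO patient VALUES (?, ?)", (1, "Alice"))

# execute() with parameters, then fetchall() -- the whole prepare/execute/
# fetch cycle in two calls
cur.execute("SELECT name FROM patient WHERE id = ?", (1,))
rows = cur.fetchall()
print(rows)  # [('Alice',)]
conn.close()
```

The `?` parameter markers are the same in both drivers, which is why only a few lines needed changing.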
David Satorres · Apr 11, 2019 I can't remove the question, so I'll answer myself. The problem was in the routine called by the queue: a "lock" instruction seemed to break everything. So, removing the line solved the problem :-) Now I just need to know why locking a global messes the whole thing up :-O
David Satorres · Feb 26, 2019 Sorry, I just saw I can do: Index NewIndex On Test(ELEMENTS).Name
David Satorres · Jan 14, 2019 Finally we identified a couple of bottlenecks using the MONLBL utility... none of them was the message enqueueing. So, we redid the production and now it is feeding more messages to the queue. Thank you all for your help :-)
David Satorres · Jan 10, 2019 I'm using a process that reads from a global using $order; it's not called by anything external. When I remove the call to the BusinessProcess it performs a lot of operations, but enqueueing the call takes several milliseconds. Actually, if I clone the BS and run both at the same time writing to the same queue (even if the queue is Ens.Actor), I reach the same number of messages as with a single one. I mean, even working in parallel I cannot increase the number of messages put in the queue; I'm not able to write more than 60-70 per second. The adapter is Ens.InboundAdapter, but it's not really used.
David Satorres · Nov 16, 2018 Hi! Yes, that solution is good. But we would need to stop the production to be able to copy the files, and we don't want to do that. We need a way to transfer the data to IRIS without stopping the current Ensemble production.
David Satorres · Oct 22, 2018 Hi Dmitry, Thanks for your help, I've been able to compile the class and start the production. We are using it in a legacy class meant to help with the JSON. Anyway, it's working now :-)
David Satorres · Sep 11, 2018 I just installed 1.3 and it's still not there :O And InterSystems announced there won't be any enhancements from now on. So I guess this is all we'll get.
David Satorres · Aug 9, 2018 Finally I've made some tests. I duplicated the list global, changing its values to strings, so I can compare two different globals with the same data but stored differently. The results show that accessing a list is much slower:

| Routine | Line | GloRef | Upper ptr blk reads | Bottom ptr blk reads | Data blk reads | Dir blk buffer hits | Upper ptr blk buffer hits | Bottom ptr blk buffer hits | Data blk buffer hits | M commands | Time | TotalTime | Code |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Test.1 | 78 | 171053 | 1 | 40 | 10399 | 1 | 14109 | 14070 | 165867 | 171053 | 43.32538 | 43.32538 | set dat=$g(^ListData(id)) |
| Test.1 | 78 | 171053 | 0 | 0 | 0 | 0 | 14110 | 14110 | 176266 | 171053 | 0.265694 | 0.265694 | set dat=$g(^ListData(id)) |
| Test.1 | 79 | 171053 | 1 | 23 | 5853 | 1 | 11607 | 11585 | 166642 | 171053 | 20.5958 | 20.5958 | set dat=$g(^StringData(id)) |
| Test.1 | 79 | 171053 | 0 | 0 | 0 | 0 | 11608 | 11608 | 172495 | 171053 | 0.237311 | 0.237311 | set dat=$g(^StringData(id)) |

But finally, after reading a bit of the documentation, I found that I could improve the performance by changing the database block size from 8KB to 64KB. And it really worked:

| Routine | Line | GloRef | Upper ptr blk reads | Bottom ptr blk reads | Data blk reads | Dir blk buffer hits | Upper ptr blk buffer hits | Bottom ptr blk buffer hits | Data blk buffer hits | M commands | Time | TotalTime | Code |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Test.1 | 78 | 171053 | 0 | 1 | 1861 | 1 | 0 | 6642 | 169402 | 171053 | 7.234114 | 7.234114 | set dat=$g(^ListData(id)) |
| Test.1 | 78 | 171053 | 0 | 0 | 0 | 0 | 0 | 6643 | 171263 | 171053 | 0.225354 | 0.225354 | set dat=$g(^ListData(id)) |
| Test.1 | 79 | 171053 | 0 | 0 | 1808 | 1 | 0 | 6534 | 169420 | 171053 | 2.12363 | 2.12363 | set dat=$g(^StringData(id)) |
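As a back-of-envelope sanity check on why the 64KB database helped: on a cold run each data block is read from disk roughly once, so the cold data block reads in the traces above (about 10399 at 8KB vs about 1861 at 64KB for the 171053-node list global) approximate the number of data blocks. The arithmetic below is just that reading of the numbers; the "one read per block" assumption and the fill-factor caveat are mine, not something MONLBL states.

```python
# Back-of-envelope check on the MONLBL numbers: on a cold run each data
# block is read from disk roughly once, so cold data block reads ~ number
# of data blocks, which shrinks as the block size grows.
nodes = 171053      # $g() calls per test run (GloRef / M commands)
reads_8k = 10399    # cold data block reads, 8KB database
reads_64k = 1861    # cold data block reads, 64KB database

nodes_per_8k_block = nodes / reads_8k     # roughly 16 nodes per 8KB block
nodes_per_64k_block = nodes / reads_64k   # roughly 92 nodes per 64KB block
print(round(nodes_per_8k_block, 1), round(nodes_per_64k_block, 1))

# 64KB blocks hold ~5.6x more nodes here (not the full 8x -- node size and
# block fill factor make the ratio approximate), hence far fewer disk reads.
print(round(reads_8k / reads_64k, 1))  # ~5.6
```

Fewer, larger blocks also mean a shallower pointer tree, which matches the drop in pointer block activity between the two tables.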
David Satorres · Aug 8, 2018 Yes, sorry, my mistake. Actually the line should be: set dat=$g(^TestD(id)) //dat=$lb("a","b","c","d","e") compared to: set dat=$g(^TEST2(id)) //dat = "a#b#c#d#e" Thanks to Julius' suggestion, I've run the %SYS.MONLBL analysis tool, and clearly something is going wrong when trying to get the data from a list:

| Routine | Line | GloRef | DataBlkRd | UpntBlkBuf | BpntBlkBuf | DataBlkBuf | RtnLine | Time | TotalTime | Code |
|---|---|---|---|---|---|---|---|---|---|---|
| Test.1 | 78 | 16823 | 9128 | 14129 | 14129 | 7742 | 16823 | 66.282935 | 66.282935 | set dat2=$get(^ListGlobal(id)) |
| Test.1 | 79 | 16823 | 0 | 1849 | 1849 | 16904 | 16823 | 0.062076 | 0.062076 | set dat=$get(^StringGlobal(id)) |
David Satorres · Aug 8, 2018 Thanks David! But the problem is that what takes a looong time is getting the list values, not iterating over them.