Finally we identified a couple of bottlenecks using the MONLBL utility... none of them was the message enqueueing. So we redid the production, and now it is feeding more messages to the queue.

Thank you all for your help :-)

I'm using a process that reads from a global with $order; it isn't called by anything external. If I remove the call to the Business Process it performs a lot of operations, but the enqueueing of each call takes several milliseconds. Actually, if I clone the BS and run both copies at the same time writing to the same queue (even if the queue is Ens.Actor), I reach the same number of messages as running a single one.

I mean, even working in parallel I cannot increase the number of messages put in the queue. I'm not able to write more than 60-70 per second.

The adapter is Ens.InboundAdapter, but I'm not really using it.
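For context, the service is essentially a tight $order loop that enqueues one request per global node. A rough sketch, with illustrative class, global, and target names (the real production code differs):

```
Class Demo.FeederService Extends Ens.BusinessService
{

Parameter ADAPTER = "Ens.InboundAdapter";

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    set tSC=$$$OK, id=""
    for {
        // walk the source global; the third argument returns the node value
        set id=$order(^SourceData(id),1,data)
        quit:id=""
        set tRequest=##class(Ens.StringContainer).%New(data)
        // asynchronous enqueue to the Business Process:
        // this is the call that costs several milliseconds per message
        set tSC=..SendRequestAsync("MyBusinessProcess",tRequest)
        quit:$$$ISERR(tSC)
    }
    quit tSC
}

}
```

Removing the SendRequestAsync line makes the loop run much faster, which is how the enqueue cost was isolated.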

Hi!

Yes, that solution is good, but we would need to stop the production to be able to copy the files, and we don't want to do that. We need a way to transfer the data to IRIS without stopping the current Ensemble production.

Hi Dmitry,

Thanks for your help, I've been able to compile the class and start the production.

We are using it in a legacy class meant to help with JSON handling. Anyway, it's working now :-)

I just installed 1.3 and it's still not there :O And InterSystems announced there won't be any enhancements from now on.

So I guess this is all we'll get.

Finally I've run some tests. I duplicated the list global, changing its values to strings, so I can compare two different globals with the same data stored differently. The results show that accessing a list is much slower.
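For clarity, the two test globals hold the same logical data in the two formats. A sketch of how they might be populated (global names as in the results below, field values and loop bound illustrative):

```
 // same five fields per node, stored two ways:
 // ^ListData holds encoded $listbuild lists,
 // ^StringData holds plain "#"-delimited strings
 kill ^ListData,^StringData
 for id=1:1:171053 {
     set ^ListData(id)=$listbuild("a","b","c","d","e")
     set ^StringData(id)="a#b#c#d#e"
 }
```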

| Routine | Line | GloRef | M commands | Time | TotalTime | Code |
|---|---|---|---|---|---|---|
| Test.1 | 78 | 171053 | 171053 | 43.32538 | 43.32538 | set dat=$g(^ListData(id)) |
| Test.1 | 78 | 171053 | 171053 | 0.265694 | 0.265694 | set dat=$g(^ListData(id)) |
| Test.1 | 79 | 171053 | 171053 | 20.5958 | 20.5958 | set dat=$g(^StringData(id)) |
| Test.1 | 79 | 171053 | 171053 | 0.237311 | 0.237311 | set dat=$g(^StringData(id)) |

(per-block-type read and buffer counter columns omitted)

But finally, after reading a bit of the documentation, I found that I could improve the performance by changing the database block size from 8 KB to 64 KB. And it really worked:

| Routine | Line | GloRef | M commands | Time | TotalTime | Code |
|---|---|---|---|---|---|---|
| Test.1 | 78 | 171053 | 171053 | 7.234114 | 7.234114 | set dat=$g(^ListData(id)) |
| Test.1 | 78 | 171053 | 171053 | 0.225354 | 0.225354 | set dat=$g(^ListData(id)) |
| Test.1 | 79 | 171053 | 171053 | 2.12363 | 2.12363 | set dat=$g(^StringData(id)) |

(per-block-type read and buffer counter columns omitted)
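Creating a database with the larger block size can also be scripted from the %SYS namespace. A sketch along these lines (directory path illustrative; 64 KB blocks must first be enabled and given global buffers in the instance's memory configuration, and the SYS.Database API should be checked against your version's class reference):

```
 // run in %SYS; SYS.Database creates the database file on %Save()
 set db=##class(SYS.Database).%New()
 set db.Directory="/data/db64/"
 // 64 KB blocks instead of the default 8 KB
 set db.BlockSize=65536
 set sc=db.%Save()
 if $system.Status.IsError(sc) do $system.Status.DisplayError(sc)
```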

So, with 64 KB blocks the list-global access dropped from 43.3 s to 7.2 s.

Yes, sorry, my mistake. Actually the line should be:

        set dat=$g(^TestD(id))    //dat=$lb("a","b","c","d","e")
 

compared to: 

        set dat=$g(^TEST2(id))   //dat = "a#b#c#d#e"
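The practical difference between the two layouts shows up when extracting a field: a $lb list is a length-prefixed binary structure decoded with the $list functions, while the plain string is scanned for delimiters with $piece. A minimal sketch:

```
 set lst=$listbuild("a","b","c","d","e")
 set str="a#b#c#d#e"
 // third list element -> "c"
 write $list(lst,3),!
 // third "#"-delimited piece -> "c"
 write $piece(str,"#",3),!
```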
 

Thanks to Julius's suggestion, I've run the %SYS.MONLBL analysis tool, and clearly something is going wrong when trying to get the data from a list:

| Routine | Line | GloRef | DataBlkRd | UpntBlkBuf | BpntBlkBuf | DataBlkBuf | RtnLine | Time | TotalTime | Code |
|---|---|---|---|---|---|---|---|---|---|---|
| Test.1 | 78 | 16823 | 9128 | 14129 | 14129 | 7742 | 16823 | 66.282935 | 66.282935 | set dat2=$get(^ListGlobal(id)) |
| Test.1 | 79 | 16823 | 0 | 1849 | 1849 | 16904 | 16823 | 0.062076 | 0.062076 | set dat=$get(^StringGlobal(id)) |
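For anyone wanting to reproduce the measurement: the utility can be run interactively from a terminal, and there is also a programmatic interface (a sketch; the exact Start() arguments and metric names should be checked against the %Monitor.System.LineByLine class reference for your version):

```
 // interactive: prompts for routines, metrics and processes to monitor
 do ^%SYS.MONLBL

 // or programmatically: monitor routine Test.1 in the current process
 set sc=##class(%Monitor.System.LineByLine).Start($lb("Test.1"),$lb("RtnLine","Time","TotalTime"),$lb($job))
 do ^Test  // run the code under test
 // ...collect the per-line results, then...
 do ##class(%Monitor.System.LineByLine).Stop()
```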

Thanks David!

But the problem is that what takes a long time is getting the list values, not iterating over them.

Hi Steve,

Thanks for your help. I finally got it working and forgot to come back and close the topic. The error was not in Ensemble but in the proxy server.

Thanks anyway.