I agree all other processes' time is included. But how do you decide the [16] value?
Anyway, analytics is showing 500 now, so I'd like to know how the calculations are done. I guess it's best if I go with WRC.
Pool size is 5 :-)
I've been looking for an example of one item that takes a little long, as most of them are completed in less than 0.3 seconds. This is a full trace:
I can see the component that takes long, but not as much as the average shown at the analytics :)
5.
I've seen weird numbers that make no sense. For example, for last hour three components have high values:
That's why I'd like to know how it is calculated :)
My mistake... the ODBC driver name was wrong :'( I can connect now :)
But I see that I have to rewrite the functions, because the prepare, execute, fetch, etc. attributes don't exist in pyODBC :'(
UPDATE: just by changing a few lines allowed me to work with pyODBC.
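The original functions aren't shown here, but the likely change is this: pyODBC follows the Python DB-API 2.0 pattern, where there is no explicit prepare step and fetching is done through cursor methods. A minimal sketch of that shape, using the standard-library sqlite3 module (which exposes the same DB-API surface) so it runs anywhere; with pyODBC only the connect call would differ:

```python
# DB-API 2.0 sketch: cursor.execute() prepares and runs in one call,
# and fetch is a cursor method, not an attribute on the statement.
# sqlite3 is used here only so the example is self-contained;
# with pyODBC you would swap in pyodbc.connect(conn_str).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE t (id INTEGER, name TEXT)")
cur.execute("INSERT INTO t VALUES (?, ?)", (1, "abc"))  # parameters replace prepare+bind
conn.commit()

cur.execute("SELECT id, name FROM t")
rows = cur.fetchall()   # fetchone()/fetchall() instead of a fetch attribute
print(rows)             # [(1, 'abc')]

cur.close()
conn.close()
```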
I can't remove the question, so I'll answer myself. The problem was in the routine called by the queue: a "lock" instruction seemed to break everything.
So removing that line solved the problem :-) Now I just need to know why locking a global messes the whole thing up :-O
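The original routine isn't shown, but one classic way a lock wedges a queue worker is the same execution path trying to take a non-reentrant lock it already holds. A generic sketch of that failure mode in Python's threading module (not the original ObjectScript code): a plain Lock blocks forever on re-acquisition, while an RLock allows nested acquisition by its owner.

```python
# Sketch of one way locking can hang a queue worker: the same thread
# re-acquires a non-reentrant lock it already holds.
import threading

plain = threading.Lock()

def worker_with_plain_lock():
    with plain:
        # Second acquire on the same Lock never succeeds; the timeout
        # is only here so the demo returns instead of hanging.
        return plain.acquire(timeout=0.1)

print(worker_with_plain_lock())   # False

reentrant = threading.RLock()

def worker_with_rlock():
    with reentrant:
        with reentrant:           # RLock permits nested acquisition by the owner
            return True

print(worker_with_rlock())        # True
```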
Sorry, I just saw I can do:
Index NewIndex On Test(ELEMENTS).Name
Finally we identified a couple of bottlenecks using the MONLBL utility... none of them was the message enqueueing. So we redid the production, and now it is feeding more messages to the queue.
Thank you all for your help :-)
I'm using a process that reads from a global using $order; it's not called by anything external. When I remove the call to the BusinessProcess it performs a lot of operations, but when it comes to enqueueing the call it takes several milliseconds. Actually, if I clone the BS and run both at the same time writing to the same queue (even if the queue is Ens.Actor), I reach the same number of messages as with a single one.
I mean, even working in parallel I cannot increase the number of messages put in the queue. I'm not able to write more than 60-70 per second.
The adapter is Ens.InboundAdapter, but really not using it.
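The Ensemble internals aren't shown here, but the structure described (several producers feeding one shared queue) can be sketched generically: every enqueue goes through the queue's single internal mutex, so the enqueue step itself is serialized no matter how many producers run in parallel. A minimal Python illustration of that shape:

```python
# Generic sketch, not Ensemble code: two producers, one shared queue.
# queue.Queue.put() takes the queue's internal lock, so puts from the
# two threads are interleaved one at a time rather than truly parallel.
import queue
import threading

q = queue.Queue()
N_PRODUCERS = 2
MSGS_EACH = 1000

def producer(pid):
    for i in range(MSGS_EACH):
        q.put((pid, i))   # serialized on the queue's internal mutex

threads = [threading.Thread(target=producer, args=(p,)) for p in range(N_PRODUCERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(q.qsize())   # 2000: all messages arrive, but enqueueing was serialized
```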
Hi!
Yes, that solution is good. But we need to stop the production to be able to copy the files, and we don't want to do that. We need a way to transfer the data to IRIS without stopping the current Ensemble production.
Hi Dmitry,
Thanks for your help, I've been able to compile the class and start the production.
We are using it in a legacy class meant to help with JSON handling. Anyway, it's working now :-)
Still not there in 1.3 :'(
I just installed 1.3 and it's still not there :O And InterSystems announced there won't be any enhancements from now on.
So I guess this is all we'll get.
Finally I've run some tests. I duplicated the list global, changing its values to strings, so I can compare two different globals with the same data but stored differently. The results show that accessing a list is much slower:
| Routine | Line | GloRef | upper pointer block reads | bottom pointer block reads | data block reads | directory block requests satisfied from a global | upper pointer block requests satisfied from a global buffer | bottom pointer block requests satisfied from a global buffer | data block requests satisfied from a global buffer | M commands | Time | TotalTime | Code |
| Test.1 | 78 | 171053 | 1 | 40 | 10399 | 1 | 14109 | 14070 | 165867 | 171053 | 43.32538 | 43.32538 | set dat=$g(^ListData(id)) |
| Test.1 | 78 | 171053 | 14110 | 14110 | 176266 | 0 | 0 | 0 | 0 | 171053 | 0.265694 | 0.265694 | set dat=$g(^ListData(id)) |
| Test.1 | 79 | 171053 | 1 | 23 | 5853 | 1 | 11607 | 11585 | 166642 | 171053 | 20.5958 | 20.5958 | set dat=$g(^StringData(id)) |
| Test.1 | 79 | 171053 | 11608 | 11608 | 172495 | 0 | 0 | 0 | 0 | 171053 | 0.237311 | 0.237311 | set dat=$g(^StringData(id)) |
But finally, after reading a bit of doc I found that I could improve the performance by changing the database from 8kb to 64kb. And it really worked:
| Routine | Line | GloRef | upper pointer block reads | bottom pointer block reads | data block reads | directory block requests satisfied from a global | upper pointer block requests satisfied from a global buffer | bottom pointer block requests satisfied from a global buffer | data block requests satisfied from a global buffer | M commands | Time | TotalTime | Code |
| Test.1 | 78 | 171053 | 0 | 1 | 1861 | 1 | 0 | 6642 | 169402 | 171053 | 7.234114 | 7.234114 | set dat=$g(^ListData(id)) |
| Test.1 | 78 | 171053 | | 6643 | 171263 | | | | | 171053 | 0.225354 | 0.225354 | set dat=$g(^ListData(id)) |
| Test.1 | 79 | 171053 | | | 1808 | 1 | | 6534 | 169420 | 171053 | 2.12363 | 2.12363 | set dat=$g(^StringData(id)) |
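The improvement is roughly what block-size arithmetic predicts: a bigger database block packs more nodes per block, so the same scan touches proportionally fewer data blocks and a shallower pointer tree. A back-of-envelope sketch, where the average node size is an assumption chosen only for illustration:

```python
# Back-of-envelope: how block size changes the number of data blocks
# a full scan must touch. avg_record_bytes is an assumed figure, not
# a measured one; the header allowance is likewise only indicative.
from math import ceil

records = 171053          # number of $get calls in the test above
avg_record_bytes = 300    # assumption for illustration

for block_kb in (8, 64):
    usable = block_kb * 1024 - 64          # rough allowance for a block header
    per_block = usable // avg_record_bytes
    data_blocks = ceil(records / per_block)
    print(f"{block_kb:>2} KB blocks -> ~{per_block} records/block, ~{data_blocks} data blocks")
```

With these assumed numbers, 64 KB blocks need roughly an eighth as many data blocks as 8 KB blocks, which is consistent with the drop in block reads in the tables above.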
So
Yes, sorry, my mistake. Actually the line should be:
compared to:
Thanks to Julius' suggestion, I've run the %SYS.MONLBL analysis tool, and clearly something goes wrong when trying to get the data from a list:
| Routine | Line | GloRef | DataBlkRd | UpntBlkBuf | BpntBlkBuf | DataBlkBuf | RtnLine | Time | TotalTime | Code |
| Test.1 | 78 | 16823 | 9128 | 14129 | 14129 | 7742 | 16823 | 66.282935 | 66.282935 | set dat2=$get(^ListGlobal(id)) |
| Test.1 | 79 | 16823 | 0 | 1849 | 1849 | 16904 | 16823 | 0.062076 | 0.062076 | set dat=$get(^StringGlobal(id)) |
Thanks David!
But the problem is that what takes a long time is getting the list values, not iterating over them.
Hi Steve,
Thanks for your help. I finally got it working and didn't remember to come back here and close the topic. The error was not in Ensemble but in the proxy server.
Thanks anyway.
Hi,
Yes, we have. Actually, that piece of code worked just fine before the authentication was put in place:
set st = ..Adapter.GetURL(tURL, .callResponse)
If $IsObject(callResponse)
{
}
Now, callResponse is not an object anymore :'(
Yes, I am. This is the class definition:
}