Question
· Nov 3, 2021

How can I reset cached routines?

Hello,

I don't know if the title is accurate enough. I have legacy code that I need to optimize: a routine written in ObjectScript. It accepts 4 parameters and runs six nested FOR...$ORDER loops reading a big global.
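For context, the pattern described is something like the following sketch. The global name `^BigData`, the label, and the subscript variables are assumptions for illustration, not the actual routine; only three of the six nesting levels are shown:

```
Scan(p1,p2,p3,p4) ; hypothetical entry point with the 4 parameters
    new s1,s2,s3
    set s1=""
    for {
        set s1=$order(^BigData(s1)) quit:s1=""
        set s2=""
        for {
            set s2=$order(^BigData(s1,s2)) quit:s2=""
            set s3=""
            for {
                set s3=$order(^BigData(s1,s2,s3)) quit:s3=""
                ; ...three more nested levels in the real routine...
                ; On a cold run each new block comes from disk;
                ; on a warm run it is served from the global buffer pool.
            }
        }
    }
    quit
```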

The thing is, the first time I run the routine it takes around 60 seconds. If I run it again right away, it takes 5 seconds. If I wait around 6 to 10 minutes before running it again, it takes 60 seconds again; but if I keep running it every 1, 2, 3... minutes, it still takes only 5 seconds.

I can guess there's some caching somewhere making this happen. I've tried purging SQL queries and purging unneeded journals. I suspected those weren't related, but I don't know any other purging tools and I needed to try something.

So, am I guessing right that some garbage collector or memory management is doing the trick? Can I force it to empty/reset, so that my routine takes its 60 seconds on every call and not just once every 10 minutes? How can I find out the reset timing other than by trial and error? Or am I wrong in these assumptions?

Product version: Caché 2018.1
$ZV: Cache for Windows (x86-64) 2018.1.2 (Build 309_5) Wed Jun 12 2019 20:14:50 EDT

What you are experiencing is the effect of the global buffer pool.
The rule is to overwrite the least recently used buffer when a new one is required.
So the older a buffer is, the higher the chance it gets overwritten and must later be reloaded from disk.
Purging queries only affects code, not data.

Possible option: increase your global buffer pool (double or triple its size),
or try the approach suggested by @Julius Kavay here: https://community.intersystems.com/post/global-buffer-questions
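One way to confirm the cold-versus-warm buffer effect is to time a first run against an immediate second run using $ZHOROLOG. A minimal sketch, where `^MyRoutine` and its parameters are placeholders for the actual routine:

```
    ; Hedged sketch: compare a cold run against a warm run.
    ; ^MyRoutine and p1..p4 are placeholders, not the real names.
    set t=$zhorolog
    do ^MyRoutine(p1,p2,p3,p4)
    write "first run:  ",$zhorolog-t," s",!
    set t=$zhorolog
    do ^MyRoutine(p1,p2,p3,p4)
    write "second run: ",$zhorolog-t," s",!
```

If the second run is consistently much faster, the difference is the global buffer pool at work. The ^GLOBUFF utility in the %SYS namespace (discussed in the linked post) can also show how the buffer pool is currently being used.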

Hi Arturo,

This scenario sounds like, during the subsequent runs of the routine, you are hitting the global buffers (AKA the global buffer pool), which are used to speed up frequently accessed code and data.

Your subsequent runs will reflect any new data that is added between runs, i.e. the "latest and greatest" data. The same applies to any data that has been killed between runs.

You don't need to purge anything. You can trust that you will always be getting the data you expect.

Kind regards,
Patrick