Hi @Dmitry Maslennikov 
and you are certainly right. It was not a deliberate testing scenario; the failure just happened on its own, so I decided to ask the community whether this kind of system behavior is the norm.

Thank you for the query you provided; let me try it a bit later and share my results with you.

Hi @Evgeny Shvarov, thank you for the response. 

Let me answer this:
 "What is the business goal of the exercise? To test ZPM or to test IRIS on leakages?"

Well, neither, actually :)
I just ran into it unintentionally while developing and deploying
my own version of the ZPM download+install script:
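In essence, it boiled down to a loop like the sketch below (a minimal illustration, not my exact script: the installer URL, instance name, cycle count, and the uninstall step are the usual community-documented ones and may differ from what I actually ran):

```bash
#!/bin/sh
# Rough sketch of the cycle: download the ZPM installer once,
# then install and uninstall it repeatedly.
set -e

# ZPM installer from the community package registry
wget -q -O /tmp/zpm.xml https://pm.community.intersystems.com/packages/zpm/latest/installer

for i in 1 2 3; do
  echo "=== cycle $i: install ZPM ==="
  iris session IRIS -U %SYS <<'EOS'
Do $system.OBJ.Load("/tmp/zpm.xml","ck")
Halt
EOS

  echo "=== cycle $i: uninstall ZPM ==="
  iris session IRIS -U %SYS <<'EOS'
zpm "uninstall zpm"
Halt
EOS
done
```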

So I was busy deploying and running it, then improving it and running it again, and before each new run I uninstalled the previously installed ZPM. That way I soon broke the instance and thought something was wrong with my k8s cluster. It seemed interesting, so I decided to repeat the experiment: I recreated my IRIS container and replayed the scenario from the beginning. The result was the same every time. That is it.

Hi @Dmitry Maslennikov, thanks for your reply.

At least I have not forgotten about per-process memory:
bbsiz=100000 (100 MB) is set, as you can see in my original question.
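For reference, a minimal sketch of how that limit can be applied to a containerized IRIS instance (the merge-file path and the way it reaches the container are illustrative; my actual setup may wire it differently):

```bash
# Raise the per-process memory limit via a CPF merge file.
# ISC_CPF_MERGE_FILE is the standard IRIS container mechanism;
# the file path here is just an example.
cat > /tmp/merge.cpf <<'EOF'
[config]
bbsiz=100000
EOF
export ISC_CPF_MERGE_FILE=/tmp/merge.cpf
```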

You likely mean that I underestimate the number of processes spawned to perform the ZPM install/uninstall.

Maybe, but I doubt it. That could be the case if, say, there were 10 of them and each immediately occupied its full 100 MB, so together they would take about 1 GB. But even then, they should cause the failure right on the first ZPM install attempt. In reality they do not: the system survives at least 3 install/uninstall cycles, with memory consumption growing gradually.

Does it mean that the system fails to perform its cleanup (garbage collection, housekeeping, whatever we call it) for the processes that have finished their work and are left dangling? Or should we conclude that for each new ZPM install/uninstall cycle the system reuses the same processes created for the first cycle but fails to reclaim their memory? Either way, doesn't that mean... a memory leak again 🙄
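By the way, a quick way to check which of the two it is: list the IRIS processes and their resident memory after each install/uninstall cycle and see whether old processes linger or their memory keeps growing. The container name below is hypothetical (in k8s it would be a kubectl exec into the pod), and it assumes ps is available in the image:

```bash
# Show IRIS worker processes sorted by resident memory (RSS, in KB),
# with their elapsed time, to spot lingering or growing processes.
docker exec my-iris sh -c 'ps -eo pid,rss,etime,comm --sort=-rss | grep -E "PID|irisdb" | head -20'
```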