Clones, oref reference count, and scope
Hello all, I have a question about constructing thousands of clones, and about variable scope.
In my code, I'm looping through a database, say 200k+ objects, and creating a clone of each object (we need to evaluate a modified clone of the object, but not account for what's on disk).
I see this a lot in the documentation:
" Whenever you set a variable or object property to refer to a object, its reference count is automatically incremented. When a variable stops referring to an object (if it goes out of scope, is killed, or is set to a new value), the reference count for that object is decremented. When this count goes to 0, the object is automatically destroyed (removed from memory) "
After I'm done with an object and its clone, I kill them, but over time I keep getting a &lt;STORE&gt; error. I'm wondering whether InterSystems has a Java-like garbage collector (as in "the object is automatically destroyed"), and if so, whether it keeps up with the clone construction and killing over a 200k+ loop. Is destruction immediate, or is there a garbage collector crawling around looking for unreferenced objects?
Or, am I missing killing an object?
All objects and clones are killed after each loop iteration, and the process has only a short list of variables. The reference count keeps getting higher and higher (I assume that's what's happening when an object's oref is 43234@Package.Class.cls), but I'm pretty sure I'm killing every variable that might reference an object.
Just curious about the speed of the garbage collection, and what might contribute to the high oref reference count.
Thanks,
Laura
Hi Laura,
I don't really have an answer but maybe you can include some pseudo-code or sample code for people to try to reproduce the behavior you're describing. I have an idea of what you're doing from your description but the specifics could be important.
Here is some pseudo code, redacted a little:
A Parent object has a one-to-many relationship with Child, as defined in the class definitions.
Class Parent
{
Relationship Children As Child [ Cardinality = many, Inverse = Parent ];
Property MostRecentChild As Child;
}

Class Child
{
Relationship Parent As Parent [ Cardinality = one, Inverse = Children ];
// a few other objects, both as parents and as object properties
}

set ParentID=""
for {
    // order is by parent, then by child DESC (youngest first; really, the order they were added to our database)
    set ParentID=$o(^||TEMP($j,"Parent",ParentID))  quit:ParentID=""
    // %ConstructClone(1) deep-clones everything, including the instance's relationships and object properties
    set ParentObj=##class(Parent).%OpenId(ParentID)
    set clone=ParentObj.%ConstructClone(1)
    set ChildID=""
    for {
        // also looping on count here; that part removed for brevity
        set ChildID=$o(^||TEMP($j,"Parent",ParentID,"Children",count,ChildID))  quit:ChildID=""
        set ChildObj=##class(Child).%OpenId(ChildID)
        set key=clone.Children.FindOref(ChildObj)  // SEEMS TO WORK! Should it?
        set youngest=clone.Children.GetAt(key)
        set clone.MostRecentChild=youngest
        set ok=..EvaluateClone(clone)  // do lots of work here; evaluate as if the clone has only these children
        // here, I'm removing the children from the relationship one at a time
        do clone.Children.RemoveAt(key)
        // update the clone's properties,
        // e.g. set clone.TotalAgeOfChildren = clone.TotalAgeOfChildren - youngest.Age
        kill youngest,ChildObj
    }
    kill ParentObj,clone
}
Basically, I have a temp global of the parents, and their children, in reverse "added to database" order.
A. I clone the Parent
B. set the clone's MostRecentChild to the "youngest" object (which is redundant the first time around), evaluate it with all the clone's current children
C. then remove the youngest object from the clone.
D. I get the next Child object, which according to the temp global, is the next youngest in our database. Loop from B.
I'm evaluating the Parent with fewer and fewer children.
The objects aren't really this simple: each Child is also the child of a different parent, which is also included in the deep clone, and that parent is in turn a child of yet another parent, also cloned. Do I need to kill more objects?
Assume a Parent has on average 5 children, but could have anywhere from 1 to 40. After about 25k children, I get a &lt;STORE&gt; error.
Thanks,
Laura
I'd suggest you UNSWIZZLE (roughly: remove from memory) all children once you are done with an object.
it's described here:
https://docs.intersystems.com/iris20192/csp/docbook/DocBook.UI.Page.cls?KEY=GOBJ_relationships
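Roughly, sketched against your pseudo-code (this assumes your relationship collection supports %UnSwizzleAt, as %Library.RelationshipObject does; adapt the names to your own classes):

```
// after evaluating the clone against the current child,
// release each child oref you no longer need:
set key=""
for {
    set child=clone.Children.GetNext(.key)  quit:key=""
    kill child                           // drop our local reference first
    do clone.Children.%UnSwizzleAt(key)  // keep the id, release the in-memory object
}
```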
I'll unswizzle everything I can; thanks!
$STORAGE is a special variable that contains the number of bytes of memory available to the current process. When it hits zero, you get a &lt;STORE&gt; error. You can profile your code to see which part leaks memory. Small example:
write $STORAGE
> 268326112
set i=1
write $STORAGE
> 268326104
kill i
write $STORAGE
> 268326112
Another approach is throwing, catching, and logging an exception, and checking which variables are alive in each stack frame.
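A sketch of how you might profile the loop you described: record $STORAGE at the top of each iteration and log the delta, so you can see whether memory is actually released between iterations. The global name ^||STORELOG and the loop shape here are my own invention:

```
set last=$STORAGE
for i=1:1:200000 {
    // ... open the parent, clone it, evaluate, kill ...
    if $STORAGE'=last {
        // log iteration number and bytes lost since the previous check
        set ^||STORELOG($i(count))=$lb(i,last-$STORAGE)
        set last=$STORAGE
    }
}
```

If the logged deltas stay positive iteration after iteration, something in the loop body is holding references.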
do clone.Children.RemoveAt(key) // update clone
RemoveAt returns the removed oref; maybe you need to capture and kill it.
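Something like this (a sketch; whether the explicit kill matters depends on what else still references the child):

```
// capture the oref that RemoveAt returns so it doesn't linger
set removed=clone.Children.RemoveAt(key)
kill removed  // drop this reference; the object is freed once nothing else points at it
```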
InterSystems products have an automatic garbage collector, but I'm not sure of the specifics. Calling @Dan Pasco.
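As I understand it, object cleanup is reference-counted rather than a background tracing collector, so an object should be destroyed the instant its count reaches zero. One way to observe this yourself, using the %OnClose destructor callback (Demo.RefCount is a hypothetical class, not anything from your code):

```
Class Demo.RefCount Extends %RegisteredObject
{

Method %OnClose() As %Status
{
    // runs when the reference count drops to zero
    write "destroyed",!
    quit $$$OK
}

}
```

Then, in a terminal:

```
set a=##class(Demo.RefCount).%New()
set b=a   // reference count is now 2
kill a    // count drops to 1; nothing printed
kill b    // count drops to 0; "destroyed" prints immediately
```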
I'll debug it today and check $STORAGE. I know I have references hanging around even after I kill objects, because the original Parent object and the cloned object still point to their children. That is, the code is somewhat disjointed: when I remove a child, the "Youngest" property still points to the removed child until the next loop iteration, when I point "Youngest" at a new child. At that point, though, I was hoping the previous object would be removed from memory.
Just a note: I haven't had time to log the $STORAGE values yet, but I'm thinking that's the way to track down the "leak". I did add the %UnSwizzle calls, but they didn't help enough. When I point the clone's Youngest property at a new Child, the previous youngest Child still exists in memory, and I suspect that is the problem.
I'll update this thread when I get a chance to test.
Thanks for the help,
Laura