This is an important point. Depending on the size of each node, this $order loop could be touching virtually every block in the global. If you read the 4GB test global right after setting it, you're reading from a warm buffer pool, whereas the 40GB production global is much less likely to be fully buffered—hence the more-than-10x difference in run time.
I don't have a good suggestion for making this loop itself run faster; $prefetchon might help a little. Instead, depending on why you need to perform this operation, I'd either cache the record count (which could then become a hot spot of its own) or maintain an index global (possibly a bitmap).
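To make the trade-off concrete, here is a minimal sketch of both approaches. The global names (`^MyData`, `^MyDataCount`) and the assumption that records live in top-level subscripts are hypothetical, not from your setup:

```objectscript
 // The slow path: a full $ORDER walk over ^MyData touches
 // (potentially) every block in the global to get a count.
 Set count = 0, key = ""
 For {
     Set key = $Order(^MyData(key))
     Quit:key=""
     Set count = count + 1
 }

 // The cached-count alternative: maintain ^MyDataCount at write time.
 // $INCREMENT is atomic, but a single counter node can become a
 // hot spot under heavy concurrent inserts.
 Set id = $Increment(^MyData)                ; allocate a new record id
 Set ^MyData(id) = record                    ; store the record
 Set newCount = $Increment(^MyDataCount)     ; keep the count current

 // Deletes must decrement the cached count as well:
 Kill ^MyData(id)
 Set newCount = $Increment(^MyDataCount, -1)

 // Reading the count is now a single global reference:
 Write ^MyDataCount
```

A bitmap index spreads the write load better than a single counter node, at the cost of a chunked read to total it up, so which one wins depends on your insert concurrency.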
If you're downloading from evaluation.intersystems.com, take another look. I see a radio button for "Red Hat 9" that does not appear in a screenshot that John Murray posted last week, so this may have been fixed since you last checked.
Can you elaborate on what it would mean to support Parquet? The diagram appears to have come from this post:
In that context, the query run time refers to acting on the file directly with something like Amazon Athena. Do you have in mind importing the file (which is what an IRIS user would currently do with CSV), or somehow querying the file directly?