Hello Mikhail, good question!

I think of block count as a rough estimate of the on-disk size of a table or index (an "SQL map"). The SQL optimizer uses it to estimate how much disk I/O could be involved in scanning that map. Block count mostly matters in proportion to other block counts (similar to ExtentSize).
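For concreteness, block counts (like ExtentSize and selectivity) are the kind of statistics gathered by Tune Table. A minimal sketch, using a hypothetical table name:

```sql
-- Recompute the table's statistics (ExtentSize, map block counts,
-- field selectivity) so the optimizer has current numbers to work with.
TUNE TABLE Sample.Person;
```

After running it, the gathered values are saved with the class definition and can be inspected there or in the Management Portal catalog details.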

I am not certain how to interpret the 2K reference in the documentation; perhaps someone else will chime in. My guess is that the actual units don't matter, and the original 2K block size is simply still used as the base unit for measuring the disk size of a storage map.

Block count can be extremely important for tables that have child tables, or that otherwise share a storage global with other classes. The row count might be relatively small, but because the global nodes are spread out across more blocks, more disk I/O is required to scan them. With block count taken into consideration, the SQL optimizer may be pushed toward an index or a different starting table.
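To make the shared-global point concrete, here is a rough sketch (hypothetical class names, assuming default parent-child storage) of how parent and child rows can interleave in one data global:

```
^Sample.InvoiceD(1)               = $lb(...)   ; parent row 1
^Sample.InvoiceD(1,"LineItems",1) = $lb(...)   ; child rows nested
^Sample.InvoiceD(1,"LineItems",2) = $lb(...)   ;   under the parent
^Sample.InvoiceD(2)               = $lb(...)   ; parent row 2
```

A scan of just the parent rows still has to read past all of the child nodes, so the parent table's master map occupies far more blocks than its own row count would suggest.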

It is possible for a query to skip the master map completely, but only if all of the fields it selects are stored in the index, in the same form as in the master map.
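For example (hypothetical table, assuming an ordinary index on Name), a query that needs no field values in their original form can be answered from the index alone:

```sql
-- Assuming: CREATE INDEX NameIdx ON Sample.Person (Name)
-- This count can be satisfied entirely from NameIdx;
-- the master map is never read.
SELECT COUNT(*) FROM Sample.Person WHERE Name %STARTSWITH 'A';
```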

Even if all fields are in an index, indexed fields are typically stored in SQLUPPER collation, which is required for SQL comparisons, while the query must return the fields in their original form for output. So for a SELECT *, the query will almost always need to go back to the master map.
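One way around this, sketched with hypothetical names, is to store the field's original (uncollated) value as data in the index, either with the Data keyword in the class definition or the WITH DATA clause in SQL:

```
/// In the class definition: the Data keyword stores the exact Name
/// value in each index entry, so a query selecting Name can be
/// satisfied from the index without visiting the master map.
Index NameIdx On Name [ Data = Name ];
```

The SQL equivalent would be along the lines of `CREATE INDEX NameIdx ON Sample.Person (Name) WITH DATA (Name)`. The index gets bigger, but queries covered by it avoid the master map entirely.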