Replies:

I have checked your project and extracted the logic of this function (an almost 1:1 copy-paste). It works for small databases (a few GB in size). I can calculate a fragmentation percentage easily by checking for consecutive blocks belonging to the same global. But for bigger databases (TB in size) it does not work, as it only enumerates a small percentage of all globals. It seems the global catalog is quite big and split across multiple blocks (it usually starts at block #3).
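
The metric itself is simple. Here is a minimal sketch of it, assuming the block scan already yields the owning global of each data block in physical order (the function name and input format are mine, for illustration only):

```python
from collections import defaultdict

def fragmentation_percent(global_per_block):
    """global_per_block: owning global of each data block, in physical order.

    A "fragment" starts whenever the owning global changes between adjacent
    blocks, so a perfectly defragmented database has one run per global.
    Returns 0.0 when every global is one contiguous run, approaching 100.0
    when nearly every block starts a new fragment.
    """
    runs = defaultdict(int)    # global name -> number of contiguous runs
    totals = defaultdict(int)  # global name -> total data blocks
    prev = None
    for name in global_per_block:
        totals[name] += 1
        if name != prev:
            runs[name] += 1
        prev = name
    extra_runs = sum(r - 1 for r in runs.values())
    splittable = sum(t - 1 for t in totals.values())
    return 100.0 * extra_runs / splittable if splittable else 0.0

# e.g. fragmentation_percent(["^A", "^A", "^B", "^A"]) -> 50.0
```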

EDIT: there is a "Link Block" pointer to follow:
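
A hedged sketch of the traversal, where the block size, the catalog starting block, the pointer offset, and its encoding are all assumptions from my own poking at the file format, not documented values:

```python
import struct

BLOCK_SIZE = 8192       # assumed 8 KB blocks
CATALOG_START = 3       # the global catalog usually starts at block #3
LINK_OFFSET = 0x18      # hypothetical offset of the "Link Block" pointer,
                        # assumed little-endian uint32, 0 = end of chain

def catalog_blocks(dat_path):
    """Yield every raw global-catalog block, following the link-block chain."""
    with open(dat_path, "rb") as f:
        block_no = CATALOG_START
        seen = set()
        while block_no and block_no not in seen:  # guard against cycles
            seen.add(block_no)
            f.seek(block_no * BLOCK_SIZE)
            block = f.read(BLOCK_SIZE)
            if len(block) < BLOCK_SIZE:
                break
            yield block
            (block_no,) = struct.unpack_from("<I", block, LINK_OFFSET)
```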

The database is on an SSD/NVMe drive. The impact of random versus sequential access is smaller on SSD than on HDD, but it is not negligible. Run a CrystalDiskMark benchmark on any SSD and you will find that random access is slower than sequential access.

This image summarizes it well:
[Image: SATA HDD vs SATA SSD vs SATA NVMe CrystalDiskMark results]

Why I want to defragment the database: I found that the I/O write queue length on the database drive goes quite high (up to 35). The drives holding the journals and the WIJ have a much lower maximum write queue length (it never gets higher than 2), even though the amount of data being written is the same (the peaks are about 400 MB/s). The difference is that the database is written with random access, while the WIJ and journals are written pretty much sequentially.
