I wrote a script that calculates the database fragmentation level (between 0 and 100%). The idea is to fetch all blocks, find which global each block belongs to, and then count how many segments exist (one segment being a run of consecutive blocks belonging to the same global; e.g. in AAAADDBBAACCC there are 5 segments). It's based on Dmitry's BlocksExplorer open source project. The formula is as follows:

Fragmentation % = (TotalSegments - GlobalCount) / (TotalBlocks - GlobalCount)
Blocks                  Formula            Fragmentation
AAAAAACCBBDDD (best)    (4-4) / (13-4)     0%
AAAADDBBAACCC           (5-4) / (13-4)     11%
ACADDAABBAACC           (8-4) / (13-4)     44%
ACABADADBACAC (worst)   (13-4) / (13-4)    100%
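
To sanity-check the formula, here is a small standalone snippet (not part of the class below) that applies it to one of the pattern strings above:

    //toy check of the formula on a pattern string
    set pattern = "AAAADDBBAACCC"
    set (segments, globals) = 0, previous = ""
    kill seen
    for i=1:1:$length(pattern)
    {
        set char = $extract(pattern, i)
        //a new segment starts whenever the global changes
        if char '= previous set segments = segments + 1, previous = char
        //count each distinct global once
        if '$data(seen(char)) set seen(char) = "", globals = globals + 1
    }
    write $number((segments - globals) / ($length(pattern) - globals) * 100, 2) //11.11

The script itself: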
    ///usage: do ..ReadBlocks("D:\YOUR_DATABASE\")
    ClassMethod ReadBlocks(path As %String)
    {
        new $namespace
        znspace "%SYS"
        //get total amount of blocks
        set db = ##class(SYS.Database).%OpenId(path)
        set totalblocks = db.Blocks
        set db = ""
        set blockcount = 0
        open 63:"^^"_path	
        set ^TEMP("DEFRAG", "NODES", 3)=$listbuild("", 0)	
        while $data(^TEMP("DEFRAG", "NODES"))=10 //any children left?
        {
            set blockId = ""
            for
            {		
                set blockId = $order(^TEMP("DEFRAG", "NODES", blockId),1,node)
                quit:blockId=""
                kill ^TEMP("DEFRAG", "NODES", blockId)
                
                set globalname = $lg(node,1)
                set hasLong = $lg(node,2)
                
                do:blockId'=0 ..ReadBlock(blockId, globalname, hasLong, .totalblocks, .blockcount)			
            }	
        }
        close 63
        set ^TEMP("DEFRAG","PROGRESS") = "DONE"
        do ..CalculateFragmentation()
    }
    ClassMethod ReadBlock(blockId As %String, globalname As %String, hasLong As %Boolean, ByRef totalblocks As %Integer, ByRef blockcount As %Integer)
    {
        view blockId //read the block into the view buffer
        set blockType=$view(4,0,1) //block type byte from the block header
        
        if blockType=8  //data block
        {
            if hasLong 
            {
                for N=1:1
                {
                    set X=$VIEW(N*2,-6)
                    quit:X=""
                    set gdview=$ascii(X)
                    if $listfind($listbuild(5,7,3),gdview) 
                    {
                        set cnt=$piece(X,",",2)
                        set blocks=$piece(X,",",4,*)
                        for i=1:1:cnt 
                        {
                            set nextBlock=$piece(X,",",3+i)
                            
                            set ^TEMP("DEFRAG","GLOBAL",nextBlock) = globalname		
                            set blockcount = blockcount + 1 //update progress
                            set ^TEMP("DEFRAG","PROGRESS") = $number(blockcount / totalblocks * 100, 2)
                        }
                    }
                }
            }
        } 
        else //block of pointers
        {		
            if blockType = 9 //catalog
            {
                set nextglobal=$view(8,0,4) //large catalogs might span multiple blocks
                quit:$data(^TEMP("DEFRAG","GLOBAL",nextglobal))
                set:nextglobal'=0 ^TEMP("DEFRAG", "NODES", nextglobal) = $listbuild("",0) //next catalog
            }
            
            for N=1:1
            {
                set X=$VIEW(N-1*2+1,-6)
                quit:X=""
                set nextBlock=$VIEW(N*2,-5)
                if blockType=9 set globalname=X
                set haslong=0
                if $piece($view(N*2,-6),",",1) 
                {
                    set haslong=1
                }
                
                continue:$data(^TEMP("DEFRAG","GLOBAL",nextBlock)) //already seen?
                set ^TEMP("DEFRAG", "NODES", nextBlock) = $listbuild(globalname,haslong)				
                set ^TEMP("DEFRAG","GLOBAL",nextBlock) = globalname	
                set blockcount = blockcount + 1
                set ^TEMP("DEFRAG","PROGRESS") = $number(blockcount / totalblocks * 100, 2)
            }
        }
    }
    ClassMethod CalculateFragmentation()
    {
        set segments = 0, blocks = 0, globalcount = 0
        kill ^TEMP("DEFRAG", "UNIQUE")
        
        set previousglobal = ""
        set key = ""
        for
        {	    
            set key = $order(^TEMP("DEFRAG","GLOBAL",key),1,global)
            quit:key=""
            if global '= previousglobal
            {	   		   	
                set previousglobal = global		   		
                set segments = segments + 1
            }
            
            if '$data(^TEMP("DEFRAG", "UNIQUE", global))	
            {
                set ^TEMP("DEFRAG", "UNIQUE", global)=""
            set globalcount = globalcount + 1
            }
            
            set blocks = blocks + 1
        }
        write $number((segments - globalcount) / (blocks - globalcount) * 100, 2)
    }

    Notes:

    • Use it at your own risk. It's not supposed to write anything to the database (it performs only read operations), but I'm unfamiliar with the VIEW command and its possible caveats.
    • This might take a really long time to complete (several hours), especially if the database is huge (TB in size). Progress can be checked by reading the ^TEMP("DEFRAG","PROGRESS") node.
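
    For example, from another terminal session (the node is set in the %SYS namespace):

    znspace "%SYS"
    write ^TEMP("DEFRAG","PROGRESS") //a percentage while running, "DONE" once finished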

    I have checked your project and extracted the logic of this function (an almost 1:1 copy/paste). It works for small databases (a few GB in size): I can calculate a fragmentation percentage easily (by checking consecutive blocks of the same global). But for bigger databases (TB in size) it does not work, as it only enumerates a small percentage of all globals. It seems the global catalog is quite big and split across multiple blocks (usually it sits at block #3).

    EDIT: there is a "Link Block" pointer to follow when the catalog spans multiple blocks (this is what the $view(8,0,4) read in ReadBlock handles).

    The database is on a SSD/NVMe drive. The impact of random access vs sequential access is smaller on SSD than on HDD, but it's not negligible. Run a CrystalDiskMark benchmark on any SSD and you will find that random access is slower than sequential access.

    This image summarizes it well: (image: SATA HDD vs SATA SSD vs SATA NVMe CrystalDiskMark results)

    Why I want to defragment the database: I found out that the length of the I/O write queue on the database drive gets quite high (up to 35). The drives holding the journals and WIJ have a much lower maximum write queue length (it never gets higher than 2) while the amount of data being written is the same (peaks are about 400 MB/s). The difference is that database writes are random access while WIJ and journal writes are pretty much sequential.

    • Both systems are using Windows Server 2012 R2 Standard and Hyper-V (with very similar CPUs).
    • Both systems are using a core license.
    • CreateGUID is not the bottleneck for sure; this is something I checked very early on. Removing the write to the global (keeping CreateGUID) allows the CPU to reach 100%. The effect of using a GUID (versus an incremental ID) is to spread out the global node writes, which might affect performance. But that's not the explanation, because then both systems should be affected (see the sketch below for the kind of test I mean).
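
    A minimal sketch of that test (the global name and loop count are hypothetical):

    //micro-benchmark: time GUID generation alone, then with the global write added
    set start = $zhorolog
    for i=1:1:1000000
    {
        set guid = $system.Util.CreateGUID()
        //set ^TEST(guid) = "" //uncomment to add the (randomly distributed) global write
    }
    write $zhorolog - start, " seconds"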

    I have edited OP to reflect those details.
    I have tested this on 4 systems (all very similar), and only one behaves like that (slow DB writes).

    FileSet does a lot of things under the hood. I found that it performs several QueryOpen operations per file, due to GetFileAttributesEx calls that fetch the file size, modified date, and so on. One call should be enough, but FileSet does 4 calls per file.

    $ZSEARCH seems more efficient (especially if you don't need extra file info like size or date). This function is not meant to be called in a recursive context, so special care is needed:

    kill FILES
    set FILES($i(FILES))="C:\somepath\"
    set key = ""
    for
    {
        set key = $order(FILES(key),1,searchdir)
        quit:key=""
        set filepath=$ZSEARCH(searchdir_"*")
        while filepath'=""
        {
            set filename = ##class(%File).GetFilename(filepath)
            if (filename '= ".") && (filename '= "..") //might exclude more folders
            {
                if ##class(%File).DirectoryExists(filepath)
                {
                set FILES($i(FILES)) = filepath_"\" //queue subfolder for a later $order pass (avoids recursive $ZSEARCH)
                }
                else
                {
                    //do something with filepath
                    //...
                }
            }
    
            set filepath=$ZSEARCH("")
        }
    }

    $ZSEARCH still does one QueryOpen operation per file (AFAIK it's not needed, since we only need the filename, which is already provided by the preceding QueryDirectory operation using FindFirstFile), but at least it does it only once.

    Based on my own measurements, it's at least 5x faster! (Your results may vary.) I am looping through 12,000 files; if you have a smaller dataset, it might not be worth the trouble.

    If you need extra file attributes (like size), you can use these functions:

    ##class(%File).GetFileDateModified(filepath)
    ##class(%File).GetFileSize(filepath)

    Even with those calls in place, it's still faster than FileSet.
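
    For example, inside the else branch of the loop above (a sketch):

    //fetch extra attributes only for the files you actually process
    set size = ##class(%File).GetFileSize(filepath)
    set modified = ##class(%File).GetFileDateModified(filepath)
    write filepath, ": ", size, " bytes, modified ", modified, !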

    Hello, I got the same as you (4096):

    D:\>fsutil fsinfo ntfsInfo D:
    NTFS Volume Serial Number :        0x52a864f9a864dd4b
    NTFS Version   :                   3.1
    LFS Version    :                   2.0
    Number Sectors :                   0x000000003e7be7ff
    Total Clusters :                   0x0000000007cf7cff
    Free Clusters  :                   0x0000000000f5785c
    Total Reserved :                   0x0000000000000400
    Bytes Per Sector  :                512
    Bytes Per Physical Sector :        512
    Bytes Per Cluster :                4096
    Bytes Per FileRecord Segment    :  1024
    Clusters Per FileRecord Segment :  0
    Mft Valid Data Length :            0x0000000089b00000
    Mft Start Lcn  :                   0x00000000000c0000
    Mft2 Start Lcn :                   0x0000000000000002
    Mft Zone Start :                   0x0000000006cb4d40
    Mft Zone End   :                   0x0000000006cb7320
    Max Device Trim Extent Count :     64
    Max Device Trim Byte Count :       0x7fe00000
    Max Volume Trim Extent Count :     62
    Max Volume Trim Byte Count :       0x40000000
    Resource Manager Identifier :     F59E5B7C-C569-11ED-B0AE-AC1F6B365CAA

    Thanks for the suggestion. I have tried grouping the CLS files to be loaded into clusters of 256 items, each cluster then being sent to a worker (instead of a worker getting one CLS at a time). This increases the chance of a worker working exclusively on one package. In the end it takes roughly the same time. I don't want to load them by package, as packages are not balanced (some have 10 classes, some 500).
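
    For reference, a minimal sketch of that batching approach using %SYSTEM.WorkMgr (the MyLoader class name is hypothetical):

    /// worker: loads a batch of class files passed as a $list
    ClassMethod LoadBatch(batch As %List) As %Status
    {
        set sc = $$$OK
        for i=1:1:$listlength(batch)
        {
            set sc = $system.OBJ.Load($list(batch, i), "ck")
            quit:$$$ISERR(sc)
        }
        quit sc
    }

    /// queue clusters of 256 files; files(1..n) holds the CLS file paths
    ClassMethod QueueBatches(ByRef files)
    {
        set queue = $system.WorkMgr.%New()
        set (batch, key) = "", count = 0
        for
        {
            set key = $order(files(key), 1, path)
            quit:key=""
            set batch = batch_$listbuild(path), count = count + 1
            if count = 256
            {
                do queue.Queue("##class(MyLoader).LoadBatch", batch)
                set batch = "", count = 0
            }
        }
        do:batch'="" queue.Queue("##class(MyLoader).LoadBatch", batch) //remainder
        do queue.WaitForComplete()
    }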

    I tried that, and what happens is weird: the CPU usage of the IRISDB.exe processes (4 of them used as workers) falls back to 0-1%, while before it was peaking at 25% each (on a 4-core machine, so 100% of the CPU was used). Despite this, it takes as much time as before, if not more. There might be some bottleneck. I don't think it's I/O, because importing MAC files is definitely faster (and they are just as big as CLS files).

    This is because the first 256 characters of Unicode (code points 0-255) are the same as the Latin-1 charset, therefore no conversion is needed.

    Are you sure about that? AFAIK it's true for the first 128 characters, but not the ones above. Characters with accents are encoded with two bytes in UTF-8 while it's only one byte in Latin-1. If it works out of the box (no conversion needed, only mounting the database back on a Unicode system), this means the system must be doing heavy work in the background.

    EDIT: It's possible because IRIS can encode a string using 8 bits per character if that string contains only Unicode code points between 0 and 255 (the Latin-1 charset). When that's not possible, characters are encoded with UTF-16. This is not the same as UTF-8 (which encodes a character on 1 byte if its code point is 0-127 and uses up to 4 bytes otherwise).
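
    A quick way to see which side of the boundary a character falls on (assuming a Unicode instance):

    write $ascii("é"),! //233: within 0-255, so the string can stay at 8 bits per character
    write $ascii("€"),! //8364: above 255, so the whole string switches to UTF-16 storage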

    In other words, once the instance where the remote DBs are located has its lock table full, any other server requiring a lock on a database hosted by this instance will be in trouble. Is this right?

    Eg:
    • FOO and BAR databases are located on an instance whose lock table is full
    • BAZ database is located on an instance whose lock table is almost empty

    • Application Server A: lock on FOO.X → denied
    • Application Server B: lock on FOO.X → denied
    • Application Server C: lock on BAR.X → denied
    • Application Server D: lock on BAZ.X → OK

    Increasing gmheap: yes, this might help, but if you have some dummy process that enters a loop and creates many, many locks in a short amount of time, it only delays the issue (it will occur at some point no matter what).
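
    For illustration, a deliberately bad loop of that kind (^FOO is hypothetical):

    //each incremental lock adds an entry to the lock table; without a matching
    //unlock, the table eventually fills no matter how large gmheap is
    for i=1:1:10000000
    {
        lock +^FOO(i)
    }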