I have to maintain a legacy module that receives as input a local array passed by reference (eg: it's used like a list).

ClassMethod Foo(ByRef ITEMS) { }

It's too risky to refactor it (eg: replace it with something else) as it's used in a LOT of places. 

In some cases I need to add an extra item at the end of that "list".
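
For illustration, a minimal sketch of appending without touching Foo's signature, assuming the "list" uses positive integer subscripts 1..n (the actual layout of ITEMS depends on the legacy module, so check that first ; MyPkg.Legacy below is a made-up class name) :

    // find the last integer subscript with a reverse $order, then append after it
    set last = +$order(ITEMS(""), -1)
    set ITEMS(last + 1) = "extra item"
    // hypothetical call site
    do ##class(MyPkg.Legacy).Foo(.ITEMS)

If the module also keeps an item count in the unsubscripted node (set ITEMS = n), that counter has to be updated as well.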

Norman W. Freeman · Nov 5, 2025

I also did the following test : create a routine with "write $ascii("♥")" inside and call it from outside (eg: Studio console). It works (so server code also works).

However I have an IRIS server where write $ascii("♥") always returns 63, even in code and Terminal. Is there a setting somewhere in the portal for UTF-8 support ?
EDIT : I found where it's being defined, it's inside NLS (National Language Settings). 

The server has Latin1 defined, while the working local station has UTF-8. You can define different tables per category : for Terminal, Process, ...

Norman W. Freeman · Nov 5, 2025

write $ascii("♥") gives me 63 (which is the question mark). I'm running this in the Studio output window (same as the OP).

Running this in terminal works !
I got the expected result, and thus the same result as you.

Norman W. Freeman · Jul 8, 2025

Thanks, this is exactly what I was looking for.
Interestingly, all idling IRISDB.exe processes (created because of the JobServers setting) are reported as using "<0.01" CPU under Process Explorer. You can indeed see the total CPU cycles counter increasing over time. This is unlike processes created using NDS / Apache (which show no CPU usage at all, unless they are actually doing something). Not sure if it's a big deal.

Norman W. Freeman · Jun 27, 2025

Do you think it makes sense to set the expansion size to something other than the default (eg: 300MB), especially knowing the TEMP database in my case ends up being 5GB at the end of every day (thus requiring many expansions throughout the day) ?

Norman W. Freeman · Jun 27, 2025

I wrote a script that calculates the database fragmentation level (between 0 and 100%). The idea is to fetch all blocks, find which global each one belongs to, and then count how many segments exist (one segment being a run of consecutive blocks belonging to the same global ; eg: in AAAADDBBAACCC there are 5 segments). It's based on Dmitry's BlocksExplorer open source project. The formula is as such :

Fragmentation % = (TotalSegments - GlobalCount) / (TotalBlocks - GlobalCount)

Blocks                      Formula           Fragmentation
AAAAAAACCBBDD (best)        (4-4) / (13-4)    0%
AAAADDBBAACCC               (5-4) / (13-4)    11%
ACADDAABBAACC               (8-4) / (13-4)    44%
ACABADADBACAC (worst)       (13-4) / (13-4)   100%
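
The segment counting itself is straightforward ; a sketch, using the AAAADDBBAACCC example from the table :

    // count runs of consecutive identical letters, eg "AAAADDBBAACCC" -> 5 segments
    set layout = "AAAADDBBAACCC"
    set segments = 0, prev = ""
    for i=1:1:$length(layout)
    {
        set ch = $extract(layout, i)
        if ch '= prev set segments = segments + 1, prev = ch
    }
    // 4 distinct globals (A,B,C,D), 13 blocks : (5-4)/(13-4)
    write $number((segments - 4) / ($length(layout) - 4) * 100, 2)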
/// usage: do ..ReadBlocks("D:\YOUR_DATABASE\")
ClassMethod ReadBlocks(path As %String)
{
    new $namespace
    znspace "%sys"
    // get total amount of blocks
    set db = ##class(SYS.Database).%OpenId(path)
    set totalblocks = db.Blocks
    set db = ""
    set blockcount = 0
    open 63:"^^"_path
    set ^TEMP("DEFRAG", "NODES", 3) = $listbuild("", 0)
    while $data(^TEMP("DEFRAG", "NODES")) = 10 // any children left ?
    {
        set blockId = ""
        for
        {
            set blockId = $order(^TEMP("DEFRAG", "NODES", blockId), 1, node)
            quit:blockId=""
            kill ^TEMP("DEFRAG", "NODES", blockId)

            set globalname = $lg(node, 1)
            set hasLong = $lg(node, 2)

            do:blockId'=0 ..ReadBlock(blockId, globalname, hasLong, .totalblocks, .blockcount)
        }
    }
    close 63
    set ^TEMP("DEFRAG","PROGRESS") = "DONE"
    do ..CalculateFragmentation()
}
ClassMethod ReadBlock(blockId As %String, globalname As %String, hasLong As %Boolean, ByRef totalblocks As %Integer, ByRef blockcount As %Integer)
{
    view blockId
    set blockType = $view(4,0,1)

    if blockType = 8 // data block
    {
        if hasLong
        {
            for N=1:1
            {
                set X = $view(N*2,-6)
                quit:X=""
                set gdview = $ascii(X)
                if $listfind($listbuild(5,7,3), gdview)
                {
                    set cnt = $piece(X,",",2)
                    set blocks = $piece(X,",",4,*)
                    for i=1:1:cnt
                    {
                        set nextBlock = $piece(X,",",3+i)

                        set ^TEMP("DEFRAG","GLOBAL",nextBlock) = globalname
                        set blockcount = blockcount + 1
                        // update progress
                        set ^TEMP("DEFRAG","PROGRESS") = $number(blockcount / totalblocks * 100, 2)
                    }
                }
            }
        }
    }
    else // block of pointers
    {
        if blockType = 9 // catalog
        {
            set nextglobal = $view(8,0,4)
            // large catalogs might span multiple blocks
            quit:$data(^TEMP("DEFRAG","GLOBAL",nextglobal))
            set:nextglobal'=0 ^TEMP("DEFRAG", "NODES", nextglobal) = $listbuild("",0) // next catalog
        }

        set haslong = hasLong // inherited from the parent block, unless overridden below
        for N=1:1
        {
            set X = $view(N-1*2+1,-6)
            quit:X=""
            set nextBlock = $view(N*2,-5)
            if blockType = 9
            {
                set globalname = X
                set haslong = 0
                if $piece($view(N*2,-6),",",1)
                {
                    set haslong = 1
                }
            }

            continue:$data(^TEMP("DEFRAG","GLOBAL",nextBlock)) // already seen ?
            set ^TEMP("DEFRAG", "NODES", nextBlock) = $listbuild(globalname, haslong)
            set ^TEMP("DEFRAG","GLOBAL",nextBlock) = globalname
            set blockcount = blockcount + 1
            set ^TEMP("DEFRAG","PROGRESS") = $number(blockcount / totalblocks * 100, 2)
        }
    }
}
ClassMethod CalculateFragmentation()
{
    set segments = 0, blocks = 0, blocktypes = 0
    kill ^TEMP("DEFRAG", "UNIQUE")

    set previousglobal = ""
    set key = ""
    for
    {
        set key = $order(^TEMP("DEFRAG","GLOBAL",key), 1, global)
        quit:key=""
        if global '= previousglobal
        {
            set previousglobal = global
            set segments = segments + 1
        }

        if '$data(^TEMP("DEFRAG", "UNIQUE", global))
        {
            set ^TEMP("DEFRAG", "UNIQUE", global) = ""
            set blocktypes = blocktypes + 1
        }

        set blocks = blocks + 1
    }
    write $number((segments - blocktypes) / (blocks - blocktypes) * 100, 2)
}

Notes : 

  • Use it at your own risk. It's not supposed to write anything to the database (it only does read operations) but I'm unfamiliar with the VIEW command and its possible caveats.
  • This might take a really long time to complete (several hours), especially if the database is huge (TB in size). Progress can be checked by reading the ^TEMP("DEFRAG","PROGRESS") node.
Norman W. Freeman · Jun 10, 2025

I have checked your project and have extracted the logic of this function (almost a 1:1 copy paste). It works for small databases (a few GB in size). I can calculate a fragmentation percentage easily (by checking consecutive blocks of the same globals). But for bigger databases (TB in size) it does not work, as it only enumerates a small percentage of all globals. It seems the global catalog is quite big and split across multiple blocks (usually it starts at block #3).
EDIT : there is a "Link Block" pointer to follow :

Norman W. Freeman · Jun 10, 2025

Thanks. I remember seeing it before, while looking for info about database internals (it was a long series of community pages you wrote). I will try to make that BlocksExplorer work and will post results.
 

Norman W. Freeman · Jun 10, 2025

The database is on a SSD/NVMe drive. The impact of random vs sequential access is smaller on SSD than on HDD, but it's not negligible. Run a CrystalDiskMark benchmark on any SSD and you will find that random access is slower than sequential access. 

This image summarizes it well : 
r/computing - SATA HDD vs SATA SSD vs SATA NVMe CrystalDiskMark results
 

Why I want to defragment the database : I found out that the length of the I/O write queue on the database drive gets quite high (up to 35). The drives holding the journals and WIJ have a much lower maximum write queue length (it never gets higher than 2) while the amount of data being written is the same (peaks of about 400MB/s). The difference is that database access is random while WIJ and journal writes are pretty much sequential.

Norman W. Freeman · May 20, 2025
  • Both systems are using Windows Server 2012 R2 Standard and Hyper-V (with very similar CPUs). 
  • Both systems are using a core license.
  • CreateGUID is not the bottleneck for sure. This is something I checked very early : removing the write to the global (while keeping CreateGUID) allows the CPU to reach 100%. The effect of using a GUID (versus an incremental ID) is to spread out the global node writes, which might affect performance. But that's not the explanation, because then both systems would be affected.

I have edited the OP to reflect those details.
I have tested this on 4 systems (all very similar), and only one behaves like that (slow DB writes).

Norman W. Freeman · May 12, 2025

FileSet does a lot of things under the hood. I found that it does several QueryOpen operations per file, due to GetFileAttributesEx calls to get file size, modified date and such. One call should be enough, but FileSet does 4 calls per file :


$ZSEARCH seems more efficient (especially if you don't need extra file info like size or date). This function is not meant to be called in a recursive context, so special care is needed :

kill FILES
set FILES($i(FILES)) = "C:\somepath\"
set key = ""
for
{
    set key = $order(FILES(key), 1, searchdir)
    quit:key=""
    set filepath = $ZSEARCH(searchdir_"*")
    while filepath '= ""
    {
        set filename = ##class(%File).GetFilename(filepath)
        if (filename '= ".") && (filename '= "..") // might exclude more folders
        {
            if ##class(%File).DirectoryExists(filepath)
            {
                set FILES($i(FILES)) = filepath_"\" // search in subfolders
            }
            else
            {
                // do something with filepath
                // ...
            }
        }

        set filepath = $ZSEARCH("")
    }
}

$ZSEARCH still does one QueryOpen operation per file (AFAIK it's not needed, since we only need the filename, which is already provided by the QueryDirectory operation happening before, using FindFirstFile), but at least it does it only once.
Based on my own measurements, it's at least 5x faster ! (your results may vary). I am looping through 12,000 files ; if you have a smaller dataset, it might not be worth the trouble.
If you need extra file attributes (like size) you can use those functions :

##class(%File).GetFileDateModified(filepath)
##class(%File).GetFileSize(filepath)

Even with those calls in place, it's still faster than FileSet.

Norman W. Freeman · May 12, 2025

I would have said that code which checks for a lock and then locks (in two steps) is usually not thread safe (there might be a race condition between the two steps), but since here it can only happen within the same process (and not two different processes) it might be OK.

Norman W. Freeman · May 12, 2025

 "protect some code for being called by multiple processes at same time" - same or multiple processes?

Both. LOCK in other programming languages (eg: C#, Java, ...) prevents any process (other processes or your own process) from entering the critical section twice. 
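
As an ObjectScript illustration (a sketch ; the global name ^MyApp("mutex") is made up), the usual pattern is an incremental LOCK with a timeout :

    lock +^MyApp("mutex"):5          // wait up to 5 seconds for the lock
    if '$test { write "busy" quit }  // $test is 0 if the timeout expired
    // ... critical section ...
    lock -^MyApp("mutex")            // release only this lock

The incremental form (+/-) matters : a plain LOCK without + releases all other locks held by the process.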

Norman W. Freeman · Nov 22, 2024

Hello, I got the same as you (4096) : 

D:\>fsutil fsinfo ntfsInfo D:
NTFS Volume Serial Number :        0x52a864f9a864dd4b
NTFS Version   :                   3.1
LFS Version    :                   2.0
Number Sectors :                   0x000000003e7be7ff
Total Clusters :                   0x0000000007cf7cff
Free Clusters  :                   0x0000000000f5785c
Total Reserved :                   0x0000000000000400
Bytes Per Sector  :                512
Bytes Per Physical Sector :        512
Bytes Per Cluster :                4096
Bytes Per FileRecord Segment    :  1024
Clusters Per FileRecord Segment :  0
Mft Valid Data Length :            0x0000000089b00000
Mft Start Lcn  :                   0x00000000000c0000
Mft2 Start Lcn :                   0x0000000000000002
Mft Zone Start :                   0x0000000006cb4d40
Mft Zone End   :                   0x0000000006cb7320
Max Device Trim Extent Count :     64
Max Device Trim Byte Count :       0x7fe00000
Max Volume Trim Extent Count :     62
Max Volume Trim Byte Count :       0x40000000
Resource Manager Identifier :     F59E5B7C-C569-11ED-B0AE-AC1F6B365CAA
Norman W. Freeman · Nov 10, 2024

I don't know if it's the same case as the OP but I got "ERROR #5001: Create Directory failed: _err" as well when the method System.OBJ.ExportUDL() is called in parallel. Looks like there is a race condition : creating a directory might fail if another process is already creating (or has just created) that directory.
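
A defensive pattern (a sketch, not what ExportUDL does internally ; the path is hypothetical) is to treat "already exists" as success and only fail when the directory is truly absent afterwards :

    set dir = "C:\export\out"
    if '##class(%File).DirectoryExists(dir)
    {
        set ok = ##class(%File).CreateDirectoryChain(dir)
        // another process may have created it between the two calls :
        // re-check existence instead of trusting the return code alone
        if ('ok) && ('##class(%File).DirectoryExists(dir)) write "create failed",!
    }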

Norman W. Freeman · Oct 28, 2024

Thanks for the suggestion. I have tried grouping the CLS files to be loaded into clusters of 256 items, each cluster then being sent to a worker (instead of a worker getting one CLS at a time). This increases the chance of a worker working exclusively on one package. In the end it takes roughly the same time. I don't want to load them by package, as packages are not balanced (some have 10 classes, some 500).

Norman W. Freeman · Oct 27, 2024

I tried that and what happens is weird : the CPU usage of the IRISDB.exe processes (4 of them used as workers) falls back to 0-1% while before it was peaking at 25% (on a 4 core machine, so 100% of the CPU was used). Despite this, it takes as much time as before, if not more. There might be some bottleneck. I don't think it's I/O, because importing MAC files is definitely faster (and they are just as big as CLS files).

Norman W. Freeman · Oct 16, 2024

Do you know if it's needed to stop the IRIS instance/service before running the installer again ?

Norman W. Freeman · Oct 9, 2024

Thanks for the clarification. I see your point. I didn't know IRIS would optimize and encode Unicode characters on 8 bits when possible (when the string fits in Latin1), as explained by Steven. It's different from UTF-8.

Norman W. Freeman · Oct 8, 2024

This is because first 0-255 characters of Unicode are same as Latin1 charset, therefore no conversion is needed.

Are you sure about that ? AFAIK it's true for the first 128 characters, but not for the ones above. Characters with accents are encoded with two bytes in UTF-8 while it's only one byte in Latin1. If it works out of the box (no conversion needed, only mounting the database back on a Unicode system), the system must be doing heavy work in the background.

EDIT : It's possible because IRIS can encode a string using 8 bits per character if that string contains only Unicode code points between 0-255 (the Latin1 charset). When that's not possible, chars are encoded with UTF-16. It's not the same as UTF-8 (which encodes chars on 1 byte for code points 0-127 and uses up to 4 bytes otherwise).
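
The difference is easy to see with an explicit conversion (a sketch) :

    set s = "é"                          // U+00E9 (233), fits in Latin1 so stored on 8 bits
    write $ascii(s),!                    // 233 : the Unicode code point
    set utf8 = $zconvert(s, "O", "UTF8") // encode to UTF-8 (output direction)
    write $length(utf8),!                // 2 : é takes two bytes in UTF-8
    write $ascii(utf8, 1), ",", $ascii(utf8, 2) // 195,169 (0xC3 0xA9)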

Norman W. Freeman · Oct 2, 2024

In other words, once the instance where the remote DBs are located has its lock table full, any other server requiring a lock on a database hosted by this instance will be in trouble, is this right ?

Eg: 
FOO and BAR database are located on an instance where the lock table is full
BAZ database is located on an instance where the lock table is almost empty
 

Application Server A lock on FOO.X denied
Application Server B lock on FOO.X denied
Application Server C lock on BAR.X denied
Application Server D lock on BAZ.X OK

Increasing gmheap : yes this might help, but if you have some dummy process that enters a loop and creates many, many locks in a short amount of time, it's only delaying the issue (it will occur at some point no matter what)

Norman W. Freeman · Aug 28, 2024

I found this command too, but it does not work, I got : 

ERROR #921: Operation requires %DB_IRISSYS:WRITE privilege
Norman W. Freeman · Aug 17, 2024

Thanks for your reply. I didn't know you could run code in the database. My guess is that in this context you can only access database globals, not routines/classes, am I right ?

Norman W. Freeman · Jun 27, 2024

The solution I found is to create a new static method that creates an instance and returns it : 

Class Foo Extends %Exception.AbstractException
{
    ClassMethod Create(arg1 As %String, arg2 As %String, arg3 As %String, arg4 As %String, arg5 As %String) As Foo
    {
        quit ..%New("some message")
    }
}

Before :

throw ##class(Foo).%New("args1", "args2", "args3", ...)

After : 

throw ##class(Foo).Create("args1", "args2", "args3", ...)
Norman W. Freeman · May 7, 2024

Classes (which are high level) are converted to INT modules. INT and MAC files are compiled into OBJ. This is binary data (similar to Java bytecode) that is interpreted by some kind of virtual machine/interpreter. A "+" sign appears in Studio when the modified date of a MAC or INT module is different from that of the related OBJ.

You can take a look at ^rMAC, ^rOBJ and ^ROUTINE globals for more details.

Norman W. Freeman · Feb 1, 2024

Thanks. I tried it and it works great most of the time. However, I got a few cases where IsUpToDate() returns 0 while the class does not show any "+" sign in Studio. I tried different values for the "type" parameter but it does not help.

The error reported is as such : 
ERROR: ^oddCOM(cls,timechanged) does not exist

I checked and indeed there is no TimeChanged or TimeCreated entry in the compiled class global. Seems Studio is happy with that.

Norman W. Freeman · Dec 28, 2023

Thanks for the code.

The pound symbol is indeed the entry U+00A3 in the Unicode table, but it's always encoded as 0xc2 0xa3 in UTF-8. See this page. In UTF-8, anything above U+007F will be encoded with 2 bytes.

When you see \xc2 inside CSP gateway logs, you have no clue whether it was originally 0xc2 or already the literal \xc2 (0x5c 0x78 0x63 0x32 in hex) because no escaping is done. Apache will instead double the backslash (\\xc2), so you know it was originally the literal \xc2 and not 0xc2. 

Norman W. Freeman · Dec 20, 2023

I would like to get back to recording the HTTPS requests sent to a server in their RAW format (which is exactly what "V9r" is doing). There are probably other ways (eg: using a packet dumper or an Apache module) but this is the only working one I have found so far. The other methods also have their own disadvantages and quirks.