It depends on what you are trying to achieve.

Import as is, with an iterator

Class User.Test Extends (%RegisteredObject, %JSON.Adaptor)
{

Property name As %String;

ClassMethod Import()
{
  Set data = [{
    "name": "test1"
  },
  {
    "name": "test2"
  }]

  #; Iterate over the elements of the dynamic array
  Set iter = data.%GetIterator()
  While iter.%GetNext(.key, .value) {
    #; Create a new object and populate it from the current element
    Set obj = ..%New()
    Set tSC = obj.%JSONImport(value)
    Write !,obj.name
  }
}

}

Import with a wrapper object

Class User.TestList Extends (%RegisteredObject, %JSON.Adaptor)
{

Property items As list Of User.Test;

ClassMethod Import()
{
  Set data = [{
    "name": "test1"
  },
  {
    "name": "test2"
  }]

  #; Wrap the array in an object so it maps onto the items property
  Set data = {
    "items": (data)
  }

  Set list = ..%New()
  Set tSC = list.%JSONImport(data)

  #; Walk the imported list and print each item
  For {
    Set obj = list.items.GetNext(.key)
    Quit:key=""
    Write !,obj.name
  }
}

}
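
Either variant can then be tried from a terminal once the classes are compiled:

  Do ##class(User.Test).Import()
  Do ##class(User.TestList).Import()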

Jeffrey, thanks. But suppose I have only 16KB buffers configured, with a mix of 8KB databases (mostly system ones, plus CACHETEMP/IRISTEMP) and some of my application data stored in 16KB blocks. The 8KB databases will still be cached in the 16KB buffers, one block per buffer, so each 16KB buffer holds only 8KB of data. Is that correct?

So, if I wanted to separate the global buffers used by streams, I would just need to give the stream data a block size different from any other data and configure a relatively small amount of global buffer for that block size? That would be enough to make global buffer usage more efficient, at least for non-stream data, which would effectively get higher priority?
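
Roughly what I have in mind, in CPF terms (the numbers are just placeholders, and I'm assuming the globals parameter in the [config] section still lists the buffer allocation in MB per block size, in 2/4/8/16/32/64 KB order): most of the memory stays with the regular 8KB pool, and a small dedicated pool serves the 16KB stream blocks.

[config]
globals=0,0,4096,512,0,0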

I mentioned above a system with a significant amount of streams stored in the database, and I just checked how the global buffers are used there: streams take only around 6%. The system is very active, including the files. Tons of objects are created every minute, with attached files and changes to those files (yes, our users can edit MS Word files online on the fly, and we keep all the versions).
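
For anyone who wants to check this on their own system, a per-global breakdown like that 6% figure can be pulled with the standard ^GLOBUFF utility (a minimal sketch; run it in the %SYS namespace, and the exact prompts and output depend on the version):

  ; switch to %SYS and report which globals occupy the global buffer pool
  ZNspace "%SYS"
  Do ^GLOBUFF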

So I still see no reason to change it, and I still see plenty of benefits in keeping it as is.

Fragmentation is no longer an issue with SSD disks.

In any case, I agree with storing files in the database. I have a system in production with about 100TB of data, more than half of which is just files stored in the database. Some of our .dat files are, through mappings, used exclusively for streams, and we take care of them by periodically cutting them off at some point and continuing with an empty database. Mirroring means we don't have to worry too much about backups. If we had to store that amount of files as plain files on the filesystem, we would lose our minds worrying about backups and integrity.
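
As a rough sketch of the kind of setup I mean (the class and property names here are made-up examples): the file content lives in a stream property, and the stream global named in the class's storage definition (its <StreamLocation>) can be mapped to a dedicated streams-only database, which is what lets us rotate those .dat files independently.

Class App.Document Extends %Persistent
{

Property FileName As %String;

Property Version As %Integer;

/// File content kept as a global stream; the data goes to the global named in the
/// <StreamLocation> element of the storage definition, which can be mapped to a
/// separate streams-only database via a global mapping
Property Content As %Stream.GlobalBinary;

}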

LuhnMCheckSum(input) public {
  ; Compute the Luhn mod N check character over the alphabet below
  Set input = $Piece(input, "#", 1)  ; strip any check character already appended
  Set codePoints = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789/:"
  Set n = $Length(codePoints)

  Set sum = 0
  Set factor = 2
  Set len = $Length(input)
  For i = len:-1:1 {
    ; $Find returns the position after the match, so -2 gives the 0-based code point
    Set codePoint = $Find(codePoints, $Extract(input, i)) - 2
    Set addend = factor * codePoint
    ; Alternate the factor between 2 and 1 for each character
    Set factor = $Case(factor, 2: 1, : 2)
    ; Sum the base-n "digits" of the doubled value
    Set addend = (addend \ n) + (addend # n)
    Set sum = sum + addend
  }
  Set remainder = sum # n
  Set checkCodePoint = (n - remainder) # n
  Return $Extract(codePoints, checkCodePoint + 1)
}
LuhnMValidate(input) public {
  ; Expects "value#check": recompute the check character and compare
  Set checksum = $Piece(input, "#", 2)
  Set input = $Piece(input, "#")
  Return $$LuhnMCheckSum(input) = checksum
}
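
A quick way to exercise both entry points (the routine name LuhnMod below is just an assumed placeholder; adjust it to wherever these labels actually live):

  Set code = "IRIS2024"
  Set check = $$LuhnMCheckSum^LuhnMod(code)
  Write code_"#"_check,!                           ; the value with its check character appended
  Write $$LuhnMValidate^LuhnMod(code_"#"_check),!  ; prints 1 for a valid checksum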