OK. We're improving a bit.

I've changed Caché Terminal to CP850, which is the code page that Windows PowerShell uses by default:

Now, any idea how I could make the up/down keys work so I can see the command history? Also, as Robert C. mentioned… it seems that Ctrl-C quits not only the command/function being executed but Caché Terminal itself...
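(Side note, in case someone wants to double-check the code-page claim: the active code page of a Windows console session can be shown with chcp; whether 850 is the default for PowerShell depends on the system locale, so take this as a quick sanity check rather than as the Caché Terminal setting itself.)

# Show the active console code page (run inside a PowerShell session)
chcp
# Switch that session to CP850 if it is not already using it
chcp 850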

Thanks Eduard. You are right. The Test() method assumed that the ID refers to an object (world) already defined… but I like your suggestion more. I've uploaded a new release to GitHub. Now, if you execute Test() and the World doesn't exist, it creates one for you... we can also choose the character that represents alive or dead blocks and get the iteration we're seeing...

Hi Bob,

In Windows, for the key/password it works if you just define a volume where the files are. Then the call is simpler/shorter:

docker run --name iris3 --detach --publish 52773:52773 \
--volume C:\pmartinez\iris_external:/external \
--volume durable_data:/durable --env ISC_DATA_DIRECTORY=/durable/irissys \
--env ICM_SENTINEL_DIR=/durable iris3:test --key /external/iris.key \
--before "/usr/irissys/dev/Cloud/ICM/changePassword.sh /external/password.txt"

I understand that using a named volume will store the durable %SYS within the Linux VM itself, which avoids issues with the Windows file system regarding database file updates, permissions, etc.... but is there any reason why you chose to mount each file separately instead of the way I show above? In the end we only use these two files (iris.key and password.txt) once, when starting the container.
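For comparison, mounting each file separately would look roughly like this (same paths and image tag as my call above; the per-file bind mounts are just my assumption of what the original example does, so treat it as a sketch):

# Hypothetical alternative: one bind mount per file instead of mounting the whole folder as /external
docker run --name iris3 --detach --publish 52773:52773 \
--volume C:\pmartinez\iris_external\iris.key:/external/iris.key \
--volume C:\pmartinez\iris_external\password.txt:/external/password.txt \
--volume durable_data:/durable --env ISC_DATA_DIRECTORY=/durable/irissys \
--env ICM_SENTINEL_DIR=/durable iris3:test --key /external/iris.key \
--before "/usr/irissys/dev/Cloud/ICM/changePassword.sh /external/password.txt"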

I've installed PRTG on a Windows 10 system... enabled the SNMP services... configured the SNMP Service with community "public" and destination "127.0.0.1"... PRTG is able to see and graph the system statistics... OK.

Then I imported ISC-Cache.mib with the Paessler MIB Importer, OK, and did "Save for PRTG Network Monitor"... everything seems fine, but then, where is it supposed to show up? When I go to PRTG Network Monitor I cannot see anything related to Caché... no clue about the library that I supposedly just imported... The S in SNMP stands for Simple... so I'm pretty sure I'm missing something really basic here, but I don't know how to go on.
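In case it helps while troubleshooting: before blaming PRTG, I would first check that the SNMP agent on the box actually answers and exposes the Caché data. Assuming the net-snmp command-line tools are available, something like this (community and host as configured above) should return values under the InterSystems enterprise branch if everything on the Caché side is running:

# Walk the private enterprises subtree (1.3.6.1.4.1) on the local agent;
# the ISC-Cache.mib objects should appear here if the Caché subagent is registered.
snmpwalk -v 2c -c public 127.0.0.1 1.3.6.1.4.1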

I'm getting some errors when compiling the auto-generated RulesBuilder class:

Compiling class Demo.RulesBuilder.ParagraphDomain
Compiling routine Demo.RulesBuilder.ParagraphDomain.1
Compiling class Demo.RulesBuilder.ParagraphDomain.Domain
Compiling routine Demo.RulesBuilder.ParagraphDomain.Domain.1
ERROR: Demo.RulesBuilder.ParagraphDomain.Domain.cls(%CreateDictionaries+33) #1027: Error in SET command : '$zt($p($h,",",2))_": Finished creating ",tProfiles," matching profiles"' : Offset:189 [%CreateDictionaries+28^Demo.RulesBuilder.ParagraphDomain.Domain.1]
 TEXT: if pVerbose && ($g(tProfiles)) { if pAsync { set ^CacheTemp.ISC.IK.DomainBuild(+$j,"out",$i(^CacheTemp.ISC.IK.DomainBuild(+$j,"out"))) = $zt($p($h,",",2))_": Finished creating ",tProfiles," matching profiles" } else { write !,$zt($p($h,",",2)),": Finished creating ",tProfiles," matching profiles" } }
ERROR: Demo.RulesBuilder.ParagraphDomain.Domain.cls(%CreateDictionaries+2001) #1026: Invalid command : 'catch' : Offset:8 [%CreateDictionaries+1859^Demo.RulesBuilder.ParagraphDomain.Domain.1]
 TEXT: } catch (ex) {
ERROR: Demo.RulesBuilder.ParagraphDomain.Domain.cls(%CreateDictionaries+2005) #1043: QUIT argument not allowed : 'tSC' : Offset:9 [%CreateDictionaries+1863^Demo.RulesBuilder.ParagraphDomain.Domain.1]
 TEXT: quit tSC }
ERROR: Demo.RulesBuilder.ParagraphDomain.Domain.cls(%LoadExpressions) #1044: PUBLIC label not allowed : 'public' : Offset:32 [%LoadExpressions^Demo.RulesBuilder.ParagraphDomain.Domain.1]
 TEXT: %LoadExpressions(pParams) public {
ERROR: Demo.RulesBuilder.ParagraphDomain.Domain.cls(%LoadExpressions+7) #1043: QUIT argument not allowed : '}' : Offset:11 [%LoadExpressions+7^Demo.RulesBuilder.ParagraphDomain.Domain.1]
 TEXT: quit tSC }
ERROR #5123: Cannot find the entry point for method '%LoadExpressions' in routine 'Demo.RulesBuilder.ParagraphDomain.Domain.1'
6 errors detected during compilation.

Interesting article and discussion btw :-) Including this new Data Model is a clear and great enhancement to the product imho. I wonder whether performance considerations are being taken into account in the design and implementation decisions. Are there any numbers on that? How does it compare with our own SQL over relational structures? How does it compare against competitors?