#1 is correct

Your calculation #2 is seriously wrong.
The reason: reading the documentation for dformat -2, you see:

$ZDATETIME returns an integer specifying the count of seconds from a platform-specific origin date/time. This is the value returned by the time() library function, as defined in the ISO C Programming Language Standard. For example, on POSIX-compliant systems this value is the count of seconds from January 1, 1970 00:00:00 UTC

And that's the mistake:
your BirthDate is obviously interpreted as LOCAL time,
and therefore the difference you see reflects the time offset of your machine from UTC.

-19800 sec => -5.5 hrs
The system variable $ZTZ shows your offset from UTC in minutes => -330
My guess: your machine is running on local time in India.
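
You can verify the offset directly in Terminal (a quick sketch; the values shown assume an India time zone, UTC+5:30, as guessed above):

USER>write $ZTZ
-330
USER>write $ZTZ*60
-19800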

 

for $ZDTH:
https://docs.intersystems.com/iris20231/csp/docbook/DocBook.UI.Page.cls?KEY=RCOS_fzdatetimeh
for $ZDT:
https://docs.intersystems.com/iris20231/csp/docbook/DocBook.UI.Page.cls?KEY=RCOS_fzdatetime

USER>r x
1997-08-09 10:38:39.700000000
USER>w $ZDTH(x,3,,9)
57199,38319.7
USER>w $zdt($ZDTH(x,3,,9),3,7)
1997-08-09T08:38:39Z
USER>w $zdt($ZDTH(x,3,,9),3,7,9)
1997-08-09T08:38:39.700000000Z
USER>

On Windows, Docker Desktop not only consumes a fast-growing .vhdx
but also a lot of temp files that never get deleted or shrunk,
not even with an uninstall / reinstall.
Typically they live in:
C:\Users\<username>\AppData\Local\Temp\docker-scout\sha256
C:\Users\<username>\.docker\scout\sbom\sha256
C:\Users\<username>\AppData\Local\Temp\  *.ico, *.vhdx

In your namespace you can map not just full globals
to a different database, but also parts of a global.
This works via global subscript mapping (see the documentation for details).

IF your structure is ^HISTORY(yyyymm, ....)   [yyyymm as the first subscript,
possibly also your IDKEY?]
this is a possible way to set it up:
^HISTORY(201606) >>  201606_HIPAA.dat
^HISTORY(201607) >>  201607_HIPAA.dat

But if yyyymm is just somewhere in your data, you need to reorganize your global.
I assume this is something you have to do anyhow with your history.

ATTENTION: this mapping is totally static,
so for 120 DBs you need 120 mapping lines.
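
If you prefer to script the mappings instead of clicking through the Management Portal, here is a minimal sketch using the Config.MapGlobals configuration API (run in %SYS; the database name HIPAA201606 and the USER namespace are just assumptions here):

%SYS>set Properties("Database")="HIPAA201606"
%SYS>write ##class(Config.MapGlobals).Create("USER","HISTORY(201606)",.Properties)
1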

If not disabled, all global SETs and KILLs, and also transactions, are recorded in the JOURNAL.
There are also related search utilities available in %SYS.
There is no equivalent feature for global READs.
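
For example, a quick look with the interactive journal display and search utility (run in the %SYS namespace):

%SYS>do ^JRNDUMP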

If you just look for the fact that there was a SET or KILL at object level,
DSTIME could be an option; see this example:
https://community.intersystems.com/post/synchronize-data-dstime
It is easier to handle than the JOURNAL.
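
A minimal sketch of how DSTIME is enabled (Demo.Person is a hypothetical class; with DSTIME = "AUTO", object-level inserts, updates, and deletes are recorded in the ^OBJ.DSTIME global, keyed by class name and object id):

Class Demo.Person Extends %Persistent
{
/// record SET / KILL at object level in ^OBJ.DSTIME
Parameter DSTIME = "AUTO";

Property Name As %String;
}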


 

Just to rephrase your issue:

  • you expect a JSON array of JSON objects   [{..},{..},{..} ]
  • but you get a JSON object containing that array {"cursos": [{..},{..},{..} ]}
    ; assumption: input holds the received JSON string
    set jobj={}.%FromJSON(input)     ; parse the JSON string into a dynamic object
    set jarray=jobj.%Get("cursos")   ; content of "cursos" = [..]
    set output=jarray.%ToJSON()      ; convert back to a string
    

docu: %Library.DynamicObject
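
A quick Terminal check (the input value is made up):

USER>set input="{""cursos"":[{""id"":1},{""id"":2}]}"
USER>write {}.%FromJSON(input).%Get("cursos").%ToJSON()
[{"id":1},{"id":2}]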
 

If you take a look at the method ##class(EnsLib.HL7.Segment).getAtFromArray(...),
you see that the segment data is assembled in line 1008 of the class by  Set data=data_value
without checking the size.
So it is bound to fail with large documents such as your Base64-encoded PDF (~33% larger than the original).
Just using a reference to an externally stored file, as you suggested, should work.
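
For illustration, such unchecked concatenation dies with a <MAXSTRING> error once the limit is exceeded (a sketch, assuming the default maximum string length of 3,641,144 characters):

USER>set big=$justify("",3641144)
USER>set data=big_"x"

SET data=big_"x"
^
<MAXSTRING>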

 

BTW, the datatype %VarString is just a shortcut for %String(MAXLEN="") with a sometimes more appropriate SQLTYPE.
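
In a class definition the two are interchangeable (a sketch; Demo.Note is a hypothetical class):

Class Demo.Note Extends %Persistent
{
Property Text As %VarString;            // equivalent to:
Property Text2 As %String(MAXLEN = "");
}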
 

I found an acceptable workaround.

  • installed telnetd in the container and started it
  • mapped an external port to port 23
  • set this external port in my cube
  • started IRIS Terminal
  • BINGO!

You may raise all kinds of concerns about security and container isolation. Accepted! And ignored!
Because THIS solves my issue of visually verifying the user interface.

If you are not afraid of using basic COS functionality:
your reload method takes a lock with LOCK +^myRELOAD
and drops it on completion with LOCK -^myRELOAD.

Your check utility does the same, but with a timeout: LOCK +^myRELOAD:0.
If it fails, signaled by $TEST=0, you loop, hang around, and retry.
On success ($TEST=1) you go on, but release your successful lock immediately
so as not to block anyone else.
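
A minimal sketch of that pattern (class, method, and lock names are just placeholders):

Class Demo.Reload
{

ClassMethod Reload()
{
    lock +^myRELOAD          // block until the lock is free
    // ... perform the reload work here ...
    lock -^myRELOAD          // release on completion
}

ClassMethod WaitForReload()
{
    for {
        lock +^myRELOAD:0    // timeout 0: try once, do not wait
        if $TEST {
            lock -^myRELOAD  // success: release immediately
            return           // so we do not block anyone else
        }
        hang 1               // reload still running: wait and retry
    }
}

}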