The %Stream.DynamicCharacter (%SDC) and %Stream.DynamicBinary (%SDB) classes are designed to hold a copy of the string data that is in an element of a %DynamicArray (%DA) or a %DynamicObject (%DO) class.  The %SDC class, and its %SDB subclass, were designed to be small and efficient even when the string in the %DA/%DO class is very large, containing many billions of characters or bytes.  The %SDC classes contain read-only data so you cannot modify the string data that they contain.  However, you can create any other object in the %Stream package of classes that supports the CopyFrom() method (such as the %Stream.TmpCharacter, %Stream.TmpBinary, %Stream.FileCharacter or %Stream.FileBinary classes) and use the CopyFrom() method to create a writable copy of the string data in the %SDC object.  If your %SDC stream contains many billions of characters then the
    DO tmpStream.CopyFrom(sdc)
call will take much longer to load the tmpStream object than it took to create the original sdc object.
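
For example, here is a minimal sketch of making a writable copy (the dao object and its key are assumed to exist):

    SET sdc = dao.%Get(key,,"stream")               ; read-only in-memory %SDC stream
    SET tmp = ##class(%Stream.TmpCharacter).%New()  ; writable temporary stream
    DO tmp.CopyFrom(sdc)                            ; copies all the data; slow when the stream is huge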

A %DA/%DO object internally contains a data structure called a Packed Vector Array (PVA).  The PVA supports rapid creation from JSON text and also rapid data lookup.  It does not support rapid data modification, although the %Set() method can be used to modify and rearrange the data in a PVA.  When a %DA/%DO contains a string element longer than a selected length (currently defaulting to about 4095 characters) then those characters are placed in a separate buffer object and a small object reference (oref) is placed in the PVA.  The buffer classes are the %DynamicString, %DynamicBinary and %DynamicBase64 classes, which are used to contain the characters and bytes of long strings.  These classes are not very interesting to ordinary ObjectScript programmers as they have no properties and no methods.  When a %SDC object is created (using something like the SET sdc=dao.%Get(key,,"stream") ObjectScript statement) then the DAOData property in the %SDC object references the buffer object so that the characters/bytes in the buffer do not need to be copied.  A buffer object can be shared by multiple %DA, %DO, %SDC and %SDB objects, which is why the buffer objects are read-only.

Upgrading from an 8-bit instance to a Unicode instance is much simpler, as you can just skip your export and import steps.  Instead, just reinstall your original IRIS kit as an update kit.  During the update, the installation will ask you:

Do you want to convert 8-bit to Unicode <No>?

Just answer Yes and the instance will be converted.

Whenever a string value in a ^global variable contains only 8-bit characters, a Unicode IRIS instance stores that string in IRIS.DAT using an 8-bit representation in order to save space.  After the update, all your existing global data items are still there and the strings are all still 8-bit.  IRIS Unicode instances use the UTF-16 Unicode encoding.  If you have any 8-bit strings encoded in UTF-8 then you can use $ZCVT to convert those UTF-8 strings to the IRIS default Unicode representation, which uses UTF-16.  Functions like $WLENGTH, $WEXTRACT, etc. do not work on UTF-8 encoded 8-bit strings but they do work on the UTF-16 encoded strings.
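
A minimal sketch of such a conversion (the sample string is an assumption):

    SET utf8str = $CHAR(195,169)_"cole"     ; the UTF-8 byte sequence for "école" as an 8-bit string
    SET wide = $ZCVT(utf8str,"I","UTF8")    ; decode into the native UTF-16 representation
    WRITE $WLENGTH(wide)                    ; 5 -- the wide-character functions now work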

Note, if you do port IRIS.DAT files between different hardware instances and you also port between big-endian hardware and little-endian hardware (e.g., AIX to Windows) then there is a documented utility for converting the IRIS.DAT files between big-endian and little-endian representation.

There is no support for automatic conversion starting from a Unicode IRIS.DAT file back to an 8-bit IRIS.DAT file.  You can imagine this working if you are very lucky and the ported Unicode IRIS.DAT files just happen to have no Unicode strings, but that will not happen with the "%SYS" namespace because the upgrade will add Unicode support to that namespace, which will include some Unicode strings.  If there are only a few, easily found Unicode strings then you can use $ZCVT to convert UTF-16 to 8-bit UTF-8.  If you are so lucky that you can do those conversions to completely remove all UTF-16 strings from an IRIS Unicode instance then you can try to install a new 8-bit instance, keep the IRISSYS and IRISLIB databases, and replace the other database files with IRIS.DAT files that now just contain 8-bit user string data.  If you fail to convert all the Unicode strings while trying to go back to an 8-bit instance then I believe you will get a <WIDE CHAR> signal if you attempt to access wide UTF-16 data.

You cannot store {{}} as it is an object reference (oref) and the translation of the oref to a %String is not useful.  You can try:

Parameter STATUSES = {{}.%ToJSON()};

and where you write 'w ..#STATUSES' you can instead write 'w [].%FromJSON(..#STATUSES)'

I assume you can also write 'w [].%FromJSON(##class(Test...).#STATUSES)' although I am not certain that will do what you desire.  [[ There is an ancient request for "DWIM" software (Do What I Mean software) to replace "DWIS" software (Do What I Said software). ]]

And of course your Parameter modification can be written as:

    d $system.OBJ.UpdateConfigParam("Test","STATUSES",{"a":10}.%ToJSON())

I don't know anything about the FhirBinary class but I do know why the %GetNext method in the %Iterator.Array or %Iterator.Object classes is giving the <MAXSTRING> signal.  Let's assume the %GetNext in question is accessing a %DynamicObject, as the %DynamicArray case is identical.  A %DynamicObject (usually created by the %FromJSON method) limits its contained string elements only by the amount of memory that can be obtained from the Operating System running the IRIS instance.  Fetching a long string element from a %DynamicObject as an ObjectScript string value will signal <MAXSTRING> when the %DynamicObject string element is too long.  However, you can fetch a long string element as a %Stream.DynamicBinary or %Stream.DynamicCharacter class object.  Such a %Stream object can contain any string that will fit into the system memory.  %Stream contents longer than the ObjectScript maximum string length can be accessed by using the Read(.len,.status) method to read smaller substrings of the %Stream contents.  Also, you can copy one class of long %Stream into a different class of long %Stream if you want to move a long string out of memory and into either a file or an ObjectScript global array.

The Class Reference documentation for the %Iterator.Object class does not directly appear in either the Class Reference webpage on an IRIS instance or the Class Reference webpage in the InterSystems network documentation.  This strikes me as a documentation bug.

Fortunately, you can see the appropriate Class Reference documentation indirectly by going to the Class Reference page for the %Library.DynamicObject class (or the %Library.DynamicArray class) and then going to the documentation for the %GetIterator method.  That documentation contains a link to the %Iterator.Object class.  Click on that link and you will go to the %Iterator.Object Class Reference page, where you can see the documentation for the %GetNext(.key,.value,.type) method.

The %GetNext method is used to iterate over the values in a %DynamicObject.  The third argument, .type, is optional.  The two-argument form, %GetNext(.key,.value), will return the key index in .key and its associated ObjectScript value in .value.  If the element value cannot be converted to an ObjectScript value then you will get a <MAXSTRING> or <MAXNUMBER> signal.  However, the three-argument form, %GetNext(.key,.value,.type), will not signal these errors.  In place of a <MAXSTRING> signal, the .type argument will contain "string" and the .value argument will contain the appropriate in-memory %Stream object value.  In place of a <MAXNUMBER> signal, the .type argument will contain "number" and .value will contain a string with the decimal representation of the numeric value.
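
Here is a minimal sketch of such an iteration, assuming obj holds a %DynamicObject that may contain oversized string elements:

    SET iter = obj.%GetIterator()
    WHILE iter.%GetNext(.key, .value, .type) {
        IF (type="string") && $ISOBJECT(value) {
            ; an oversized string arrives as an in-memory %Stream; read it in chunks
            WHILE 'value.AtEnd { SET chunk = value.Read(32000) }
        } ELSE {
            WRITE key," = ",value,!
        }
    }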

More detail of other uses for the .type argument of the %GetNext(.key,.value,.type) method call can be found on the %Iterator.Object (or %Iterator.Array) Class Reference web page.

When the Fhir.Binary class needs to iterate over the elements of a %DynamicArray/Object then it should not use the 2-argument form %GetNext(.key,.value) but should instead use the 3-argument form %GetNext(.key,.value,.type) and be prepared for when the type of the .value variable does not match the type specified by the .type argument.

%DynamicAbstractObject-s provide the fastest way to read JSON from an external location, and once the data is in memory they provide excellent random-access reading of data components.  Large JSON arrays/objects are supported.  I have done %FromJSON on a 20GB JSON file on a system with 16GB of physical memory; it required paging of memory but still gave acceptable performance.  You don't require random-access performance, and it is best to avoid using more memory than you need.  Maybe someday InterSystems can provide methods for reading JSON subobjects while remembering how each object was nested in its parent objects.

Assuming your JSON is in a file or stream and the outer array contains only elements which are arrays or objects, you can try the following (sketched in code below).  1. First read the leading '[';  2. Then use %FromJSON to read one array element and process that JSON element (the read will stop on the element's closing ']' or '}');  3. Read single characters from your stream/file, skipping white space: if that character is a ',' then go back to step 2, but if that character is ']' then you are done.
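
A minimal sketch of those three steps, assuming stream is a %Stream positioned at the start of the JSON array (error handling for malformed JSON is omitted):

    SET ch = stream.Read(1)        ; step 1: consume the leading '['
    FOR {
        ; step 2: %FromJSON parses one element and stops after its closing ']' or '}'
        SET elem = ##class(%DynamicAbstractObject).%FromJSON(stream)
        ; ... process elem here ...
        ; step 3: skip white space; ',' means another element follows, ']' means done
        FOR {
            SET ch = stream.Read(1)
            QUIT:(ch'=" ")&&(ch'=$CHAR(9))&&(ch'=$CHAR(10))&&(ch'=$CHAR(13))
        }
        QUIT:ch="]"
    }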

The documentation is not quite clear on all this.

If you have written data to a temporary file before first calling the LinkToFile(filename) method then the previously written temporary file disappears and future Read and Write method calls go to the new filename.  If you later do a LinkToFile to a different filename then you will detach from the previous filename and attach to the new filename.  But the previous file data is still available if you later attach back to the old filename.

When you do a LinkToFile(filename) when there is no existing file with that filename then a new file is created.  If your filename describes an existing file then your %Stream.FileBinary is attached to the existing file.  If the first thing you do after LinkToFile(filename) are Read() method calls (or a Rewind() followed by Read() method calls) then you will Read the existing contents of the file.  If after the LinkToFile, you instead do a MoveToEnd() method call followed by Write() method calls, then you keep the current file contents and your data is appended to the end of the existing file.  If you do a Write() immediately after a LinkToFile() (or immediately after either a Rewind() or a Read()) then the existing file data is deleted and you Write() to the beginning of the existing file.  It is possible to Read() data from an existing file by avoiding Write() method calls before you Read() the data.  It is possible to append new data to an existing file by using MoveToEnd() followed by Write() method calls.
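
A minimal sketch of the append case (the file name is an assumption):

    SET fs = ##class(%Stream.FileBinary).%New()
    DO fs.LinkToFile("/tmp/data.bin")   ; attach to the existing file (or create it)
    DO fs.MoveToEnd()                   ; keep the current contents, position at the end
    DO fs.Write("more data")            ; appended rather than overwritten
    DO fs.%Save()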

Some additional picky details:

The unary + operator is a numeric operator, so it converts its operand to a number.  If a string operand starts with a fractional number then unary + produces that fractional number (in its canonical numeric form) and throws away any unneeded trailing characters.  If you want your numeric result to be an integer then you need to throw away the fractional digits by doing an integer division by 1.  Since the integer-division operator, \, is a numeric operator, it always converts its operands to numbers, so you no longer need the unary + to do the conversion of the string to a numeric representation.  For example:

USER>w "0012.543000abc"   
0012.543000abc
USER>w +"0012.543000abc"
12.543
USER>w +"0012.543000abc"\1
12
USER>w "0012.543000abc"\1 
12
 

Internally, a %DynamicArray or %DynamicObject is usually limited only by the amount of memory that the OS is willing to allocate to the job.  Because the components of such objects are not limited in size, some very large components need to be allocated by a non-standard allocator that does not participate in the $ZSTORAGE limit.  But some components of a %DynamicObject/Array are allocated by the usual memory allocator, so $ZSTORAGE might limit their size.

Since %DynamicObject/Array classes are always stored in memory (I once built one that needed 20GB of memory), you want to limit how many large objects are active at the same time.  When you encounter a large %DynamicObject/Array you want to move its data to a file or global %Stream before you start reading in another large %DynamicObject/Array.

The %FromJSON(…) method can create a %DynamicObject/Array while reading JSON from any kind of %Stream.  The %ToJSON(…) method can write JSON text to any kind of %Stream.  A %Stream is not usually limited by the max supported length of an ObjectScript string and the %FromJSON and %ToJSON methods read/write %Stream data using buffers smaller than the max supported string length.

If you have a %DynamicObject/Array with a string component larger than the ObjectScript max string length then you can use the three-argument %Get(key,default,type) method where the 'type' argument specifies a "stream" type.  The resulting %Stream is still stored in memory, but that %Stream is read-only and will often share the memory allocation with the string component in the %DynamicObject/Array.  That in-memory %Stream can be copied into a file or global %Stream.  The three-argument %Set(key,value,type) method can create a %DynamicObject/Array component of any length when the 'type' argument specifies that the 'value' argument is a %Stream.  If the %Set 'value' argument is a read-only, in-memory %Stream created by %Get(key,default,"stream") then the component created by %Set will usually share in-memory data with its 'value' argument stream.
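
A minimal sketch of moving an oversized component out of memory and back (the obj/obj2 variables, key and file name are assumptions):

    SET big = obj.%Get("payload",,"stream")           ; read-only in-memory %Stream
    SET file = ##class(%Stream.FileCharacter).%New()
    DO file.LinkToFile("/tmp/payload.txt")
    DO file.CopyFrom(big)                             ; copy into a file %Stream
    DO file.%Save()
    DO obj2.%Set("payload",file,"stream")             ; a component of any length from a %Stream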

See the class reference documentation for %Get and %Set.  The ‘type’ argument options include the capabilities to encode to, or to decode from, BASE64 representation.  This can be useful when working with a BASE64 encoded .pdf file in a JSON object.

On Caché and IRIS, both $ZU(0) and $ZU(1) will signal <FUNCTION> because they need more than 1 argument.

On IRIS, $ZU(0,directoryName,...) with enough additional arguments will create an IRIS.DAT database file, and $ZU(1,...) will modify an existing IRIS.DAT database file.  On Caché these functions create/modify a CACHE.DAT database file.  These are legacy functions.

The Config.Databases class provides ways to modify the CPF configuration file to create and configure databases.
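
A hedged sketch of creating a database programmatically (run in the %SYS namespace; the name and directory are assumptions, and the directory must already exist):

    ZNSPACE "%SYS"
    SET db = ##class(SYS.Database).%New()          ; create the physical IRIS.DAT
    SET db.Directory = "/data/mydb/"
    SET sc = db.%Save()
    SET props("Directory") = "/data/mydb/"         ; then register it in the CPF
    SET sc = ##class(Config.Databases).Create("MYDB",.props)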

Interactively, you can also use the System Management Portal (SMP) by clicking System Administration > Configuration > System Configuration > Local Databases to find the SMP webpage that can create or modify local databases.

Or you can use the DO ^|"%SYS"|DATABASE command line utility as a way to create and configure databases.

InterSystems changed how <MAXNUMBER> is signaled during conversion from textual representation to numeric representation when support for 64-bit IEEE binary floating point was added to Caché.  Textual numbers that overflow the default ObjectScript decimal representation are converted to 64-bit IEEE binary floating point, which supports a much larger range of magnitudes (but about 3 fewer digits of decimal precision).  When a literal exceeds the magnitude of IEEE floating point, the choice of whether to signal <MAXNUMBER> depends on the run-time $SYSTEM.Process.IEEEError setting, since IEEE floating-point overflow can either signal <MAXNUMBER> or return an IEEE floating-point infinity.  When the compiler sees a numeric literal that exceeds the finite IEEE number range, the decision to signal an error is delayed until run-time execution so the current $SYSTEM.Process.IEEEError setting can be obeyed.
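
A hedged sketch of that run-time setting (the exact display of infinities may vary by version):

    DO $SYSTEM.Process.IEEEError(0)           ; 0: overflow returns an IEEE infinity
    WRITE $DOUBLE(1E300) * $DOUBLE(1E300)     ; INF
    DO $SYSTEM.Process.IEEEError(1)           ; 1: overflow signals <MAXNUMBER>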

The quoting of the slash character, "/", is optional in JSON.  When doing a $ZCVT output conversion, "O", to JSON, IRIS chooses not to quote the slash character.  That makes the output easier to read.  However, $ZCVT doing an input conversion, "I", will recognize a quoted slash character in the JSON input.  E.g.:

USER>w $zcvt("abc\/def","I","JSON")
abc/def
USER>w $zcvt("abc/def","I","JSON") 
abc/def
 

Another translation is

 Read *R:20
 ;; Test the timeout case
 If '$Test { Use 0 Write !!!,"Expired time." Quit }
 ;; Test the character "a" case
 If $c(R)="a" {
   Use 0 Write !!!,"A letter a has been read."
   Quit
 }
 ;; I added more code here to demonstrate "fall through" in the original
 ;; when neither timeout nor "a" occurs
 Use 0
 Write !,"A character other than ""a"" was read"
 Quit

 ;; Since all 3 cases execute Use 0, that one statement could instead be
 ;; placed right after the Read and the 3 copies above deleted
 

I am assuming your problem is that request.HttpResponse.Data.Read() is complaining because you are reading the entire pdf file into an ObjectScript variable, with its maximum supported string length of 3,641,144 characters.  You will have to read it out in smaller chunks that individually fit into an ObjectScript string.  The chunk size will be important as you pass the chunked data to $system.Encryption.Base64Encode(content): your chunks must end on the boundary between two BASE64 encoding blocks, so each chunk length should be a multiple of 3 bytes.  The results of each Base64Encode must then be sent to some form of %Stream (probably %Stream.GlobalBinary or %Stream.FileBinary) since only a %Stream can hold a block of data larger than 3,641,144 characters.  Using a small, appropriate chunk size will limit the in-memory resources used by this conversion.
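
A minimal sketch of that loop (24000 is an arbitrary chunk size that is a multiple of 3, so no encoding block spans a chunk boundary):

    SET src = request.HttpResponse.Data
    SET out = ##class(%Stream.GlobalBinary).%New()
    DO src.Rewind()
    WHILE 'src.AtEnd {
        SET chunk = src.Read(24000)
        DO out.Write($SYSTEM.Encryption.Base64Encode(chunk))
    }
    DO out.%Save()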

If you don't mind having the entire PDF file in memory at one time, you can use a %DynamicObject to hold and decode that base64 data.  The %Library.DynamicObject and %Library.DynamicArray class objects are usually used to represent data that was originally JSON encoded.  These Dynamic Objects exist only in memory, but you can serialize them into JSON textual representation using the %ToJSON(output) method.  But if the JSON text representation contains more than 3,641,144 characters then you had better direct 'output' into some form of %Stream.

You can convert a binary pdf file into BASE64 encoding by doing something like:

SET DynObj={}  ;; Creates an empty %DynamicObject
DO DynObj.%Set("pdffile",request.HttpResponse.Data,"stream")
SET Base64pdf=DynObj.%Get("pdffile",,"stream>base64")

Then Base64pdf will be a read-only, in-memory %Stream.DynamicBinary object whose contents are encoded in BASE64.  You can use Base64pdf.Read(chunksize) to read the BASE64 out of Base64pdf in ObjectScript-supported chunks.  You do not have to worry about making sure the chunksize is a multiple of 3 or a multiple of 4 or a multiple of 72.  You can also copy the data in Base64pdf into a writeable %Stream.FileBinary or a %Stream.GlobalBinary using the OtherStream.CopyFrom(Base64pdf) method call.

If your HttpResponse contains a BASE64 encoded pdf file instead of a binary pdf file then you can do the reverse decoding by:

SET DynObj={}
DO DynObj.%Set("pdffile",request.HttpResponse.Data,"stream<base64")
SET BinaryPDF=DynObj.%Get("pdffile",,"stream")

Then BinaryPDF is a read-only %Stream.DynamicBinary containing the decoded pdf data.  You can copy it to a %Stream.FileBinary object, which can then be examined using a pdf reader application.

A canonical numeric string in ObjectScript can have a very large number of digits.  Such a string can be sorted with the ObjectScript sorts-after operator, ]], and reasonably long canonical numeric strings can be used as subscripts; such numeric subscripts are arranged in numerical order before all the subscript strings that do not have the canonical numeric format.

However, when ObjectScript does other kinds of arithmetic on a numeric string, that string is converted to an internal format, which has a restricted range and a restricted precision.  ObjectScript currently supports two internal formats.  The default format is a decimal floating-point representation with a precision of approximately 18.96 decimal digits and a maximum magnitude of about 9.2E145.  For customers doing scientific calculations or needing a larger range, ObjectScript also supports the IEEE double-precision binary floating-point representation with a precision of around 16 decimal digits and a maximum magnitude of about 1.7E308.  You get the IEEE floating-point representation with its reduced precision but greater range by using the $double(x) function or by doing arithmetic on a string which would convert to a numeric value beyond the range of the ObjectScript decimal floating-point representation.  When arithmetic combines ObjectScript decimal floating-point values with IEEE binary floating-point values, the decimal floating-point values are converted to IEEE binary floating point before the arithmetic operation is performed.

Here are more picky details.

The ObjectScript decimal floating-point representation has a 64-bit signed significand with a value between -9223372036854775808 and +9223372036854775807 combined with a decimal exponent multiplier between 10**-128 and 10**127.  I.e., a 64-bit twos-complement integer significand and a signed byte as the decimal exponent.  This decimal floating-point representation can exactly represent decimal fractions like 0.1 or 0.005.

The IEEE binary floating-point representation has a sign bit, an 11-bit exponent encoding, and a 52-bit significand encoding.  The significand usually encodes a 53-bit range of values between 1.0 and 2.0 - 2**-52, and the exponent usually encodes a power-of-two multiplier between 2**-1022 and 2**1023.  However, certain other encodings handle +0, -0, +infinity, -infinity and a large number of NaNs (Not-a-Number symbols).  There are also some encodings with less than 53 bits of precision for very small values in the underflow range.  IEEE 64-bit binary floating-point cannot exactly represent most decimal fractions.  The numbers $double(0.1) and $double(0.005) are approximated by values near 0.10000000000000000556 and 0.0050000000000000001041.
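
A short sketch contrasting the two representations (the displayed digits may vary by version):

    WRITE 0.1 + 0.2                      ; .3 -- decimal floating-point represents these exactly
    WRITE $DOUBLE(0.1) + $DOUBLE(0.2)    ; a value near .3000000000000000444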

I have written some ObjectScript code that can do add, subtract and modulo operations on long canonical numeric strings for use in a banking application.  However, if you are doing serious computations on high-precision values then you should use the call-in/call-out capabilities of IRIS to access external code in a language other than ObjectScript.  Python might be a good choice.  You could use canonical numeric strings as your call-in/call-out representation or you could invent a new encoding using binary string values that could be stored/fetched from an IRIS data base.

ObjectScript was designed to do efficient movements and rearrangements of data stored in data bases.  If you are doing some serious computations between your data base operations then using a programming language other than ObjectScript will probably provide better capabilities for solving your problem.

The ObjectScript $ZDATETIME function (also-known-as $ZDT) contains lots of options, some of which are close to what you want.  [[ Note $HOROLOG is also-known-as $H; $ZTIMESTAMP is aka $ZTS. ]]

$ZDT($H,3,1) gives the current local time, where character positions 1-10 contain the date you want and positions 12-19 contain the time you want.  However, character position 11 contains a blank, " ", instead of a "T".

$ZDT($ZTS,3,1) gives the current UTC time with character position 11 containing a blank.

Executing
    SET $EXTRACT(datetime,11)="T"
on your datetime result will fix the character position 11 issue.
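
Putting it together, a minimal sketch (the displayed timestamp is illustrative):

    SET datetime = $ZDATETIME($HOROLOG,3,1)   ; e.g. "2024-05-01 14:30:59"
    SET $EXTRACT(datetime,11) = "T"           ; now "2024-05-01T14:30:59"
    WRITE datetime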

Instead of using time format 1, you can use time formats 5 and 7 with $H.  $ZDT($H,3,5) gives local time in the format you want except character positions 20-27 contain the local offset from UTC.  $ZDT($H,3,7) converts the local $H date-time to UTC date-time and makes character position 20 contain "Z" to indicate the "Zulu" UTC time zone.  However, if your local time-zone includes a Daylight Saving Time (DST) offset change when DST "falls back" causing a local hour to be repeated, then the time format 5 UTC offset or the time format 7 conversion from local to UTC will probably be incorrect during one of those repeated hours.

Although the above description says $ORDER scans "down" a multidimensional global, other programmers might say it scans "sideways".  There are many different structures for databases.  E.g., there are network databases (sometimes called CODASYL databases); there are hierarchical databases (like ObjectScript multidimensional arrays/globals); there are relational databases (often accessed by the SQL language); ...

ObjectScript is based on the ANSI M language standard.  I believe that the name of the ANSI M hierarchical function $QUERY has always been $QUERY but the original name of the ANSI M hierarchical function $ORDER was formerly $NEXT.  $NEXT is very similar to $ORDER but $NEXT had problems with its starting/ending subscript values.  IRIS ObjectScript no longer documents the obsolete $NEXT function but the ObjectScript compiler still accepts programs using $NEXT for backwards compatibility.

Consider the following ObjectScript global array:

USER>ZWRITE ^g
^g("a")="a"
^g("a",1)="a1"
^g("b",1)="b1"
^g("b",1,"c")="b1c"
^g("c")="c"
^g("c","b10")="cb10"
^g("c","b2")="cb2"
^g("d",2)="d2"
^g("d",10)="d10"

Consider the following walk sideways by $ORDER along the first subscript of ^g:

USER>set s=""
USER>for { SET s=$ORDER(^g(s))  QUIT:s=""  WRITE $NAME(^g(s)),! }    
^g("a")
^g("b")
^g("c")
^g("d")

Although these 4 nodes contain values below them, the $ORDER walk did not walk down to deeper subscripts.  As it walked sideways, it returned the subscripts "b" and "d" even though ^g("b") and ^g("d") did not have values of their own but only values underneath them.

Now consider the walk down deeper by $QUERY through all the subscripts of ^g(...) at all subscript levels:

USER>set s="^g"
USER>for { WRITE s,!  SET s=$QUERY(@s)  QUIT:s="" }               
^g
^g("a")
^g("a",1)
^g("b",1)
^g("b",1,"c")
^g("c")
^g("c","b10")
^g("c","b2")
^g("d",2)
^g("d",10)

This walk by $QUERY was told to start at ^g and every call on $QUERY went through every subscripted node of ^g(...) that contained a value regardless of the number of subscripts needed.  However, elements ^g("b") and ^g("d") that did not contain values of their own were skipped by the $QUERY walk as it continued on to nodes with deeper subscripts that did contain values.

Also note that each $ORDER evaluation returned only a single subscript value as it walked sideways while each $QUERY evaluation returned a string containing the variable name along with all the subscript values of that particular multidimensional array node.

You might consider using the %Library.DynamicObject and %Library.DynamicArray classes, which are built into ObjectScript.  ObjectScript supports JSON constructors for %DynamicObject, {...}, and for %DynamicArray, [...].  A %DynamicObject/Array can contain both JSON values and ObjectScript values.  There is also a %FromJSON(x) method which reads JSON objects/arrays when x is a %Stream or a file name or an ObjectScript %String containing JSON syntax.  Here is an example from a recent IRIS release:


USER>zload foo

USER>zprint
foo()    {
            set DynObj = {
            "notanumber":"28001",
            "aboolean":true,
            "anumber":12345,
            "adecimal":1.2,
            "adate":"01/01/2023",
            "adate2":"01-01-2023",
            "anull":null,
            "anull2":null,
            "anull3":null
            }
            write DynObj.%ToJSON(),!,!

            set DynObj.Bool1 = 1    ; Set without type
            do DynObj.%Set("Bool2",1,"boolean") ; Set with type
            set DynObj.old1 = "abc"   ; Set without type
            do DynObj.%Set("new1","abc","null") ; Set with type
            set DynObj.old2 = ""   ; Set without type
            do DynObj.%Set("new2","","null") ; Set with type
            write DynObj.%ToJSON()
}        

USER>do ^foo()
{"notanumber":"28001","aboolean":true,"anumber":12345,"adecimal":1.2,"adate":"01/01/2023","adate2":"01-01-2023","anull":null,"anull2":null,"anull3":null}

{"notanumber":"28001","aboolean":true,"anumber":12345,"adecimal":1.2,"adate":"01/01/2023","adate2":"01-01-2023","anull":null,"anull2":null,"anull3":null,"Bool1":1,"Bool2":true,"old1":"abc","new1":"abc","old2":"","new2":null}

You can use ordinary ObjectScript assignment to assign ObjectScript values to entries in a %DynamicObject/Array.  You can use the %Set(key,value,type) method to assign ObjectScript values into an object with an optional type parameter controlling the JSON value to which the ObjectScript value is converted.  I.e., type "boolean" converts ObjectScript numeric expressions into JSON false/true values, and type "null" converts an ObjectScript %String into a JSON string *except* that the empty string is converted to the JSON null value.  (Note:  some older implementations of %Set with type "null" only accepted "" as the value while newer implementations accept any string expression as the value.)

The JSON constructor operators in ObjectScript, besides accepting JSON literals as values, can also accept an ObjectScript expression enclosed in round parentheses. 

E.g.,  SET DynObj2 = { "Pi":3.14159, "2Pi":(2*$ZPI) }

When asking about a 'ROUTINE' you may be asking about the difference between a 'Routine', 'Procedure', 'Subroutine', 'Function', 'Label', 'Class' and 'Method'.

A 'Routine' is a source code file with a name like 'myroutine.mac'.  Source code can also be a 'Method', which is found in a 'Class' file with a name like 'myclass.cls'.

A Routine file can contain Procedures, Subroutines, Functions and Labels.  See https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCOS_usercode#GCOS_usercode_overview for some InterSystems documentation.

You can call a subroutine with the statement 'DO MySubroutine^myroutine(arg)'.  You can call a function with the expression '$$MyFunction^myroutine(arg)'.  You can call the procedure 'MyProcedure^myroutine(arg)' using either the syntax for a subroutine call or a function call, depending on whether you need the value returned by the procedure.  And you can Goto the source code following a label with the statement 'GO MyLabel^myroutine'.  If you reference a subroutine/function/procedure/label inside a Routine file (e.g. myroutine.mac) and that reference (e.g. $$MyFunction^myroutine(arg)) names something defined in the same Routine file, then you do not have to specify the ^Routine name in the call syntax (e.g. $$MyFunction(arg)).
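
A small sketch of those call syntaxes (all the names are hypothetical):

    DO MySubroutine^myroutine(42)            ; call a subroutine
    SET result = $$MyFunction^myroutine(42)  ; call a function and keep its return value
    GOTO MyLabel^myroutine                   ; transfer control to a label
    ; inside myroutine.mac itself the ^myroutine part may be omitted:
    ; DO MySubroutine(42)  SET result = $$MyFunction(42)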

The local variables used inside a subroutine or function are 'public' variables and those named variables are shared with all other subroutines and functions.  You can use the NEW command inside a subroutine or function to temporarily create new public variables without modifying previous instances of public variables with the same names.

The local variables used inside a procedure are private variables.  The private variables are only available inside the procedure.  Those names do not conflict with variables used by any caller of the procedure.  Those names are not available in any code called by the procedure, although it is possible to pass either the private variable or the value of the private variable as an argument when calling out of a procedure.  The declaration of a procedure can optionally include a list of public variables.  Names in the public list reference the public variable when they are accessed by code in the procedure.  You can use the NEW command with an argument that is a public-list variable within a procedure.  A label defined inside a procedure is also private, and it cannot be referenced by any GO command outside that procedure.
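
A minimal sketch of a procedure in a .mac routine (the names are hypothetical; 'count' is in the public list):

MyProc(x) [count] {
    NEW count           ; NEW is allowed because count is in the public list
    SET count = x + 1   ; references the (newed) public variable count
    SET temp = x * 2    ; temp is a private variable of this procedure
    QUIT count
}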

Methods defined in a class file default to being a procedure with private variables and private labels.  However, it is possible to specify that a method is a subroutine or function.  Also, a method procedure declaration can optionally include a list of public variables.

As mentioned in another Developer Community question, a recent version of IRIS would allow you to evaluate object.%Get("pdfKeyName",,"stream"), which would return to you a %Stream object containing the JSON string in question as raw characters.  Also, %Get in IRIS can support object.%Get("pdfKeyName",,"stream<base64"), which would do Base64 decoding as it creates the %Stream.  However, you said you need to stick with an older version of Caché/Ensemble which predates these %Get features.

It is possible to convert the long JSON string component to a %Stream but it will take some parsing passes.

(1) First use SET json1Obj=[ ].%FromJSON(FileOrStream) to create json1Obj containing all the elements of the original JSON coming from a File or Stream.

(2) If your pdf JSON string is nested in json1Obj then select down to the closest containing %DynamicObject that contains your pdf JSON string.  I.e., SET json2Obj=json1Obj.level1.level2 if your original JSON looks like {"level1":{"level2":{"pdfKeyName":"Very long JSON string containing pdf", ...}, ...}, ...}

(3) Create a new %Stream containing the JSON representation of that closest containing %DynamicObject.  I.e.,

   SET TempStream=##class(%Stream.TmpBinary).%New()
   DO json2Obj.%ToJSON(TempStream)

(4) Read out buffers from TempStream looking for "pdfKeyName":" .  Note that this 14-character string could span a buffer boundary.

(5) Continue reading additional buffers until you find a "-character not preceded by a \-character, or until you find a "-character preceded by an even number of \-characters.

(6) Take characters read in step (5) and pass them through $ZCVT(chars,"I","JSON",handle) to convert JSON string escape characters to binary characters and then write them to a %Stream.FileBinary (or some other Binary %Stream of your choosing.)

(7) If your JSON "pdfKeyName" element contains Base64 encoding then you will also need to use $SYSTEM.Encryption.Base64Decode(string).  Unfortunately $SYSTEM.Encryption.Base64Decode(string), unlike $ZCVT(chars,"I",trantable,handle), does not include a 'handle' argument to support the buffer boundary cases in those situations where you have broken a very large string into smaller buffers.  Therefore you should remove the whitespace characters from 'string' before calling $SYSTEM.Encryption.Base64Decode(string), and you must make sure the argument 'string' has a length which is a multiple of 4 so that a group of 4 data characters cannot cross a buffer boundary.  A sketch of this chunk alignment follows.
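
A minimal sketch of that chunk alignment (b64Stream and outStream are assumed %Stream objects holding the Base64 text and receiving the binary result):

    SET leftover = ""
    WHILE 'b64Stream.AtEnd {
        ; strip whitespace, then prepend the remainder of the previous chunk
        SET chunk = leftover_$TRANSLATE(b64Stream.Read(32000),$CHAR(13,10,9,32))
        ; hold back 0-3 trailing characters so every decode call gets a multiple of 4
        SET extra = $LENGTH(chunk) # 4
        SET leftover = $EXTRACT(chunk,$LENGTH(chunk)-extra+1,*)
        SET chunk = $EXTRACT(chunk,1,$LENGTH(chunk)-extra)
        DO outStream.Write($SYSTEM.Encryption.Base64Decode(chunk))
    }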

Again the more complete support in IRIS can do steps (2) through (7) without worrying about buffer boundaries by executing

   Set BinaryStreamTmp = json1Obj.level1.level2.%Get("pdfKeyName",,"stream<base64")

and you can then copy the BinaryStreamTmp contents to wherever you want the resulting .pdf file.