The ObjectScript command
   ZBREAK %method^%CSP.Broker.1
or
   ZBREAK zmethod^%CSP.Broker.1
will allow you to set an ObjectScript debugging breakpoint at the entry of "%method" or "method" of that class.  The ObjectScript commands BREAK "S" or BREAK "S+" would then allow you to single-step through the statements in that method.  See the documentation on ObjectScript debugging for more details.

Unfortunately, the ObjectScript routine %CSP.Broker.1 on my IRIS instance is "deployed", which means the %CSP.Broker.1.int routine source code is not available, and I assume the %CSP.Broker.cls source code is also not available.  Without one of these source files you will be debugging and single-stepping blind through the ObjectScript commands.  Some InterSystems classes and routines are not "deployed", which means you can do Command-line Routine Debugging on them.  You can also modify those classes/routines after you change the IRISLIB database from read-only to read-write.

It is best that you take the advice above to contact support in the InterSystems WRC.

I am assuming your problem is that request.HttpResponse.Data.Read() is failing because you are reading the entire pdf file into an ObjectScript variable, which has a maximum supported string length of 3,641,144 characters.  You will have to read it out in smaller chunks that individually fit into an ObjectScript string.  The chunk size will be important as you pass the chunked data to $system.Encryption.Base64Encode(content): your chunk boundaries must fall on the boundaries between BASE64 encoding blocks, which means each chunk (except possibly the last) must be a multiple of 3 bytes.  The result of each Base64Encode must then be sent to some form of %Stream (probably %Stream.GlobalBinary or %Stream.FileBinary) since only a %Stream can hold a block of data larger than 3,641,144 characters.  Using a small, appropriate chunk size will limit the in-memory resources used by this conversion.
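The chunk-boundary rule is easy to check outside of IRIS.  Here is a small Python sketch (not IRIS code) showing that when every chunk except possibly the last is a multiple of 3 bytes, concatenating the per-chunk Base64 encodings gives exactly the Base64 encoding of the whole file:

```python
import base64

# Base64 encodes 3 input bytes as 4 output characters, so keeping each
# chunk (except possibly the last) a multiple of 3 bytes keeps chunk
# boundaries aligned with the Base64 encoding blocks.
data = bytes(range(256)) * 100   # stand-in for the binary pdf content
CHUNK = 3 * 8000                 # 24000 bytes: a multiple of 3

encoded_chunks = []
for i in range(0, len(data), CHUNK):
    encoded_chunks.append(base64.b64encode(data[i:i + CHUNK]))

whole = base64.b64encode(data)
# Chunked encoding equals whole-file encoding:
assert b"".join(encoded_chunks) == whole
```

If the chunk size were not a multiple of 3, each intermediate chunk would be padded with '=' characters and the concatenation would no longer decode to the original file.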

If you don't mind having the entire PDF file in memory at one time, you can use a %DynamicObject to hold and decode that base64 data.  The %Library.DynamicObject and %Library.DynamicArray class objects are usually used to represent data that was originally JSON encoded.  These Dynamic Objects exist only in memory, but you can serialize them into a JSON textual representation using the %ToJSON(output) method.  If the JSON text representation contains more than 3,641,144 characters then you should direct 'output' into some form of %Stream.

You can convert a binary pdf file into BASE64 encoding doing something like:

SET DynObj={}  ;; Creates an empty %DynamicObject
DO DynObj.%Set("pdffile",request.HttpResponse.Data,"stream")
SET Base64pdf=DynObj.%Get("pdffile",,"stream>base64")

Then Base64pdf will be a read-only, in-memory %Stream.DynamicBinary object which is encoded in BASE64.  You can use Base64pdf.Read(chunksize) to read the BASE64 out of Base64pdf in chunks that fit in ObjectScript strings.  You do not have to worry about making sure the chunksize is a multiple of 3 or a multiple of 4 or a multiple of 72.  You can also copy the data in Base64pdf into a writable %Stream.FileBinary or %Stream.GlobalBinary using the OtherStream.CopyFrom(Base64pdf) method call.

If your HttpResponse contains a BASE64 encoded pdf file instead of a binary pdf file then you can do the reverse decoding by:

SET DynObj={}
DO DynObj.%Set("pdffile",request.HttpResponse.Data,"stream<base64")
SET BinaryPDF=DynObj.%Get("pdffile",,"stream")

Then BinaryPDF is a readonly %Stream.DynamicBinary containing the decoded pdf data.  You can copy it to a %Stream.FileBinary object which can then be examined using a pdf reader application.

The %ToJSON method sends JSON text to a device or %Stream one element at a time.  Starting in IRIS 2019.1.0 it broke up long JSON strings into blocks of 5460 characters so that strings exceeding the maximum length could be sent to the device.  Make sure you are using IRIS 2019.1.0 or later if you are getting a <MAXSTRING> signal.

In a future IRIS release (now out in a preview release) a change was made such that sequences of many small items would be blocked together and sent to the device in a larger buffer.  This is being done to improve the performance of %ToJSON method when sending many small elements to a %Stream.

Although InterSystems allowed unlicensed use (with some limitations) of Caché by educational institutions very few colleges/universities took advantage of this.  The exception seemed to be universities in Russia.  When we had contests for the best Caché applications we would offer the university winner a free trip to Global Summit.  However, a recent Russian student was unable to attend because she could not get a Visa to visit the US.

Maybe IRIS with embedded Python could help increase university interest in IRIS databases.

The %Library.DynamicArray and %Library.DynamicObject class objects are stored only in process memory and are not affected by the 3,641,144-character length limit of ObjectScript strings.  But they are affected by the amount of virtual memory that the operating system will grant to a process, so avoid having many such large objects active.  These Dynamic Objects generally hold JSON format data but they can also hold ObjectScript values.  The %ToJSON(output) method can change the contents of a Dynamic Object into JSON text, and it will send that text to 'output', which can be a file, a device or a %Stream.  If you need to persist a Dynamic Object in a database then you can use %ToJSON(globalstream) to persist the JSON text in a %Stream.GlobalCharacter or %Stream.GlobalBinary class object.  There is a %FromJSON(input) which can build a new Dynamic Object from 'input', which is JSON text from a string, a file, a device or a %Stream.  In recent versions of IRIS, the %Get(key,default,type) and %Set(key,value,type) methods let you choose 'type' as some form of %Stream so you can examine and modify elements of a Dynamic Object when an element needs to be larger than 3,641,144 characters.  The default %Stream-s generated by the %Get method use the %Stream.DynamicBinary and %Stream.DynamicCharacter classes, which also store data in process memory, so you should avoid having many such large streams active at the same time.  They can be copied out to other streams which can be stored in a database, a file or a device.

A canonical numeric string in ObjectScript can have a very large number of digits.  Such strings can be compared with the ObjectScript sorts-after operator, ]], and reasonably long canonical numeric strings can be used as subscripts; such numeric subscripts are arranged in numerical order before all the subscript strings that do not have the canonical numeric format.

However, when ObjectScript does other kinds of arithmetic on a numeric string, that string is converted to an internal format, which has a restricted range and a restricted precision.  ObjectScript currently supports two internal formats.  The default format is a decimal floating-point representation with a precision of approximately 18.96 decimal digits and a maximum magnitude of about 9.2E145.  For customers doing scientific calculations or needing a larger range, ObjectScript also supports the IEEE double-precision binary floating-point representation with a precision of around 16 decimal digits and a maximum magnitude of about 1.7E308.  You get the IEEE floating-point representation, with its reduced precision but greater range, by using the $double(x) function or by doing arithmetic on a string which would convert to a numeric value beyond the range of the ObjectScript decimal floating-point representation.  When arithmetic combines ObjectScript decimal floating-point values with IEEE binary floating-point values, the decimal floating-point values are converted to IEEE binary floating-point before the arithmetic operation is performed.

Here are more picky details.

The ObjectScript decimal floating-point representation has a 64-bit signed significand with a value between -9223372036854775808 and +9223372036854775807 combined with a decimal exponent multiplier between 10**-128 and 10**127.  I.e., a 64-bit twos-complement integer significand and a signed byte as the decimal exponent.  This decimal floating-point representation can exactly represent decimal fractions like 0.1 or 0.005.
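These limits can be cross-checked with a few lines of plain Python arithmetic (a sketch, not IRIS code):

```python
import math

# The format described above: a 64-bit signed significand scaled by a
# decimal exponent between 10**-128 and 10**127.
MAX_SIGNIFICAND = 2**63 - 1              # 9223372036854775807
MAX_VALUE = MAX_SIGNIFICAND * 10**127

# 19 significand digits plus 127 exponent zeros = 146 digits,
# i.e. about 9.2E145, matching the range quoted above.
assert len(str(MAX_VALUE)) == 146
assert str(MAX_VALUE).startswith("9223372036854775807")

# A 63-bit magnitude carries log10(2**63) ~ 18.96 decimal digits,
# matching the precision quoted above.
assert abs(63 * math.log10(2) - 18.96) < 0.01
```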

The IEEE binary floating-point representation has a sign bit, an 11-bit exponent encoding, and a 52-bit significand encoding.  The significand usually encodes a 53-bit value between 1.0 and 2.0 - 2**-52, and the exponent usually encodes a power-of-two multiplier between 2**-1022 and 2**1023.  However, certain other encodings handle +0, -0, +infinity, -infinity and a large number of NaNs (Not-a-Number symbols).  There are also some encodings with fewer than 53 bits of precision for very small values in the underflow range.  IEEE 64-bit binary floating-point cannot exactly represent most decimal fractions.  The numbers $double(0.1) and $double(0.005) are approximated by values near 0.10000000000000000556 and 0.0050000000000000001041.
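The exact values behind those two approximations can be inspected with Python's decimal module (a sketch, not IRIS code), since Decimal(float) shows the exact value the nearest IEEE double actually stores:

```python
from decimal import Decimal

# The exact values stored by the IEEE doubles nearest to 0.1 and 0.005:
exact_tenth = Decimal(0.1)
exact_half_percent = Decimal(0.005)

assert str(exact_tenth).startswith("0.100000000000000005551")
assert str(exact_half_percent).startswith("0.005000000000000000104")

# By contrast, values whose fractions are powers of two are exact:
assert Decimal(0.125) == Decimal("0.125")
```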

I have written some ObjectScript code that can do add, subtract and modulo on long canonical numeric strings for use in a banking application.  However, if you are doing serious computations on high-precision values then you should use the call-in/call-out capabilities of IRIS to access external code in a language other than ObjectScript.  Python might be a good choice.  You could use canonical numeric strings as your call-in/call-out representation or you could invent a new encoding using binary string values that could be stored/fetched from an IRIS database.
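Since Python integers have arbitrary precision, the Python side of such a call-out can compute on canonical numeric strings directly.  A minimal sketch (the helper names are mine, not part of any IRIS API):

```python
# Arbitrary-precision add/subtract/modulo on canonical numeric strings,
# as could run on the Python side of an IRIS call-out.  Helper names
# are illustrative only.
def add_str(a: str, b: str) -> str:
    return str(int(a) + int(b))

def sub_str(a: str, b: str) -> str:
    return str(int(a) - int(b))

def mod_str(a: str, b: str) -> str:
    return str(int(a) % int(b))

# 40-digit operands: far beyond the ~19 digits of the decimal
# floating-point format, but exact here.
a = "9" * 40            # 40 nines, i.e. 10**40 - 1
b = "1" + "0" * 39      # 10**39
assert add_str(a, b) == "10" + "9" * 39
assert sub_str(a, b) == "8" + "9" * 39
assert mod_str(a, "7") == "3"
```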

ObjectScript was designed to do efficient movements and rearrangements of data stored in databases.  If you are doing serious computations between your database operations then a programming language other than ObjectScript will probably provide better capabilities for solving your problem.

The ObjectScript $ZDATETIME function (also known as $ZDT) has lots of options, some of which are close to what you want.  [[ Note $HOROLOG is also known as $H; $ZTIMESTAMP is aka $ZTS. ]]

$ZDT($H,3,1) gives the current local time, where character positions 1-10 contain the date you want and positions 12-19 contain the time you want.  However, character position 11 contains a blank, " ", instead of a "T".

$ZDT($ZTS,3,1) gives the current UTC time with character position 11 containing a blank.

Executing
    SET $EXTRACT(datetime,11)="T"
on your datetime result will fix the character position 11 issue.

Instead of using time format 1, you can use time formats 5 and 7 with $H.  $ZDT($H,3,5) gives local time in the format you want except that character positions 20-27 contain the local offset from UTC.  $ZDT($H,3,7) converts the local $H date-time to UTC date-time and makes character position 20 contain "Z" to indicate the "Zulu" UTC time zone.  However, if your local time zone includes a Daylight Saving Time (DST) offset change, then when DST "falls back" and a local hour is repeated, the time format 5 UTC offset or the time format 7 conversion from local to UTC will probably be incorrect during one of those repeated hours.
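The fall-back ambiguity is easy to demonstrate with Python's zoneinfo module (a sketch using the America/New_York zone, not IRIS code): the same wall-clock reading occurs twice with two different UTC offsets, so any local-to-UTC conversion has to guess which one was meant:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# On 2023-11-05 America/New_York "fell back" at 2:00 AM, so the local
# reading 01:30 occurred twice.  'fold' selects which repetition is meant.
tz = ZoneInfo("America/New_York")
first = datetime(2023, 11, 5, 1, 30, tzinfo=tz)           # still EDT
second = datetime(2023, 11, 5, 1, 30, fold=1, tzinfo=tz)  # now EST

# Same wall-clock text, two different UTC offsets:
assert first.utcoffset() != second.utcoffset()
assert str(first.utcoffset()) == "-1 day, 20:00:00"   # UTC-4 (EDT)
assert str(second.utcoffset()) == "-1 day, 19:00:00"  # UTC-5 (EST)
```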

Although the above description says $ORDER scans "down" a multidimensional global, other programmers might say it scans "sideways".  There are many different structures for databases.  E.g., there are network databases (sometimes called CODASYL databases); there are hierarchical databases (like ObjectScript multidimensional arrays/globals); there are relational databases (often accessed by the SQL language); ...

ObjectScript is based on the ANSI M language standard.  I believe that the name of the ANSI M hierarchical function $QUERY has always been $QUERY but the original name of the ANSI M hierarchical function $ORDER was formerly $NEXT.  $NEXT is very similar to $ORDER but $NEXT had problems with its starting/ending subscript values.  IRIS ObjectScript no longer documents the obsolete $NEXT function but the ObjectScript compiler still accepts programs using $NEXT for backwards compatibility.

Consider the following ObjectScript global array:

USER>ZWRITE ^g
^g("a")="a"
^g("a",1)="a1"
^g("b",1)="b1"
^g("b",1,"c")="b1c"
^g("c")="c"
^g("c","b10")="cb10"
^g("c","b2")="cb2"
^g("d",2)="d2"
^g("d",10)="d10"

Consider the following walk sideways by $ORDER along the first subscript of ^g:

USER>set s=""
USER>for { SET s=$ORDER(^g(s))  QUIT:s=""  WRITE $NAME(^g(s)),! }    
^g("a")
^g("b")
^g("c")
^g("d")

Although these 4 first-level nodes contain values below them, the $ORDER walk did not descend to deeper subscripts.  As it walked sideways, it returned the subscripts "b" and "d" even though ^g("b") and ^g("d") do not have values of their own, only values underneath them.

Now consider the walk down deeper by $QUERY through all the subscripts of ^g(...) at all subscript levels:

USER>set s="^g"
USER>for { WRITE s,!  SET s=$QUERY(@s)  QUIT:s="" }               
^g
^g("a")
^g("a",1)
^g("b",1)
^g("b",1,"c")
^g("c")
^g("c","b10")
^g("c","b2")
^g("d",2)
^g("d",10)

This walk by $QUERY was told to start at ^g and every call on $QUERY went through every subscripted node of ^g(...) that contained a value regardless of the number of subscripts needed.  However, elements ^g("b") and ^g("d") that did not contain values of their own were skipped by the $QUERY walk as it continued on to nodes with deeper subscripts that did contain values.

Also note that each $ORDER evaluation returned only a single subscript value as it walked sideways while each $QUERY evaluation returned a string containing the variable name along with all the subscript values of that particular multidimensional array node.

The ZZDUMP command is useful for debugging, especially when searching for a bug caused by an unprintable character in an ObjectScript string.  However, the most recent versions of the ZWRITE command are now usually easier to use than ZZDUMP for debugging purposes.

One of the legacy uses for ZZDUMP was to look at corrupted strings, especially when a $LIST string had become corrupted by the use of some non-$LIST compliant string operations.  Back when I first joined InterSystems the documentation of ZZDUMP included a description of some of the $LIST formats so programmers could use ZZDUMP to see if a $LIST string was corrupted.  However, now ZWRITE will use $LB(...) syntax when a string contains a valid $LIST and it will use $C(i,j,k,l,...) in other cases when a string contains unprintable characters.  This makes ZWRITE a much better command than ZZDUMP when checking to see if a $LIST string is corrupted.  ZWRITE can recognize a compressed $BIT string but I have very little experience with how compressed $BIT strings are encoded.  Also ZWRITE can provide a translation of a %Status value.  ZWRITE will change further in the future as we discover additional useful display formats.

Back when the ZZDUMP documentation described the $LIST format, that caused two types of problems:  (1) when new $LIST encodings were added there were complaints that older applications did not understand the new encodings; and (2) when the documentation described multiple encodings that could be used for the same data, it did not include the complicated details about which encodings could be generated and which could not.  In general, when new $LIST encodings were generated as a replacement for older encodings, all the $LIST functions would still accept the older, no-longer-generated encodings as well as the newer encodings for backwards compatibility.  Third-party applications that used $LIST encodings without using the corresponding $LIST functions would be broken until they were modified to support both old and new encodings.  However, when there were new $LIST encodings that were not used in all possible cases, the $LIST functions might not properly decode an encoding in cases that were never generated.  Third-party software would find that some $LIST functions would not work if that software chose a $LIST encoding that had never been generated by the InterSystems-supplied $LIST functions.  Having ZZDUMP no longer document $LIST formats resulted in a great reduction in such incompatibilities.

Publishing a full specification of the $LIST encodings complete with usage restrictions would allow ObjectScript to support programs that manipulated these encodings without being limited to using only the $LIST functions.  It would also freeze the $LIST encodings on existing data types and would not allow the specification to be extended to provide future optimizations.  On the other hand, supporting customer debugging often requires the exchange of information involving these internal specifications so we often ask customers to look at these encodings.  Discussing the encoding specifications in a debugging context is useful.  However, depending on the internal details of these specifications not changing in the future is much less useful and that is discouraged.

Actually,  the implementation of the $LIST functions has changed a lot in the past 25 years.  There have been 28 changes to the $LIST kernel code since it became part of IRIS and over 100 changes while it was part of Caché.  The changes were carefully done so that working programs that use functions of the form $LISTxxx, $Lx, and $LxS to manipulate $LIST strings will not notice the changes.  There are $LIST element encodings that are no longer generated but all the functions that examine a $LIST string will still correctly process the obsolete encodings.  New encodings were added to support new data types and to improve performance.

Code to READ compressed encodings for binary floating-point values and for Unicode strings has been available for over 5 years.  $LISTVALID in recent releases of IRIS and Caché would accept such compressed encodings even though they were not being generated.  Only the most recent releases have provided the ability to GENERATE these compressed encodings.  The $system.Process.ListFormat(n) classmethod will allow $LISTBUILD to generate these new compressed encodings.  Currently the new encodings are turned off by default.  But any user who enables these $LIST optimizations will see new encodings in their $LIST strings.

Like the $ZHEX(n) function, the %DynamicAbstractObject classes that support JSON representation can do some type conversions that were not inherited from the original ANSI M standard.  It might be slightly faster if you use an array instead of an object, to save the conversion from a keyname into an array index.

    quit [(val)].%GetTypeOf(0)="string"

I should mention that the numeric comparisons $A($LB(val),2)<3 and $A($LB(val),2)>3 are not quite legal ObjectScript since the internal specification of the $LIST string representation is subject to extension in the future.  In previous Caché and IRIS releases the $LIST code type byte for a string could be 1 or 2.  In the future, $LISTBUILD will support compressing string values and those compressed $LIST string elements will use $LIST code type values greater than 3.

It is legal to use such undocumented $LIST representation techniques when debugging new code on a particular Caché/IRIS release.  However, depending on such internal representations in a production application should be avoided since the internal representations can change in the future.

Example: some programmers have used the string equality operation, ls1=ls2, to test if two $LIST strings contain the same values.  They should have used the $LISTSAME(ls1,ls2) function call.  This use of string equality on $LIST strings has resulted in broken applications in the past when extensions were added to the internal specification of $LIST strings.

The original ANSI M (also known as ANSI MUMPS) standard did not distinguish between a canonical numeric string versus a number.  Many early implementations of this standard used strings of numeric characters as the internal representation of a number and did arithmetic on character strings instead of converting the numeric string to a binary representation and then using the hardware binary arithmetic instructions.  The result of a numeric operation would always be a character string using the canonical numeric representation.

In such an implementation there was no way to tell the difference between the character string "123" and the literal 123 since "123" was the canonical numeric string representation for that value.  For performance reasons, InterSystems ObjectScript has several different internal representations it can use for the number 123.  It can use the three-character string "123"; it can use the 32-bit binary integer 123; it can use one of several representations of 123 in ObjectScript's decimal floating-point representation, which has a 64-bit binary integer significand combined with an 8-bit power-of-10 exponent.  Writing 12300E-2 will usually be represented with the decimal floating-point representation and not with the integer representation.

The command WRITE $LISTSAME($LISTBUILD("123"),$LISTBUILD(12300E-2)) will write 1 because the string "123" and the numeric literal 12300E-2 are the same value.  However, WRITE $LISTBUILD("123")=$LISTBUILD(12300E-2) will write 0 because $LISTBUILD will generate different string encodings in this situation even though the values are considered to be identical.  Although $LISTBUILD could convert different internal representations into identical $LIST strings, it is faster just to place the internal representation into the generated binary string.  This is why you must use $LISTSAME when you want to check if two $LIST strings contain the same value.

There are a very few functions built into ObjectScript that have a behavior that depends on the internal representation of the argument.  All of these functions were inherited when InterSystems extended ObjectScript to be compatible with features of the languages implemented by other vendors who produced extended M/MUMPS systems.  The most common such function is $ZHEX(x), which turns a hex string into a decimal number and turns a decimal number into a hex string.  Thus, WRITE $ZHEX("10") will write the number 16 while WRITE $ZHEX(10) will write the string "A".  If you want to convert variable 'x' from an integer to hex then you need to write $ZHEX(+x) to make sure 'x' is using an internal numeric representation.  If you want to convert variable 'x' from hex to an integer then you need to write $ZHEX(x_"") to make sure 'x' is using the string internal representation.

One additional note:  The literal 123.0 will use an internal numeric representation so WRITE "123.0"=123.0 will WRITE 0 since a string equality comparison operator will be done between the literal string "123.0" and the canonical numeric string "123".  However, WRITE +"123.0"=123.0 will WRITE 1 because the unary plus operator converts the first operand to a numeric representation and then the string equality operator converts both numeric operands into their canonical numeric string representations making the string equality operation be the same as "123"="123".

The %Library.DynamicAbstractObject, %Library.DynamicArray and %Library.DynamicObject classes with the %ToJSON and %FromJSON methods first appeared in Caché 2016.2.  I believe that 2016.2 also included JSON constructor syntax in ObjectScript.  I.e., you could write SET jsonvar={"number":1.2, "string":"abcd",null,true} as an executable ObjectScript statement.  Any JSON constant array or object is legal.  I am not sure when JSON constructors involving ObjectScript expressions were allowed.  E.g. SET jsonvar={"number":(1+.2+variable1), "string":("abcd"_variable2),null,false}.  Newer versions of Caché have more JSON support and IRIS has even more support, including the %JSON.Adapter class.  Inheriting the %JSON.Adapter class makes it possible to use the %JSONImport/%JSONExport methods which can read/write JSON representation into/from ordinary class definitions.

In particular, an object is garbage collected when there is no way to access that object through oref values starting from an oref-type value stored in a local variable.  When an oref is converted to an integer (or to a string), that does not count as an object reference.  If you store that integer/string in a PPG and then KILL (or otherwise change) the oref-type values referring to an object, then that object will be deleted despite the fact that a PPG (or any other variable) contains an integer/string reference to the object.
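Python's object model behaves analogously (a CPython sketch, not ObjectScript): an integer derived from an object reference does not keep the object alive, just as an integerized oref does not.

```python
import weakref

class Connection:
    """Stand-in for any heap-allocated object."""

obj = Connection()
saved_id = id(obj)        # like converting an oref to an integer:
                          # just a number, not a counted reference
probe = weakref.ref(obj)  # lets us observe when the object dies

del obj                   # the last real reference is removed
# In CPython the object is collected immediately; the saved integer
# did not keep it alive:
assert probe() is None
```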

Sorry about that.  My browser only showed this part of the URL:   https://docs.intersystems.com/latest/csp/documatic/%25CSP.Documatic.cls?...  I did not notice ?... which indicated a longer URL than what I was really looking at.  The shorter URL I thought I was looking at gives the Class Reference page of the InterSystems documentation web site.

The SubclassOf query in ##class(%Dictionary.ClassDefinitionQuery), executed by an SQL SELECT command on a particular instance, does provide a way to look at the subclasses of a class on that instance.  The query is an even better way for a program that needs to do a run-time search for subclass information in machine-readable form.

The local Class Reference web page supported by a particular Caché instance is probably an easier way to find a human-readable display of the subclasses.  After clicking on a SubClasses Summary link, you will probably want to click on further links (especially the nested SubClasses Summary links).

The class reference at docs.intersystems.com will tell you about InterSystems-supplied classes but it will not tell you about user-written classes extended from system-supplied abstract classes, nor about a user class extended from other user classes.

Look at the following with your browser:  http://localhost:57772/csp/documatic/%25CSP.Documatic.cls , where localhost is the system name on which you are running Caché and 57772 is the WebServer port of your Caché instance.  (If you were running IRIS your WebServer port would look something like 52773.)

Once you are on the Class Reference web page, navigate to find the particular class in which you are interested.  Look at the SubClasses Summary section to find the classes that inherit from it.  You can click on an inherited class name to navigate to the Class Reference information of that class.

Leon has not made a return comment and his original question would be ambiguous to some people.  I often call the " character, the 'double quote' character while I often call the ' character the 'single quote' character.  Other names I might use for the " character are 'quotation character' or 'ditto mark'.  Other names for the ' character would be 'apostrophe' or 'apostrophe quotation character'.  [[Notice I am using apostrophe quotation characters in this comment.]]  I might describe "" characters as 'doubled quotation characters' or 'doubled double quotes'.

Since Leon asked to 'remove double quotes (") from a string' my first impression would be to remove all " characters from the string.  I.e., every $C(34) should be removed.  If Leon wanted to only remove 'doubled quotation characters' then he probably would have written 'remove double quotes ("") from a string'.

If your long strings are coming from JSON representation then

   Set DynObj=##class(%DynamicObject).%FromJSON(...)

will create a %Library.DynamicObject or %Library.DynamicArray object in memory containing the JSON array/object elements, where the sizes are limited only by the amount of virtual memory the platform will allocate to your process.  A string element of an object/array can have many gigabytes of characters (virtual memory permitting) and you can get the value of such a huge string element in the form of an in-memory, read-only %Stream by doing:

   Set StreamVal=DynObj.%Get(key,,"stream")

in cases where DynObj.%Get(key) would get a <MAXSTRING>.

The StreamVal (class %Stream.DynamicBinary or %Stream.DynamicCharacter) is a read-only, random-access %Stream and it shares the same buffer space as the 'key' element of the DynObj (class %Library.DynamicObject) so the in-memory %Stream does not need additional virtual memory.

You can then create a persistent object from a class in the %Stream package (%Stream.GlobalBinary, %Stream.GlobalCharacter, %Stream.FileBinary, %Stream.FileCharacter, or some other appropriate class).  You can then use the CopyFrom method to populate the persistent %Stream from the read-only, in-memory %Stream.DynamicBinary/Character.

Consider the following:

USER>set num1=1,str1="""1""",num2=2,str2="""2""",num10=10,str10="""10""",abc="abc"

USER>set (x(num1),x(str1),x(num2),x(str2),x(num10),x(str10),x(abc))=""            

USER>set i="" do { set i=$order(x(i))  q:i=""   write i,! } while 1               
1
2
10
"1"
"10"
"2"
abc

The WRITE command treats its expression arguments as ObjectScript string expressions, so any argument expression with a numeric value is converted to a string value containing characters.  Note that variables str1 and str2 are strings containing 3 characters starting with one double-quote character, ".  The str1 and str2 values sort among other strings where the first character is a double-quote character.  When you print these subscript strings, the subscripts from variables num1 and num2 print as one-character strings, the subscript from variable num10 prints as a two-character string, the subscripts from variables str1, str2 and abc print as three-character strings, and the subscript from variable str10 prints as a four-character string.

If you use ZWRITE instead of WRITE then ZWRITE will add extra quote marks, $C(int) expression syntax and other notations so that the printed textual representation does not contain any unprintable characters and you can see trailing white space.