Hi @Pravin Barton, I was looking for this exact advice; however, I am having trouble debugging. I have looked at the generated INT code and I'm a bit stumped.

All our %Persistent objects are updated via object access, in CSP pages calling class methods. I utilized the example you have below, but I used set tableName = %compiledclass.NameGet() to get the table name. The properties wrote to ^test no problem, so that's not the issue.

The issue is that after updating objects through the CSP pages (i.e. the front end), all the checks for {property*C} are false (0). I see the expected operations being passed into the zFunctions, with the pChanged array set up. All our inserts and updates go through %Save(); we never %Delete().

I also included Time = After as a parameter in the trigger generator, so according to the doc, the trigger should fire at a point similar to the %OnAfterSave() method.
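For context, the shape of trigger I'm trying to generate is roughly this (a sketch with made-up class and property names, not my actual generator output; {Name*C} and {Name*N} are the standard changed-flag and new-value field references):

```objectscript
/// Hypothetical persistent class with a row/object trigger.
/// Foreach = row/object makes the trigger fire for object access (%Save) as well as SQL;
/// Time = AFTER fires it after the filing, similarly to %OnAfterSave().
Class Demo.Person Extends %Persistent
{

Property Name As %String;

Trigger LogNameChange [ Event = INSERT/UPDATE, Foreach = row/object, Time = AFTER ]
{
    // {Name*C} is 1 when Name changed in this operation; {Name*N} is the new value
    if {Name*C} {
        set ^test($increment(^test)) = {Name*N}
    }
}

}
```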

Any thoughts on what I may have missed or why this isn't working in my setup?

Another way of stating the issue: the doc says you might want to use sessions to:

  • Preserve data across REST calls — in some cases, preserving data across REST calls may be necessary to efficiently meet your business requirements.

Well, I'm not finding an easy way to do this using spec-first development, because the %session variable is undefined in the generated impl.cls, even when I make impl.cls extend %CSP.REST.

@Eduard Lebedyuk I did catch one article you wrote saying that sessions are basically only good for authentication (I assume you mean just keeping one logged into the IRIS server over the session). However, since the doc does mention preserving data, I would like to see if I can utilize that feature.

For now I've reverted to manual creation of the REST services, which gives me the %session variable to use.
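For anyone landing here later, the manual version looks roughly like this (a sketch; the class name and route are made up):

```objectscript
/// Hand-written REST dispatch class where %session is available.
Class MyApp.REST Extends %CSP.REST
{

/// UseSession = 1 keeps a CSP session alive across REST calls,
/// which is what makes %session useful for preserving data.
Parameter UseSession = 1;

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/counter" Method="GET" Call="Counter"/>
</Routes>
}

ClassMethod Counter() As %Status
{
    // %session.Data is preserved between calls in the same session
    set count = $get(%session.Data("hits")) + 1
    set %session.Data("hits") = count
    write {"hits": (count)}.%ToJSON()
    quit $$$OK
}

}
```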

Welp, this was total user error. I was trying to read a class property that didn't exist. Also, there's some good debugging advice here: Debugging Web (InterSystems Developer Community).

@Rich Taylor maybe you can help me?

I am passing in a very simple structure in the body of a post request and I'm getting an error returned in Postman saying that the Dynamic Object I'm trying to create from the stream has a property that doesn't exist.

HTTP/1.1 500 Internal Server Error
Content-Type: application/json; charset=utf-8
{
    "errors":[ {
            "code":5002,
            "domain":"%ObjectErrors",
            "error":"ERROR #5002: ObjectScript error: <PROPERTY DOES NOT EXIST>

...

Sample data: {"quote_id":2000}

ClassMethod deleteQuote(quote As %Stream.Object) As %DynamicObject
{
    s quoteObj={}.%FromJSON(quote)
    s quoteId=quoteObj."quote_id"
.....

So a few problems:

1) I can't debug this easily. I can't pass in a %Stream.Object in VS Code using the debugger, and on the command line I can't seem to %New() a %Stream.Object (my guess is that it fails because %Stream.Object is an abstract class, essentially an interface). Any tips on debugging when you're passing in a stream?
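The best idea I've had so far (a sketch, untested in my real setup; the impl class name is made up) is to build a concrete stream in the terminal and call the method directly:

```objectscript
// %Stream.Object is abstract, but any concrete stream class works for a manual test
set stream = ##class(%Stream.TmpCharacter).%New()
do stream.Write("{""quote_id"":2000}")
do stream.Rewind()

// Call the implementation method the way the dispatcher would
set result = ##class(MyApp.impl).deleteQuote(stream)
write result.%ToJSON()
```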

2) One thing I did do was save my stream data to a global, and I'm getting some hints as to what's wrong, but I'm not understanding them. If I save the result of the %FromJSON() method to the global, a ZW shows me "9430@%Library.DynamicObject" (a global can't store an object, so no wonder the oref got stored as a quoted string, but there's more to it). If instead I save the Read() of the stream to the global, then go to the command line, set that string to a variable, and run {}.%FromJSON(data), I get the expected data=<OBJECT REFERENCE>[3304@%Library.DynamicObject] and can access the property.

I did notice in Postman that if you 'beautify' the JSON string you get lots of whitespace and other characters that would need to be stripped, so I'm aware that could be a problem; but right now I have the data as a single line with no extra characters. It works on the command line when I break it down step by step, but I can't see what the issue is when running the actual POST request.

Any experience with these kinds of issues?

Can this be used to temporarily load a dynamic entity into a DocDB so one could run a query on it and return a result set object?

I'm trying to find a solution for returning a result set object to a CSP page when I have a dynamic entity or JSON object.

It's basically a complex, ad hoc query that's easier (for me) to write in ObjectScript than as a SQL query that aggregates and transforms all kinds of data. I really need something in memory just to run the report, nothing saved to disk, but it seems creating a database using %DocDB will create a new class definition whether I like it or not.
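One alternative I'm looking at (a sketch only; the class name, row layout, and the three-row stub are all made up) is a custom class query, which can back a %Library.ResultSet with plain ObjectScript logic and needs no new storage:

```objectscript
Class MyApp.AdHoc Extends %RegisteredObject
{

/// Expose ObjectScript logic as a result set; ROWSPEC defines the columns.
Query Report() As %Query(ROWSPEC = "Id:%Integer,Total:%Numeric")
{
}

ClassMethod ReportExecute(ByRef qHandle As %Binary) As %Status
{
    // Build whatever in-memory data the report needs; here, just a counter
    set qHandle = 0
    quit $$$OK
}

ClassMethod ReportFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status
{
    set qHandle = qHandle + 1
    if qHandle > 3 { set Row = "", AtEnd = 1 quit $$$OK }
    // Each fetch returns one $LIST-formatted row matching the ROWSPEC
    set Row = $listbuild(qHandle, qHandle * 100)
    quit $$$OK
}

ClassMethod ReportClose(ByRef qHandle As %Binary) As %Status
{
    quit $$$OK
}

}
```

Then set rs = ##class(%Library.ResultSet).%New("MyApp.AdHoc:Report"), do rs.Execute(), and loop with rs.Next() like any other result set.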

I'm just poking around here to gain some knowledge. When you say routine, do you mean a compiled MAC file? I'm curious how ZR would work if you've compiled the routine and, as Robert said, it's not on the stack. Any time I've run a routine from the CLI and then done a ZPRINT, there's nothing there, so I'm not sure what ZR would remove if ZP isn't showing anything in the buffer. When you call an entry point and it QUITs, isn't there an inherent ZR to get the routine out of the buffer?

I guess I'm simply asking: did ZR solve your issue?

Well, we are on IRIS 2021, and that's the documentation I was looking in, so I'm not sure what's going on.

Here's where I landed:

Given a %Stream.FileBinary, I calculate the size to read as follows:

s readSize=($J((stream.SizeGet()/12000),"",0)+1)*12000

And then I 'cast' my stream to a %xsd.base64Binary datatype (ODBC type VARBINARY) like so:

s base64string=##class(%xsd.base64Binary).LogicalToJSON(stream.Read(readSize,.sc))

In my command-line testing I'm able to decode this base64 string, write it to a file stream, and save it, and I get a very much intact PDF. This is new to me, however, so I hope I'm not tricking myself into thinking it's working correctly.

When I run w ##class(%xsd.base64Binary).IsValid(myVarBinaryData) I get 1, so I think it's working correctly!

Wondering, however, about the idea of reading 12,000 at a time. My understanding is that when encoding, each chunk you read should have a length divisible by three, so that no '=' padding lands in the middle of the concatenated output (divisibility by four matters on the decode side); 12,000 satisfies that. Is that right?
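If I ever do need to loop rather than read the whole stream at once, I believe the chunked version would look like this (a sketch; the key point is a chunk size divisible by three so no padding appears mid-stream):

```objectscript
// Chunked base64 encoding of a binary stream. 12000 is divisible by 3,
// so every chunk encodes without internal '=' padding and the pieces concatenate cleanly.
set chunkSize = 12000
do stream.Rewind()
set base64 = ""
while 'stream.AtEnd {
    // Second argument 1 = do not insert CR/LF line breaks into the output
    set base64 = base64 _ $SYSTEM.Encryption.Base64Encode(stream.Read(chunkSize), 1)
}
```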

@Evgeny Shvarov and @Alex Woodhead 

Lots of good progress here, but still a few issues:

- Thanks to @Dmitry Maslennikov's help, I am using a Dockerfile and compose to do the build, so everything is a little more contained and standard (I used that coffee-store template as a starting place).

- After running App.Installer.setup() via the Dockerfile commands, I run do ##class(%EnsembleMgr).EnableNamespace("$NAMESPACE"). After that I do some application setup logic, including a recompile of the classes that depend on the Ensemble classes, and I finally get a clean compile of those classes with no error messages saying an HL7 class does not exist.

- As I mentioned before, the namespace element in the Installer.cls manifest has interoperability turned on: <Namespace Name="${NAMESPACE}" Code="${NAMESPACE}-CODE" Data="${NAMESPACE}-DATA" Create="yes" Ensemble="1">

- How can I run do ##class(%EnsembleMgr).EnableNamespace("$NAMESPACE") after the creation of the application's namespace but BEFORE loading and compiling the source code? My only workaround is to recompile after running App.Installer.setup() and the EnableNamespace() call. I'm not sure how and where the manifest in App.Installer is used, or where I can insert this enable method. That said, I feel like that <Namespace> tag should be turning it on; I can't find a reference for this manifest in the documentation.
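If it helps anyone, my current guess (untested) is that a %Installer <Invoke> tag inside the <Namespace> element could run the enable call right after namespace creation and before any source import, something like:

```xml
<Namespace Name="${NAMESPACE}" Code="${NAMESPACE}-CODE" Data="${NAMESPACE}-DATA" Create="yes" Ensemble="1">
  <!-- Run EnableNamespace before importing source, so the interoperability classes exist -->
  <Invoke Class="%EnsembleMgr" Method="EnableNamespace" CheckStatus="true">
    <Arg Value="${NAMESPACE}"/>
  </Invoke>
  <!-- ...then the existing import/compile steps... -->
</Namespace>
```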

- Another question/issue is CSP pages. Following the template, in the Dockerfile I 'COPY csp csp'. In the installer I map the source directory to all the code I put in the src folder (classes and MAC files). Should I copy my CSP files there as well so they compile? My workaround is similar to the above: I use a utility to compile the CSP pages in the csp directory, since the installer doesn't pick them up because they aren't in the source dir.

@Dmitry Maslennikov 

It's a bit much and there's probably a better way, but long story short: many of our static files are in a folder in the root, so I have to build the container, run a 'docker exec' command as root to copy my files to the root, exit out of that, then log in again as the regular user to start the IRIS session and run the build script.

The script uses the sleep command to wait (now at 70 seconds and working fine) for IRIS to finish starting so I can start the session and run the script; otherwise I get the startup error.

@Julie Bolinsky and @Stephen Canzano 
 

Thanks for your replies!

To back up and offer more clarity: the issue was that the WHERE clause of our query used a stored procedure to calculate a data element. This was causing the query to churn and the Zen reports to time out. We wanted to break the query apart so that it could first select a subset of the data by date range, then loop through that subset and run the stored proc on only that data to filter it further. A subquery didn't end up being any more efficient here, and views and CREATE TABLE ... AS wouldn't work at scale. Since Zen wants a %Library.ResultSet object, I researched fetching each record and deleting data as needed, OR creating a new %Library.ResultSet object and adding data to it, but there are no methods to support that.

The question was: can you give a Zen report a dynamic object as a data source?

Sounds like the answer is only %Library.ResultSet or XML file (or stream).  The stream idea is ok, but it's a big report and it would have been a big lift to transform the %Library.ResultSet into XML.

The solution was to 1) move the stored proc out of the WHERE clause and into the SELECT, and 2) use filter='%val(""DataElement"")=VALUE' in the report tag to evaluate the column alias created by the stored proc in the SELECT, so we skip the rows we didn't want to generate PDFs for.
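For anyone searching later, the shape of the report definition ended up roughly like this (a sketch; the table, proc, column, and value names are all made up):

```xml
<report xmlns="http://www.intersystems.com/zen/report/definition"
  name="MyReport"
  sql="SELECT Id, MyProc(Id) AS DataElement FROM My.Table WHERE RunDate BETWEEN ? AND ?"
  filter='%val("DataElement")=1'>
  <!-- rows where the filter expression is false are skipped, so the stored
       proc no longer runs inside the WHERE clause -->
</report>
```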

Happy to talk more about this as I'm sure I'll be touching some Zen stuff again.

@Benjamin De Boe 
 

I went nuts today working with some of this stuff.  Can I confirm a few points:

- $SYSTEM.SQL.Schema.QueryToTable() does not exist in my IRIS version, but $SYSTEM.SQL.QueryToTable() does. I have IRIS 2021. Are these different things?

- I ended up using CREATE OR REPLACE VIEW, because CREATE TABLE ... AS SELECT would create a table but return no data (I would even do a DROP TABLE operation first to make sure all was clear). Am I correct in saying that CREATE TABLE ... AS SELECT is strictly a COPY operation on an existing table? Since I was effectively creating a new table with new headers, the view worked better here. Is that the difference?

- To clarify, I used the $SYSTEM command to create a table from the query, because I kept getting errors with CREATE TABLE ... AS SELECT (it didn't like my AS clause). What could have gone wrong there?