For the example you provided, given that the methods invoked by the <Route> entries are class methods, I commonly do something along the lines of

    <Route Url="/error/list" Method="GET" Call="GetErrors" />

    ClassMethod GetErrors(pStartTime As %UTC = "", pEndTime As %UTC = "") As %Status
    {
        #DIM tSC        As %Status = $$$OK
        #DIM eException As %Exception.AbstractException
        #DIM %request   As %CSP.Request
        #DIM %response  As %CSP.Response

        Try {
            #; Only set the content type when running in an HTTP context
            Set:$IsObject($Get(%response)) %response.ContentType="application/json"
            If $IsObject($Get(%request)) {
                #; Called over HTTP: take the values from the query string
                Set tStartDate = $Get(%request.Data("StartTime",1))
                Set tStopDate = $Get(%request.Data("EndTime",1))
            }
            Else {
                #; Called directly (e.g. while debugging): use the parameter list
                If pStartTime'="" Set tStartDate=pStartTime
                If pEndTime'="" Set tStopDate=pEndTime
            }
            #; ... build and return the JSON payload from tStartDate/tStopDate here ...
        }
        Catch eException {
            Set tSC = eException.AsStatus()
        }
        Quit tSC
    }

removeEmpty is defined as 

The <table> removeEmpty attribute controls whether or not the empty nodes that Zen encounters in the XML data for this report display in the XHTML or PDF output generated by this <table> in the report. If removeEmpty is:

  • Not specified, the <table> inherits the removeEmpty value of its parent. If no element in the ancestry of this <table> specifies a removeEmpty value, then the default value, 0, applies to this <table>.
  • 0, empty element and attribute values are output to the XHTML or PDF generated for this <table> in the report.
  • 1, empty element and attribute values are not output to the XHTML or PDF generated for this <table> in the report. If orient is "row," any rows with all empty data values are omitted from the output. If orient is "col," any columns with all empty data values are omitted from the output. If there are some empty cells, but the entire row (or column) is not empty, then the row (or column) is displayed with the empty cells blank.

The group attribute must be set for removeEmpty to work.

This attribute has the underlying data type %ZEN.Datatype.boolean. See “Zen Reports Attribute Data Types.”

so I don't think removeEmpty addresses what you have asked. I would consider using an ifxpath expression to control whether or not to display the table.
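As a rough sketch (the element name, group value, and XPath expression below are placeholders for your actual report data), the table would only be rendered when the XPath expression evaluates to true:

    <!-- render this table only if at least one <error> node exists in the report data -->
    <table orient="row" group="error" ifxpath="//error">
      <item field="@code" width="2in">
        <caption value="Code"/>
      </item>
    </table>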

Interesting, but if the user understands SQL and understands the db schema, why don't they connect via ODBC/JDBC? If they don't understand SQL and the database schema, why not make a specific REST class that has endpoints for each type of query they might want to run? Allowing arbitrary SQL statements here, including INSERT, UPDATE, and DELETE, is probably not a good idea, although it looks like /query may only allow statement type 1 (SELECT) and type 45 (CALL).
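A rough sketch of that approach (the class, route, and method names below are hypothetical), with one purpose-built endpoint per query instead of a pass-through for arbitrary SQL:

    Class My.API Extends %CSP.REST
    {

    XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
    {
    <Routes>
      <!-- one endpoint per supported query; each Call target is a ClassMethod in this class -->
      <Route Url="/patients/:id" Method="GET" Call="GetPatient" />
      <Route Url="/errors/recent" Method="GET" Call="GetRecentErrors" />
    </Routes>
    }

    }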

I take the approach of adding my .DFI items to a class that extends %DeepSee.UserLibrary.Container following

Creating a Business Intelligence Container Class

as this allows me to treat it like any other class.

This means I can

  • commit it to the source control repository just like any other class
  • deploy it like any other class
  • include the Container class in my module definition to be delivered via ZPM/IPM

This capability has been around since roughly 2018.
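A minimal sketch of such a container class (the class name is hypothetical); the exported .DFI definitions go inside the Contents XData:

    Class My.BI.Container Extends %DeepSee.UserLibrary.Container
    {

    XData Contents [ XMLNamespace = "http://www.intersystems.com/deepsee/library" ]
    {
    <items>
    <!-- paste the exported <pivot> and <dashboard> definitions (the .DFI content) here -->
    </items>
    }

    }

The container class can then be exported, committed, and shipped exactly like any other class definition.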

If we look at what InterSystems provides as an application, like Ensemble (Interoperability) or HealthShare, we will find the class definitions live in the ENSLIB or HSLIB database. Then, via package mapping, you will find that Interoperability-enabled or HealthShare-enabled namespaces map in those package definitions. For example, in the namespace HSEDGE1 we see

which says the classes from these packages are visible to the HSEDGE1 namespace.  

The data for the extent of these classes by default would live in the associated db for the namespace HSEDGE1.

In most cases the data for the extent is not mapped so as to live in a single universal database, but generally speaking it could be, although in the case of the Ensemble/Interoperability and HealthShare data you should not do this.

The error you are seeing is a compile error, not a run-time error, so the DC AI Bot isn't giving the correct answer. The field in the RecordMap UI is a bit wonky. If you hover over the label you get a tooltip which indicates you need to separate the parameter values with a semicolon (;).

So something like this works

MAXLEN=80;PATTERN=14N

which gets stored in the RecordMap definition class as

  <Field name="NPI" required="0" ignored="0" datatype="%String" index="1" params="MAXLEN=80;PATTERN=14N" sqlColumn="2" repeating="0">
  </Field>
and the generated RecordMap class now has

Property NPI As %String(MAXLEN = 80, PATTERN = "14N") [ SqlColumnNumber = 2 ];

so 

1. Parameters should be separated by a semicolon (;).

2. For PATTERN you do not need to surround the pattern value with quotes.

3. If something like ResearchId is actually purely numeric, it might be better defined as %Integer.

4. For something like FirstName you might want PATTERN=1.A, which means one or more alphabetic characters.
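As a quick check of what those patterns mean, the ObjectScript pattern-match operator behaves like this (the sample values are arbitrary):

    USER>Write "12345678901234"?14N  ; exactly 14 numeric characters
    1
    USER>Write "1234"?14N            ; fewer than 14 digits, so no match
    0
    USER>Write "Smith"?1.A           ; 1.A = one or more alphabetic characters
    1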

You may have an issue with the format of the date.

What is the datatype of 

  • somedate
  • anotherdatefield

Are they %Date, %TimeStamp, %UTC, or %PosixTime?

I'm not certain, but does MAX(somedate) cause the value to no longer be in a format that would support

WHERE dateField >= anotherDateField

When in Analyzer, you can see the actual query being run by clicking the Show Query button in the toolbar after the query runs,

which it seems you have already done, since you have

WHERE source.%ID IN (SELECT _DSsourceId FROM MyTable.Listing WHERE _DSqueryKey = 'en3460403753')

You can copy the listing SQL query and paste it into System Explorer -> SQL, removing the portion that reads

WHERE source.%ID IN (SELECT _DSsourceId FROM MyTable.Listing WHERE _DSqueryKey = 'en3460403753')

and take special note of the Runtime mode of the query.

The Runtime mode has the greatest impact on columns that are %Date or %UTC, as these columns will have different values based on the runtime mode (Logical/Display/ODBC).
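To see the effect directly, you can run the same statement under each mode with dynamic SQL (the table and column names here are placeholders):

    // a minimal sketch: the same %Date column comes back differently per mode
    Set tStatement = ##class(%SQL.Statement).%New()
    Set tStatement.%SelectMode = 0   // 0 = Logical, 1 = ODBC, 2 = Display
    Set tSC = tStatement.%Prepare("SELECT TOP 1 SomeDate FROM My_Schema.MyTable")
    Set tResult = tStatement.%Execute()
    While tResult.%Next() { Write tResult.SomeDate,! }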

Given that the routes in your %CSP.REST-enabled class point to class methods, you should be able to debug/step through them line by line by calling the class method directly. In the case where you have a POST/PUT route that calls the class method, you can always define the formal specification of the class method to accept parameters, and then have the code in your class method look at either your parameter values or %request.Data to determine whether the class method is being called via HTTP or by your simple debugging.
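For example (the class name here is hypothetical; the signature follows the GetErrors pattern shown earlier), you can simply call the method from the terminal:

    USER>Set tSC = ##class(My.REST.Handler).GetErrors("2024-01-01 00:00:00", "2024-01-02 00:00:00")
    USER>Do $System.Status.DisplayError(tSC)

Because %request is not defined in the terminal, the method falls back to the parameter values, so you can set breakpoints and step through it like any other class method.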

When building data for a cube, whether a full build or a synchronization, think about it this way: the code generated to support this essentially does a SELECT AllOfYourDimensionsMeasuresRelationships FROM SourceTable.

When you have non-primary field references, these expressions are evaluated as ObjectScript expressions.

These are documented, but not on a single page:

  • %expression
  • %cube
  • %source
  • %sourceID, although this one is typically used in a detail listing, whereas the others are used when defining the cube's dimensions, measures, etc.
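For example, a level in the cube definition can be driven by a sourceExpression that is evaluated as ObjectScript for each source row (the dimension, level, and property names below are placeholders):

    <dimension name="AgeD" hasAll="false">
      <hierarchy name="H1">
        <!-- %source refers to the current source object at build/sync time -->
        <level name="AgeGroup"
               sourceExpression='$Select(%source.Age&lt;18:"Minor",%source.Age&lt;65:"Adult",1:"Senior")' />
      </hierarchy>
    </dimension>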

There were several questions asked during the presentation. One of them was:

How can we restrict usage of the cubes so that users can only see specific data?

If this were SQL, one way of approaching this is to define a View and then grant users access to the View but not the table. Fortunately Analytics has the same concept. Just as SQL has tables, which are the resource holding the data, and views, which are defined representations of that data, Analytics has Cubes, which hold the data, and Subject Areas.

A subject area is a virtual view of the cube, but it does not require additional storage or build/sync time. Within the Subject Area you can

1. Define a filter. Consider that we want to define a subject area for Senior Citizens: we would define a filter on Age > X. Then, when we make the Senior Citizen subject area available to a user, they will only ever see patients with Age > X (see the sketch after this list).

2. Define what dimensions are available 

3. Define listings
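A minimal sketch of such a subject area class (the class, cube, dimension, and member names below are hypothetical):

    Class My.BI.SeniorCitizens Extends %DeepSee.SubjectArea.SubjectArea
    {

    /// Restrict the base cube so users of this subject area only see the Senior age group
    XData SubjectArea [ XMLNamespace = "http://www.intersystems.com/deepsee/subjectarea" ]
    {
    <subjectArea name="SeniorCitizens" baseCube="Patients"
      filterSpec="[AgeD].[H1].[AgeGroup].[Senior]" />
    }

    }

Users can then be pointed at the SeniorCitizens subject area in Analyzer rather than at the Patients cube itself.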

Answers to your questions

1. Think of the classes that describe the globals as being a metadata layer over the globals. The existing application will continue to run, and you will now have classes that expose the data so that you can write Object and SQL code. If you create new indices in the SQL-mapped classes, the globals representing the new indices will only be updated if something calls Object %Save/%Delete, SQL INSERT/UPDATE/DELETE, or the legacy filers are updated to maintain the new indices. The legacy application is unlikely to do this, so your new indices would never be populated, which would be bad: the SQL engine would not "know" this and would attempt to read data from the new index, and there would be no data.

2. You are correct... if you add new indices to the class and the legacy application is not maintaining the index then it would cause issues.

Hopefully you have a common filer in the legacy application, meaning one common filer that saves a given subject area (global). If this is the case, then it's a matter of updating that common filer. If the legacy application has a number of places where the data is updated, then all of those places would need to be updated, or consider adopting a common-filer approach.

On projects I have been involved in where we have created classes to map the existing globals to enable Object and SQL access, we have added to the class

Parameter READONLY=1 

so as to ensure no one could accidentally perform an Object %Save/%Delete or SQL INSERT/UPDATE/DELETE operation.
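As a sketch (the class and property names are hypothetical, and the %Storage.SQL global mapping itself is omitted), that looks like:

    Class Legacy.Patient Extends %Persistent
    {

    /// Reject Object %Save/%Delete and SQL INSERT/UPDATE/DELETE against the mapped data
    Parameter READONLY = 1;

    Property MRN As %String;

    }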

One important note about QuickStream: if you decide to go down this path you must make certain to Clear the QuickStream as part of your pipeline. The Message Purge task will purge the message header and the associated message body, but when it looks at the message body all it sees is a property called QuickStreamId, and it doesn't know it should also Clear the associated QuickStream object.

The temp file oftentimes uses a process-private global.

Process-private globals are written to the IRISTEMP database. In contrast to global variables, InterSystems IRIS does not treat a SET or KILL of a local variable or a process-private global as a journaled transaction event; rolling back the transaction has no effect on these operations.
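For reference, a process-private global is just a global whose name starts with ^||; it lives in IRISTEMP, is visible only to the creating process, and is gone when the process ends:

    Set ^||tmpResults(1) = "first row"   // private to this process, stored in IRISTEMP
    Kill ^||tmpResults                   // not journaled; a rollback would not restore it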

That being said, as Peter mentioned, if you had an index on code_1_text you could greatly improve performance. In fact I suspect this type of query would be completely index-satisfiable, i.e. the query would only examine the index and would not have to go to the master map, for even better performance.
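For instance, an index like this (the index name is illustrative) would let the SQL engine resolve a query that only touches code_1_text from the index alone:

    /// with this index defined and built, a query filtering on code_1_text
    /// can be satisfied without reading the master map
    Index Code1TextIdx On code_1_text;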

Depending on the length of the values of code_1_text, if you do choose to add an index you might want to define the property as

Property code_1_text As %String(COLLATION = "SQLUPPER(113)", MAXLEN = 32000);

thereby setting the collation and maximum length. If the values exceed the maximum global subscript length and you do not do this, you could encounter <SUBSCRIPT> errors.

A subscript has an illegal value or a global reference is too long. For further details, refer to $ZERROR. For more information on maximum length of global references, see “Determining the Maximum Length of a Subscript”.

The REST API is for

SQL Search

The InterSystems IRIS® SQL Search tool integrates with the InterSystems IRIS Natural Language Processor (NLP) to perform context-aware text search operations on unstructured data, with capabilities such as fuzzy search, regular expressions, stemming, and decompounding. Searching is performed using a standard SQL query with a WHERE clause containing special InterSystems SQL Search syntax.

Is your table using the NLP features?
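If so, the table typically has an iFind index on the text column, something like this (the class, property, and index names are hypothetical):

    Class My.Notes Extends %Persistent
    {

    Property NoteText As %String(MAXLEN = "");

    /// iFind index that enables the SQL Search WHERE syntax against NoteText
    Index NoteIdx On (NoteText) As %iFind.Index.Basic;

    }

A query would then use the SQL Search syntax, for example: SELECT ID FROM My.Notes WHERE %ID %FIND search_index(NoteIdx, 'insulin').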