I'm intrigued to hear about expression indices - sounds really cool.

Without those, another option is just to have a separate class/table. Suppose the key to the AR array is the address type (Home, Office, etc.); then you could have:

Class Sample.Person1 Extends (%Persistent, %Populate)
{

Property Name As %String;

Relationship Addresses As Sample.PersonAddress [ Cardinality = children, Inverse = Person ];

}

Class Sample.PersonAddress Extends (%Persistent, %Populate)
{

Relationship Person As Sample.Person1 [ Cardinality = parent, Inverse = Addresses ];

Property Type As %String;

Property Address As Sample.Address;

}

Sample.PersonAddress then can have whatever plain old normal indices you want (except bitmap indices - if you want those, make it one-to-many instead of parent/child).
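For example, you could add indices like these to Sample.PersonAddress (the index names are mine; the Address.City index assumes Sample.Address is a serial class with a City property, as in the Samples namespace):

Index TypeIndex On Type;

Index CityIndex On Address.City;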

Generally: any time you add an array property - especially an array of objects - it's worth stepping back and thinking about whether it should just be its own full-blown class/table.

On IRIS there's $zu(209,code) - e.g.:

USER>w $zu(209,3)
<3> The system cannot find the path specified.

In IRIS 2020.1+ you don't need the $zu:

USER>w $System.Util.GetOSErrorText(3)
<3> The system cannot find the path specified. 

AFAIK there's no Caché/Ensemble equivalent (though maybe it was added in a newer Caché/Ensemble version).

I've needed to do a targeted restore from a journal a few times (e.g., restoring a week of work that an intern accidentally reverted; this would also work for class definition changes if you could find the right window). Just to add to what Dmitriy and Erik have said, assuming your case is eligible, here's a code sample using the %SYS.Journal classes (adapted from one of the times I had to do this):

Class DC.Demo.JrnFix
{

/// Intended to be run from terminal. Find the right values to put in the variables at the top first.
/// Also, use at your own risk.
ClassMethod Run()
{
    // Path to journal file (find this based on timestamps)
    Set file = "/path/to/journal/file"
    
    // Path to database containing data that was killed
    // (assuming killed during transaction so individual nodes are journalled as ZKILL)
    Set dbJrn = "/path/to/database/directory/"
    
    // First problem offset/address (find a real value for this via management portal or further
    // %SYS.Journal scripting - e.g., output from below with full range of addresses used)
    Set addStart = 0
    
    // Last problem offset/address (find a real value for this via management portal or further
    // %SYS.Journal scripting - e.g., output from below with full range of addresses used)
    Set addEnd = 1000000000
    
    // Global that you're looking to restore - as much of the global reference as is possible
    Set global = "MyApp.DataD"
    
    Set jrn = ##class(%SYS.Journal.File).%OpenId(file)
    
    #dim rec As %SYS.Journal.SetKillRecord
    
    TSTART
    Set rec = jrn.GetRecordAt(addEnd)
    Do {
        If ((rec.%IsA("%SYS.Journal.SetKillRecord"))&&(rec.DatabaseName=dbJrn)) {
            If (rec.GlobalNode [ global) {
                w rec.Address,!
                Set @rec.GlobalNode = rec.OldValue
            } Else {
                // Keep track of other globals we see (optional)
                Set skippedList($p(rec.GlobalNode,"(")) = ""
            }
        }
        Set rec = rec.Prev
    } While (rec.Address > addStart)
    ZWrite skippedList
    Break //At this point, examine things, TCOMMIT, and quit if things look good.
    TROLLBACK
}

}

A good approach is adding application and/or matching roles for the web application (in the web application's security configuration).

An application role is granted to users of the web application while in that context only. A matching role provides additional privileges to users who already hold a specified role.

A lazy approach would be adding %All as an application role, but that likely exposes too much. It's certainly better than giving UnknownUser %All, but it's best to provide roles more granular than %All (in this case and in general) - say, a role that provides Read access on the namespace's default routine database and Read/Write access on the namespace's default global/data database.
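For example, such a granular role could be created programmatically. Here's a minimal sketch - the role and resource names are hypothetical, Security.Roles lives in the %SYS namespace, and you'd substitute the actual resources for your namespace's databases:

Set $Namespace = "%SYS"

// Create a role with Read on the (hypothetical) code database resource
// and Read/Write on the (hypothetical) data database resource
Set sc = ##class(Security.Roles).Create("MyAppRole","Granular role for my web application","%DB_MYAPP-CODE:R,%DB_MYAPP-DATA:RW")
Do:$System.Status.IsError(sc) $System.Status.DisplayError(sc)

That role can then be assigned as an application role in the web application's security configuration.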

Hi Javier,

There are a few topics for running builds and unit tests via Jenkins (or really any CI tool):

  • Calling in to Caché (or IRIS; the approaches are very similar)
  • Reporting unit test results
  • Test coverage measurement and reporting

Here's a quick intro; if you have questions on any details I can drill down further.

Calling in to Caché:

The most common approach I've seen is writing out to a file and then using that as input to csession / iris session. You can see some examples of this (for IRIS, with containers, but quite transferable) here: https://github.com/timleavitt/ObjectScript-Math/blob/master/.travis.yml - I'm planning to write an article on this soon.

Some rules for this:

  • Either enable OS authentication or put the username/password for the build user in the script or environment variables
  • End the script with Halt (in case of success) or $System.Process.Terminate($Job,1) (to signal an OS-level error you can pick up from errorlevel/etc.); alternatively, always end with Halt and create a "flag file" in the case of error, the existence of which indicates that the build failed.
  • Keep the script short - ideally, put the meat of the build logic in a class/routine that is loaded at the beginning, then run that.

Sample for Windows:

:: PREPARE OUTPUT FILE
set OUTFILE=%SRCDIR%\outFile
del "%OUTFILE%"

:: NOW, PREPARE TO CALL CACHE
::
:: Login with username and password
echo %CACHEUSERNAME%>inFile
echo %CACHEPASSWORD%>>inFile

:: MAKE SURE LATEST JENKINS BUILD CLASS HAS BEEN LOADED
echo do $system.OBJ.Load("","cb") >>inFile

:: RUN JENKINS BUILD METHOD
echo do ##class(Build.Class).JenkinsBuildAndTest("%WORKSPACE%") >>inFile

:: THAT'S IT
echo halt >>inFile

:: CALL CACHE
csession %INSTANCENAME% -U %NAMESPACE% <inFile

:: PAUSE
echo Build completed. Press enter to exit.
pause > nul

:: TEST IF THERE WAS AN ERROR
IF EXIST "%OUTFILE%" EXIT 1

:: Clear the "errorlevel" variable that csession appears to set, which otherwise causes successful builds to be marked as failures
(call )

Sample for Linux:

# PREPARE OUTPUT FILE
OUTFILE=${WORKSPACE}/outFile
rm -f "$OUTFILE"

# PREPARE TO CALL IRIS
# Login with username and password
echo $IRISUSERNAME > infile.txt
echo $IRISPASSWORD >> infile.txt

# MAKE SURE LATEST JENKINS BUILD CLASS HAS BEEN LOADED
echo 'do $system.OBJ.Load("'${WORKSPACE}'/path/to/build/class","cb")' >>infile.txt

# RUN JENKINS BUILD METHOD
echo 'do ##class(Build.Class).JenkinsBuildAndTest("'${WORKSPACE}'")' >>infile.txt

# THAT'S IT
echo halt >> infile.txt

# CALL IRIS
# csession is the equivalent for Caché
iris session $IRISINSTANCE -U $NAMESPACE < infile.txt

# TEST IF THERE WAS AN ERROR
if [ -f "$OUTFILE" ] ; then exit 1 ; fi

The next question is, what does Build.Class do? Given the Jenkins workspace root (WORKSPACE variable), it should load the code appropriately (likely after blowing away the code database to start with a clean slate; %Installer can help with this), then set ^UnitTestRoot based on the workspace directory, then run the tests, then report on results. Best to wrap the whole thing in a Try/Catch and throw/handle exceptions appropriately to ensure the error flag file / exit code is set.
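Here's a rough sketch of what Build.Class might look like - the class name, paths, and qualifiers are all placeholders, and error handling is simplified:

Class Build.Class
{

ClassMethod JenkinsBuildAndTest(pWorkspace As %String)
{
    // The calling script tests for this file to detect failure
    Set flagFile = pWorkspace_"/outFile"
    Try {
        // Load/compile everything from the workspace (path and qualifiers are placeholders)
        $$$ThrowOnError($System.OBJ.LoadDir(pWorkspace_"/src","ck",,1))

        // Point %UnitTest at the tests in the workspace and run them without deleting them
        Set ^UnitTestRoot = pWorkspace_"/tests"
        $$$ThrowOnError(##class(%UnitTest.Manager).RunTest(,"/nodelete"))
    } Catch e {
        // Create the flag file; its existence tells the calling script the build failed
        Set stream = ##class(%Stream.FileCharacter).%New()
        Do stream.LinkToFile(flagFile)
        Do stream.Write(e.DisplayString())
        Do stream.%Save()
    }
}

}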

Reporting Unit Test Results:

See https://github.com/intersystems-community/zpm/blob/master/src/cls/_ZPM/PackageManager/Developer/UnitTest/JUnitOutput.cls
(feel free to copy/rename this if you don't want the whole community package manager) for a sample of a jUnit export; Jenkins will pick this up and report on it quite easily. Just pass an output filename to the method, then add a post-build action in Jenkins to pick up the report. (You'll want to call this from your build script class.)

Measuring Test Coverage:

Seeing how much of your code is covered by unit tests helps to close the feedback loop and enable developers to write better tests - I presented on this at Global Summit a few years ago. See https://openexchange.intersystems.com/package/Test-Coverage-Tool - we've successfully used this with Jenkins for both HealthShare and internal applications at InterSystems. It can produce reports in the Cobertura format, which Jenkins will accept. Instead of using %UnitTest.Manager, call TestCoverage.Manager. The parameters detailed in the readme can be passed into the third argument of RunTest as subscripts of an array; to produce a Cobertura-style export (including reexporting all source in UDL for coverage reporting in the Jenkins UI), add a "CoverageReportFile" subscript pointing to an appropriate place in the Jenkins workspace, and set the "CoverageReportClass" subscript to "TestCoverage.Report.Cobertura.ReportGenerator".
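For example, the RunTest call might look like this (the workspace path and report filename are placeholders):

Set workspace = "/path/to/jenkins/workspace"
Set params("CoverageReportClass") = "TestCoverage.Report.Cobertura.ReportGenerator"
Set params("CoverageReportFile") = workspace_"/coverage.xml"
Set sc = ##class(TestCoverage.Manager).RunTest(,"/nodelete",.params)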

If you want to use the Jenkins coverage/complexity scatter plot, use https://github.com/timleavitt/covcomplplot-plugin rather than the original; I've fixed some issues there and made it a bit more resilient to some oddities of our Cobertura-style export (relative to the data Cobertura actually produces).

Oof - by "newer tricks" you meant "objects." Yikes. Really, it'd be significantly lower risk to use the object-based approach than to roll your own without objects. (e.g., see my comment on automatic cleanup via %OnClose)

I don't have bandwidth to provide an object-free version, but you might look at the code for %IO.ServerSocket for inspiration.

I think the answers so far have missed the point. The number of arguments itself is variable. This is handy for things like building a complex SQL statement along with the set of arguments to pass to %SQL.Statement's %Execute method.

The data structure here is an integer-subscripted array with the top node set to the number of elements. (The top node is what's missing in the example above.) Subscripts can be omitted to leave the argument at that position undefined.

Here's a simple example:

Class DC.Demo.VarArgs
{

ClassMethod Driver()
{
    Do ..PrintN(1,2,3)
    
    Write !!
    For i=1:1:4 {
        Set arg($i(arg)) = i
    }
    Kill arg(3)
    ZWrite arg
    Write !
    Do ..PrintN(arg...)
}

ClassMethod PrintN(pArgs...)
{
    For i=1:1:$Get(pArgs) {
        Write $Get(pArgs(i),"<undefined>"),!
    }
}

}

Output is:

d ##class(DC.Demo.VarArgs).Driver()
1
2
3


arg=4
arg(1)=1
arg(2)=2
arg(4)=4

1
2
<undefined>
4
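And here's a sketch of the motivating use case mentioned above - building a SQL statement along with a variable set of parameters (the table and filter are hypothetical):

ClassMethod FilteredQuery(pNameFilter As %String = "")
{
    Set sql = "SELECT Name FROM Sample.Person1"
    Set args = 0
    If (pNameFilter '= "") {
        Set sql = sql_" WHERE Name %STARTSWITH ?"
        Set args($Increment(args)) = pNameFilter
    }
    Set statement = ##class(%SQL.Statement).%New()
    $$$ThrowOnError(statement.%Prepare(sql))
    Set resultSet = statement.%Execute(args...)
    While resultSet.%Next() {
        Write resultSet.%Get("Name"),!
    }
}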

For bootstrap-table, I think the examples on their site are probably more useful than anything I could dig up. https://examples.bootstrap-table.com/#welcomes/large-data.html shows pretty good performance for a large dataset. Tabulator looks nice too though.

In any case it would probably be cleanest to load data via REST rather than rendering everything in the page in an HTML table and then using a library to make the table pretty.
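For instance, a minimal REST endpoint to feed the table might look like this sketch (the class name, route, and query are hypothetical):

Class DC.Demo.TableData Extends %CSP.REST
{

XData UrlMap
{
<Routes>
<Route Url="/rows" Method="GET" Call="GetRows"/>
</Routes>
}

ClassMethod GetRows() As %Status
{
    Set %response.ContentType = "application/json"
    Set rows = []
    // Hypothetical query; substitute your own data source
    Set resultSet = ##class(%SQL.Statement).%ExecDirect(,"SELECT Name FROM Sample.Person1")
    While resultSet.%Next() {
        Do rows.%Push({"name": (resultSet.%Get("Name"))})
    }
    Write rows.%ToJSON()
    Quit $$$OK
}

}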

Hi Steve,

We (Application Services - internal applications @ InterSystems) use %UnitTest with some extensions. We have a base unit test case with a "run this test" helper method (among other things), plus a wrapper around %UnitTest.Manager that makes it easier to run all the tests without deleting them, with some other features optionally enabled, like test coverage measurement (see a video of my 2018 Global Summit presentation on this). Our wrapper also loads all of the unit tests before running any of them, which lets unit tests extend classes elsewhere under the unit test root, even in different packages.

I agree with @Eduard Lebedyuk that %UnitTest meets our needs well aside from the lack of parallelization. A parallel %UnitTest runner would be an interesting project indeed...

We routinely run unit tests via Jenkins CI, report test results in the jUnit format, and also report on code coverage (Cobertura-style) and a complexity/coverage scatter plot.

For automated UI testing, Selenium/Cucumber has worked well for older Zen/CSP UIs. True unit testing of newer UIs (e.g., Jasmine and Karma for Angular) is handy too.

Disclaimer: I know more about what John is doing than is covered in the post.

It looks like, for the prebuilt themes, Angular Material itself uses Bazel (see introduction at https://angular.io/guide/bazel). The relevant bits are here:

https://github.com/angular/components/blob/master/src/material/prebuilt-themes/BUILD.bazel
https://github.com/angular/components/blob/master/src/material/core/BUILD.bazel
https://github.com/angular/components/blob/master/src/material/BUILD.bazel

I think that's probably the place to start, in terms of Angular 8 best practices.

With correct web server configuration to route everything through the CSPGateway, the above example should handle URLs for other resources like that without issue (as long as the content served by the dashboard server has relative links to those endpoints, not absolute) - that's the point of the <base> element.

In my sample use case, it also handles several requests for images and other assets.