Hi Javier,

There are a few topics involved in running builds and unit tests via Jenkins (or really any CI tool):

  • Calling in to Caché (or IRIS; the approaches are very similar)
  • Reporting unit test results
  • Test coverage measurement and reporting

Here's a quick intro; if you have questions on any details I can drill down further.

Calling in to Caché:

The most common approach I've seen is writing out to a file and then using that as input to csession / iris session. You can see some examples of this (for IRIS, with containers, but quite transferable) here: https://github.com/timleavitt/ObjectScript-Math/blob/master/.travis.yml - I'm planning to write an article on this soon.

Some rules for this:

  • Either enable OS authentication or put the username/password for the build user in the script or environment variables
  • End the script with Halt (in the success case) or $System.Process.Terminate($Job,1) (to signal an OS-level error you can pick up from errorlevel, etc.); alternatively, always end with Halt and create a "flag file" in the error case, whose existence indicates that the build failed.
  • Keep the script short - ideally, put the meat of the build logic in a class/routine that is loaded at the beginning, then run that.

Sample for Windows:

:: PREPARE OUTPUT FILE
set OUTFILE=%SRCDIR%\outFile
del "%OUTFILE%"

:: NOW, PREPARE TO CALL CACHE
::
:: Login with username and password
ECHO %CACHEUSERNAME%>inFile
echo %CACHEPASSWORD%>>inFile

:: MAKE SURE LATEST JENKINS BUILD CLASS HAS BEEN LOADED
echo do $system.OBJ.Load("","cb") >>inFile

:: RUN JENKINS BUILD METHOD
echo do ##class(Build.Class).JenkinsBuildAndTest("%WORKSPACE%") >>inFile

:: THAT'S IT
echo halt >>inFile

:: CALL CACHE
csession %INSTANCENAME% -U %NAMESPACE% <inFile

:: PAUSE SO THE WINDOW STAYS OPEN (USEFUL WHEN RUNNING MANUALLY; REMOVE FOR UNATTENDED JENKINS BUILDS)
echo Build completed. Press enter to exit.

pause > nul

:: TEST IF THERE WAS AN ERROR
IF EXIST "%OUTFILE%" EXIT 1

:: Clear the errorlevel that csession seems to set, which would otherwise cause successful builds to be marked as failures
(call )

Sample for Linux:

# PREPARE OUTPUT FILE
OUTFILE=${WORKSPACE}/outFile
rm -f "$OUTFILE"

# PREPARE TO CALL IRIS
# Login with username and password
echo $IRISUSERNAME > infile.txt
echo $IRISPASSWORD >> infile.txt

# MAKE SURE LATEST JENKINS BUILD CLASS HAS BEEN LOADED
echo 'do $system.OBJ.Load("'${WORKSPACE}'/path/to/build/class","cb")' >>infile.txt

# RUN JENKINS BUILD METHOD
echo 'do ##class(Build.Class).JenkinsBuildAndTest("'${WORKSPACE}'")' >>infile.txt

# THAT'S IT
echo halt >> infile.txt

# CALL IRIS
# csession is the equivalent for Caché
iris session $IRISINSTANCE -U $NAMESPACE < infile.txt

# TEST IF THERE WAS AN ERROR
if [ -f "$OUTFILE" ] ; then exit 1 ; fi

The next question is: what does Build.Class do? Given the Jenkins workspace root (the WORKSPACE variable), it should load the code appropriately (likely after blowing away the code database to start with a clean slate; %Installer can help with this), set ^UnitTestRoot based on the workspace directory, run the tests, and report on the results. It's best to wrap the whole thing in a Try/Catch and throw/handle exceptions appropriately to ensure the error flag file / exit code is set.
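
Here's a rough sketch of what that build class might look like - the class/method structure, source paths, and flag-file location are illustrative assumptions, not part of any standard:

Class Build.Class
{

/// Minimal sketch of the Jenkins entry point described above. The paths under pWorkspace
/// and the flag-file name ("outFile", matching the scripts above) are assumptions.
ClassMethod JenkinsBuildAndTest(pWorkspace As %String) As %Status
{
    Set tSC = $$$OK
    Try {
        // Load application code from the workspace (after recreating the code database
        // for a clean slate - e.g., via %Installer; omitted here).
        $$$ThrowOnError($System.OBJ.LoadDir(pWorkspace_"/src/","ck",,1))

        // Point the unit test framework at the tests checked out in the workspace.
        Set ^UnitTestRoot = pWorkspace_"/tests/"

        // Run all the tests; /nodelete keeps the test classes around after the run.
        $$$ThrowOnError(##class(%UnitTest.Manager).RunTest(,"/nodelete"))

        // Report on results - e.g., the JUnit export described under "Reporting Unit Test Results" below.
    } Catch e {
        Set tSC = e.AsStatus()
        Do $System.OBJ.DisplayError(tSC)
        // Create the flag file the calling script checks for, so the build is marked as failed.
        Set tFile = ##class(%Stream.FileCharacter).%New()
        Do tFile.LinkToFile(pWorkspace_"/outFile")
        Do tFile.WriteLine($System.Status.GetErrorText(tSC))
        Do tFile.%Save()
    }
    Quit tSC
}

}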

Reporting Unit Test Results:

See https://github.com/intersystems-community/zpm/blob/master/src/cls/_ZPM/PackageManager/Developer/UnitTest/JUnitOutput.cls (feel free to copy/rename this if you don't want the whole community package manager) for a sample JUnit export; Jenkins will pick this up and report on it quite easily. Just pass an output filename to the method, then add a post-build action in Jenkins to pick up the report. (You'll want to call this from your build script class.)
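
Called from the build class, that export might look something like the following - note that the method name here (ToFile) is from my recollection of the linked class, so double-check it against the source before relying on it:

// Export JUnit-format results of the most recent unit test run to a file in the workspace.
// "ToFile" is an assumption about the export method name in the linked JUnitOutput class.
Do ##class(%ZPM.PackageManager.Developer.UnitTest.JUnitOutput).ToFile(pWorkspace_"/junit.xml")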

Measuring Test Coverage:

Seeing how much of your code is covered by unit tests helps to close the feedback loop and enable developers to write better tests - I presented on this at Global Summit a few years ago. See https://openexchange.intersystems.com/package/Test-Coverage-Tool - we've successfully used this with Jenkins for both HealthShare and internal applications at InterSystems. It can produce reports in the Cobertura format, which Jenkins will accept. Instead of using %UnitTest.Manager, call TestCoverage.Manager. The parameters detailed in the readme can be passed into the third argument of RunTest as subscripts of an array; to produce a Cobertura-style export (including reexporting all source in UDL for coverage reporting in the Jenkins UI), add a "CoverageReportFile" subscript pointing to an appropriate place in the Jenkins workspace, and set the "CoverageReportClass" subscript to "TestCoverage.Report.Cobertura.ReportGenerator".
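
In practice that might look something like this (assuming the same pWorkspace variable as in the build class sketch above; the parameter names come from the Test Coverage Tool readme, and the coverage file path is a placeholder):

Set tCoverageParams("CoverageReportClass") = "TestCoverage.Report.Cobertura.ReportGenerator"
Set tCoverageParams("CoverageReportFile") = pWorkspace_"/coverage.xml"
Do ##class(TestCoverage.Manager).RunTest(,"/nodelete",.tCoverageParams)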

If you want to use the Jenkins coverage/complexity scatter plot, use https://github.com/timleavitt/covcomplplot-plugin rather than the original; I've fixed some issues there and made it a bit more resilient to some oddities of our Cobertura-style export (relative to the data Cobertura actually produces).

Oof - by "newer tricks" you meant "objects." Yikes. Really, it'd be significantly lower risk to use the object-based approach than to roll your own without objects. (e.g., see my comment on automatic cleanup via %OnClose)

I don't have bandwidth to provide an object-free version, but you might look at the code for %IO.ServerSocket for inspiration.

Actually - if this is all on the same server (not seeing which ports are listening on a remote server), you could try starting to listen on a port and see if it fails. Presumably, a failure would only indicate that the port is already in use. Here's the code for that:

Class DC.Demo.PortAvailability
{

/// Try to open a listening socket on the given port with no timeout; if that fails, the
/// port is presumably already in use. The socket object is cleaned up automatically
/// (via %OnClose) when it goes out of scope.
ClassMethod IsLocalPortInUse(pPort As %Integer) As %Boolean
{
    Quit '##class(%IO.ServerSocket).%New().Open(pPort,0)
}

}
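
Calling it is then just, for example (8080 being an arbitrary port here):

Write ##class(DC.Demo.PortAvailability).IsLocalPortInUse(8080)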

That would probably require less convincing of the server guys. :)

@Enrico Parisi - great catch, thank you! I've updated the article to avoid spreading misinformation.

This highlights an interesting general point about error handling - you're much more likely to have an undetected bug in code that only runs in edge cases that you haven't tested. Measuring test coverage to close the feedback loop on unit test quality is a great way to highlight these areas. (I'll be writing up a post about that soon.)

1. Suppose $TLevel > (tInitTLevel + 1). That means that someone else's transaction was left open. You can't always guarantee that the code you're calling will behave by matching tstart with tcommit or trollback 1, but you can account for the possibility of your dependency misbehaving in your own transaction cleanup (see the sketch below). Agreed on never using argumentless trollback.

2. Great point - updated accordingly.
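
To illustrate the cleanup in point 1, here's a minimal sketch (the Try/Catch shape and the tInitTLevel variable name are just one way to structure it):

Set tInitTLevel = $TLevel
Try {
    TSTART
    // ... work here, possibly calling code that might leave its own transaction open ...
    TCOMMIT
} Catch e {
    // Roll back whatever this code (and any misbehaving callee) left open,
    // one level at a time - never with an argumentless TROLLBACK.
    While $TLevel > tInitTLevel {
        TROLLBACK 1
    }
    Throw e
}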

I think the answers so far have missed the point. The number of arguments itself is variable. This is handy for things like building a complex SQL statement and set of arguments to pass to %SQL.Statement:%Execute, for example.

The data structure here is an integer-subscripted array with the top node set to the number of elements. (The top node is what's missing in the example above.) Subscripts can be missing to leave the argument at that position undefined.

Here's a simple example:

Class DC.Demo.VarArgs
{

ClassMethod Driver()
{
    Do ..PrintN(1,2,3)
    
    Write !!
    For i=1:1:4 {
        Set arg($i(arg)) = i
    }
    Kill arg(3)
    ZWrite arg
    Write !
    Do ..PrintN(arg...)
}

ClassMethod PrintN(pArgs...)
{
    For i=1:1:$Get(pArgs) {
        Write $Get(pArgs(i),"<undefined>"),!
    }
}

}

Output is:

d ##class(DC.Demo.VarArgs).Driver()
1
2
3


arg=4
arg(1)=1
arg(2)=2
arg(4)=4

1
2
<undefined>
4

For bootstrap-table, I think the examples on their site are probably more useful than anything I could dig up. https://examples.bootstrap-table.com/#welcomes/large-data.html shows pretty good performance for a large dataset. Tabulator looks nice too though.

In any case, it would probably be cleanest to load data via REST rather than rendering everything in the page in an HTML table and then using a library to make the table pretty.
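
For example, a minimal REST endpoint sketch the table library could fetch JSON from - the class name, route, query, and JSON shape are all placeholders to adapt to your data and to whatever format the table library expects:

Class DC.Demo.TableREST Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/rows" Method="GET" Call="GetRows"/>
</Routes>
}

ClassMethod GetRows() As %Status
{
    Set %response.ContentType = "application/json"
    // Build a JSON array of rows for the client-side table to render; the query is a placeholder.
    Set tRows = []
    Set tResult = ##class(%SQL.Statement).%ExecDirect(,"SELECT Name, Value FROM My_App.Data")
    While tResult.%Next() {
        Do tRows.%Push({"name": (tResult.%Get("Name")), "value": (tResult.%Get("Value"))})
    }
    Write tRows.%ToJSON()
    Quit $$$OK
}

}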

From the pros/cons, it seems the objectives are:

  • Maintain compatibility with normal installation (without ZPM)
  • Make side effects from installation/uninstallation auditable by putting them in module.xml

I'd suggest as one approach to accomplish both objectives:

  • Suppress the projection side effects when running in a package manager installation/uninstallation context - either by checking $STACK (see the sketch after this list) or by using some trickier under-the-hood things with singletons from the package manager. Regardless, be sure to unit test this behavior!
  • Add "Resource Processor" classes (specified in module.xml with Preload="true" and not included in normal WebTerminal XML exports used for non-ZPM installation) - that is, classes extending %ZPM.PackageManager.Developer.Processor.Abstract and overriding the appropriate methods - to handle your custom installation things. You can then use these in your module manifest, provided that such inversion of control still works without bootstrapping issues following changes made in https://github.com/intersystems-community/zpm.
    • Generally useful things like creating a %All namespace should probably be pushed back to zpm itself.
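
For the first bullet, a rough sketch of the $STACK-based check - the class-name prefix to look for is an assumption about zpm internals, which is exactly the sort of thing those unit tests should pin down:

ClassMethod IsInPackageManagerContext() As %Boolean
{
    Set tFound = 0
    // Walk the call stack; if a package manager class shows up anywhere in it, assume the
    // projection is being invoked as part of a zpm install/uninstall.
    // The "%ZPM.PackageManager" prefix is an assumption - verify against the zpm sources.
    For i=1:1:$Stack(-1) {
        If $Stack(i,"PLACE") [ "%ZPM.PackageManager" {
            Set tFound = 1
            Quit
        }
    }
    Quit tFound
}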

This is nifty! Note that you can make the extent manager happy by using:

Class DC.Demo.SometimesPersistent Extends %Persistent
{

Property Foo As %String;

ClassMethod Demo()
{
    // The storage definition below indirects its locations through %storage, so setting
    // %storage = "%foo" makes %Save() write to the local percent variables %fooD/%fooI/%fooS
    // instead of the usual ^DC.Demo.SometimesPersistent* globals.
    New %storage,%fooD,%fooI,%fooS
    Set obj = ##class(DC.Demo.SometimesPersistent).%New()
    Set obj.Foo = "bar"
    Set %storage = "%foo"
    Write !,obj.%Save()
    Kill obj
    Set obj = ..%OpenId(1)
    Write ! ZWrite obj
    ZWrite %fooD
}

Storage Default
{
<Data name="SometimesPersistentDefaultData">
<Value name="1">
<Value>%%CLASSNAME</Value>
</Value>
<Value name="2">
<Value>Foo</Value>
</Value>
</Data>
<DataLocation>@($Get(%storage,"^DC.Demo.SometimesPersistent")_"D")</DataLocation>
<DefaultData>SometimesPersistentDefaultData</DefaultData>
<IdLocation>@($Get(%storage,"^DC.Demo.SometimesPersistent")_"D")</IdLocation>
<IndexLocation>@($Get(%storage,"^DC.Demo.SometimesPersistent")_"I")</IndexLocation>
<StreamLocation>@($Get(%storage,"^DC.Demo.SometimesPersistent")_"S")</StreamLocation>
<Type>%Library.CacheStorage</Type>
}

}