Article · Mar 24, 2020 · 5m read

Unit Tests and Test Coverage in the InterSystems Package Manager

This article will describe processes for running unit tests via the InterSystems Package Manager (aka IPM - see https://openexchange.intersystems.com/package/InterSystems-Package-Manager-1), including test coverage measurement (via https://openexchange.intersystems.com/package/Test-Coverage-Tool).

Unit testing in ObjectScript

There's already great documentation about writing unit tests in ObjectScript, so I won't repeat any of that. You can find the Unit Test tutorial here: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=TUNT_preface

It's best practice to include your unit tests somewhere separate in your source tree, whether it's just "/tests" or something fancier. Within InterSystems, we end up using /internal/testing/unit_tests/ as our de facto standard, which makes sense because tests are internal/non-distributed and there are types of tests other than unit tests, but this might be a bit complex for simple open source projects. You may see this structure in some of our GitHub repos.

From a workflow perspective, this is super easy in VSCode - you just create the directory and put the classes there. With older server-centric approaches to source control (those used in Studio) you'll need to map this package appropriately, and the approach for that varies by source control extension.

From a unit test class naming perspective, my personal preference (and the best practice for my group) is:

UnitTest.<package/class being tested>[.<method/feature being tested>]

For example, if the unit tests are for method Foo in class MyApplication.SomeClass, the unit test class would be named UnitTest.MyApplication.SomeClass.Foo; if the tests cover the class as a whole, it would just be UnitTest.MyApplication.SomeClass.
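As a concrete sketch of that convention (the application class, method, and expected values here are all hypothetical, purely for illustration), such a test class might look like:

```objectscript
/// Hypothetical tests for method Foo in class MyApplication.SomeClass,
/// following the UnitTest.<package/class>.<method> naming convention.
/// MyApplication.SomeClass and its Foo() method are illustrative, not real.
Class UnitTest.MyApplication.SomeClass.Foo Extends %UnitTest.TestCase
{

Method TestFooReturnsExpectedValue()
{
    // $$$AssertEquals is provided by %UnitTest.TestCase;
    // the expected value 4 is an assumption for this sketch.
    Do $$$AssertEquals(##class(MyApplication.SomeClass).Foo(2), 4, "Foo(2) should return 4")
}

}
```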

Unit tests in IPM

Making the InterSystems Package Manager aware of your unit tests is easy! Just add a line to module.xml like the following (taken from https://github.com/timleavitt/ObjectScript-Math/blob/master/module.xml - a fork of @Peter Steiwer's excellent math package from the Open Exchange, which I'm using as a simple motivating example):

<Module>
  ...
  <UnitTest Name="tests" Package="UnitTest.Math" Phase="test"/>
</Module>

What this all means:

  • The unit tests are in the "tests" directory underneath the module's root.
  • The unit tests are in the "UnitTest.Math" package. This makes sense, because the classes being tested are in the "Math" package.
  • The unit tests run in the "test" phase in the package lifecycle. (There's also a "verify" phase in which they could run, but that's a story for another day.)

Running Unit Tests

With unit tests defined as explained above, the package manager provides some really helpful tools for running them. You can still set ^UnitTestRoot, etc. as you usually would with %UnitTest.Manager, but you'll probably find the following options much easier - especially if you're working on several projects in the same environment.

You can try out all of these by cloning the objectscript-math repo listed above and then loading it with zpm "load /path/to/cloned/repo/", or try them on your own package by replacing "objectscript-math" with your package name (and adjusting the test names accordingly).

To reload the module and then run all the unit tests:

zpm "objectscript-math test"

To just run the unit tests (without reloading):

zpm "objectscript-math test -only"

To just run the unit tests (without reloading) and provide verbose output:

zpm "objectscript-math test -only -verbose"

To just run a particular test suite (meaning a directory of tests - in this case, all the tests in UnitTest/Math/Utils) without reloading, and provide verbose output:

zpm "objectscript-math test -only -verbose -DUnitTest.Suite=UnitTest.Math.Utils"

To just run a particular test case (in this case, UnitTest.Math.Utils.TestValidateRange) without reloading, and provide verbose output:

zpm "objectscript-math test -only -verbose -DUnitTest.Case=UnitTest.Math.Utils.TestValidateRange"

Or, if you're just working out the kinks in a single test method:

zpm "objectscript-math test -only -verbose -DUnitTest.Case=UnitTest.Math.Utils.TestValidateRange -DUnitTest.Method=TestpValueNull"

Test coverage measurement via IPM

So you have some unit tests - but are they any good? Measuring test coverage won't fully answer that question, but it at least helps. I presented on this at Global Summit back in 2018 - see https://youtu.be/nUSeGHwN5pc.

The first thing you'll need to do is install the test coverage package:

zpm "install testcoverage"

Note that the Test Coverage Tool doesn't require IPM to install or run; you can find more information on the Open Exchange: https://openexchange.intersystems.com/package/Test-Coverage-Tool

That said, you can get the most out of the test coverage tool if you're also using IPM.

Before running tests, you need to specify which classes/routines you expect your tests to cover. This is important because, in very large codebases (for example, HealthShare), measuring and collecting test coverage for all of the files in the project may require more memory than your system has. (Specifically, gmheap for the line-by-line monitor, if you're curious.)

The list of files goes in a file named coverage.list within your unit test root; different subdirectories (suites) of unit tests can have their own copy of this to override which classes/routines will be tracked while the test suite is running.

For a simple example with objectscript-math, see: https://github.com/timleavitt/ObjectScript-Math/blob/master/tests/UnitTest/coverage.list ; the user guide for the test coverage tool goes into further details.
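As a rough sketch of the file's contents (the names below are hypothetical; see the linked example and the user guide for the authoritative format), coverage.list lists one target per line, using .PKG, .CLS, and .MAC suffixes for packages, classes, and routines respectively:

```
MyApplication.PKG
MyApplication.SomeClass.CLS
MyRoutine.MAC
```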

To run the unit tests with test coverage measurement enabled, there's just one more argument to add to the command, specifying that TestCoverage.Manager should be used instead of %UnitTest.Manager to run the tests:

zpm "objectscript-math test -only -DUnitTest.ManagerClass=TestCoverage.Manager"
The output (even in non-verbose mode) will include a URL where you can view which lines of your classes/routines were covered by unit tests, as well as some aggregate statistics.

Next Steps

What about automating all of this in CI? What about reporting unit test results and coverage scores/diffs? You can do that too! For a simple example using Docker, Travis CI and codecov.io, see https://github.com/timleavitt/ObjectScript-Math ; I'm planning to write this up in a future article that looks at a few different approaches.

Discussion (23)

Hello @Timothy Leavitt
Thank you for this great article!

I tried to add the "UnitTest" tag to my module.xml, but something went wrong during the publish process.
<UnitTest Name="tests" Package="UnitTest.Isc.JSONFiltering.Services" Phase="test"/>

The tests directory contains a directory tree UnitTest/Isc/JSONFiltering/Services/ with a %UnitTest.TestCase subclass.

Exported 'tests' to /tmp/dirLNgC2s/json-filter-1.2.0/tests/.tests
ERROR #5018: Routine 'tests' does not exist
[json-filter]   Package FAILURE - ERROR #5018: Routine 'tests' does not exist
ERROR #5018: Routine 'tests' does not exist


I also tried with the objectscript-math project. This is the output of objectscript-math publish -v:

Exported 'src/cls/UnitTests' to /tmp/dir7J1Fhz/objectscript-math-0.0.4/src/cls/unittests/.src/cls/unittests
ERROR #5018: Routine 'src/cls/UnitTests' does not exist
[objectscript-math]     Package FAILURE - ERROR #5018: Routine 'src/cls/UnitTests' does not exist
ERROR #5018: Routine 'src/cls/UnitTests' does not exist

Did I miss something, or is it a package manager issue?
Thank you.

Thanks, @Timothy Leavitt!

For others working through this too, I wanted to sum some points up that I discussed with Tim over PM.

- Tim reiterated the usefulness of the Test Coverage tool and the Cobertura output for finding starting places based on complexity and what are the right blocks to test.

- When it comes to testing persistent data classes, it is indeed tricky but valuable (e.g. for data validation steps). Using transactions (TSTART and TROLLBACK) is a good approach for this.
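As a minimal sketch of that transaction approach (MyApplication.Person and its Name property are hypothetical, purely for illustration):

```objectscript
/// Illustrative test that creates persistent data inside a transaction and
/// rolls it back afterwards, leaving the database unchanged.
/// MyApplication.Person is an assumed persistent class, not a real one.
Class UnitTest.MyApplication.Person Extends %UnitTest.TestCase
{

Method TestValidPersonSaves()
{
    TSTART
    Set person = ##class(MyApplication.Person).%New()
    Set person.Name = "Test Person"
    // Exercise real validation/save logic against actual storage...
    Do $$$AssertStatusOK(person.%Save(), "A valid person should save")
    // ...then discard the test data so nothing persists beyond the test.
    TROLLBACK
}

}
```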

I also discussed the video from some years ago on the mocking framework. It's an awesome approach, but for me it depends on retooling classes to fit the framework. I'm not in a place where I want to (or can) rewrite classes for the sake of testing, though this might be a good approach for others. There may be other open source mocking frameworks available later.

Hope this helps and encourages more conversation!  In a perfect world we'd start with our tests and code from there, but well, the world isn't perfect!

@Timothy Leavitt and others: I know this isn't Jenkins support, but I seem to be having trouble allowing the account running Jenkins to get into IRIS. Just trying to get this to work locally at the moment.  I'm running on Windows through an organizational account, so I created a new local account on the computer, jenkinsUser, which I'm to understand is the 'user' that logs in and runs everything on Jenkins.  When I launch IRIS in the build script using . . .

C:\MyPath\bin\irisdb -s C:\MyPath\mgr -U MYNAMESPACE  0<inFile

 . . . I can see in the console that it's trying to log in. I turned on O/S authentication for the system and allowed the %System.Login function to use Kerberos. I can launch Terminal from my tray and I'm logged in without a user/password prompt.

I am guessing that IRIS doesn't know about my jenkinsUser local account, so it won't allow that user to use O/S authentication? I'm trying to piece this together in my head. How can I give this computer user running Jenkins access to IRIS without authentication?

Hope this helps others who are trying to set this up.

Trial and error, folks:

@Timothy Leavitt your presentation mentioned a custom version of the Cobertura plugin for the scatter plot . . . is that still necessary, or does the current version support that? Not sure if I see any mention of the custom plugin on the GitHub page.

Otherwise, I seem to be missing something key: I don't have build logic in my script. I suppose I just thought that step was for automation purposes, so that the latest code would be compiled on whatever server. I don't have anything like that yet and thought I could just run the test coverage utility, but it's coming up with nothing. I'll keep playing tomorrow, but I'd appreciate anyone's thoughts on this, especially if you've set it up before!

For those following along, I finally got this to work by creating the "coverage.list" file in the unit test root. I tried setting the parameter node "CoverageClasses" but that didn't work (maybe I used $LB wrong).

Still not sure how to get the scatter plot for complexity as @Timothy Leavitt mentioned in the presentation the Cobertura plugin was customized.  Any thoughts on that are appreciated!

@Michael Davidovich I was out Friday, so still catching up on all this - glad you were able to figure out coverage.list. That's generally a better way to go for automation than setting a list of classes.

re: the plugin, yes, that's it! There's a GitHub issue that's probably the same here: https://github.com/timleavitt/covcomplplot-plugin/issues/1 - it's back on my radar given what you're seeing.

So I originally installed the scatter plot plugin from the library, not the one from your repo. I uninstalled that and I'm trying to install the one you modified. I'm having a little trouble because it seems I have to download your source, make sure I have a JDK and Maven installed, and package the code into a .hpi file? Does this sound right? I'm getting some issues with the POM file while running 'mvn package'. Is it possible to provide the packaged file for those of us who aren't Java-savvy?

Edit: I created a separate thread about this so it gets more visibility: The thread can be found from here: https://community.intersystems.com/post/test-coverage-coverage-report-not-generating-when-running-unit-tests-zpm

...

Hello,

@Timothy Leavitt, thanks for the great article! I am facing a slight problem and was wondering if you, or someone else, might have some insight into the matter.

I am running my unit tests in the following way with ZPM, as instructed. They work well and test reports are generated correctly. Test coverage is also measured correctly according to the logs. However, even though I instructed ZPM to generate Cobertura-style coverage reports, it is not generating one. When I run the GenerateReport() method manually, the report is generated correctly.

I am wondering what I am doing wrong. I used the test flags from the ObjectScript-Math repository, but they seem not to work.

Here is the ZPM command I use to run the unit tests:

zpm "common-unit-tests test -only -verbose
-DUnitTest.ManagerClass=TestCoverage.Manager
-DUnitTest.UserParam.CoverageReportClass=TestCoverage.Report.Cobertura.ReportGenerator
-DUnitTest.UserParam.CoverageReportFile=/opt/iris/test/CoverageReports/coverage.xml
-DUnitTest.Suite=Test.UnitTests.Fw
-DUnitTest.JUnitOutput=/opt/iris/test/TestReports/junit.xml
-DUnitTest.FailuresAreFatal=1":1

The test suite runs okay, but coverage reports do not generate. However, when I run these commands stated in the TestCoverage documentation, the reports are generated.

Set reportFile = "/opt/iris/test/CoverageReports/coverage.xml"
Do ##class(TestCoverage.Report.Cobertura.ReportGenerator).GenerateReport(<index>, reportFile)

Here is a short snippet from the logs where you can see that test coverage analysis is run:

Collecting coverage data for Test: .036437 seconds
  Test passed

Mapping to class/routine coverage: .041223 seconds
Aggregating coverage data: .019707 seconds
Code coverage: 41.92%

Use the following URL to view the result:
http://192.168.208.2:52773/csp/sys/%25UnitTest.Portal.Indices.cls?Index=19&$NAMESPACE=COMMON
Use the following URL to view test coverage data:
http://IRIS-LOCALDEV:52773/csp/common/TestCoverage.UI.AggregateResultViewer.cls?Index=17
All PASSED

[COMMON|common-unit-tests] Test SUCCESS

What am I doing wrong?

Thank you, and have a good day!
Kari Vatjus-Anttila

Here we are using the mocking framework that we developed (GitHub - GendronAC/InterSystems-UnitTest-Mocking: a mocking framework to use with InterSystems' products, written in ObjectScript).

Have a look at the https://github.com/GendronAC/InterSystems-UnitTest-Mocking/blob/master/S... class. Instead of calling ..SendRequestAsync directly, we do ..ensHost.SendRequestAsync(...). Doing so enables us to create expectations, e.g. ..Expect(..ensHost.SendRequestAsync(...)).

Here's a code sample:

Class Sample.Src.CExampleService Extends Ens.BusinessService
{

/// The type of adapter used to communicate with external systems
Parameter ADAPTER = "Ens.InboundAdapter";

Property TargetConfigName As %String(MAXLEN = 1000);

Parameter SETTINGS = "TargetConfigName:Basic:selector?multiSelect=0&context={Ens.ContextSearch/ProductionItems?targets=1&productionName=@productionId}";

// -- Injected dependencies for unit tests

Property ensService As Ens.BusinessService [ Private ];

/// initialize Business Host object
Method %OnNew(
	pConfigName As %String,
	ensService As Ens.BusinessService = {$This}) As %Status
{
   set ..ensService = ensService
   return ##super(pConfigName)
}

/// Override this method to process incoming data. Do not call SendRequestSync/Async() from outside this method (e.g. in a SOAP Service or a CSP page).
Method OnProcessInput(
	pInput As %RegisteredObject,
	Output pOutput As %RegisteredObject,
	ByRef pHint As %String) As %Status
{
   set output = ##class(Ens.StringContainer).%New("Blabla")

   return ..ensService.SendRequestAsync(..TargetConfigName, output)
}

}
Import Sample.Src

Class Sample.Test.CTestExampleService Extends Tests.Fw.CUnitTestBase
{

Property exampleService As CExampleService [ Private ];

Property ensService As Ens.BusinessService [ Private ];

ClassMethod RunTests()
{
   do ##super()
}

Method OnBeforeOneTest(testName As %String) As %Status
{
   set ..ensService = ..CreateMock()

   set ..exampleService = ##class(CExampleService).%New("Unit test", ..ensService)
   set ..exampleService.TargetConfigName = "Some test target"

   return ##super(testName)
}

// -- OnProcessInput tests --

Method TestOnProcessInput()
{
   do ..Expect(..ensService.SendRequestAsync("Some test target",
                                             ..NotNullObject(##class(Ens.StringContainer).%ClassName(1)))
               ).AndReturn($$$OK)

   do ..ReplayAllMocks()

   do $$$AssertStatusOK(..exampleService.OnProcessInput())

   do ..VerifyAllMocks()
}

Method TestOnProcessInputFailure()
{
   do ..Expect(..ensService.SendRequestAsync("Some test target",
                                             ..NotNullObject(##class(Ens.StringContainer).%ClassName(1)))
               ).AndReturn($$$ERROR($$$GeneralError, "Some error"))

   do ..ReplayAllMocks()

   do $$$AssertStatusNotOK(..exampleService.OnProcessInput())

   do ..VerifyAllMocks()
}

}

The answer about mocking is great.

At the TestCoverage level, by default the tool tracks coverage for the current process only. This prevents noise / pollution of stats from other concurrent use of the system. You can override this (see readme at https://github.com/intersystems/TestCoverage - set tPidList to an empty string), but there are sometimes issues with the line-by-line monitor if you do; #14 has a bit more info on this.

Note - question also posted/answered at https://github.com/intersystems/TestCoverage/issues/33