Hi @Steve Pisani - the same issue was reported via GitHub issues a little while back (https://github.com/intersystems/git-source-control/issues/137) but discussion trailed off and there wasn't any information there on the resolution.

You should always be able to upgrade zpm. I think the <CLASS DOES NOT EXIST> error could be solved by running:

do ##class(%Studio.SourceControl.Interface).SourceControlClassSet("")

then reinstalling, then reenabling SourceControl.Git.Extension as the source control class for the namespace.
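For reference, re-enabling is the same call, just passing the extension class name instead of an empty string:

do ##class(%Studio.SourceControl.Interface).SourceControlClassSet("SourceControl.Git.Extension")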

Ultimately something funky is going on with SQL stats collection. Given a bit more info it might be possible to mitigate the issue in the git-source-control package. Happy to connect sometime to discuss/troubleshoot.

From a diagnostic perspective, I think the things to do (which we would do on such a call) would be:
* Force single-process compilation: do $System.OBJ.SetQualifiers("/nomulticompile")
* Set a watchpoint that logs %objlasterror to a global every time it gets set:
zbreak *%objlasterror:"N":"$d(%objlasterror)#2":"s ^mtempdbg($i(^mtempdbg))=%objlasterror"
* Force a single-process load of the package: zpm "install git-source-control -DThreads=0"

Then look at the contents of ^mtempdbg to figure out where our errant %Status is coming from and go from there.
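For example, after the install runs you can dump everything that was captured, then decode any individual entry (the subscript 1 here is just the first captured status):

zw ^mtempdbg
do $System.Status.DisplayError(^mtempdbg(1)) ; decode one captured %Status as readable text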

@Michael Davidovich we'd generally set ^UnitTestRoot to the path in the cloned repo and then run tests right from there. Another workflow might be to have unit tests run in their own namespace that has all the same code/supporting data but is distinct from the namespace where you're doing dev work - in that case, it's safe to delete the unit tests from that second namespace. (But practically speaking we just use one namespace and don't delete.)
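As a sketch of that first workflow (the path and test package name here are placeholders for your own):

set ^UnitTestRoot = "/path/to/cloned-repo/tests"
do ##class(%UnitTest.Manager).RunTest("MyPackage.Tests","/nodelete") ; /nodelete keeps the loaded test classes in the namespace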

Re: best practices/shortcuts as an individual developer, I'd honestly just say embrace the automation and iterate in a feature branch with the tests running on commit to feature branches (not just main). It also helps to use the package manager to make a more modular codebase where each module has its own associated tests, so you're not rebuilding and testing everything every time.

In a case where you're (a) using the package manager and (b) being really strict about test coverage, combining the techniques in the article I linked earlier is a good approach - e.g., saying to run specific tests *and* measure test coverage. That helps answer "how well do the tests I'm currently writing cover the code I'm currently adding/changing?"

Of course, you can do this with the normal APIs for running unit tests with test coverage too. (TestCoverage.Manager also has DebugRunTestCase - think of it like %UnitTest.Manager but with test coverage built in.)
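As a rough sketch, assuming DebugRunTestCase mirrors the %UnitTest.Manager arguments (suite directory, then test class; both names below are placeholders):

do ##class(TestCoverage.Manager).DebugRunTestCase("mysuite","MyPackage.Tests.MyTest")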

Sure, here's one:

set req = ##class(%Net.HttpRequest).%New()
set req.SSLConfiguration="MBTA" // Set this up in the Management Portal under Security Management > SSL/TLS Configurations; this is the SSLConfig's "Name"
// For HTTP Basic authentication, simple route is e.g.:
set req.Username = "foo"
set req.Password = "bar"
// For other modes, e.g.:
do req.SetHeader("Authorization","Bearer abcd")
set sc = req.Get("https://api-v3.mbta.com/routes")
if $System.Status.IsError(sc) { do $System.Status.DisplayError(sc) quit } // bail out on a transport-level error
if req.HttpResponse.StatusCode '= 200 { write "Unexpected HTTP status: ",req.HttpResponse.StatusCode,! quit } // handle non-200 responses appropriately
set obj = {}.%FromJSON(req.HttpResponse.Data)
zw obj // There's your object!