We'll be releasing the next version of git-source-control (https://github.com/intersystems/git-source-control) this month, which includes support back to 2016.2 via an artifact attached to (some) releases. We haven't produced this artifact for the past few releases but will for the next one.

You can follow the project here to be notified about new releases: https://openexchange.intersystems.com/package/Git-for-Shared-Development...

Assuming this is with git-source-control, the approach would be to have the temp folder *and all subfolders* owned by SYSTEM (in this case), as discussed at https://github.com/intersystems/git-source-control?tab=readme-ov-file#du.... This is under folder Properties > Security > Advanced. Click "Change" next to "Owner", and under Object Names to Select type in "SYSTEM". Before applying, check the box with "Replace all child object permission entries..." at the bottom of the dialog. That should do it.

I discussed this with Jason earlier this week. The simplest solution is to use the InterSystems Package Manager (IPM) with a local development environment model - if you have multiple local repos all loaded via zpm with the "-dev" flag, git-source-control will know the right place for everything to go, and you can edit multiple projects under a single isfs folder in VSCode (or via Studio). Note that you may need to be more explicit than before with the Directory attribute on resources in module.xml to get everything mapped correctly.
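For instance, a resource entry in module.xml with an explicit Directory attribute might look like this (the package and folder names here are hypothetical):

```xml
<Resource Name="MyApp.PKG" Directory="cls"/>
```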

If there are needs that this won't quite meet, it may be possible to improve git-source-control to provide further configuration options.

@Gautam Rishi some things that would be helpful / might be worth looking into:

  • Does the user IRIS runs as (most likely irisusr) have access to /Users/abc/workspace? If not, I'd imagine all sorts of things could go wrong. (And git-source-control could be more helpful by making the root cause more obvious.)
  • Where specifically is the error coming from?
    • The error #5001 / message there doesn't seem to be in the git-source-control codebase, at least that I can find.
    • If you set ^%oddENV("callererrorinfo")=2 it'll typically put the full stack trace in the error message. Just be sure to kill ^%oddENV("callererrorinfo") afterward, since the extra detail is noisy when you don't need it.
  • What IRIS version are you running? (write $zv)
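For the second and third bullets, a quick terminal session sketch:

```objectscript
Set ^%oddENV("callererrorinfo") = 2   // errors now include the full stack trace
// ... reproduce the error and capture the full message ...
Kill ^%oddENV("callererrorinfo")      // turn the extra detail back off
Write $ZVersion                        // IRIS version string - useful in the issue report
```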

In general, feel free to file a GitHub issue at https://github.com/intersystems/git-source-control/issues

Hi @Michael Davidovich - it's been a while! Here's a quick sample for how I'd do this:

Class Mike.Demo.REST Extends %CSP.REST
{

/// This method gets called prior to dispatch of the request. Put any common code here
/// that you want to be executed for EVERY request. If pContinue is set to 0, the
/// request will NOT be dispatched according to the UrlMap. In this case it's the
/// responsibility of the user to return a response.
ClassMethod OnPreDispatch(pUrl As %String, pMethod As %String, ByRef pContinue As %Boolean) As %Status
{
    #dim %request As %CSP.Request
    Set pContinue = 0
    Set version = %request.GetCgiEnv("HTTP_X_API_VERSION","unspecified; use X-API-VERSION header")
    Set class = $Case(+version,
        1:"Mike.Demo.v1",
        2:"Mike.Demo.v2",
        :"")
    If (class = "") {
        Set error = $$$ERROR($$$GeneralError,$$$FormatText("Invalid API version: %1",version))
        // Should be HTTP 400, but you probably want to report this differently/better.
        Do ..ReportHttpStatusCode(..#HTTP400BADREQUEST,error)
        Quit $$$OK
    }
    
    Quit $classmethod(class,"DispatchRequest",pUrl,pMethod,1)
}

}

Class Mike.Demo.v1 Extends %CSP.REST
{

Parameter VERSION = 1;

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/version" Method="GET" Call="GetVersion" />
</Routes>
}

ClassMethod GetVersion() As %Status
{
    Write {"version":(..#VERSION)}.%ToJSON()
    Quit $$$OK
}

}

Class Mike.Demo.v2 Extends Mike.Demo.v1
{

Parameter VERSION = 2;

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/version" Method="GET" Call="GetVersion" />
</Routes>
}

}
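Assuming the dispatch class above is deployed as a web application (I'll use /demo as a placeholder path), you could exercise the version routing with %Net.HttpRequest - a sketch, with the host, port, and path all being assumptions:

```objectscript
// Hypothetical: adjust Server/Port and the /demo application path for your instance
Set request = ##class(%Net.HttpRequest).%New()
Set request.Server = "localhost"
Set request.Port = 52773
Do request.SetHeader("X-API-VERSION", "2")
Do request.Get("/demo/version")
Write request.HttpResponse.Data.Read()  // should print {"version":2}, per Mike.Demo.v2
```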

You can use parameters on the return type. For example:

Class DC.Demo.SqlProcCollation
{

ClassMethod Test() As %String [ SqlProc ]
{
    return "Abe Lincoln"
}

ClassMethod Test2() As %String(COLLATION="SQLUPPER") [ SqlProc ]
{
    return "Abe Lincoln"
}

}

Given that:

select DC_Demo.SqlProcCollation_Test(),DC_Demo.SqlProcCollation_Test2()
where DC_Demo.SqlProcCollation_Test() = 'ABE LINCOLN'

Returns no results

select DC_Demo.SqlProcCollation_Test(),DC_Demo.SqlProcCollation_Test2()
where DC_Demo.SqlProcCollation_Test2() = 'ABE LINCOLN'

Returns 1 row, because the SQLUPPER collation on Test2's return value uppercases it (and prepends a space) before the comparison, making the match effectively case-insensitive.

The general pattern I would recommend is demonstrated in https://github.com/intersystems/isc-perf-ui - a simple example of how this all fits together. See especially https://github.com/intersystems/isc-perf-ui/blob/main/module.xml.

I just used this toolset to build a small but meaningful IRIS-based application in about a week. I didn't hand-code a single REST endpoint in ObjectScript, and I got my OpenAPI spec and all my Angular services and TypeScript interfaces for free*.

Of course, if you already have a significant hand-coded REST API, this doesn't help much. For one application my team manages we've added a Forward in our main REST dispatch class to one using isc.rest and gradually migrated endpoints over to use the standardized approach.

* ok, not totally free - there's the small price of writing better ObjectScript: e.g., having methods return a registered JSON-enabled type rather than a raw %DynamicArray or %DynamicObject.
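The Forward approach mentioned above looks roughly like this in the main dispatch class (the route and class names here are hypothetical):

```objectscript
XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<!-- Existing hand-coded endpoints stay where they are -->
<Route Url="/widgets/:id" Method="GET" Call="GetWidget" />
<!-- Anything under /v2 is delegated to an isc.rest-based handler class -->
<Map Prefix="/v2" Forward="MyApp.REST.Handler" />
</Routes>
}
```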

@Dmitry Maslennikov it's not actually a REST service, I just want a web application where I have full control over behavior of URLs under the application root in ObjectScript. %CSP.REST is the easiest (maybe only?) way to do that.

I ended up implementing Login as follows (which at least mostly works):

/// Called for a REST page in the event of a login being required
ClassMethod Login(skipheader As %Boolean = 1) As %Status [ ProcedureBlock = 0 ]
{
    // Support including logo image (most of the time...)
    Set brokerApp = "/csp/broker/"
    Set brokerName = $Replace(%request.URL,$Piece(%request.URL,"/portal",1),brokerApp)
    If (brokerName '= brokerApp) {
        Set filename = $System.CSP.GetFileName(brokerName)
        If ##class(%Library.File).Exists(filename) {
            Set %response.ServerSideRedirect = brokerName
            Quit $$$OK
        }
    }

    // Redirect with trailing slash (supports above)
    If ($Extract(%request.CgiEnvs("REQUEST_URI"),*) '= "/") && (%request.URL = %request.Application) {
        Set %response.Redirect = %request.Application
        Do %response.WriteHTTPHeader()
        Quit $$$OK
    }

    // Suppress "Access Denied" error message
    If %request.Method = "GET" {
        Set %request.Data("Error:ErrorCode",1) = $$$ERROR($$$RequireAuthentication)
    }

    Quit ##class(%CSP.Login).Page()
}

Not sure if this is what you're getting at, but one of the most exciting and unique things about ObjectScript is how natural it is to do metaprogramming (in the sense of writing ObjectScript that treats other ObjectScript code as data). Because all of your code is in the database, you can work with it from an object or relational perspective - see the %Dictionary package.

There are two main ways of doing this: generators and projections. A generator produces, at compile time, the actual code that will run for a method or trigger in a class, typically based on the class definition. A projection adds side effects when a class is compiled, recompiled, or deleted (e.g., updating a global with metadata about the class, or creating and queueing compilation of other classes).

isc.rest has lots of examples of both of these; here's a representative sample:

  • %pkg.isc.rest.handlerProjection does a bunch of SQL/object updates to data after compilation.
  • %pkg.isc.rest.handler (which uses that projection) has a CodeMode = objectgenerator method that fails compilation of subclasses if certain methods are not overridden.
  • %pkg.isc.rest.model.action.projection creates a separate class with dependencies on other classes properly declared and queues it to be compiled. The created class extends %pkg.isc.rest.model.action.handler, which defines some methods with CodeMode = objectgenerator, though a separate class does the actual work of code generation.
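As a minimal illustration of the generator half (a made-up class, not from isc.rest): a CodeMode = objectgenerator method runs at compile time, with %class describing the class being compiled and %code receiving the generated implementation.

```objectscript
Class Demo.Meta.Example
{

/// Generated at compile time; each class compiled against this generator
/// gets a method body that returns its own name.
ClassMethod WhoAmI() As %String [ CodeMode = objectgenerator ]
{
    Do %code.WriteLine(" Quit "_$$$QUOTE(%class.Name))
    Quit $$$OK
}

}
```

After compiling, ##class(Demo.Meta.Example).WhoAmI() returns "Demo.Meta.Example", and a subclass returns its own name, since the generator runs once per class against that class's definition.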

@Ron Sweeney this seems like a good use case to define your own package with dependencies for the things you want to "install if not already installed" - installing your package will install those dependencies only if they haven't been installed. (And if the versions update, they'll be updated.)

Alternatively, you could do something like:
 

if '##class(%ZPM.PackageManager.Developer.Module).NameExists(packagename) { zpm "install "_packagename }

... ended up answering my own question in less time than it took to write it up. Solution (which might just be a workaround) is to force the content-type on the response to be application/octet-stream:

do inst.stream.SetAttribute("ContentDisposition","attachment; filename="""_inst.stream.GetAttribute("FileName")_"""")
do inst.stream.SetAttribute("ContentType","application/octet-stream")
set %response.Redirect = "%25CSP.StreamServer.cls?STREAMOID="_..Encrypt(inst.stream.GetStreamId())

@Eduard Lebedyuk it depends on the caller. In a CI process I could imagine doing different error handling for failed compilation vs. failed unit tests, this would be a way to signal those different modes of failure.

I've taken/seen approaches ranging from shell-centric to ObjectScript-centric, which is a factor in how useful this would be. With the package manager, it's generally simpler to wrap things in <Invoke> or resource processors and then call IRIS with individual zpm commands (i.e., load, then test) from CI. For some of my pre-package-manager CI work, we had a big ObjectScript class that wrapped all the build logic, ran the different sorts of tests, etc. In that case it would be useful to indicate the stage at which the failure occurred.

Regardless, $System.Process.Terminate is simpler to manage than "flag files" for sure, which would be the next best alternative. (IIRC in old days, maybe just pre-IRIS, there were Windows/Linux differences in $System.Process.Terminate's behavior, and that led us to use flag files.)
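For example, a CI driver could encode the failure stage in the process exit status (a sketch; the variables and exit codes are illustrative):

```objectscript
// In a CI entry point, after running the build steps
If 'compiled {
    Do $System.Process.Terminate($Job, 2)  // exit status 2: compilation failed
}
If 'testsPassed {
    Do $System.Process.Terminate($Job, 3)  // exit status 3: unit tests failed
}
Halt  // normal exit (status 0); the CI shell branches on the exit code
```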