Hello Alan, we are lacking in documentation that explains what each of those menu items do. I logged a GitHub issue here to add that: https://github.com/intersystems/git-source-control/issues/296

You mention having an existing application with a lot of code already in source control that you would like to migrate to Git. For this situation, I would initialize a new Git repository, copy all the files from your older source control system into it, and then configure git-source-control to use the new repository for source control. The "Import All" option will then import the files from the repository into IRIS.

In the meantime, here's a quick and dirty explanation of the options you mention:

  • Status: outputs the results of "git status" to the source control output
  • Settings: opens a web page where you can configure git-source-control settings
  • Launch Git UI: opens a web page where you can perform basic Git commands graphically
  • Push to remote branch: equivalent of "git push"
  • Fetch from remote: equivalent of "git fetch"
  • Pull changes from remote branch: equivalent of "git pull", plus a call to the pull event handler
  • Export All: exports all items changed in IRIS since their last export to the Git repository
  • Export All (Force): exports all items in IRIS to the Git repository, including those with older timestamps
  • Import All: imports all items in the Git repository to IRIS if the version in IRIS is outdated
  • Import All (Force): imports all items in the Git repository to IRIS

This can happen if the routine contains a non-printable ASCII character that cannot be represented in XML text. Here is an example from a routine I created:

> set routine = ##class(%Routine).%OpenId("pbarton.test.MAC")
> zw routine.Read()
"pbarton"_$c(10)_" write ""hello"_$c(7)_""""

You can see the routine contains $c(7), which is a non-printable ASCII character. When I export the routine it looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<Export generator="IRIS" version="26" zv="IRIS for Windows (x86-64) 2023.2.0L (Build 159U)" ts="2023-09-13 11:20:00">
<RoutineBase64 name="pbarton.test" type="MAC" languagemode="0" timestamp="66730,40577.6434199">cGJhcnRvbgogd3JpdGUgImhlbGxvByI=
</RoutineBase64>
</Export>
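
You can confirm that the Base64 content is just the raw routine text by decoding it in the terminal:

> zw $System.Encryption.Base64Decode("cGJhcnRvbgogd3JpdGUgImhlbGxvByI=")
"pbarton"_$c(10)_" write ""hello"_$c(7)_""""

Since $c(7) can't appear literally in the XML document, the export falls back to the RoutineBase64 element instead of the usual plain-text Routine element.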

Here's a simple example I wrote up and tested based on documentation.

A web service class on the server:

/// Sample.MyService
Class Sample.MyService Extends %SOAP.WebService
{

/// Name of the WebService.
Parameter SERVICENAME = "MyService";

/// TODO: change this to actual SOAP namespace.
/// SOAP Namespace for the WebService
Parameter NAMESPACE = "http://tempuri.org";

/// Namespaces of referenced classes will be used in the WSDL.
Parameter USECLASSNAMESPACES = 1;

Method ReceiveFile(attachment As %GlobalBinaryStream) [ WebMethod ]
{
    // write the received attachment to a uniquely named file on the server
    set filestream = ##class(%Stream.FileBinary).%New()
    $$$ThrowOnError(filestream.LinkToFile("C:\temp\file_"_$username_$zts_".out"))
    do filestream.CopyFrom(attachment)
    $$$ThrowOnError(filestream.%Save())
}

}

A web client class on the client. This was generated with the SOAP wizard in Studio. Only the datatype of the attachment argument to ReceiveFile has been modified.

Class MyService.Client.MyServiceSoap Extends %SOAP.WebClient [ ProcedureBlock ]
{

/// This is the URL used to access the web service.
Parameter LOCATION = "http://localhost:52773/csp/user/Sample.MyService.cls";

/// This is the namespace used by the Service
Parameter NAMESPACE = "http://tempuri.org";

/// Use xsi:type attribute for literal types.
Parameter OUTPUTTYPEATTRIBUTE = 1;

/// Determines handling of Security header.
Parameter SECURITYIN = "ALLOW";

/// This is the name of the Service
Parameter SERVICENAME = "MyService";

/// This is the SOAP version supported by the service.
Parameter SOAPVERSION = "1.1";

Method ReceiveFile(attachment As %GlobalBinaryStream) [ Final, ProcedureBlock = 1, SoapBindingStyle = document, SoapBodyUse = literal, WebMethod ]
{
 Do (..WebMethod("ReceiveFile")).Invoke($this,"http://tempuri.org/Sample.MyService.ReceiveFile",.attachment)
}

}

And some sample code for the client to use this class to send a file:

// get the file
set filestream = ##class(%Stream.FileBinary).%New()
$$$ThrowOnError(filestream.LinkToFile(pFileName))

// create the attachment
set attachment = ##class(%GlobalBinaryStream).%New()
do attachment.CopyFrom(filestream)

// create the client and send the file
set client = ##class(MyService.Client.MyServiceSoap).%New()
set client.Username = "redacted"
set client.Password = "redacted"
do client.ReceiveFile(attachment)

This will include the entire base-64-encoded file in the body of the SOAP message. An even better way would be to use MTOM attachments for the file. See the documentation here for more details about how to do that: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...
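
If I remember correctly, enabling MTOM on the generated client is as simple as setting its MTOMRequired property before invoking the web method. A minimal, untested sketch (check the MTOM documentation for any server-side settings you also need):

set client = ##class(MyService.Client.MyServiceSoap).%New()
set client.MTOMRequired = 1 // send the stream as an MTOM attachment rather than inline base-64
do client.ReceiveFile(attachment)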

It's possible to do this by using a trigger generator. You can then run GetColumns at class compile time and use the result to write out lines of trigger code using the {fieldName*C} syntax. Just as a warning: generators can be tricky because they add a layer of indirection to your code. The best way to debug is to use the "View Other" command in Studio or VS Code and look directly at the generated code.

Here is some sample code for a trigger generator:

Trigger TestTrigger [ CodeMode = objectgenerator, Event = INSERT/UPDATE, Foreach = row/object ]
{
    // look up the table's columns at class compile time
    set tableName = %compiledclass.SqlQualifiedNameQ
    set st = ##class(%SYSTEM.SQL).GetColumns(tableName,.byname,.bynum,1)
    $$$ThrowOnError(st)
    // write one line of trigger code per column; {Column*C} compiles to a
    // flag indicating whether that column changed
    set i = 1
    while $d(bynum(i)) {
        set xColName = bynum(i)
        do %code.WriteLine(" set ^test("""_xColName_" changed"") = {"_xColName_"*C}")
        set i = i + 1
    }
}
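
For a hypothetical table with columns Name and Age, the generator above writes a trigger body like this (which the trigger compiler then processes further, replacing the {Name*C} references with change-flag checks):

 set ^test("Name changed") = {Name*C}
 set ^test("Age changed") = {Age*C}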

Hi Michael, great questions. A lot of this will depend on your own practices for source code management and deployment. In my team's case we ended up overriding a lot of the %UnitTest behavior to provide reasonable defaults for our process. Hopefully this sparks some more discussion; I'm interested in how other people's answers will differ.

> Are all your unit tests added to .gitignore so they don't get wound up in source code?

No - we want the source code for our unit tests to be in source control, for the same reason all other code is in source control. We make sure that unit tests don't end up on production systems by maintaining different branches for test and production: unit tests live in a separate directory that we don't merge from the test branch to the production branch. We use Perforce; there may be a different recommended workflow for Git that gives you the same results.

> Why does the normal RunTestCase() method automatically delete the extracted unit test class files from the folder?  Why is that the default?

If I had to guess, this is a good default for a deployment process where you compile everything, run tests, and then copy over the code database to production. In that case you would always want test classes to delete themselves after running. In our case we have a different way of deploying code, so we override the RunTest methods to use the "/nodelete" flag by default.
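
As a sketch of that kind of override (MyApp.UnitTest.Manager is a hypothetical subclass name, and I'm assuming the RunTest signature from %UnitTest.Manager):

Class MyApp.UnitTest.Manager Extends %UnitTest.Manager
{

/// Default to keeping unit test classes loaded after a run
ClassMethod RunTest(testspec As %String, qspec As %String, ByRef userparam) As %Status
{
    quit ##super(testspec, "/nodelete"_$get(qspec), .userparam)
}

}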

> When it comes to automated testing (Jenkins specifically) what is the lifecycle?

We use a lifecycle for Jenkins automated testing that is very similar to the one you describe:

  1. Pull all code from the remote repo.
  2. Run an %Installer class on the build instance that overwrites the code database, so we start from scratch.
  3. Load and compile all code from the local workspace into the build instance, including tests, and report any compilation error as a build failure (see the sketch after this list).
  4. Run all tests.
  5. Generate JUnit-format test reports and Cobertura-format test coverage reports.
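
A rough sketch of steps 3 and 4 as a build method (the workspace paths are placeholders for wherever Jenkins checks out the code):

    // load and compile everything recursively; "ck" = compile and keep generated source
    set sc = $System.OBJ.LoadDir("/workspace/src", "ck", .errorlog, 1)
    quit:$$$ISERR(sc) sc  // surface compilation errors as a build failure

    // point %UnitTest at the test directory and run everything without deleting classes
    set ^UnitTestRoot = "/workspace/tests"
    quit ##class(%UnitTest.Manager).RunTest(, "/nodelete")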

Good question - it looks like the VS Code plugins only support password authentication for now. I'd encourage opening an issue against the InterSystems Server Manager GitHub project if you have a need for it. In theory this would be possible with some implementation work on the VS Code plugin. You would also need to enable delegated authentication on the /api/atelier web application in IRIS with a custom ZAUTHENTICATE routine to support OAuth.

Hi Joseph, I agree on using Client Credentials for this use case. As far as I know this is the only OAuth 2.0 grant type that authorizes server-to-server communication without the context of a user agent logging in. You can implement this in InterSystems IRIS by overriding the ValidateClient() method of the OAuth validation class: https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic....

One thing to keep in mind is that by default anybody can register a new client with your authorization server by using the dynamic client registration endpoint. So the presence of a valid client isn't enough to authorize the API call. You will need some additional authorization logic.

The SSO system we use for this Developer Community has a "forgot password" implementation. Unfortunately it is down right now, but under normal circumstances you would be able to try it out here: https://login.intersystems.com/login/SSO.UI.PasswordReset.cls

It works as follows:

  • The user enters their email address into a form. They are then taken to another form with an input for a token.
  • If the email address exists in the system, they are sent an email with a secure random token to input. Otherwise they are sent an email with instructions on how to register for an account.
  • Once the user inputs the token from their email to the page, they are taken to another form to set their new password.

It's important to avoid user enumeration by not revealing in the UI whether or not a user with the provided username or email address exists in the system. You should also hash the password reset tokens before storing them in a database, give them a short lifetime before they expire, and invalidate the token after it's used once.
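
As a rough sketch of the token handling (the ^ResetTokenHash global and the token length are illustrative choices, not part of any built-in API):

// generate a secure random token to send to the user by email
set token = $System.Encryption.Base64Encode($System.Encryption.GenCryptRand(32))

// store only a hash of the token, along with when it was issued
set hash = $System.Encryption.SHAHash(256, token)
set ^ResetTokenHash(hash) = $ztimestamp

// when the user submits a token: hash it, look it up, check that it was issued
// within the allowed lifetime, then kill the node so it can't be used again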

I highly recommend OWASP for more resources on how to do this securely: https://cheatsheetseries.owasp.org/cheatsheets/Forgot_Password_Cheat_She...

Here are a couple of ways to avoid <STORE> errors by increasing the per-process memory available to IRIS processes:

  • Increase the 'bbsiz' parameter, either by editing the CPF file or in the System Management Portal under System Administration > Configuration > System Configuration > Memory and Startup.
  • In code, in the specific process throwing the <STORE> error, set the $zstorage special variable to increase the memory available to that process (see the snippet below).
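
For example, for the second option:

// $zstorage is the per-process memory limit in kilobytes
write $zstorage
// raise the limit for the current process only (512000 KB is roughly 500 MB)
set $zstorage = 512000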

Hello Martin,

Using "DROP COLUMN" deletes the property from the class definition and modifies the storage definition by removing the property name. The storage definition will still have a "Value" item for the data, but it no longer includes the name of the property.

If you have the class definition in source control, the easiest way to truly delete the data is to revert to the previous version. Then you can run DROP COLUMN again with the %DELDATA option to remove the column and delete the data.

If this is not possible, I would look at the storage definition and find the now-empty slot. The "name" attribute on that slot gives its position in the stored $list. You could then iterate through the global where the data is stored and run something like set $list(value, slot) = "" on each node to clear the data. I would recommend contacting Support before doing this to see if they have better suggestions.
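
A sketch of that cleanup, assuming (hypothetically) that the data lives in ^Sample.PersonD and the orphaned slot is position 4; check your own storage definition for both:

set id = ""
for {
    set id = $order(^Sample.PersonD(id), 1, value)
    quit:id=""
    // blank out the orphaned $list position, leaving the other fields intact
    set $list(value, 4) = ""
    set ^Sample.PersonD(id) = value
}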

Hi Neil,

Using OAuth2 in a mirrored environment would require some additional scripting to keep the configuration in sync between mirror members, since as you note it's stored in %SYS.

The Server Configuration on the auth server won't be changing much over time so I'd recommend writing an installation script that sets up all relevant configuration. Below are some snippets from an installation class I'm using on a Caché authorization server:

ClassMethod CreateServerConfiguration(pOrigNamespace As %String = "%SYS", Output interval As %Integer, Output server) As %Status
{
    Set server = ##class(OAuth2.Server.Configuration).Open(.tSC)
    If $IsObject(server) {
        Set tSC = server.Delete()
        If $$$ISERR(tSC) Quit tSC
    }

    Set interval = 3600

    Set server = ##class(OAuth2.Server.Configuration).%New()
    Set server.Description = "Single Sign-On"
    Set issuer = ##class(OAuth2.Endpoint).%New()
    Set issuer.Host = ..#IssuerHost
    Set issuer.Prefix = ..#IssuerPrefix
    Set server.IssuerEndpoint = issuer

    Set scopes = ##class(%ArrayOfDataTypes).%New()
    Do scopes.SetAt("OpenID Connect","openid")
    Do scopes.SetAt("E-mail Address","email")
    Do scopes.SetAt("User Profile","profile")
    // Add whatever other custom scopes you need
    Set server.SupportedScopes = scopes

    Set server.AllowUnsupportedScope = 0
    Set server.SupportedGrantTypes = "APCI"
    Set server.ReturnRefreshToken = ""
    Set server.AudRequired = 0

    Set server.CustomizationRoles = "%DB_CACHESYS,%Manager"
    Set server.CustomizationNamespace = pOrigNamespace
    Set server.AuthenticateClass = ..#CustomAuthenticateClassName
    Set server.ValidateUserClass = ..#CustomValidateClassName
    Set server.GenerateTokenClass = "%OAuth2.Server.JWT"

    Set server.AccessTokenInterval = interval
    Set server.RefreshTokenInterval = 3*interval
    Set server.AuthorizationCodeInterval = 120
    Set server.ServerCredentials = ..#ServerX509Name
    Set server.SigningAlgorithm = "RS512"
    Set server.KeyAlgorithm = ""
    Set server.EncryptionAlgorithm = ""
    Set server.SSLConfiguration = ..#SSLConfig

    Quit server.Save()
}

ClassMethod CreateServerDefinition(Output server) As %Status
{
    Set tIssuer = ..#EndpointRoot

    Set server = ##class(OAuth2.ServerDefinition).%OpenId("singleton")
    Set:'$IsObject(server) server = ##class(OAuth2.ServerDefinition).%New()
    Set server.IssuerEndpoint = tIssuer
    Set server.AuthorizationEndpoint = tIssuer_"/authorize"
    Set server.TokenEndpoint = tIssuer_"/token"
    Set server.UserinfoEndpoint = tIssuer_"/userinfo"
    Set server.IntrospectionEndpoint = tIssuer_"/introspection"
    Set server.RevocationEndpoint = tIssuer_"/revocation"
    Set server.ServerCredentials = ..#ServerX509Name
    Quit server.%Save()
}

The client descriptions are likely to change over time as new clients are registered. I think to keep these in sync between mirror members you'll need to regularly export the relevant globals directly from the primary, transport them to the secondary, and import them into the %SYS namespace. Below are some methods that do the export and import:

ClassMethod ExportClientConfiguration(pDestFile As %String) As %Status
{
    new $namespace
    set $namespace = "%SYS"
    for type = "D","I" {
        set tList("OAuth2.Server.Client"_type_".GBL") = ""
        set tList("OAuth2.Client.Metadata"_type_".GBL") = ""
    }
    set tSC = ##class(%File).CreateDirectoryChain(##class(%File).GetDirectory(pDestFile))
    return:$$$ISERR(tSC) tSC
    return $System.OBJ.Export(.tList,pDestFile,,.errorlog)
}

ClassMethod ImportClientConfiguration(pSourceFile As %String) As %Status
{
    new $namespace
    set $namespace = "%SYS"
    return $System.OBJ.Load(.pSourceFile,,.errorlog)
}

You could use a task to do this regularly on a short schedule.
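
A sketch of such a task (MyApp.Task.ExportOAuth2Clients and the file path are placeholders, and MyApp.Installer stands in for whatever class holds the ExportClientConfiguration method above):

Class MyApp.Task.ExportOAuth2Clients Extends %SYS.Task.Definition
{

Parameter TaskName = "Export OAuth2 Client Configuration";

/// Destination file for the exported globals
Property DestFile As %String [ InitialExpression = "/shared/oauth2-clients.xml" ];

Method OnTask() As %Status
{
    quit ##class(MyApp.Installer).ExportClientConfiguration(..DestFile)
}

}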

Hi Stephen,

I think you're on the right track by using custom claims for access control. This is what I've done in the past. Scopes in OAuth are intended to be granted by the user, which is not quite what you want here.

As far as I know there's no way to customize the token response. Your best option is to add the custom claims to the userinfo response. This means adding logic to your ValidateUser() method that will set the claim values and also add them to the list of user info claims.

Set tClaim = ##class(%OAuth2.Server.Claim).%New()
Do properties.UserinfoClaims.SetAt(tClaim,"MyCustomNamespace/MyCustomClaim")
Do properties.SetClaimValue("MyCustomNamespace/MyCustomClaim","something based on the user")

Then when the "resource server" part of your app validates the access token, you can call the userinfo endpoint to get this claim and determine the user's permissions.

$$$ThrowOnError(##class(%SYS.OAuth2.AccessToken).GetUserinfo(appName,accessToken,,.pUserInfo))
set myCustomClaim = pUserInfo."MyCustomNamespace/MyCustomClaim"

If you'd rather not make a separate call to userinfo each time, your other option is to add the claims to the body of the JWT. That looks similar in the ValidateUser() method, but with properties.JWTClaims instead of UserinfoClaims. Then your resource server can validate the signature on the JWT and get the claim from the token body using the ##class(%SYS.OAuth2.Validation).ValidateJWT() method. This is a little more complicated because you have to enforce that the JWT actually is signed (unfortunately, ValidateJWT() will accept a token with no signature).
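
Here's roughly what that looks like on the resource-server side (a sketch; I'm assuming the documented ValidateJWT() signature, where the securityParameters("sigalg") subscript reports which signing algorithm was used):

set valid = ##class(%SYS.OAuth2.Validation).ValidateJWT(appName, accessToken, , , .claims, .securityParameters, .sc)
$$$ThrowOnError(sc)
// ValidateJWT accepts unsigned tokens, so reject any token without a signing algorithm
if $get(securityParameters("sigalg")) = "" {
    $$$ThrowOnError($$$ERROR($$$GeneralError, "access token JWT is not signed"))
}
set myCustomClaim = claims."MyCustomNamespace/MyCustomClaim"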

I don't know of anything similar to Hibernate in Caché. If you want to encapsulate some data access logic inside of a class, it's helpful to define a class query that other objects can access through dynamic SQL.

More specific to your getByCode example: I use the auto-generated index methods a lot. For your dictionary table, I would create a unique index on Code (Index CodeIndex On Code [ Unique ]) and then use ##class(Whatever.Dictionary).CodeIndexOpen() to open the object I want.
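
Putting both patterns together, a sketch of what that dictionary class might look like (class and property names are placeholders; CodeIndexOpen() is generated by the compiler from the unique index):

Class Whatever.Dictionary Extends %Persistent
{

Property Code As %String;

Property Description As %String;

Index CodeIndex On Code [ Unique ];

/// A class query that other code can run through dynamic SQL
Query ByDescription(search As %String) As %SQLQuery
{
SELECT Code, Description FROM Whatever.Dictionary
WHERE Description %STARTSWITH :search
}

}

Then set dict = ##class(Whatever.Dictionary).CodeIndexOpen("ABC",,.sc) opens the object whose Code is "ABC", and ByDescription can be prepared with %SQL.Statement's %PrepareClassQuery().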

For the first part of your question, here's a sample method that gets a bearer token from the request:

ClassMethod GetAccessTokenFromRequest(pRequest As %CSP.Request = {%request}) As %String
{
    Set accessToken=""
    Set authorizationHeader=pRequest.GetCgiEnv("HTTP_AUTHORIZATION")
    If $zcvt($piece(authorizationHeader," ",1),"U")="BEARER" {
        If $length(authorizationHeader," ")=2 {
            Set accessToken=$piece(authorizationHeader," ",2)
        }
    }
    return accessToken
}

EDIT:
And here is a full sample of a REST handler that retrieves a bearer token and reuses it to make a request against another REST service.

Class API.DemoBearerToken Extends %CSP.REST
{

Parameter APIHOST = "localhost";

Parameter APIPORT = 52773;

Parameter APIPATH = "/some/other/path";

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/example" Method="GET" Call="example"/>
</Routes>
}

ClassMethod example() As %Status
{
    set accessToken = ..GetAccessTokenFromRequest(%request)
    set req = ##class(%Net.HttpRequest).%New()
    set req.Https = 1
    set req.SSLConfiguration = "some ssl config"
    set req.Server = ..#APIHOST
    set req.Port = ..#APIPORT
    set req.Authorization = "Bearer "_accessToken
    $$$ThrowOnError(req.Get(..#APIPATH))
    set %response.Status = req.HttpResponse.StatusCode
    set %response.ContentType = req.HttpResponse.ContentType
    if req.HttpResponse.StatusCode < 400 { // not an error response
        set jsonData = {}.%FromJSON(req.HttpResponse.Data)
        write jsonData.%ToJSON()
    }
    return $$$OK
}

ClassMethod GetAccessTokenFromRequest(pRequest As %CSP.Request = {%request}) As %String
{
    Set accessToken=""
    Set authorizationHeader=pRequest.GetCgiEnv("HTTP_AUTHORIZATION")
    If $zcvt($piece(authorizationHeader," ",1),"U")="BEARER" {
        If $length(authorizationHeader," ")=2 {
            Set accessToken=$piece(authorizationHeader," ",2)
        }
    }
    return accessToken
}

}

Hi Arun,

The simplest solution is to use the CSP session with a custom login page. That way you can use the built-in CSP authentication. Sergey's answer to this post has a good example: https://community.intersystems.com/post/authentication-options-cach%C3%A.... The downside is that it's not truly stateless, and it requires you to serve your web files through CSP.

If your web application isn't connected to CSP, I recommend using OAuth 2.0. This is a little more work since it involves setting up an authorization server. There's an excellent series of tutorials here:

https://community.intersystems.com/post/intersystems-iris-open-authoriza....

By the way, here's the code I'm using to test. If you uncomment the commented line, it gives the SAX parser error.

Class XML.Sample2 Extends (%RegisteredObject, %XML.Adaptor)
{

Property StringEmpty As %String;

Property StringZerowidth As %String;

Property IntegerEmpty As %Integer;

Property IntegerZerowidth As %Integer;

Property BoolTrue As %Boolean;

Property BoolFalse As %Boolean;

Property BoolZerowidth As %Boolean;

ClassMethod Test() As %Status
{
    set sample = ..%New()
    set sample.StringEmpty = ""
    set sample.StringZerowidth = $c(0)
    set sample.IntegerEmpty = ""
    //set sample.IntegerZerowidth = $c(0)
    set sample.BoolTrue = 1
    set sample.BoolFalse = 0
    set sample.BoolZerowidth = $c(0)
    set writer = ##class(%XML.Writer).%New()
    $$$QuitOnError(writer.OutputToString())
    $$$QuitOnError(writer.RootElement("root"))
    $$$QuitOnError(writer.Object(sample,"sample"))
    $$$QuitOnError(writer.EndRootElement())
    set string = writer.GetXMLString()
    write !, string
    set reader = ##class(%XML.Reader).%New()
    $$$QuitOnError(reader.OpenString(string))
    do reader.Correlate("sample","XML.Sample2")
    do reader.Next(.object, .sc)
    $$$QuitOnError(sc)
    for prop = "StringEmpty","StringZerowidth","IntegerEmpty","IntegerZerowidth","BoolTrue","BoolFalse","BoolZerowidth" {
        write !, prop, ": ", $replace($property(object,prop),$c(0),"$c(0)")
    }
    return $$$OK
}

}