I usually follow these steps when I have two similar but distinct codebases:

  1. Create a new repo.
  2. Export everything from the LIVE server into the repo. Commit.
  3. Export everything from the TEST server into the repo. Commit.

The commit from step (3) will contain all the differences between LIVE and TEST. I assume the code on TEST is newer, so it should be the later commit, but if you want to, you can swap the export order.

Before making commit (3) you might want to remove trivial differences such as whitespace. Also, GitLab has a compare mode for commits which automatically ignores whitespace differences.
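The export itself can be done from a terminal on each server. A minimal sketch, assuming the code lives in classes (the item mask and the path into your repo's working copy are placeholders):

  // run on LIVE first, commit, then repeat on TEST and commit again
  do $system.OBJ.Export("*.cls", "/path/to/repo/export.xml")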

While testing, I see I can easily set %session.Data to hold data I want to preserve.  

No problem! I thought you were having issues with that part.

How, on my next API call, can I use that session?

You just need to supply the cookies CSPSESSIONID and CSPWSERVERID. With those you'll have the same session. In browsers (and I think in Postman) that's automatic, so you don't have to do anything. It should work out of the box as long as you have UseSession set to 1.
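If you call the API from ObjectScript rather than from a browser or Postman, you have to pass the cookie along yourself. A minimal sketch with %Net.HttpRequest (host, port and URLs are placeholders):

  set req = ##class(%Net.HttpRequest).%New()
  set req.Server = "localhost", req.Port = 52773

  // first call: the server creates the session and returns it via Set-Cookie
  do req.Get("/api/myapp/first")
  set cookie = $piece(req.HttpResponse.GetHeader("SET-COOKIE"), ";", 1)

  // next call: send the cookie back, so the same %session (and %session.Data) is used;
  // if the response carried several Set-Cookie headers, forward each of them
  do req.SetHeader("Cookie", cookie)
  do req.Get("/api/myapp/next")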

From the documentation (even better docs):

1. Open the spec class.

2. Add

Parameter UseSession As BOOLEAN = 1;

3. Recompile the spec class.

4. Now your disp (dispatch) class has the same parameter and you can use sessions in your impl (implementation) class.
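Once the parameter is in place, %session is available in your impl methods between calls that share the session cookies. A rough sketch of what an impl method could look like (the method name and payload are made up):

  ClassMethod saveState(value As %String) As %DynamicObject
  {
    // anything put into %session.Data survives until the session expires or is ended
    set %session.Data("myValue") = value
    quit {"stored": (value)}
  }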

If you need a larger change than adding a parameter to a dispatcher class, do this (docs):

1. Create a custom subclass of %CSP.REST, e.g. test.REST.
2. Modify your swagger spec by adding x-ISC_DispatchParent:

  "info":{
    "version":"1.0.0",
    "x-ISC_DispatchParent":"test.REST",

3. Recompile.

Now your disp class extends test.REST and you can modify anything there.
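The custom base class itself is just an ordinary %CSP.REST subclass. A minimal sketch of what it might contain (everything inside is only an example of what you could override):

Class test.REST Extends %CSP.REST
{

Parameter UseSession As BOOLEAN = 1;

/// Called before each request is dispatched to your impl class
ClassMethod OnPreDispatch(pUrl As %String, pMethod As %String, ByRef pContinue As %Boolean) As %Status
{
    // custom logging, auth checks, header tweaks, etc. go here
    quit $$$OK
}

}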

The Pythonic way is to use with. In that case close is automatic as soon as we get outside of the context:

ClassMethod ReadFileUsingPython(pFile As %String) [ Language = python ]
{
  from datetime import datetime
  import iris
  time1 = datetime.timestamp(datetime.now())
  print(time1)
  if pFile=="":
    raise Exception("filename is required.")

  with open(pFile,"r", encoding="utf-8", errors="ignore") as file:
    log = iris.cls('otw.log.Log')
    for line in file:
      status = log.ImportLine(line)

  time2 = datetime.timestamp(datetime.now())
  print(time2)
  print("Execution time: ",(time2-time1))
}

Also you can simplify your code:

ClassMethod ReadFileUsingPython(pFile As %String) [ Language = python ]
{
  from datetime import datetime
  import iris
  time1 = datetime.timestamp(datetime.now())
  print(time1)
  if pFile=="":
    raise Exception("filename is required.")

  file = open(pFile,"r", encoding="utf-8", errors="ignore")
  log = iris.cls('otw.log.Log')
  for line in file:
    status = log.ImportLine(line)
  file.close()

  time2 = datetime.timestamp(datetime.now())
  print(time2)
  print("Execution time: ",(time2-time1))
}

An interrupt causes a rollback; try this code:

Class User.Del Extends (%Persistent, %Populate) [ Final ]
{

/// Hangs for <var>seconds</var> on each row, giving you time to interrupt the DELETE
ClassMethod HangBool(seconds, id) As %Boolean [ SqlProc ]
{
    hang seconds
    quit $$$YES
}

/// do ##class(User.Del).Test()
ClassMethod Test()
{
    do ..%KillExtent()
    do ..Populate(10,,,,$$$NO)
    
    set start = $zh 
    // deletes one row per second; send the interrupt (Ctrl+C) while this runs
    &sql(DELETE FROM Del WHERE Del_HangBool(1, id)=1)
    set end = $zh
    w "Delete took: ", end-start,!
}
}

Regardless of when you send the interrupt, the ^User.DelD global will still have 10 records, because the whole DELETE is rolled back.
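To see it for yourself, run Test() in a terminal, press Ctrl+C while the DELETE is hanging, and then check the global:

  USER>do ##class(User.Del).Test()

  USER>zwrite ^User.DelD

zwrite will still show all 10 rows (plus the ID counter node), because the interrupted DELETE was rolled back as a whole.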

For cross-namespace queries the easiest way is to map packages/globals, but that might not be a recommended approach for an audit table.

You can do this (a rough sketch follows the steps):

  1. In your production namespace, create a new table with the same structure as your audit SELECT query, backed by PPG (process-private globals) storage.
  2. Switch to the audit namespace.
  3. Run the audit query, iterate over the results, and write them into the PPG.
  4. Switch back to the production namespace.
  5. Run a query against your PPG table, joining any local tables as needed.
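A rough sketch of steps 2-5, assuming a PPG-backed table App.AuditTemp (DataLocation ^||Audit.TempD, default storage) already exists in the production namespace; the namespace names, the audit query and the columns are all placeholders:

ClassMethod CopyAudit()
{
    kill ^||Audit.TempD                      // clear this process's previous results

    new $namespace
    set $namespace = "AUDIT"                 // 2. switch to the audit namespace

    // 3. run the audit query and copy the rows into the PPG; process-private
    //    globals stay visible from every namespace within this process, but the
    //    $lb layout has to match the storage definition of App.AuditTemp
    set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT EventTime, Description FROM My.Audit")
    set i = 0
    while rs.%Next() {
        set i = i + 1
        set ^||Audit.TempD(i) = $listbuild("", rs.%Get("EventTime"), rs.%Get("Description"))
    }

    set $namespace = "APP"                   // 4. back to the production namespace

    // 5. App.AuditTemp can now be queried and joined with local tables via SQL
}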