If you're using git/GitLab, you can keep a separate System Default Settings file (exported and imported) for each deployment:

  • /<repo>/config/SDS/dev.xml
  • /<repo>/config/SDS/test.xml
  • /<repo>/config/SDS/live.xml

And resolve the path as: config/SDS/$CI_COMMIT_BRANCH.xml

This way the code is the same everywhere, and you only load one SDS file, depending on the current environment.
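A minimal sketch of how this could look in a pipeline (the job name and the import step are hypothetical placeholders; $CI_COMMIT_BRANCH is a real GitLab CI predefined variable):

```yaml
# Hypothetical .gitlab-ci.yml fragment
deploy:
  script:
    # Resolves to config/SDS/dev.xml, config/SDS/test.xml, or
    # config/SDS/live.xml depending on the branch being deployed
    - export SDS_FILE="config/SDS/${CI_COMMIT_BRANCH}.xml"
    - echo "Importing System Default Settings from ${SDS_FILE}"
```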

At this point I would highly recommend just writing a CloudFormation (cfn) template, rather than clicking through 30 screens.

Great article though.

Notes:

  1. Split cluster and service/task creations into separate stacks.
  2. By default your cluster has both the FARGATE and FARGATE_SPOT capacity providers, but if you use the "Launch type" compute configuration you'll only get Fargate On-Demand. To use Spot, you need the "Capacity provider strategy" compute configuration and to specify FARGATE_SPOT explicitly.
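Both notes can be sketched as CloudFormation fragments (resource and task names are placeholders; task definition, network configuration, etc. are omitted). CapacityProviderStrategy and LaunchType are mutually exclusive on a service — specifying the strategy is what gets you Spot capacity:

```yaml
# Stack 1: the cluster, with both Fargate capacity providers enabled
Cluster:
  Type: AWS::ECS::Cluster
  Properties:
    ClusterName: my-cluster          # placeholder name
    CapacityProviders:
      - FARGATE
      - FARGATE_SPOT

# Stack 2 (a separate template, per note 1): the service
Service:
  Type: AWS::ECS::Service
  Properties:
    Cluster: my-cluster
    TaskDefinition: my-task          # placeholder
    DesiredCount: 2
    CapacityProviderStrategy:
      - CapacityProvider: FARGATE
        Base: 1                      # keep at least one On-Demand task
        Weight: 1
      - CapacityProvider: FARGATE_SPOT
        Weight: 3                    # prefer Spot for the rest
```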

As long as you insert only with SQL (without the %NOINDEX keyword) or via objects (%Save), you don't have to rebuild indices. But there are some gotchas to remember:

  • If the table already had data before you added an index, you need to build the index after adding it (adding an index does not build it for preexisting data)
  • If you previously ran SQL queries that filter on the newly indexed column, they won't automatically take advantage of the index; you need to purge all cached queries associated with the newly indexed table.
  • If you use %NOINDEX or direct global access to add rows, the indexes must be built manually later
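For the first two gotchas, a minimal sketch (Sample.Person is a placeholder class/table name; check the %SYSTEM.SQL docs for your version, as the purge entry points have shifted over releases):

```objectscript
// Build indices for rows that existed before the index was added
// (rows inserted after the index was compiled are already indexed)
set sc = ##class(Sample.Person).%BuildIndices()
if $$$ISERR(sc) write $system.Status.GetErrorText(sc)

// Purge cached queries so new query plans can consider the index
do $SYSTEM.SQL.PurgeForTable("Sample.Person")
```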

Here's the code to create a task that purges messages in all interoperability namespaces:

set sc = ##class(%SYS.Namespace).ListAll(.result)
kill result("%SYS"), result("HSCUSTOM"), result("HSLIB"), result("HSSYS"), result("REPO")
set ns = ""
while 1 {
	set ns = $o(result(ns))
	quit:ns=""
	continue:$e(ns,1,2)="^^"
	set $namespace = ns
	continue:'##class(%Dictionary.ClassDefinition).%ExistsId("Ens.MessageHeader")
	set task = ##class(%SYS.Task).%New()
	set task.Name = "Purge old Interoperability data in " _ $namespace
	set task.NameSpace = $namespace
	set task.TimePeriod = 0
	set task.TimePeriodEvery = 1
	set task.DailyFrequency = 0
	set task.DailyFrequencyTime = ""
	set task.DailyIncrement = ""
	set task.DailyStartTime = 3600
	set task.DailyEndTime = ""
	set task.StartDate = $p($h, ",", 1) + 1
	set task.Priority = 2
	set task.Expires = 0
	set taskdef = ##class(Ens.Util.Tasks.Purge).%New()
	set taskdef.BodiesToo = 1
	set taskdef.KeepIntegrity = 1
	set taskdef.NumberOfDaysToKeep = 1
	set taskdef.TypesToPurge = "all"
	set sc = task.AssignSettings(taskdef)
	set task.TaskClass = $classname(taskdef)
	set sc = task.%Save()
}

Here's how to fix that.

1. Create an inbound adapter that extends the default inbound adapter and exposes the DeleteFromServer setting:

Class Test.InboundAdapter Extends EnsLib.File.InboundAdapter
{
Parameter SETTINGS = "DeleteFromServer:Basic";
}

2. Create a passthrough service that uses your custom adapter:

Class Test.PassthroughService Extends EnsLib.File.PassthroughService
{
Parameter ADAPTER = "Test.InboundAdapter";
}

3. Use your class from step 2 when you create a new Business Service; it will have the DeleteFromServer property.

I filed an enhancement request; please use DP-422980 as the identifier if you contact the WRC on this topic.

Recently I wrote a snippet to determine which Business Host took too long to stop:

Class Test.ProdStop
{

/// do ##class(Test.ProdStop).Try()
ClassMethod Try()
{
	set production = ##class(Ens.Director).GetActiveProductionName()
	set rs = ..EnabledFunc(production)
	if rs.%SQLCODE && (rs.%SQLCODE '= 100) {
		write $$$FormatText("Can't get enabled items in %1, SQLCode: %2, Message: %3", production, rs.%SQLCODE, rs.%Message)
		quit
	} 
	
	while rs.%Next() {
		set bh = rs.Name
		set start = $zh
		set sc = ##class(Ens.Director).EnableConfigItem(bh, $$$NO, $$$YES)
		set end = $zh
		set duration = $fn(end-start,"",1)
		write !, $$$FormatText("BH: %1, Stopped in: %2, sc: %3", bh,  duration, $case($$$ISOK(sc), $$$YES:1, :$system.Status.GetErrorText(sc))), !
		if duration>60 {
			write !, $$$FormatText("!!!!!!! BH: %1 TOOK TOO LONG !!!!!!!", bh),!
		}
	}
}

Query Enabled(production) As %SQLQuery
{
SELECT 
	Name 
	, PoolSize
FROM Ens_Config.Item 
WHERE 1=1
	AND Production = :production
	AND Enabled = 1
}

}

It stops the Business Hosts one by one, measuring how long each one takes to stop.

I would recommend you try to determine which items are taking too long to stop.

Export the production before running this code to avoid manually re-enabling all the hosts.
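For the export, something like this should work ($system.OBJ.Export is the standard class export API; the output file path is a placeholder):

```objectscript
// Export the production class (which stores the Enabled flags per item)
// so it can be re-imported after the snippet disables everything
set production = ##class(Ens.Director).GetActiveProductionName()
set sc = $system.OBJ.Export(production _ ".cls", "/tmp/production-backup.xml")
if $$$ISERR(sc) write $system.Status.GetErrorText(sc)
```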