Although I have seen environments where namespaces are used to separate Dev/Test/Prod and so on, I have found that running Prod on the same instance as the non-Prod namespaces is a risk to Prod should an active piece of development take down the underlying machine. In one example, a developer* made a mistake when working with Streams and created an infinite loop while writing to a stream; the server very quickly gained a 10GB PDF that filled the disk to capacity, and the environment stopped working.

A common use case for multiple namespaces, for me, is where the activity within one namespace is significantly distinct from the others. For example, we have a namespace that is dedicated to DICOM activity. While we could have put this in a single "LIVE" themed namespace, the volume of DICOM activity would have filled our server's disk if kept to the same retention period as everything else. So we have a DICOM namespace with a retention period of around 14 days, compared to others that are between 30 and 120 days.
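For what it's worth, that kind of per-namespace retention can be driven by the standard purge task. A rough sketch using Ens.Util.Tasks.Purge (the 14-day figure and the flag choices here are just illustrative, so tune them to your own needs):

// Illustrative only: purge Interoperability data older than 14 days
// in the current namespace. Normally scheduled via the Task Manager.
Set task = ##class(Ens.Util.Tasks.Purge).%New()
Set task.NumberOfDaysToKeep = 14 // shorter retention for the busy namespace
Set task.BodiesToo = 1           // purge message bodies as well as headers
Set task.KeepIntegrity = 1       // don't purge messages from incomplete sessions
Set sc = task.OnTask()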

*It was me. I was that developer.

Thanks Scott.

I'm also not rushing to delete based on counts, but it's still interesting to review.

I ran the "Complete Ensemble Message Body Report" from the Gist in Suriya's post against a namespace and it took about 8 hours, which has me nervous about running the Delete option. Although, to be fair, this is a namespace that has been in operation for about 10 years, so I might start smaller and work my way up.

Hi Joshua.

Is it possible that there's a group policy being applied to you and not your colleagues? Have you tried forcing an update of the group policies applied to your profile from the Windows terminal/command line:

gpupdate /force

Alternatively, do you have any extensions installed in Edge that you don't have installed for Chrome? Maybe an adblocker?

Finally, have you tried opening devtools on this page, refreshing, and then seeing if there are any meaningful errors appearing under the Console or Network tab?

The only way I can think of doing this would be to split the Helper() ClassMethod out into its own class, and then ensure that the order of compilation is such that the class containing the Helper ClassMethod is compiled first. Depending on how you're achieving this, could you use the CompileAfter class keyword (or possibly DependsOn, which also ensures the referenced class is compiled and runnable before the dependent class compiles)?

So something like:

Class 1:

Class Foo.Bar.One
{

ClassMethod Helper()
{
// do something
}

}

Class 2:

Class Foo.Bar.Two [ CompileAfter = Foo.Bar.One ]
{

ClassMethod Generated() [ CodeMode = objectgenerator ]
{
do ##class(Foo.Bar.One).Helper()
// do something else
quit $$$OK // generator methods must return a %Status
}

}

There's a good thread from a few years ago that goes into various ways of converting date formats, which you can find here.

My approach in that thread was to suggest using the class method ##class(Ens.Util.Time).ConvertDateTime()

In your case, using this option, you could do this for your main question:

Set Output = ##class(Ens.Util.Time).ConvertDateTime(input,"%Y-%m-%d %H:%M:%S.%N","%Y%m%d")

And then for your followup to include seconds:

Set Output = ##class(Ens.Util.Time).ConvertDateTime(input,"%Y-%m-%d %H:%M:%S.%N","%Y%m%d%H%M%S")

If I were to write this in an operation using the EnsLib.HTTP.OutboundAdapter, my approach would be something like:


	Set tSC = ..Adapter.SendFormData(.webresponse,"GET",webrequest)

	//begin backoff algorithm
	
	//Get start time in seconds ($ZHOROLOG is an elapsed-seconds counter, so it's safe for measuring durations)
	Set startInSeconds = $ZHOROLOG
	
	//Set initial params for algorithm
	Set wait = 1, maximumBackoff = 64, deadline = 300
	
	//Only run while Status Code is 504
	While (webresponse.StatusCode = "504") {
		
		//HANG for wait seconds plus a random fraction of a second (jitter)
		HANG wait + ($RANDOM(100)/100)
		
		//Call endpoint again
		Set tSC = ..Adapter.SendFormData(.webresponse,"GET",webrequest)
		
		//Double the potential wait period
		If wait < maximumBackoff Set wait = wait*2

		//Cap wait if the doubling takes us above the maximum backoff
		If wait > maximumBackoff Set wait = maximumBackoff
		
		//Check if the deadline has been hit, exiting the While loop if we have
		If ($ZHOROLOG - startInSeconds >= deadline) { Quit }
	}

This is untested however, so massive pinch of salt is required 😅

I'm not sure if it's related, but my colleagues and I all notice random performance issues with the management portal when accessing our non-production environment that's using IIS.

It was deployed to our non-production environment for similar experimentation reasons, but I never took it further due to these issues (and it dropped off my radar as we're still on 2022.1.2 with no immediate pressure to upgrade).

I need to upgrade the version of the Web Gateway following a recent email from InterSystems, so I'm going to run that now, reboot the machine, and see if I see any changes.

Beyond that, I'm going to be following this discussion closely to see if our issues are related and if there is a solution.

Hi Mary.

If you did want to create your own method to do this, you could do something like this:

Class Demo.StopThings
{

/// Stops all services in the specified production
ClassMethod StopAllServices(ProductionName As %String) As %Status
{
    Set sc = $$$OK, currService = ""
    Set rs = ##class(Ens.Config.Production).EnumerateConfigItemsFunc(ProductionName, 1)
    While rs.%Next() {
        Set currService = rs.Get("ConfigName")
        Write "Stopping the following service: "_currService_"..."
        Set tSC = ##class(Ens.Director).EnableConfigItem(currService, 0)
        If $$$ISOK(tSC) {
            Write "Complete!",!
        } Else {
            Write "Failed",!
            Set sc = tSC
        }
    }
    Return sc
}

}

And then you could call it from the terminal in the relevant namespace by running:

Set Status = ##class(Demo.StopThings).StopAllServices("Insert.ProductionName.Here")

To use the same code for Operations, change the 1 to a 3 in the call to EnumerateConfigItemsFunc (and swap out the bits that say "service" for "operation").
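So, concretely, the enumeration line for Operations would become:

Set rs = ##class(Ens.Config.Production).EnumerateConfigItemsFunc(ProductionName, 3)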

The above is some quick and dirty code to demonstrate the approach you could go for, but you may want to add some error handling etc. to make things more robust for your use case.

As a short-term approach, you may want to look into using stunnel in client mode to sit between your instance and the external site and handle the encryption for you.

This would mean that the traffic between your 2016 instance and stunnel is unencrypted but stays on the same machine, and stunnel then handles the encryption between your machine and the external site using TLS 1.3.
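As a sketch, a client-mode stunnel config for this might look something like the below (the port numbers and hostname are placeholders, and TLS 1.3 support depends on your stunnel/OpenSSL build):

; Local plaintext in, TLS out
[external-api]
client = yes
; the 2016 instance connects here, unencrypted, on the same machine
accept = 127.0.0.1:8443
; stunnel opens the TLS connection to the external site
connect = external.example.com:443
sslVersion = TLSv1.3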

However, even if you go this route, I would still recommend getting the process started for upgrading to a newer version.

Increasing the pool value will have some effect on RAM and CPU usage, but no more than having other jobs running through the production. If you move all the components over to the actor pool (by setting their individual pool settings to 0), it should be easy enough to raise the value bit by bit while keeping an eye on the production's performance and the machine's CPU/RAM usage to find a sweet spot.

If the API just needs a bit of extra resource when there's a small spike in inbound requests, then this shouldn't be of too much concern, as things will calm down once the backlog has been processed.

If, however, there's a chance that it could be overloaded by inbound requests and the worry is that the server won't cope, then maybe look at using InterSystems API Manager to sit in front of the environment and make use of features like rate limiting.

Or you could go even further and begin caching responses to return when the API is being queried for data that isn't changing much, so that less processing is done per request when multiple requests ask for the same information in quick succession. You could make your own solution in Caché/IRIS, or look at something like Redis.
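As a very rough illustration of the home-grown route, a time-bounded cache in a global could look something like this (the global name, key scheme, BuildResponse() helper, and 60-second TTL are all made up for the example):

/// Illustrative sketch: return a cached response if it's less than
/// ttlSeconds old, otherwise rebuild it and refresh the cache.
ClassMethod GetCachedResponse(key As %String, ttlSeconds As %Integer = 60) As %String
{
    // current time in seconds ($HOROLOG is days,seconds)
    Set now = ($PIECE($HOROLOG,",",1)*86400) + $PIECE($HOROLOG,",",2)
    // ^ResponseCache(key) holds $LISTBUILD(whenCached, payload)
    If $DATA(^ResponseCache(key), entry) {
        If (now - $LISTGET(entry, 1)) < ttlSeconds {
            Return $LISTGET(entry, 2) // still fresh, skip the expensive work
        }
    }
    Set payload = ..BuildResponse(key) // hypothetical expensive call
    Set ^ResponseCache(key) = $LISTBUILD(now, payload)
    Return payload
}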