If I understood the question correctly I think there might be some confusion...

Indeed, internally Ensemble stores the Message Header times (TimeCreated & TimeProcessed) in UTC (per the considerations Robert mentioned), but when these times are displayed in the Message Viewer they are converted and shown in local time.

So if you open the object via code, or run an SQL query directly, you depend on calling the relevant LogicalToDisplay() or LogicalToOdbc() method (for objects), or on using the ODBC or Display runtime mode (for SQL); but on the default Message Viewer web page these times should appear local.

You can examine this by running a simple SELECT on the Message Header table and comparing the Logical and Display modes.
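A query along these lines (a minimal sketch; TOP and ORDER BY are just for convenience) can be run once in Logical mode and once in Display mode to compare the two representations of the same stored values:

SELECT TOP 5 ID, TimeCreated, TimeProcessed
FROM Ens.MessageHeader
ORDER BY ID DESC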

For example, for a certain message, the Logical (internal UTC) time is 10:42 -

And now the same time using Display mode, the time is 13:42 -

And compare this with what you see in the Message Viewer, you see 13:42 -


Note that if you take a look at the Ens.DataType.UTC data-type class you can see its conversion methods.

For example the LogicalToDisplay() method.

Here's an example of what it looks like when its code runs.

In the following code I show my current local time (using $H, which takes the time zone and daylight saving into account), then the current universal time (using $ZTS, which is UTC with no daylight saving), and then the conversion to local time, first by applying explicitly the same code the method runs, and then by calling the method itself.

TEST>set currentLocalTime = $ZDateTime($Horolog,3,,3) 

TEST>set currentGeneralTime = $ZDateTime($ZTimeStamp,3,,3) 

TEST>write currentLocalTime
2018-09-20 10:46:07.000 

TEST>write currentGeneralTime 
2018-09-20 07:46:15.107 

TEST>write $zdatetime($zdTH($zdatetimeh(currentGeneralTime,3,,,,,,,,0),-3),3,,3) 
2018-09-20 10:46:15.107 

TEST>write ##class(Ens.DataType.UTC).LogicalToDisplay(currentGeneralTime) 
2018-09-20 10:46:15.107

See the -3 argument in the $ZDTH function call; per the docs, it does the following -

$ZDATETIMEH takes a datetime value specified in $ZTIMESTAMP internal format, converts that value from UTC Universal time to local time, and returns the resulting value in the same internal format

Hi Dmitry,

Recommending your very cool and useful utility to someone I realized I did not find installation instructions, not here in this article, nor in the GitHub readme.

Could you please point me to them, or, in case they indeed do not exist yet, provide them, for the benefit not only of the person I'm sharing this with but also of other Community members who will want to use this in the future?

Tx!

What is the class name of your Web Service?

Is it also a Business Service? If so - what is the name of the BS component in the Production?

What URL are you using to access this WebService?

I have encountered this error when calling the WebService in a way that did not allow Ensemble to determine the name of the business service class it needs to use (for example by using the CfgItem URL parameter).
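For reference, a URL for such a call typically looks roughly like this (the server, port, namespace, class and item name below are hypothetical):

http://myserver:57772/csp/myns/MyPkg.MyWebService.cls?CfgItem=My+Web+Service

where the CfgItem parameter names the specific Business Service configuration item in the Production that should handle the request.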

If I understand your question correctly then I think you'd benefit from defining and using a Search Table.

See from the Documentation -

If you need to query specifically via SQL (rather than using the Message Viewer), then see also this answer (it relates to HL7, but applies in the same way to the XML case) regarding using the Search Table within a query.
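Just as a rough sketch (assuming your messages are XML virtual documents; the class name, property name, and path below are hypothetical, and the exact path syntax depends on your XML structure and DocType), a search table could look something like this:

Class Demo.XML.SearchTable Extends EnsLib.EDI.XML.SearchTable
{

XData SearchSpec [ XMLNamespace = "http://www.intersystems.com/EnsSearchTable" ]
{
<Items>
  <!-- hypothetical: index the patient ID element of the incoming document -->
  <Item DocType="" PropName="PatientID">/Patient/ID</Item>
</Items>
}

}

Once you set this class as the Search Table Class on the relevant Business Service/Operation, PatientID becomes available as a search criterion in the Message Viewer, and the generated table can be joined to Ens.MessageHeader (DocId against MessageBodyId) in SQL, as described in the linked answer.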

Please note that by default, when a Persistent instance is %Save()'d, the filing is done inside a transaction; therefore, even if the database holding the class's storage globals is not journaled, these SETs (and KILLs) will get journaled (to support rollback).

See this article for more details regarding avoiding journaling data. Specifically - the options of using CACHETEMP or turning off transactions for object filing.

Note that if you are concerned about the audit records generated every time the journal is turned off and on for your process, you could simply disable that System Audit event - %System/%System/JournalChange - taking into consideration, of course, the drawback of not logging these kinds of events outside the context of this specific scenario.
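For illustration, the process-level toggle referred to above is done with the %NOJRN routine, and each call writes such an audit record when the event is enabled (a sketch only; note that updates made inside an open transaction are journaled regardless, which is why the transaction aspect discussed above matters):

 ; turn journaling off for this process - generates a JournalChange audit event (if enabled)
 DO DISABLE^%NOJRN
 ; ... perform the updates you don't want journaled ...
 ; turn journaling back on - generates another JournalChange audit event
 DO ENABLE^%NOJRN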

Hi Wolf,

I am using v2017.2.1 and Atelier 1.2.118 and I'm not experiencing this.

What I have encountered with regard to license consumption and Atelier (also with version 1.1 of Atelier) was that if I did not define a proper user for the server login (i.e. UnknownUser was used) then multiple users were consumed (one per CSP session of the REST calls). So the recommendation is (regardless of this licensing aspect as well) to define a real User for login. Note that the phenomenon I saw was that of multiple Users, not multiple connections of the same User (if that was what you were seeing).

I suggest that if the issue persists even after you define a proper User for login (and you don't get any other swift answer from the Community), you work with the WRC to resolve this (and then hopefully post the answer back here on the Community).


Thanks Michelle.

When I tried to use the Tools menu I was in an "Atelier element" (e.g. an Ensemble class) so I don't think that was the cause.

Just in case, I double-checked now again - I'm editing an Ensemble class and the items in the Tools menu are disabled... (in the workspace that originally gave the problem; in the newer workspace this does not happen)

Thanks Alexander.

Indeed I had the Error Log view open and saw various errors - but those related to some projects in my workspace that connected to servers which were offline, so I ignored them.

But now that you've confirmed it should work on this version - I kicked off a new workspace, defined just one project to an online server, and I got to see my Add-Ins...

So unfortunately I don't know exactly what the problem was... but a new workspace solved it...

Note that apart from Export and Import options -

If you are using a %Installer Manifest for your deployment (for any environment - test or prod) - you can also include in that manifest the creation of security elements such as Resources, Roles, etc.

For example:

<Resource
    Name="%accounting_user" 
    Description="Accounting"
    Permission="RW"/>

And:

<Role 
    Name="%DB_USER"
    Description="Database user"
    Resources="MyResource:RW,MyResource1:RWU"
    RolesGranted=""/>

See more information here (in the docs).
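For orientation, here is roughly how those elements sit inside a complete (minimal) %Installer class - a sketch only; the class name, namespace, and XData name are hypothetical, and it assumes the target namespace already exists:

Include %occInclude

Class App.Installer
{

XData MyInstall [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <Namespace Name="MYAPP" Create="no">
    <Resource
      Name="%accounting_user"
      Description="Accounting"
      Permission="RW"/>
    <Role
      Name="%DB_USER"
      Description="Database user"
      Resources="MyResource:RW,MyResource1:RWU"/>
  </Namespace>
</Manifest>
}

/// Standard generator method - turns the XData manifest above into runnable installer code
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
  Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "MyInstall")
}

}

You would then run it with: Do ##class(App.Installer).setup()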

[Defining a Role as part of a manifest is also included in an example in [@Eduard Lebedyuk]'s post here]

Hi Tom,

What is the Type of the classes that you are creating - are they Serial? 

If so, this could explain the error as Serial classes cannot embed themselves (directly or indirectly).

If you can, try creating Persistent classes instead of Serial ones, and let us know how that works for you.
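To illustrate the difference, a hypothetical minimal pair (not your actual classes):

/// Will not compile - a serial class cannot embed a property of its own type
Class Demo.NodeSerial Extends %SerialObject
{
Property Child As Demo.NodeSerial;
}

/// As a persistent class the same self-reference is stored as a reference (an ID), which is allowed
Class Demo.NodePersistent Extends %Persistent
{
Property Child As Demo.NodePersistent;
}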

Note - with Persistent classes, since (as I understand) this is Ensemble, you will need to take care of deleting the various instances generated once you purge the messages. See this Post for further details if relevant.

Hope this helps.

As I've communicated with Simcha directly, there is a built-in mechanism for this based on the Ensemble Security Audit functionality.

Enabling the '%Ensemble / %Production / ModifyConfiguration' event will yield an audit record for Ensemble Production configuration changes.

For example, if you changed the "ArchivePath" setting of a Business Service called "XML In", the event would look like this in the Audit database search results:

And the event details will show this:

Note that this will include not only the user who performed the change, the time, and other such general information, but also the:

  • Name of the Production
  • Name of the Item changed
  • Name of the Setting changed (for example)
  • Previous value (of setting)
  • New value (of setting)
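If you prefer SQL over the Audit portal page, a query along these lines should return the same records (a sketch; the audit log is exposed as the %SYS.Audit table and needs to be queried from the %SYS namespace with the appropriate privileges):

SELECT UTCTimeStamp, Username, Description, EventData
FROM %SYS.Audit
WHERE EventSource = '%Ensemble'
  AND EventType = '%Production'
  AND Event = 'ModifyConfiguration'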

Hi Uri,

Adding to what Vitaliy wrote, I would like to share with you some input I got from our Development -

According to the question, there are a few situations to consider:

  1. Using a single gateway and multiplexing the data sent through that one connection to a single gateway thread.
  2. Using a single gateway process with multiple threads communicating with individual Caché processes.
  3. Using multiple gateway processes dedicated to different tasks, with multiple connections/threads possible to each individual gateway process.

Addressing this requires intimate knowledge of what the DotNet app is doing. If the DotNet app has shared resources (making it single threaded) then option 1 is probably best. Options 2 and 3 will tax the system with lots of process creation and socket connections. If they are using those options, it is best to hold onto the established connection and not be constantly connecting and disconnecting. With this in mind, they might combine the idea of multiplexing similar functions via a single connection that remains connected as a service. Whether this uses 2 or 3 does not make a lot of difference, with the exception of situations where the DotNet application is not stable and can cause crashes; in that case it is best to isolate them to track down issues.

I thought it would be worthwhile to point out in the context of this question that we have an online course that addresses this topic:

Searching Messages Using the Message Viewer

https://learning.intersystems.com/enrol/index.php?id=34

Here's an outline of the course:

  1. The Message Viewer — describes how to navigate to and around the message viewer.
  2. Searching using Basic Criteria — describes how to search using basic criteria such as target/source, id, and time.
  3. Extended Criteria: Header and Body fields — describes how to search for messages using header and body fields.
  4. Search Tables, Virtual Documents, and SQL — describes how to search for messages using search tables, segment fields, property paths, and SQL.

You can use a Message Router as your Business Process.

In the Routing Rule you can define the Message Class in the Rule's constraint to be Ens.StreamContainer, which is the default message type for file passthrough.

Then you can use the OriginalFilename property of the message to check the file's extension, for example.

Here's a sample screenshot to illustrate -
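As a text approximation of the same thing, the rule could look roughly like this in the rule class's XData (the target name and file extension are hypothetical; normally you would build this in the Rule Editor rather than by hand):

<rule name="RouteByExtension">
  <constraint name="msgClass" value="Ens.StreamContainer"></constraint>
  <when condition="Document.OriginalFilename Contains &quot;.pdf&quot;">
    <send transform="" target="PDF.File.Operation"></send>
    <return></return>
  </when>
</rule>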

Hope this helps.

Nikita, this is really an excellent tool.

One question: the documentation states that Web Terminal is supported on Caché (Ensemble, etc.) version 2013.1 and up. But it seems that v4, at least, requires functionality (e.g. %CSP.REST) that exists only in versions newer than 2013.1.

I assume older versions of Web Terminal supported 2013.1.

Could you please restate the minimal Caché version required for v4 of Web Terminal, and the latest version of Web Terminal that still supports Caché 2013.1?

(And perhaps document, for every released version of Web Terminal, the minimal required Caché version.)

Thanks!