"What files/directories should I keep track of from the durable %SYS directory? (e.g: I want to bind-mount those files/directories and import the code and see the Namespace and all the instance configuration )."

Answer: None.

Your configuration needs to be code. Right now, the best approach to have your namespaces/databases created and configured, along with your CSP applications, security, etc., is to use %Installer manifests during the Dockerfile build. I personally don't use Durable %SYS on my development machine. I prefer to use %Installer to configure everything, and if I need to pre-load tables with data, I load it from CSV files that are on GitHub along with the source code.

That also allows you to source-control your code table contents. Test records can be inserted with the same procedure.
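
As an illustration, a minimal %Installer class could look like this (the namespace, paths, and names are made up; see the %Installer documentation for the full set of manifest tags):

    Class MyApp.Installer
    {

    XData MyInstall [ XMLNamespace = INSTALLER ]
    {
    <Manifest>
      <!-- Create the namespace and its database, load code, define the CSP app -->
      <Namespace Name="MYAPP" Create="yes" Code="MYAPP" Data="MYAPP">
        <Configuration>
          <Database Name="MYAPP" Dir="${MGRDIR}myapp" Create="yes"/>
        </Configuration>
        <!-- Source code copied into the image by the Dockerfile (hypothetical path) -->
        <Import File="/opt/myapp/src" Flags="ck" Recurse="1"/>
        <CSPApplication Url="/csp/myapp" Directory="/opt/myapp/csp"/>
      </Namespace>
    </Manifest>
    }

    /// Standard %Installer entry point; call it from the Dockerfile build, e.g.:
    /// iris session iris "##class(MyApp.Installer).setup()"
    ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
    {
      Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "MyInstall")
    }

    }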

For an example of this, look at this demo (based on IRIS):

https://github.com/intersystems-community/irisdemo-demo-fraudprevention

Look at the normalized_datalake image. It loads CSV files into the tables as part of the Dockerfile build process. You will notice that this image is based on a base image that has some standard, reusable code. The source for this base image is here:

https://github.com/intersystems-community/irisdemo-base-irisdb-community

I was using Atelier when I built this image, but the principle is the same; I am now using VS Code to do the same thing.

This is another demo based on IRIS for Health:

https://github.com/intersystems-community/irisdemo-demo-readmission

Look at the riskengine image. It loads data from JSON files into the data model as part of the build process. The JSON files are created by Synthea, an open-source tool for generating synthetic patient data.

If you use this method, any developer will be able to jump between versions of your software very quickly. If you need to fix a problem in an old version, you can just check out that version's tag, build the image (which will load the tables), and make the changes you want against that exact version, with the exact data it needs to work.

When you are done, you can go back to your previous branch, rebuild the image with the current source code (and the data in the CSV/JSON files), and keep going with your new features.

Just to be clear: I don't mean you should never use Durable %SYS. You must use Durable %SYS in production!

But I have strong reservations about using it on your PC (the developer environment). That's all. Even a central development environment (where unit tests could be run) wouldn't need it.

But your UAT/QA and production environments should definitely use Durable %SYS, and you should come up with your own DevOps approach to deploying your software to these environments so you can test the upgrade procedure.

Kind regards,

AS

Hi!

It is not hard to build a task that searches for the message headers from a specific service, process, or operation in a date range and deletes the headers and their messages. You can start by querying Ens.MessageHeader. You will notice that there are indices that let you search by date range and source config item pretty quickly.
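
For illustration, here is a minimal sketch of such a task (the class and property names are made up, Ens.MessageHeader.TimeCreated is a UTC timestamp, and this simplistic version ignores session integrity, so test carefully before using anything like it):

    Class MyApp.Task.PurgeBySource Extends %SYS.Task.Definition
    {

    Parameter TaskName = "Purge messages by source config item";

    /// Logical name of the service/process/operation to purge
    Property ConfigName As %String;

    Property DaysToKeep As %Integer [ InitialExpression = 30 ];

    Method OnTask() As %Status
    {
        Set sc = $$$OK
        // Cutoff date in ODBC format; TimeCreated is stored in UTC
        Set cutoff = $ZDateTime(($Horolog-..DaysToKeep)_",0",3)
        Set sql = "SELECT ID FROM Ens.MessageHeader"
        Set sql = sql_" WHERE SourceConfigName = ? AND TimeCreated < ?"
        Set rs = ##class(%SQL.Statement).%ExecDirect(, sql, ..ConfigName, cutoff)
        While rs.%Next() {
            Set hdr = ##class(Ens.MessageHeader).%OpenId(rs.%Get("ID"))
            Continue:'$IsObject(hdr)
            // Delete the message body first; its class and id are on the header
            If hdr.MessageBodyClassName '= "" {
                Do $classmethod(hdr.MessageBodyClassName, "%DeleteId", hdr.MessageBodyId)
            }
            Set sc = ##class(Ens.MessageHeader).%DeleteId(hdr.%Id())
            Quit:$$$ISERR(sc)
        }
        Quit sc
    }

    }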

BUT: 

I always recommend that my customers create a separate production for each distinct business need. That allows you to have different backup strategies, purging strategies, and even maintenance strategies (sometimes you want to touch one production and be sure the others won't be affected). You are also able to move these productions to other servers when you need them to scale independently (oops! now this new service is receiving a big volume of messages and is disturbing the others!). On different servers, you may be able to patch one production (one server) while the others keep running.

This is especially feasible now with InterSystems IRIS. You can easily migrate from Ensemble to IRIS and have one production running on a VM with 2 cores, another on a VM with 4 cores, and so on. You don't have to start with 8 cores and scale in steps of 4 like Ensemble. So you can pay the same as you are paying today, but with the possibility of more governance.

PS: Mirroring on IRIS is licensed differently from mirroring with Ensemble: with IRIS you do have to pay for the mirrors as well, while with Ensemble you don't.

So, sincerely, I believe you will be happier having a separate production. It's a good start toward more governance.

Kind regards,

Amir Samary

Senior Sales Engineer - InterSystems

Hi!

I would add that they exist for compatibility with systems migrated from other databases. In other SQL databases, it's a common practice to use a meaningful attribute of the tuple, such as SSN, Code, or Part Number, as the primary key. I think this is a very bad practice even on normal SQL databases, since a primary key, once created, can't easily be changed. If you want to have one of the meaningful fields of your tuple as your primary key, use IdKey. I strongly advise against it, though.

So, instead of having one of the attributes of the tuple as the primary key (using IdKey), it's much better to have a sequential number. This is provided by different databases in different ways: Oracle has its SEQUENCEs, and SQL Server has its IDENTITY columns. Continuing with the Oracle and SQL Server examples, many fields/columns on a table can be populated with values from a sequence (Oracle) or be auto-incremented (SQL Server), but only one of the fields can be the primary key. The primary key is used by other rows on the table, or by other tables, to reference this row. It can't be changed precisely because other rows may be relying on it, and that is fine.

In Caché, we have the ID field, which is an auto-increment kind of primary key. Other rows/objects from this or other tables will reference this object through its ID. You can create your own unique fields, as many as you want: Code, PartNum, SSN, etc. You can define unique indices for them all. Don't use IdKey to do that; that is not its purpose. IdKey will make the field you picked THE primary key of the class, i.e., its ID. That is very bad (IMHO).

On the other hand, there are cases where performance can be increased by using IdKey. It saves you a trip to the index global to get the real ID for that field before going to the data global to get the other fields you need. If we are talking about millions of accesses per second and you are pretty sure you won't need to change the value of that field once it is created, use IdKey; it will give you better performance. But if you do, beware that by choosing your own IdKey, you may not be able to use special kinds of indices such as bitmap indices or even iFind. It will depend on the type of IdKey you pick: if it's a string, for instance, iFind and bitmap indices won't work with your class anymore, since they rely on a numeric sequential ID.
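
To make this concrete, here is a small hypothetical class that follows this advice: a unique index on the meaningful field, while the sequential ID stays the primary key:

    Class MyApp.Part Extends %Persistent
    {

    Property PartNum As %String [ Required ];

    Property Description As %String;

    /// Enforces uniqueness without touching the system-generated ID
    Index PartNumIndex On PartNum [ Unique ];

    // The alternative discussed above, only for extreme read performance on
    // an immutable field (and it disables bitmap indices/iFind if the field
    // is a string):
    // Index PartNumKey On PartNum [ IdKey ];

    }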

Kind regards,

AS

Hi!

I am not sure I understood your question, but here is an explanation that may help you...

If you want to run a SQL query filtering by a date

Let's take the Sample.Person class in the SAMPLES namespace as an example. It has a DOB (date of birth) field of type %Date, which stores dates in Caché's $Horolog format (an integer that counts the number of days since December 31, 1840).

If your date is in the format DD/MM/YYYY (for instance), you can use the TO_DATE() function in your query to convert the date string to the $Horolog number:

select * from Sample.Person where DOB=TO_DATE('27/11/1950','DD/MM/YYYY')

That will work independently of the runtime mode you are in (Display, ODBC, or Logical).

On the other hand, if you are running your query in the ODBC Runtime Select Mode, you can reformat your date string to the ODBC format (YYYY-MM-DD) and skip TO_DATE():

select * from Sample.Person where DOB='1950-11-27'

That still converts the string '1950-11-27' to the internal $Horolog number, which is:

USER>w $ZDateH("1950-11-27",3)

40142

If you already have the date in the internal $Horolog format, you can run your query in the Logical Runtime Select Mode:

select * from Sample.Person where DOB=40142

You can try these queries in the Management Portal. Just remember to change the Runtime Select Mode accordingly.

If you are using dynamic queries with %Library.ResultSet or %SQL.Statement, set the runtime mode (the %SelectMode property on %SQL.Statement) before running your query.
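
For example, with %SQL.Statement (%SelectMode 0 = Logical, 1 = ODBC, 2 = Display), running in the SAMPLES namespace:

SAMPLES>set stmt = ##class(%SQL.Statement).%New()
SAMPLES>set stmt.%SelectMode = 1
SAMPLES>do stmt.%Prepare("select Name, DOB from Sample.Person where DOB = ?")
SAMPLES>set rs = stmt.%Execute("1950-11-27")
SAMPLES>while rs.%Next() { write rs.%Get("Name"),": ",rs.%Get("DOB"),! }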

If you want to find records from a moving window of 30 days

The previous query brought back, on my system, the person "Jafari,Zeke K.", who was born on 1950-11-27. The following query will bring back everyone born on '1950-11-27' or in the 30 days before it. I will use the DATEADD function to calculate this window, and I have selected the ODBC Runtime Select Mode to run the query:

select Name, DOB from Sample.Person where DOB between DATEADD(dd,-30,'1950-11-27') and '1950-11-27'

Two people appear on my system: Jafari and Quixote. Quixote was born on '1950-11-04', which is inside the window.

Moving window with current_date

You can use current_date to write queries such as "who was born between today and 365 days ago?":

select Name, DOB from Sample.Person where DOB between DATEADD(dd,-365,current_date) and current_date

Using greater than or less than

You can also use >, >=, < or <= with dates, like this:

select Name, DOB from Sample.Person where DOB >= DATEADD(dd,-365,current_date) 

Just be careful with the Runtime Select Mode. The following works in the ODBC Runtime Select Mode, but won't work in Display or Logical mode:

select Name, DOB from Sample.Person where DOB >= DATEADD(dd,-30,'1950-11-27') and DOB<='1950-11-27'

To make this work in Logical mode, you would have to apply TO_DATE() to the dates first:

select Name, DOB from Sample.Person where DOB >= DATEADD(dd,-30,TO_DATE('1950-11-27','YYYY-MM-DD')) and DOB<=TO_DATE('1950-11-27','YYYY-MM-DD')

To make it work in Display mode, format the dates according to your NLS configuration. Mine would be 'DD/MM/YYYY' because I am using a Spanish locale.

Hi Eduard!

Here is a simple way of finding it out:

select top 1 TimeLogged from Ens_Util.Log
where ConfigName = 'ABC_HL7FileService'
  and SourceMethod = 'Start'
  and Type = '4' -- Info
order by %ID desc

You put the logical name of your component in ConfigName. There is a bitmap index on both Type and ConfigName, so this should be blazing fast too! Although, for some reason, the query plan is not using Type:
 
Relative cost = 329.11
    Read bitmap index Ens_Util.Log.ConfigName, using the given %SQLUPPER(ConfigName), and looping on ID.

    For each row:
    - Read master map Ens_Util.Log.IDKEY, using the given idkey value.
    - Output the row.
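
If you want the same thing from ObjectScript, with the component name as a parameter, a sketch with dynamic SQL:

    Set sql = "select top 1 TimeLogged from Ens_Util.Log"
    Set sql = sql_" where ConfigName = ? and SourceMethod = 'Start' and Type = '4'"
    Set sql = sql_" order by %ID desc"
    Set rs = ##class(%SQL.Statement).%ExecDirect(, sql, "ABC_HL7FileService")
    If rs.%Next() { Write "Last start: ", rs.%Get("TimeLogged"), ! }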
     
Kind regards,

AS

Ok... I think I have found how to do it.

The problem was that I use a main dispatcher %CSP.REST class that routes the REST calls to other %CSP.REST classes, which I will call the delegates.

I had the CHARSET parameter on the delegates but not on the main router class! I just added it to the main router class and it worked!

So, in summary, to avoid doing $ZConvert everywhere in REST applications, make sure you have both parameters: CONVERTINPUTSTREAM=1 and CHARSET="utf-8". It won't hurt to have the CHARSET declaration on your CSP and HTML pages as well, like this:

    <!DOCTYPE html>
    <html>
    <head>
        <CSP:PARAMETER Name="CHARSET" Value="utf-8">
        <title>My tasks</title>
        <meta charset="utf-8" />
    </head>
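
For reference, here is roughly what the main router class ends up looking like with both parameters in place (class and route names are made up):

    Class MyApp.REST.Main Extends %CSP.REST
    {

    /// Convert the request input stream to the character set below
    Parameter CONVERTINPUTSTREAM = 1;

    /// Character set for input and output
    Parameter CHARSET = "utf-8";

    XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
    {
    <Routes>
      <!-- Forward everything under /tasks to a delegate %CSP.REST class -->
      <Map Prefix="/tasks" Forward="MyApp.REST.Tasks"/>
    </Routes>
    }

    }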

Kind regards,

Amir Samary

Hi!

You don't actually need to configure a certificate on your Apache, or even to encrypt the communication between Apache and the SuperServer with SSL/TLS.

You can create a CSP application that is unauthenticated and give it the privileges to do whatever your web services need to do (Application Roles; more info here). I would also configure "Permitted Classes" with a pattern that allows only your specific web services to be called, and I would block CSP/ZEN and DeepSee on this CSP application.

More info on configuring CSP applications here.

Then, for each web service you want to publish on this application (and mention in the Permitted Classes), you will create a web service security policy, using an existing Caché Studio wizard for that (more info here).

The wizard lets you choose from a set of options, with several variations for each option, for securing your web service. You may choose "Mutual X.509 Certificates Security" from the combo box. Here is the description of this option:

This policy requires all peers to sign the message body and timestamp, as well as WS-Addressing headers, if included. It also optionally encrypts the message body with the public key of the peer's certificate.

You can configure Caché PKI (Public Key Infrastructure) to have your own CA (Certificate Authority) and generate the certificates that your server and clients will use.

This guarantees that only a client holding a certificate issued by you will be able to authenticate and call this web service, and the body of the call will be encrypted.

If you restrict the entry points of this "unauthenticated" CSP application using "Permitted Classes", and these permitted classes are web services protected by such policies, you are good to go. Remember to give the application the privileges (Application Roles) your service needs to run properly (privileges on the database resource, SQL tables, etc.).

This doesn't require a username token. If you still want to use a username/password token, you can require it with the same wizard. Here is the additional description the wizard provides:

Include Encrypted UsernameToken: This policy may optionally require the client to send a Username Token (with username and password). The Username Token must be specified at runtime. To specify the Username Token, set the Username and Password properties or add an instance of %SOAP.Security.UsernameToken to the Security header with the default $$$SOAPWSPasswordText type.

If you decide to do that, make sure your CSP application is configured for "Password" authentication and that "Unauthenticated" is not checked.
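
On the client side, the generated %SOAP.WebClient proxy can then supply the Username Token with WSSecurityLogin (the client class and web method below are made up):

    Set client = ##class(MyApp.MyServiceClient).%New()
    // Adds a %SOAP.Security.UsernameToken to the WS-Security header
    Do client.WSSecurityLogin("appuser", "secret")
    // Call a (made-up) web method as usual; the policy does the rest
    Set answer = client.Echo("hello")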

Also, don't forget to use a real Apache web server. My point is that you don't need to configure Apache, or its connection to the SuperServer, with an SSL certificate for all this to work. Caché will do the work, not Apache. Apache will receive a SOAP call that isn't fully encrypted, but if you look into it, you will notice that the body is encrypted, the header includes a signed timestamp, the username/password token is encrypted, and so on. So although this is not HTTPS, the certificates are being used to do all sorts of things in the header and the body of the call that give you a lot more protection than plain HTTPS.

But please don't get me wrong: you do need HTTPS if you are building an HTML web application, or if you are using other kinds of web services, such as REST, that don't have all the alternative enterprise security provided by SOAP. SOAP can stand on its own, secure, without HTTPS. Your web application can't.