Thanks Anzelem.

It's good to hear that backups off of the DR are working for your situation.

I'm concerned that the online backups proved problematic for you (and by online backups, I'm assuming you refer to the ExternalFreeze()/ExternalThaw() process). Be sure that, after ExternalFreeze() is called, you copy not only the folders where the databases exist, but also the folders holding the journal files, the WIJ file and your application files.

A restore procedure would require all of these (including the WIJ and journal files) to be restored, to ensure that when Caché starts up after a restore, the databases are in an integral and consistent state as of the time of the backup.

I know of lots of people successfully using the ExternalFreeze/Thaw on production systems.
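For reference, the freeze and thaw calls live in the Backup.General class. As a minimal sketch only (the copy step is a placeholder for whatever OS snapshot/copy mechanism you use):

 // pause writes to the databases; journaling continues while frozen
 set sc=##class(Backup.General).ExternalFreeze()
 if $$$ISERR(sc) do $system.Status.DisplayError(sc) quit

 // ... snapshot/copy the CACHE.DAT folders, journal folders, the WIJ
 //     and your application files here ...

 // resume normal database writes
 set sc=##class(Backup.General).ExternalThaw()
 if $$$ISERR(sc) do $system.Status.DisplayError(sc)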

thanks -  

Steve 

So I guess that is the question...

Is it possible to identify a point-in-time on the Async Mirror that it is logically consistent ?

I could shut down the Async member before backing it up, restart it, and then have it re-join and catch up with the mirror set. I'm seeking confirmation that this process will leave me with CACHE.DAT files that are in a consistent state and, if restored, could be used like any other backup and accept play-forward journal files.

Would this also apply if I performed an online Caché backup on the async member without shutting it down, or is the mirror de-journaling activity on the async member ignorant of the final passes of the Caché backup process?

It is impossible to identify a point in time at which the Async member, which is receiving mirror data from 5 busy Ensemble productions, is logically consistent. I'm hoping that shutting it down, or taking it 'off-line' in some controlled manner, would leave it in a state that can be backed up for later use.

Steve

Hi,

Rather than programmatically modifying the source code and recompiling the class: if your page already defines, by default, the full set of available columns, you can show and hide each one of them programmatically by manipulating the table's columns collection, which is already part of the DOM.

Save this class in the SAMPLES namespace. In this example, the buttons hide some columns and un-hide others, on the client and on the server:

/// Created using the page template: Default
Class sp14.tableTest Extends %ZEN.Component.page
{

/// Class name of application this page belongs to.
Parameter APPLICATION;
 
/// This Style block contains page-specific CSS style definitions.
XData Style
{
<style type="text/css">
</style>
}

/// This XML block defines the contents of this page.
XData Contents [ XMLNamespace = "http://www.intersystems.com/zen" ]
{
<page xmlns="http://www.intersystems.com/zen" title="">
<tablePane id="tblPerson" sql="Select ID,Age,DOB,Name,Home_City,Home_State from Sample.Person" >
<column id="c0" colName="ID"  hidden="false"/>
<column id="c1" colName="Age"  hidden="false"/>
<column id="c2" colName="DOB"   hidden="false"/>
<column id="c3" colName="Name"   hidden="true"/>
<column id="c4" colName="Home_City" hidden="true"/>
<column id="c5" colName="Home_State" hidden="true"/>
</tablePane>
<hgroup width="350" align="left">
<button id="clearColumns" caption="Clear Columns (client)" onclick="zenPage.clearTableColumns()/>
<spacer width="10px"/>
<button id="basicColumns" caption="Add other Columns (client)" onclick="zenPage.addBasicTableColumns()"/>
</hgroup>
<spacer height="5px"/>
<hgroup width="350" align="left">
<button id="clearColumnsSS" caption="Clear Columns (Server)" onclick="zenPage.clearTableColumnsSS()/>
<spacer width="10px"/>
<button id="basicColumnsSS" caption="Add other Columns (Server)" onclick="zenPage.addBasicTableColumnsSS()"/>
</hgroup>
</page>
}

/// Hide all of the table's columns, then re-run the query (client side)
ClientMethod clearTableColumns() [ Language = javascript ]
 {
  var tbl = zen("tblPerson");
  for (var i = 0; i < tbl.columns.length; i++) {
     tbl.columns[i].setProperty("hidden",true);
   }
  tbl.executeQuery();
  return;
 }

/// Un-hide the Name, Home_City and Home_State columns, then re-run the query (client side)
ClientMethod addBasicTableColumns() [ Language = javascript ]
 {
  var tbl = zen("tblPerson");
  for (var i = 0; i < tbl.columns.length; i++) {
   if ((tbl.columns[i].getProperty("colName")=="Name") ||
       (tbl.columns[i].getProperty("colName")=="Home_City") ||
       (tbl.columns[i].getProperty("colName")=="Home_State")) {
            tbl.columns[i].setProperty("hidden",false);
    }
  }
  tbl.executeQuery();
  return;
 }

Method clearTableColumnsSS() [ ZenMethod ]
 {
    set tbl=%page.%GetComponentById("tblPerson")
    for i=1:1:tbl.columns.Count() {   
        set column=tbl.columns.GetAt(i)
        set column.hidden=1
    }
    set tbl.refreshRequired=1
    Quit
 }

/// Un-hide the Name, Home_City and Home_State columns (server side)
Method addBasicTableColumnsSS() [ ZenMethod ]
 {
      set tbl=%page.%GetComponentById("tblPerson")
      for i=1:1:tbl.columns.Count() {
          set column=tbl.columns.GetAt(i)
          if "Name,Home_City,Home_State"[column.colName {
             set column.hidden=0
          }
      }
      set tbl.refreshRequired=1
      Quit
 }

}
 

Even if your default <tablePane> definition did not hard-code the full set of columns in the XData block that you later manipulate, you can also programmatically add the table and its column objects, something like this, from a server-side instance method:

        // Add Table of transactions
        set nTable=##class(%ZEN.Component.tablePane).%New()
        set nTable.id="tblTrans"
        set nTable.queryClass="myAccounts.QryUtils"
        set nTable.queryName="Transaction"

        // Define a column and add to table

        set column=##class(%ZEN.Auxiliary.column).%New() 
        set column.colName="Narrative"
        set column.hidden=1
        do nTable.%AddColumn(column)

        do %page.%AddComponent(nTable)

- Steve

Hi Arun,

If you want a different tooltip (i.e. the 'title' property) for each item in your dropdown box, then my guess is that you need to trap the onchange event of the datacombo box. That is, when the user selects a new item from your dropdown box and it fires off an event, add code to that event (onchange?) which redefines the title property of that datacombo component based on the selection made.

Of course, somewhere you need to supply the page with a list of title texts to be applied for each option in the datacombo box.
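For example, a client-side sketch - the component id 'myCombo' and the tooltips lookup object here are hypothetical:

ClientMethod comboChanged() [ Language = javascript ]
{
  // hypothetical lookup of tooltip text keyed by item value
  var tooltips = { 'OPT1':'Tooltip for option 1', 'OPT2':'Tooltip for option 2' };
  var combo = zen('myCombo');
  combo.setProperty('title', tooltips[combo.getValue()] || '');
}

and on the datacombo itself: onchange="zenPage.comboChanged();".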

Steve

Hi,

I had a closer look, and it is not actually an "OnValidate" method per se that exists for use - but the feature does exist...

You need to create a <setting>IsValid() method, something like this:

Class qmd.CustomAdapter Extends EnsLib.TCP.InboundAdapter [ ProcedureBlock ]
{

Parameter SETTINGS = "MyCustomSetting";

Property MyCustomSetting As %Integer;

ClassMethod MyCustomSettingIsValid(%val) As %Status
{
if %val'?1.n quit $$$ERROR($$$GeneralError,"My Custom Setting should be an integer.")
Quit $$$OK
}

}

Hope this helps.

Steve

Hi

I've not validated this suggestion by trying it myself yet, so it's just an idea: I do know that when defining custom settings for an adapter, you are given an OnValidate callback of sorts to validate user input.

You could put security checks (e.g. checking whether the user holds a custom resource) in that method.

You may need to do a simple subclass of your adapter to implement this correctly, though.

Steve 

Hi Satheesh,

ECP allows you to 'remotely mount' a database on one instance from another. For example, let's say you define (as you have) a Caché instance with a database 'C', and you have a second instance running Ensemble.

When the basic ECP configuration is done, you will then be able to define a 'remote' database on the Ensemble instance - the remote database being the 'C' database managed by the Caché instance. The contents of the 'C' database on Caché will be available as a database on Ensemble just as though it were a locally mounted one.

In this scenario, the Caché instance is the ECP Server (or ECP Database Server), and the Ensemble instance is the ECP Client (or ECP Application Server instance).

The basic ECP configuration is carried out via the ECP settings accessible through the management portal, and is done on the ECP client side (the Ensemble side) - see Admin -> Settings -> ECP Settings. You need to tell Ensemble where the ECP Database Server (Caché) is.
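(If you prefer to script this step rather than use the portal, I believe the Config.ECPServers class in the %SYS namespace can define the server entry - a sketch only, with placeholder name, address and port, so do check the class reference:)

 // run in %SYS on the ECP client (Ensemble) side; name/address/port are placeholders
 set props("Address")="cacheserver.mydomain.com"
 set props("Port")=1972
 set sc=##class(Config.ECPServers).Create("CACHESRV",.props)
 if $$$ISERR(sc) do $system.Status.DisplayError(sc)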

See the documentation for details, but when that is done, your Cache databases will be available to Ensemble as databases that can be defined as remote databases. 

Note that ECP works at the database level - not at the namespace level... so the configuration of the Ensemble namespace (where you are building your production) needs to be enhanced such that the Ensemble namespace sees the globals, packages and routines it needs. Edit the Ensemble namespace's Global, Package, and Routine mapping definitions.

Regarding SQL projections when you are on the ECP client (Ensemble): if you do not package-map your class definitions, you won't have those classes available for use. With only a global mapping defined between your Ensemble namespace and the remotely mounted Caché database, you will only get direct global access.

The online documentation for ECP overview is here: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GDDM_overview 

Finally, be aware that ECP is an enterprise feature, and the corresponding license keys for your instances should include the Multi-Server feature and be ECP capable.

Sincerely,

Steve

Hi Tom,

For much of what you requested, you should look at InterSystems IRIS's new Document database model.

The Document database accepts and persists collections of JSON documents inside of InterSystems IRIS. Additionally, it allows you to define a set of properties for the documents in a document database. These properties correspond to elements of the inserted documents (i.e., you can define "LastName" as being the element 'lastname'), which effectively updates an index for every document added to the collection that has an element 'lastname'.

You can then perform an SQL query, using any one of these columns, like 'lastname', within the query to restrict/select the JSON documents you are after.
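As a minimal sketch of that flow, assuming the %DocDB API (the database and property names here are placeholders):

 // create a document database, expose an element as an indexed property,
 // then store a document - "People" and "LastName" are placeholder names
 set db = ##class(%DocDB.Database).%CreateDatabase("People")
 do db.%CreateProperty("LastName","%String","$.lastname")
 do db.%SaveDocument({"firstname":"Anna","lastname":"Smith"})
 // LastName is now available as a column in SQL queries against the collection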

I think this new feature will satisfy your requirements. There is even a generic REST API that allows you to add/delete/update JSON documents.

See: https://learning.intersystems.com/course/view.php?id=687 

HTH, 

Steve

Hi,

If you want to protect the database, start by creating a resource via Security Management > Resources in the management portal, giving it an appropriate name that makes sense to you - for example, if your database is called "myAppDB", create a security resource "%DB_MYAPPDB".

Prefixing the name with '%DB_' is convention, not a requirement. During setup, add a description, and select whether, by default, users have Read, Write and/or Use privileges.

This is only the first step.  Now that you have an identifiable security asset you want to secure, you can proceed.

Next, create a role for these users via the Security Management > Roles section. You need to decide how users that fall under this new role of yours will interact with this DB, and build up the role definition accordingly: select your new role, and add the database resource that protects your database (in my example above, '%DB_MYAPPDB'), identifying whether users of this role can only READ or can also WRITE data in this database.

This action assigns the privilege for this database afforded to users who belong to this new role.

Actually working with this database, however, will require that you add some more resources to this role. For example, if these users are developers, and you want to give them access for development, then add the %Development resource to your new role too.

More than likely you will also need to add a %Service_ type resource that allows users of this role service access into Caché, for example via TELNET, or via ODBC, etc. Your requirements will differ from others', but if Studio access is required, definitely include %Service_Object (Use).
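(These steps can also be scripted from the %SYS namespace using the Security.Resources and Security.Roles classes - a sketch only, with placeholder names; check the class reference for the exact signatures:)

 // run in %SYS; the resource/role names and the privilege string are placeholders
 set sc=##class(Security.Resources).Create("%DB_MYAPPDB","Protects myAppDB","")
 set sc=##class(Security.Roles).Create("MyAppDeveloper","myAppDB developers","%DB_MYAPPDB:RW,%Development:U,%Service_Object:U")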

Finally, have a look at a pre-defined role on the system called "%Developer", which is set up by default on most installations and is something you can use for reference. Looking at this role and its resources + permissions (privileges), you will see it has some databases under protection, allows %Development, and carries a bunch of %Service_ resource types for allowing different access, as explained above.

Sincerely,

Steve

Hi,

If I understand you correctly, I think you want to know whether there is a function that you can invoke, from the condition field of a Business rule, that checks if a given date is in between two other dates.

If you want to see what functions are available, the best way is to get assistance from the Expression Editor (which is invoked when you double-click in the Condition field, or click 'fx' on the main Business Rules definition window while the condition field is in focus).

Whilst in the Expression Editor, clicking 'fx' again will list the available functions, and the editor will help you through the process of building an expression that includes the use of these functions, combining them with AND, OR and other operators.

I would suspect, however, that by default we do not supply a built-in Business Rules function for your current scenario - however (and I think this is a powerful feature), you can create whatever custom functions you need and add them to the ones already available. When you create your function, it will appear in the Expression Editor and can be used just as if it came with Ensemble.

Building your own function for this purpose is easy: it is just a class that extends Ens.Rule.FunctionSet and must follow some basic rules. See the documentation here:

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EGDV_adv_custom_utilfunctions 

Your function would take 3 arguments - value, date1 and date2 - to determine whether the date in 'value' falls between date1 and date2. Here I'm assuming the date format is CCYYMMDD, and I'm converting the incoming dates to numeric $HOROLOG dates using the $zdh (or $zdateh) function. Change accordingly if your date format is different.

Class User.CustomFunctions Extends Ens.Rule.FunctionSet
{

/// take 3 date values in CCYYMMDD format, (value, date1 and date2) and verify if value falls between date1 and date2
ClassMethod DateBetween(value, date1, date2) As %Boolean
{
if ($zdh(date1,8)<$zdh(value,8)) && ($zdh(value,8)<$zdh(date2,8)) quit 1
quit 0
}

}

Defining a class like the sample above in your Ensemble-enabled namespace and compiling it will expose this function (and the comment after the three '/') to the Ensemble Expression Editor like any other function that already comes with Ensemble.
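In a rule condition you could then call it like any built-in function, for example (the property path here is only illustrative):

DateBetween(Document.{PID:DateTimeofBirth},"19800101","19891231")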

Hope this helps.

Steve

Hi Soufiane

The graphical rules editor used in the UI generates a class, and an xml block in that class, to represent your rules.

If you want a target to be different based on some code, why don't you create multiple conditions for the different targets you have, then base each condition on some database setting?

The rule itself remains static (except when you need to define a new target)  and it will show all the possible paths that can be taken by the rule.

Then... programmatically... you change that value in the database which sits behind all the conditional statements you have, and thus - programmatically - you effect a target change.

The other alternative is to use a Business Process. You can send your message to be handled and routed by a Business Process in BPL. The process can programmatically resolve the name of the target component into a context variable (let's say, context.TargetName). After context.TargetName has been programmatically resolved, make a BPL Call action, and for the Call action's "target" property, don't hardcode a value.

Instead, supply "@context.TargetName", and the message will be sent to whatever the value of context.TargetName is at that point in time.
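In the BPL XML, that Call might look something like this (a sketch; the component names and the assign expression are placeholders):

<assign property="context.TargetName" value="..ResolveTarget(request)"/>
<call name="DynamicSend" target="@context.TargetName" async="1">
  <request type="Ens.Request"/>
</call>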

Steve

So it would seem that I need to differentiate, in the DeepSee index tables, which of the two readings on the last day of the month is actually the last one.

I do not have a time component. :)

The data is being loaded from a flat file into the persistent Caché table, which has just Sensor/Date/Value (prior to the DeepSee indices being built). So the sequence is important, and the row ID in the persistent Caché class becomes the differentiator - and potentially something I can use.

In the absence of a time component, I thought I should create a dimension that is calculated at index build time based on source.%Id(), or a combination of Date_source.%Id(). The ID is an integer that is incremented with every new row in the source table.

I feel certain I can use the source ID to differentiate what the last reading of the day is. I'll be trying an approach along the lines of using the raw ID for help. Any other ideas welcome.
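(As a sketch, I imagine such a level in the cube definition would use a source expression along the lines of sourceExpression="%source.%Id()", so that the highest ID within a day marks the last reading - untested, just the direction I have in mind.)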

I do not always have a sensor reading on the last day of a month (Jan 31st, Feb 28th, etc.).

So, similar to the issue you ran into, I get blank results when there is no reading on the last day. Of course, I want to get what would essentially be the last reading, by looking for the latest reading in that month.

Close, but still no solution... :(

Hi Randall.

The early implementation of JSON support (which exposed system methods with a '$' prefix, like '$toJSON()', etc.) was deprecated after the 2016.1 release (which you are using).

Unfortunately the articles you are referencing were posted at the time the old syntax was supported, hence the <SYNTAX> error.

This post in the community explains the rationale for this change in more detail.

This link is the online equivalent of the JSON documentation available for 2017.1, and covers all the new syntax you should be using. In there you will find the correct methods to instantiate a dynamic object, and how to work with it.
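For example, with the current syntax:

 set obj = {"name":"Randall","age":42}       // JSON constructor syntax
 set obj.city = "Boston"                     // dot access to members
 write obj.%ToJSON()                         // serialize back to JSON text
 set obj2 = ##class(%DynamicObject).%New()   // an equivalent empty dynamic object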

Sincerely,

Steve

Hi Stephen,

I would avoid using the system database CACHESYS to hold your application globals.

You should create another database that is specifically intended to hold the globals which should be common to all the productions you are running.

To make the globals residing in this one common database visible from your various production namespaces, you would "map" them using global mapping rules defined against your production namespaces.

See the System Management Portal's namespace configuration screens and the Global Mappings feature, or look here:

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GSA_config_namespace_addmap_global  
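(Global mappings can also be created programmatically from the %SYS namespace - a sketch, where the namespace, global and database names are placeholders:)

 // run in %SYS; "PROD1", "MyCommonGlobal" and "COMMONDB" are placeholders
 set props("Database")="COMMONDB"
 set sc=##class(Config.MapGlobals).Create("PROD1","MyCommonGlobal",.props)
 if $$$ISERR(sc) do $system.Status.DisplayError(sc)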

 

Sincerely,

Steve

Hi Kenneth

Your container is based on an Ubuntu distribution, per the "FROM ubuntu" command in the Dockerfile.

I believe Ubuntu by default comes with and uses "apt-get" to do a similar job to yum, so the yum commands running inside the Ubuntu container you are starting from would first need to retrieve yum - or, instead, use apt commands.

However... you say you believe that yum is installed because you executed:

which yum
/usr/bin/yum

- so the question is: did you execute this inside the container, or outside of it, i.e. on your VM? Your VM (although running Ubuntu 16.04) may have had yum downloaded onto it, but the base ubuntu image you are using for your container does not include it.

Equivalent apt-get commands you can use in place of the yum ones would be:

# "which" already ships with Ubuntu (debianutils); apt-get needs -y to run unattended
RUN apt-get update \
  && apt-get install -y tar hostname net-tools wget \
  && apt-get clean \
  && ln -sf /usr/share/zoneinfo/Europe/Prague /etc/localtime

Try that -

Sincerely,

Steve