Hi Utsavi

I have sent you an email with the classes.

I have managed to get all of the schemas generated and exported to JSON correctly. The only issue is that if I just use the class name as the 'key' in the array you will only get one instance of each schema; if I append the counter to the class name, however, multiple instances of each schema can be held in the array.

 

I haven't explored the XDATA mapping associated with %JSON nor investigated the other %JSON classes.

One solution, if you were importing the JSON, would be to pre-process the JSON string and remove the counter appended to the class name, but that is a bit kludgy.
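For what it's worth, a minimal sketch of that pre-processing step using %Regex.Matcher (assuming the keys look like "Contact_1", as in the export further down this thread):

   set matcher=##class(%Regex.Matcher).%New("""(Contact|Patient|Practitioner|Reference)_\d+"":")
   set matcher.Text=json
   // replace each counter-suffixed key with the bare schema name
   set json=matcher.ReplaceAll("""$1"":")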

Nigel

Hi

I have done some more experimenting. In the Contact class I have a ContactPhoneNumbers property which I defined as %ListOfDataTypes, and I noticed that the values were being generated but not exported to JSON, so I changed the type to %ArrayOfDataTypes, which didn't work either. I played around with the %JSON attributes to no avail. I read the documentation on the %JSON.Adaptor class, which has strict rules that arrays and lists must contain literals or objects, so I wrapped the phone numbers in quotes even though I was generating them in the form +27nn nnn nnn, but that made no difference. I suspect that the ELEMENTTYPE attribute should be set. In the ParentClass I specify that the array of object OIDs has an ElementType of %Persistent (the default is %RegisteredObject) and I think that I should do the same with the phone number array/list.
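If it is the collection declaration itself that is the problem, one thing that may be worth trying (untested; the %JSON parameter placement is my assumption) is declaring the collection inline so that the element type is a literal:

   Property ContactPhoneNumbers As list Of %String(%JSONFIELDNAME = "contactPhoneNumbers", %JSONINCLUDE = "INOUT");

With 'list Of %String' the elements are literals, which is what the %JSON.Adaptor rules mentioned above call for.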

Nigel

Hi

I should have included the class definition for Parent

Include DFIInclude

Class DFI.Common.JSON.ParentClass Extends (%Persistent, %JSON.Adaptor, %XML.Adaptor, %ZEN.DataModel.Adaptor)
{

Property ParentId As %String(%JSONFIELDNAME = "parentId", %JSONIGNORENULL = 1, %JSONINCLUDE = "INOUT") [ InitialExpression = {"Parent"_$i(^Parent)} ];

Property ParentName As %String(%JSONFIELDNAME = "parentName", %JSONIGNORENULL = 1, %JSONINCLUDE = "INOUT") [ InitialExpression = {..ParentName()} ];

Property Schemas As %ArrayOfObjectsWithClassName(%JSONFIELDNAME = "schemas", %JSONIGNORENULL = 1, %JSONINCLUDE = "INOUT", CLASSNAME = 2, ELEMENTQUALIFIED = 1, REFELEMENTQUALIFIED = 1);

ClassMethod ParentName() As %String
{
quit "Example: "_$i(^Example)
}

ClassMethod BuildData(pCount As %Integer = 1) As %Status
{
    set tSC=$$$OK
    set array(1)="DFI.Common.JSON.Contact"
    set array(2)="DFI.Common.JSON.Patient"
    set array(3)="DFI.Common.JSON.Practitioner"
    set array(4)="DFI.Common.JSON.Reference"
    try {
        for i=1:1:pCount {
            set obj=##class(DFI.Common.JSON.ParentClass).%New()
            set obj.Schemas.ElementType="%Persistent"
            set count=$r(12)
            for j=1:1:count {
                set k=$r(4)+1
                set schema=$classmethod(array(k),"%New")
                set tSC=schema.%Save() quit:'tSC
                do obj.Schemas.SetObjectAt(schema.%Oid(),$p(array(k),".",4)_"_"_j)
            }
            quit:'tSC
            set tSC=obj.%Save() quit:'tSC
        }
    }
    catch ex {set tSC=ex.AsStatus()}
    write !,"Status: "_$s(tSC:"OK",1:$$$GetErrorText(tSC))
    quit tSC
}

}

Nigel

Hi

I believe that I have a solution for this.

I worked on the basis that there is a 'Parent' object that has a property Schemas of type %ArrayOfObjectsWithClassName. This allows you to create an array of objects where the 'key' is the schema name and the value is the instance's %Oid().

I then defined 4 classes:

Reference, Contact, Patient, Practitioner

I then created a method to Build N instances of the ParentClass. That code reads as follows:

ClassMethod BuildData(pCount As %Integer = 1) As %Status
{
    set tSC=$$$OK
    set array(1)="DFI.Common.JSON.Contact"
    set array(2)="DFI.Common.JSON.Patient"
    set array(3)="DFI.Common.JSON.Practitioner"
    set array(4)="DFI.Common.JSON.Reference"
    try {
        for i=1:1:pCount {
            set obj=##class(DFI.Common.JSON.ParentClass).%New()
            set obj.Schemas.ElementType="%Persistent"
            set count=$r(10)
            for j=1:1:count {
                 set k=$r(4)+1
                 set schema=$classmethod(array(k),"%New")
                 set tSC=schema.%Save() quit:'tSC
                 // append the loop counter 'j' to the key so multiple instances of the same schema type are kept
                 do obj.Schemas.SetObjectAt(schema.%Oid(),$p(array(k),".",4)_"_"_j)
             }
        set tSC=obj.%Save() quit:'tSC
        }
    }
    catch ex {set tSC=ex.AsStatus()}
    write !,"Status: "_$s(tSC:"OK",1:$$$GetErrorText(tSC))
    quit tSC
}

Initially I wanted to see if I could (a) insert different object types into the array and (b) export the Parent object to JSON, so to make life easier I specified [ InitialExpression = {some expression} ] to generate a value for each field, a bit like %Populate would, because I didn't want to pre-create instances in the 4 schema tables and then manually go and link them together.

When I ran my method to create 10 Parents it created them, and as you can see in the logic I generate a random number of schemas for each Parent.

That all worked, and I then exported each Parent to JSON (to a string), resulting in this:

{"%seriesCount":"1","parentId":"Parent36","parentName":"Example: 38","schemas":{"Contact_1":{"%seriesCount":"1","contactGivenName":"Zeke","contactSurname":"Zucherro"},"Contact_11":{"%seriesCount":"1","contactGivenName":"Mark","contactSurname":"Nagel"},"Contact_3":{"%seriesCount":"1","contactGivenName":"Brendan","contactSurname":"King"},"Contact_8":{"%seriesCount":"1","contactGivenName":"George","contactSurname":"O'Brien"},"Patient_10":{"%seriesCount":"1","patientId":"PAT-000-251","patientDateOfBirth":"2021-05-05T03:38:33Z"},"Patient_2":{"%seriesCount":"1","patientId":"PAT-000-401","patientDateOfBirth":"2017-09-30T21:56:00Z"},"Patient_4":{"%seriesCount":"1","patientId":"PAT-000-305","patientDateOfBirth":"2019-04-19T14:04:11Z"},"Patient_5":{"%seriesCount":"1","patientId":"PAT-000-366","patientDateOfBirth":"2017-07-03T18:57:58Z"},"Patient_7":{"%seriesCount":"1","patientId":"PAT-000-50","patientDateOfBirth":"2016-11-26T03:39:36Z"},"Patient_9":{"%seriesCount":"1","patientId":"PAT-000-874","patientDateOfBirth":"2019-03-28T15:22:37Z"},"Practitioner_6":{"%seriesCount":"1","practitionerId":{"%seriesCount":"1","practitionerId":"PR0089","practitionerTitle":"Dr.","practitionerGivenName":"Angela","practitionerSurname":"Noodleman","practitionerSpeciality":"GP"},"practitionerIsActive":false}}}

Because I am effectively using an array of objects, the array is subscripted by 'key', so if there were multiple instances of, say, "Patient", each instance would overwrite the existing "Patient" entry in the array; that is why, in building the array, I concatenated the counter 'j' to the schema name.

In object terms, if you open an instance of ParentClass and use the GetObjectAt('key') method (the counterpart of the SetObjectAt() used above) on the Schemas array, you are returned a full object OID, from which you can extract the class name and the %Id().
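A minimal sketch of that round trip (the ID 1 and the key "Contact_1" are hypothetical; the class name is present in the OID because the Schemas property specifies CLASSNAME = 2):

   set parent=##class(DFI.Common.JSON.ParentClass).%OpenId(1)
   set oid=parent.Schemas.GetObjectAt("Contact_1")
   // an OID is $lb(id,classname), so both parts can be pulled out directly
   set id=$listget(oid,1),class=$listget(oid,2)
   // or simply open the referenced object from the OID
   set contact=$classmethod(class,"%Open",oid)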

The only way I can see around having to uniquely identify each 'schema' %DynamicObject in the JSON string is to have a separate array for each schema type in the Parent class, i.e. an array of Patient, an array of Contact, and so on.
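For example, typed arrays along these lines (a sketch; the %JSON field names are my assumption):

   Property Patients As array Of DFI.Common.JSON.Patient(%JSONFIELDNAME = "patients");

   Property Contacts As array Of DFI.Common.JSON.Contact(%JSONFIELDNAME = "contacts");

The JSON property name then identifies the schema type, so the keys no longer need to carry it.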

In terms of nesting, you will see that Patient has a Practitioner, and Practitioner is linked to a table of Practitioners; in the JSON above you can see that it picks up the Patient, the Practitioner, and the Practitioner details from the Practitioners table.

I haven't tried importing the JSON, as I would have to remove all of the code I put in the schema classes to generate values when a field is NULL, but that can be overcome by setting the attribute %JSONIGNORENULL to 0 and then making sure that you specify NULL for any property that has no value.

I would carry on experimenting but we are in the middle of a Power Cut (Thank you South African State Utility Company)

If you want to see the classes I wrote and play with them, let me know and I'll email them, as I can't upload them here.

Nigel

Hi

According to the documentation you can GRANT privileges on a Class/Table, and you can use the wildcard "*" for a collection of Classes/Tables.

The documentation reference in the Ensemble documentation is:

http://localhost:57772/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_grant

and in the explanation there is an example:

GRANT object-privilege ON object-list TO grantee [WITH GRANT OPTION]

and further on the documentation says:

object-list

A comma-separated list of one or more tables, views, stored procedures, or cubes for which the object-privilege(s) are being granted. You can use the SCHEMA keyword to specify granting the object-privilege to all objects in the specified schema. You can use "*" to specify granting the object-privilege to all tables, or to all non-hidden stored procedures, in the current namespace. Note that a cubes object-list requires the CUBE (or CUBES) keyword, and can only be granted SELECT privilege.

The full set of parameters is described as follows:

grantee
A comma-separated list of one or more users or roles. Valid values are a list of users, a list of roles, "*", or _PUBLIC. The asterisk (*) specifies all currently defined users who do not have the %All role. The _PUBLIC keyword specifies all currently defined and yet-to-be-defined users.

admin-privilege
An administrative-level privilege or a comma-separated list of administrative-level privileges being granted. The list may consist of one or more of the following, in any order:

%CREATE_METHOD, %DROP_METHOD, %CREATE_FUNCTION, %DROP_FUNCTION, %CREATE_PROCEDURE, %DROP_PROCEDURE, %CREATE_QUERY, %DROP_QUERY, %CREATE_TABLE, %ALTER_TABLE, %DROP_TABLE, %CREATE_VIEW, %ALTER_VIEW, %DROP_VIEW, %CREATE_TRIGGER, %DROP_TRIGGER

%DB_OBJECT_DEFINITION, which grants all 16 of the above privileges.

%NOCHECK, %NOINDEX, %NOLOCK, %NOTRIGGER privileges for INSERT, UPDATE, and DELETE operations.

role
A role or comma-separated list of roles whose privileges are being granted.

object-privilege
A basic-level privilege or comma-separated list of basic-level privileges being granted. The list may consist of one or more of the following: %ALTER, DELETE, SELECT, INSERT, UPDATE, EXECUTE, and REFERENCES. You can confer all table and view privileges using either "ALL [PRIVILEGES]" or "*" as the argument value. Note that you can only grant SELECT privilege on cubes.

object-list
A comma-separated list of one or more tables, views, stored procedures, or cubes for which the object-privilege(s) are being granted. You can use the SCHEMA keyword to specify granting the object-privilege to all objects in the specified schema. You can use "*" to specify granting the object-privilege to all tables, or to all non-hidden stored procedures, in the current namespace. Note that a cubes object-list requires the CUBE (or CUBES) keyword, and can only be granted SELECT privilege.

column-privilege
A basic-level privilege being granted to one or more listed columns. Available options are SELECT, INSERT, UPDATE, and REFERENCES.

column-list
A list of one or more column names, separated by commas and enclosed in parentheses.

table
The name of the table or view that contains the column-list columns.
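For example (the user name is hypothetical), to grant SELECT and INSERT on all tables in the current namespace:

GRANT SELECT,INSERT ON * TO Miranda

and to grant all privileges on a single table, with the grant option:

GRANT ALL PRIVILEGES ON Sample.Person TO Miranda WITH GRANT OPTION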

In IRIS look at the documentation at this link:

%SYSTEM.SQL.Security

You can check privileges with:

Methods

classmethod CheckPrivilege(Username As %String, ObjectType As %Integer, Object As %String, Action As %String, Namespace As %String = "") as %Boolean [ Language = objectscript ]

Check if user has SQL privilege for a particular action. This does not check grant privileges. Parameters:

Username
Name of the user to check. Required.
ObjectType
Required. Specifies the type to check the privilege of. ObjectTypes are 1 (table), 3 (view), 9 (procedure).
Object
Required. The name of the object to check the privilege of.
For example, ObjectType and Object could be "1" and "Sample.Person", or "9" and "SQLUser.My_Procedure".
Action
Comma-delimited string of action letters to check privileges for. Actions are one or more of the letters "a,s,i,u,d,r,e" (in any order), which stand for ALTER, SELECT, INSERT, UPDATE, DELETE, REFERENCES, EXECUTE. Privilege "e" is only allowed for procedures. CheckPrivilege will only return 1 if the user has privileges on all of the specified actions. Required.
Namespace
Namespace the object resides in (optional); the default is the current namespace.

Returns:

  • 1 - if the Username does have the privilege
  • 0 - if the Username does not have the privilege
  • %Status - if CheckPrivilege call is reporting an error

Notes:

  • If Username is a user with the %All role, CheckPrivilege will return 1 even if the Object does not exist.
  • If the user calling CheckPrivilege is not the same as Username, the calling user must hold the %Admin_Secure:"U" privilege. Example:
    • Do $SYSTEM.SQL.Security.CheckPrivilege("Miranda",3,"SQLUser.Person","s","PRODUCT")

and you can set privileges with:

classmethod GrantPrivilege(ObjPriv As %String, ObjList As %String, Type As %String, User As %String) as %Status [ Language = objectscript ]

GrantPrivilege lets you grant an ObjPriv to a User via this call instead of using the SQL GRANT statement. This does not include grant privileges.

$SYSTEM.SQL.Security.GrantPrivilege(ObjPriv,ObjList,Type,User)

Parameters:
ObjPriv
Comma delimited string of actions to grant. * for all actions:
  • Alter
  • Select
  • Insert
  • Update
  • Delete
  • References
  • Execute
  • or any combination
ObjList
* for all objects, else a comma delimited list of SQL object names (tables, views, procedures, schemas)
Type
Table, View, Schema or Stored Procedures
User
Comma delimited list of users

classmethod GrantPrivilegeWithGrant(ObjPriv As %String, ObjList As %String, Type As %String, User As %String) as %Status [ Language = objectscript ]

GrantPrivilegeWithGrant lets you grant an ObjPriv, WITH GRANT OPTION, to a User

$SYSTEM.SQL.Security.GrantPrivilegeWithGrant(ObjPriv,ObjList,Type,User)

Parameters:
ObjPriv
Comma delimited string of actions to grant. * for all actions:
  • Alter
  • Select
  • Insert
  • Update
  • Delete
  • References
  • Execute
  • or any combination
ObjList
* for all objects, else a comma delimited list of SQL object names (tables, views, procedures, schemas)
Type
Table, View, Schema or Stored Procedures
User
Comma delimited list of users
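A hedged usage sketch of the two calls (the user and table names are made up):

   // grant Select and Update on one table
   set tSC=$SYSTEM.SQL.Security.GrantPrivilege("Select,Update","Sample.Person","Table","Miranda")
   // grant Select on all tables in the namespace, with the grant option
   set tSC=$SYSTEM.SQL.Security.GrantPrivilegeWithGrant("Select","*","Table","Miranda")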

Nigel

Hi

About a year ago I was experimenting with IRIS for Health 2019.1 to see what advantages it would give us with the Enterprise Master Patient Index (EMPI) application that HST has been developing. HST had started developing this before IRIS for Health was released, so they had created FHIR classes based on the FHIR STU3 specification. They created the FHIR classes as %Persistent classes, and we use DTLs to convert a FHIR Patient JSON into the HST FHIR Patient class.

The one issue we ran into was the patient's picture, which is a binary stream. The developer writing the UI in OutSystems was using ODBC to retrieve data from the FHIR classes; she was able to specify 'Picture' in her SQL queries and then pass the binary data through a render utility to get it to display correctly on the UI form. However, there is a restriction in ODBC that the maximum size for any column is 4,000 bytes. In SQL Server the maximum size of a row of data is 8,000 bytes, and BLOBs are stored in separate structures from the main table (similar to our 'S' global in Caché default storage), so though BLOBs can be up to 2 GB in size you are still restricted if you intend to use ODBC to retrieve or update data in a table.

The ODBC restriction is specific to the version of ODBC; earlier versions had a limit of 2,000 bytes. This is irrespective of whether you are working with the SQL Server, Oracle or IRIS ODBC drivers. The SQL Server documentation I was reading notes performance issues as well; it basically recommends keeping your binary or character stream data in files, and SQL Server supports a couple of techniques that allow users to execute queries that include BLOB data, but the SQL statements are hectic.

Hi

The reason I created this DTL was as follows. I had created an interface that sends HL7 messages to an HTTP Operation, and provided the message reaches the target server it will respond with an HL7 ACK message (their HL7 HTTP server works in the equivalent of 'ACK Immediate' mode, as opposed to 'Application' mode). The OnRequest() method in my Business Process calls the HTTP Operation and then calls the File Operation. The HTTP Operation returns an HL7 ACK message, and I process that in the OnResponse() method of the Business Process. The File Operation writes the HL7 message that I sent to the HTTP Operation to file, and by default that's all the File Operation will do; but the reason I have the File Operation at all is to simulate what should happen when I send the message over HTTP, so I generate an HL7 ACK message to return to the Business Process. That is why I have the DTL to transform the source HL7 message into a corresponding HL7 ACK message, and I think that I randomly generate a NACK code, again to be able to test what to do if I were to get a NACK code from the HL7 HTTP Operation. So, back in my Business Process OnResponse() method, I have code to test the response HL7 ACK message, and if there is an error I handle it.

With regard to the question of whether it can be used for any message structure: strictly speaking, yes, because the source is an EnsLib.HL7.Message and the response is an EnsLib.HL7.Message. I basically copy the MSH from the source to the target, swap the Sending and Receiving fields around, and I think I generate a new timestamp; the Message Control ID remains the same. Any fields that I need to change I pass in using the AUX object; I basically have an AUX class for every DTL I create, whenever I need some runtime value that is not part of the source message. The Event Type of the MSH:MessageType is ACK and the Message Structure is just ACK, and then the code and the error text, if there is one, are assigned in the MSA segment. I have noticed a mistake in my DTL, though. When I create the HL7 message that I am sending to the external application, the message structure is computed from the Message Type and Trigger Event, and I have a lookup table that maps the various Trigger Events to their base message structure. For instance, for an ADT_A08, where the Message Type is ADT and the Trigger Event is A08, the base message structure is ADT_A01. I have a method that I call in my Business Process that determines the actual message structure from the Message Type and Trigger Event, but that is not being called in the DTL, so you will see that I am just assigning ACK to the message structure, where the assign should actually read "ACK_"_target.{MSH:MessageType.TriggerEvent}

That would give you ACK_A08
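In ObjectScript terms (outside the DTL editor, and assuming the DocType is set so the symbolic paths resolve), the corrected assignment is equivalent to something like this sketch:

   // build ACK_A08, ACK_A01, etc. from the trigger event of the message being acknowledged
   do target.SetValueAt("ACK_"_source.GetValueAt("MSH:MessageType.TriggerEvent"),"MSH:MessageType.MessageStructure")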

Nigel

Hi

Your request class tsRequest has a property DocumentData, which I assume has a data type of %String. There is one thing you need to check and one change you can make in the request class:

1) Make sure that the "Support Long Strings" setting in your Caché instance is set to true. The setting can be found in the Management Portal at System Administration > Configuration > System Configuration > Memory and Startup.
 

2) In the class definition for your request class, specify the definition of DocumentData as follows:

Property DocumentData As %String(MAXLEN = "", TRUNCATE = 1);

Turning on long strings enables you to store up to 3,641,144 characters.

The attribute MAXLEN = "" says that the maximum length of the property is the maximum supported string length, i.e. 3,641,144 characters.

The attribute TRUNCATE = 1 will chop off any characters that exceed the MAXLEN rather than failing with a maximum-length error.

Nigel

Hi

I'm sure that the documentation will answer your question, but if you want a short answer that explains the difference, here you go.

Globals are defined as persisted sparse multidimensional arrays.

A multidimensional array basically means that you can create an array with multiple subscripts. Visually this is represented by a tree structure: you have a trunk that branches, each branch then branches, each of those branches can have many branches, and so on; eventually, at the tips of the very outer branches, you have leaves. This tree, however, can also have leaves at the points where a branch branches again.

The leaves in this analogy are your data pages, i.e. database blocks that contain some data. In your application you want to go and fetch the data in those leaves.

But, as in nature, some of those branches don't have leaves: it might be a new branch where a leaf is only beginning to develop. Likewise, some of the leaves at the join between a branch and the branches that sprout from it have fallen off, or for whatever reason a leaf never grew at that particular intersection.

So what is the most effective way of finding all of the leaves on the tree? Worse still, depending on how old that tree is we don't necessarily know how many branches on branches on branches... on branches there are.

So, if you only had $order and you were a hungry grasshopper 🦗, you would have to walk up the trunk, choose the furthest branch to the left, and walk up it. Ah! Success: you find a leaf. You eat the leaf. You're still hungry, so you take the furthest branch on the left of the next level of branches, and eventually you reach the very top of the tree, the thinnest little twig, and you eat the little leaf that has just budded on it. Then you walk back down the twig, move one twig to the right, climb up, and eat the leaf at the end of that twig, and you repeat this process until you have dealt with every twig on that outermost branch. Now you move one branch to the right and climb each of its twigs in turn, and so on; once you have traversed every single branch on every single branch you eventually get back to the trunk, where a very hungry sparrow has been watching your progress with interest, and as soon as you stumble back down the trunk, tired, dusty, full to the brim with leaf 🍃, the sparrow makes his move and eats you.

Bummer. So much effort, so little reward, the amount of leaf you ate barely replenished the energy you used to traverse the tree 🌴.

Now, if you are a smart grasshopper with good eyesight, you remember that you can hop; in fact you can hop like a grasshopper on steroids. So you bound up the trunk, scan the branches, spot a leaf within hopping distance, hop to it and eat it, and on you go, hopping from one leafy branch to the next, ignoring all of the branches that have no leaves.

Well, that's how $query works: $query returns the next set of subscripts in your sparse array that actually has data stored at it, at whatever subscript level.

Of course, the grasshopper was so pleased with himself and how well he hopped that he forgot to keep his eyes open for sparrows, and that sparrow that had sat patiently watching the first grasshopper traverse the tree using $order has been quietly sitting (do sparrows sit or perch?) watching you bound effortlessly from leafy branch to leafy branch; when the grasshopper eventually gets back to the trunk, the sparrow eats him too.

Moral of the story: I'd rather be a sparrow than a grasshopper, and if I am a developer with a large sparse array I will use $query rather than $order.
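In code terms, a minimal sketch of the two traversals over a hypothetical global ^Tree:

   // $order: one loop per subscript level; this only walks the first level
   set s=""
   for {
       set s=$order(^Tree(s)) quit:s=""
       write:$data(^Tree(s))#2 !,"^Tree("_s_") = ",^Tree(s)
   }

   // $query: jumps straight from one node that holds data to the next, at any depth
   set node=$query(^Tree(""))
   while node'="" {
       write !,node," = ",@node
       set node=$query(@node)
   }

With $order you need a nested loop for every subscript level you want to visit; $query hands you the next populated node directly, however deep it is.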

Nigel

I can't disagree with Ben: there is a cut-off point where it makes more sense to store the files external to IRIS. It should be noted, though, that if I were working with any other database technology, such as Oracle or SQL Server, I wouldn't even consider storing BLOBs in the database; Caché/Ensemble/IRIS, however, is extremely efficient at storing stream data, especially binary streams.

I agree with Ben that by storing the files in the database you get the benefits of journaling and backups, which support 24/7 uptime. If you are using mirroring as part of your disaster recovery strategy, then restoring your system will also be faster.

If you store the files externally you will need to back them up as a separate process from your Caché/Ensemble/IRIS backups. I assume that you would use a separate file server, as you wouldn't want to keep the external files on the same server as your Caché/Ensemble/IRIS server. Several points to consider:

1) You would not want the files stored on the same disk as your database .DAT files, as the disk I/O might be compromised.

2) If your database server crashes you may lose the external files unless they are on a separate server.

3) You would have to back up your file server to another server or suitable media.

4) If the stream data is stored in IRIS then you can use iFind and iKnow on the file content, which leads you into the realms of ML, NLP and AI.

5) If your CACHE.DAT files and the external files are stored on the same disk system, you potentially run into disk fragmentation issues over time, and the system will get slower as the fragmentation gets worse. It is far better to have your CACHE.DAT files on a disk system of their own, with the database growth factor set quite high, so that database growth is contiguous and fragmentation is considerably reduced; the stream data will then be managed as effectively as any other global structure in Caché/Ensemble/IRIS.

Yours

Nigel

Does HealthShare use the IO Device Table like LabTrak does? And are users tied to the default Windows printer when they print? In which case, can you not change the label size via the choice of stationery in the Windows print preferences? It also depends, of course, on whether the text content will adjust to fit the label size. If the font and relative positioning are fixed then my idea won't work: it may print on a larger or smaller label, but the text itself won't scale.

Nigel

You could also try

quit ##class(Ens.Director).GetCurrProductionSettings(.pSettings)

and then, according to the documentation, you should be able to use the following in Ens.Production:

classmethod ApplySettings(pProductionName As %String, ByRef pSettings) as %Status

Apply multiple settings to a production. pProductionName is the name of the Production to which to apply the settings.
pSettings is a local array of settings structured in the following way:

pSettings(<itemName>,<target>,<settingName>)=<settingValue>

Where:
<itemName> is the configuration item name in the production
<target> is one of:
    Item: the setting is a property of the item itself, such as PoolSize
    Host: sets a host setting
    Adapter: sets an adapter setting
<settingName> is the setting name
<settingValue> is the desired value of the setting

and also, to get the production settings from Ens.Production, use:

classmethod GetSettingsArray(Output pSettings) as %Status
 

I haven't tried them, but that's what the documentation in the class says.
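For what it's worth, a sketch of how ApplySettings might be called (untested, as I say; the production and item names are made up):

   set pSettings("My.FileService","Item","PoolSize")=2
   set pSettings("My.FileService","Adapter","FilePath")="/data/in/"
   set tSC=##class(Ens.Production).ApplySettings("My.Production",.pSettings)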

Nigel 

Hi

There are a couple of things to note here:

1) You can programmatically update the production item settings, as well as the settings of the adapter bound to the Service or Operation. What you won't see are those modified values on the production item's Settings tab, though if you do a $$$TRACE in your code you will see that the setting holds the value you assigned to it.

2) The production item has groups of common attributes, such as Information, Connection, Alerting and Development. Most of these are common to almost all production items.

3) If you want to add a custom setting to the production item, define it as a property, for example:

Property MaxNumberOfLoops As %Integer [ InitialExpression = 200 ]; 

and then, if there isn't already a SETTINGS parameter, include the following:

Parameter SETTINGS = "MaxNumberOfLoops:Basic";

This will add a field to the Basic section of the Settings tab, and in the OnProcess(), OnProcessInput() or equivalent methods you can access the property value using normal object instance syntax:

set maxloops=..MaxNumberOfLoops

4) For a more complex example where you might want to include a drop down list of values derived from a class/table in the database you can do the following:

Property Organization As %String;

Property OrganizationId As %String [ Calculated ];

Parameter SETTINGS = "Organization:Basic:selector?context={Nigel.Production.ContextSearch/Organizations}";

The code for this context search goes in a class method called "Organizations" (the name that follows "ContextSearch/" in the context expression) in a class that extends Ens.ContextSearch:

Class Nigel.Production.ContextSearch Extends Ens.ContextSearch
{

ClassMethod Organizations(Output pCaption As %String, Output pTopResults, Output pResults, ByRef pParms As %String, pSearchKey As %String = "") As %Status
{
    set tSC=$$$OK,count=1,pCaption=""
    kill pResults,pTopResults
    try {
        // declare a cursor over the Organization table, fetching Name into :tName
        &sql(declare x cursor for select Name into :tName from Nigel_Organization.Organization)
        &sql(open x)
        for {
            &sql(fetch x)
            quit:SQLCODE'=0
            set pResults(count)=tName,count=count+1
        }
        &sql(close x)
    }
    catch ex {set tSC=ex.AsStatus()}
    quit tSC
}

}

This will populate the list with the Organization names; you will also want to maintain a mapping from "Organization Name" back to the Organization's internal ID. You can specify the caption to introduce spaces and other cosmetic features.

There are also some built-in selectors that you can make use of, such as a calendar for dates/timestamps, a directory selector and a file selector: basically the standard Zen custom components, which can be handy.

As a matter of interest, which production item doesn't have the Alerting section? If one doesn't then, apart from items used only for development and testing, there is probably a very good reason why alerting is just not appropriate for that item. Even Ens.Alert has an Alerting section. LOL

If you wanted to add some additional field to the Alerting section then define the property but link it to:

Parameter SETTINGS = "myProperty:Alerting";

On the other hand: I once designed an architecture where I had an abstract device class and a whole lot of abstract adapter classes, which I inherited into persistent classes. Each persisted device had a unique name, and in the OnInit() method I would get the production item name; if I could find an entry for that name in my device index global, I would use the values from my device table to populate the production item with device- or adapter-specific setting values. It is a bit complex to explain here, so let me know if you want any more information on how I made it work, but I think Ensemble ultimately added that sort of functionality in later releases; I developed my own version for Ensemble 2014 or earlier.

Finally, if you are not sure whether a particular setting or attribute exists at runtime, use $GET with a default, as in the following code:

Set outputContentType=$GET(pInput.Attributes("Content-Type"),"text/xml")

and if, beyond the

Parameter ADAPTER = "......";

Property Adapter As EnsLib.......;

you want to read a property of the adapter dynamically, you can use something like:

set xyz=$property(..Adapter,{Property_Name})

If the property does not exist on the Adapter class this will throw a <PROPERTY DOES NOT EXIST> error, so you can check for it first with:

if ##class(%Dictionary.CompiledProperty).%ExistsId("Classname||PropertyName")

which is true if the property is a compiled property of the Adapter class.

Nigel