I've used this approach before and it generally works fine, but there are some additional things to consider:

1) If your code contains SQL that references tables within the package you're renaming, you need to find/replace the schema as well as the package name.  E.g., package my.demo would become schema my_demo.  If you don't do this, the SQL in your new package will reference the tables in your old package.
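For example (the table and column names here are made up), a query in the old package like

SELECT Name, DOB FROM my_demo.Person

would need to become

SELECT Name, DOB FROM my_demo2.Person

after renaming the package to my.demo2.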

2) If you have persistent classes in your package, you'll likely want to export with the /skipstorage qualifier. That will cause the export to omit storage maps, so that the new package gets new storage maps and new globals when you compile.  If you don't do this, your new package might use the same globals as the old one, because find/replace wouldn't change hashed global names like ^package.foobar9876.Class1D.
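If you're exporting programmatically, the qualifier goes in the qspec argument of $SYSTEM.OBJ.Export; a sketch (the package name and file path are made up):

do $SYSTEM.OBJ.Export("my.demo.*.cls", "c:\temp\demo.xml", "/skipstorage")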

3) If you follow the previous suggestion, you may run into another problem if you try to copy your old globals to the new ones, so that you're bringing forward your data as well as your code into the new package.  The issue is that the new storage map will list all properties in the order they're declared.  If you've added properties to the original class over time, they may be in a different order, making the storage maps incompatible.  That happens because new properties are always added at the end of an existing storage map, regardless of where they're declared; that avoids having to convert data to a new structure when new properties are added to a deployed class.  In that case you'll need to manually fix the new storage map by copying forward the old <Value> tags, while retaining the new global references.
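To make that concrete, here's the shape of the <Value> tags I mean, from a made-up default storage map:

<Data name="Class1DefaultData">
<Value name="1">
<Value>%%CLASSNAME</Value>
</Value>
<Value name="2">
<Value>Name</Value>
</Value>
<Value name="3">
<Value>DOB</Value>
</Value>
</Data>

The name="n" slot numbers have to match the layout of your old data, while the global references elsewhere in the storage map (<DataLocation>, <IndexLocation>, etc.) should keep pointing at the new globals.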

Thanks Jorge, this is helpful.  Your example of computing BMI is actually what the customer is trying to do, although they want to do it in Health Insight.  That allows them to report BMI and use it in analytics, such as to compute a risk score.  After discussing with them we agreed that converting on consumption is safer than on ingestion, which is consistent with what you showed for the CV.

The premise is that when you have a timestamp property that's set at the time of row insert, those timestamps are guaranteed to be “in order” with respect to Row IDs.

At first glance that sounds reasonable, and is probably almost always true, but I’m not sure it’s guaranteed.  Isn’t there a race condition around saving the timestamp and generating the new row ID?
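For concreteness, here's a minimal sketch of the kind of class I have in mind (the class and property names are made up):

Class Demo.Event Extends %Persistent
{

Property CreatedAt As %TimeStamp;

Method %OnSave() As %Status [ Private, ServerOnly = 1 ]
{
    // Stamp new rows at save time; the Row ID is generated later in the same %Save.
    if ..%Id() = "" { set ..CreatedAt = $zdatetime($ztimestamp, 3, 1, 3) }
    quit $$$OK
}

}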

That is, couldn’t you have a flow like this:

Process 1, Step 1:  Call %Save.  In %OnSave:  set ..CreatedAt = $zts  (let’s say this gives me 2018-06-01 16:00:00.000)

Process 2, Step 1:  Call %Save.  In %OnSave:  set ..CreatedAt = $zts   (let’s say this gives me 2018-06-01 16:00:00.010)  << +10ms

Process 2, Step 2: Generate new Row ID using $increment, and complete %Save (let’s say this gives me RowID = 1)

Process 1, Step 2: Generate new Row ID using $increment, and complete %Save (let’s say this gives me RowID = 2)

Is that likely?  Definitely not, but I don't think it's impossible.


Actually, it might be fairly likely in an ECP environment where multiple ECP Clients are inserting data into the same table, one reason being that system clocks could be out of sync by a few milliseconds.


Does that make sense, or am I missing something?  For example, would this all be okay unless I did something dumb with Concurrency?  If so, would that still be the case in an ECP environment?

Okay, thanks for updating.  That error didn't seem to make sense based on what you showed before.

This very simple BPL might help you see how to declare and use a List:

<process language='objectscript' request='Ens.Request' response='Ens.Response' height='2000' width='2000' >
<context>
       <property name='MyList' type='%Integer' collection='list' instantiate='0' />
</context>
<sequence xend='227' yend='451' >
        <assign name="Append 1" property="context.MyList" value="1" action="append" xpos='278' ypos='291' />
</sequence>
</process>

Note that I don't need to initialize my list property.  That will happen automatically.

Also note that I'm using action='append'.  That will append the new value at the end of the list.  It corresponds to this in COS:

do context.MyList.Insert(1)

BPL also has action='insert', but that inserts into a specific location.  It's equivalent to InsertAt for lists, or SetAt for arrays.
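In COS those look like this (the position and key values here are made up, and MyArray is a hypothetical array-collection property):

do context.MyList.InsertAt(1, 3)    // insert value 1 at position 3 of a list
do context.MyArray.SetAt(1, "abc")  // set value 1 at key "abc" of an array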

Probably unrelated, but you most likely do want MSI.IN835.EOBList to have storage.

The reason is that if your BPL suspends for any reason (such as a Call) between the time you set and use that data, you'll lose whatever values you were trying to save.  That's because the job that's executing your BPL will %Save the Context object and go work on another task while it's waiting.  When the Call returns it will reload the Context object and resume work.  If you extend %RegisteredObject your context property won't survive the save/reload.

It might be tempting to ignore that if you're not currently doing any intervening Calls, but things tend to change over time, so doing it now could prevent a hard-to-find bug later.

%SerialObject is probably better than %Persistent for this, because that way you won't have to implement your own purge of old objects.
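A minimal sketch of what I mean (assuming EOBList just needs to wrap a list; the property name is made up):

Class MSI.IN835.EOBList Extends %SerialObject
{

Property EOBs As list Of %String;

}

Because it's serial, it gets saved inline as part of the Context object, so there's nothing separate to purge.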

Or, if you only need to store a list of integers, you could just declare your context property as that and skip the custom wrapper class.

You need to provide subscript values for the three loops.

This will give you the 1st member of each collection:

source.{loop2000A(1).loop2000B(1).loop2300(1).CLM:ClaimSubmittersIdentifier}

It's likely that in a real DTL you'll want to loop over each collection, because there will probably be multiple claims in the message.  Use ForEach to do that:

<foreach property='source.{loop2000A()}' key='k2000A' >
    <foreach property='source.{loop2000A(k2000A).loop2000B()}' key='k2000B' >
        <foreach property='source.{loop2000A(k2000A).loop2000B(k2000B).loop2300()}' key='k2300' >
            <assign value='source.{loop2000A(k2000A).loop2000B(k2000B).loop2300(k2300).CLM:ClaimSubmittersIdentifier}' property='target.ClaimInvoiceNo' action='set' />
        </foreach>
    </foreach>
</foreach>

Note that as written you'll end up with only the last ClaimInvoiceNo in your target, because each iteration of the innermost loop overwrites the previous value.  You'll need to adjust it to make sure you process each claim.
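For example, if all you actually needed in the target was a delimited string of every claim ID, you could change the assign to concatenate rather than overwrite (a sketch, assuming target.ClaimInvoiceNo is a plain string field):

<assign value='target.ClaimInvoiceNo_$select(target.ClaimInvoiceNo="":"",1:",")_source.{loop2000A(k2000A).loop2000B(k2000B).loop2300(k2300).CLM:ClaimSubmittersIdentifier}' property='target.ClaimInvoiceNo' action='set' />

More likely you'll want each claim mapped to its own repeating element in the target, but that depends on the structure of your target message.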