I think the 4000 character read from the unencoded source is the problem.  4000 isn't a multiple of 3 bytes, so base64 has to pad the final group, meaning that the encoded version will have trailing "=" signs as padding, in this case 2 of them.

Changing to 3000 characters (a multiple of 3) got it to work for me, meaning that the encoded chunks did not have trailing "=".

An easy test is to look at your final encoded text.  If you see "=" chars anywhere except the end, this is your problem.

The trailing "=" is a problem because when you concatenate the encoded chunks, those pad characters end up embedded in the result, making it invalid base64.  You need to arrange things so you don't get trailing "=" for any chunk except the last one.  You do that by ensuring that each unconverted chunk is a multiple of 3 bytes, which makes the bit count divisible by 6 and leaves nothing to pad.
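If it helps, here's a minimal sketch of the fix in ObjectScript, assuming the unencoded source is in a stream (the stream variable and chunk size are illustrative):

set chunkSize = 3000
set encoded = ""
while 'stream.AtEnd {
    // Read a multiple of 3 bytes so only the very last chunk can carry "=" padding
    set raw = stream.Read(chunkSize)
    // Note: Base64Encode may insert MIME-style line breaks; standard decoders accept them
    set encoded = encoded _ $SYSTEM.Encryption.Base64Encode(raw)
}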

This is helpful, but I feel like it's still just a workaround for the real problem, which is that the compiler is reformatting code.  Even if I allow that the compiler knows more about how code should be formatted than I do (it doesn't, if only because correct formatting is defined by personal opinion), it shouldn't be doing that until it's fully parsed and validated the code.  If it hasn't successfully parsed the code it can't possibly make good decisions about how it should be formatted.

You need to look at timestamps in 2 globals:

^ROUTINE contains the source code of the routine.

^ROUTINE(RoutineName,0) = Timestamp when the routine was last saved.  ($ZTS format, local timezone).

^rOBJ contains the compiled object code.  This will not exist if the routine has never been compiled.

^rOBJ(RoutineName,"INT") = Timestamp when the INT routine was last compiled.

If the date in ^ROUTINE is later than the one in ^rOBJ, the compiled code may be out of sync with the source.  This isn't guaranteed though, since the last save doesn't necessarily reflect a code change that would require a recompile.

For MAC routines you'll want to look at:

^rOBJ(RoutineName,"MAC") = Timestamp when MAC routine was last compiled

^rMAC(RoutineName,0) = Timestamp when MAC routine was last saved
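Putting that together, here's a rough sketch of the staleness check in ObjectScript (the routine name is an example, and I'm assuming both nodes hold $ZTS-style "days,seconds" values, so I compare the pieces numerically):

set rtn = "MyRoutine"
set tSrc = $get(^ROUTINE(rtn,0))
set tObj = $get(^rOBJ(rtn,"INT"))
if tObj = "" {
    write rtn," has never been compiled",!
}
elseif ($piece(tSrc,",") > $piece(tObj,",")) || (($piece(tSrc,",") = $piece(tObj,",")) && ($piece(tSrc,",",2) > $piece(tObj,",",2))) {
    write rtn," was saved after it was last compiled",!
}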

For Classes I believe you want TimeChanged from %Dictionary.CompiledClass.  You can get that using SQL:

select TimeChanged from %Dictionary.CompiledClass where ID = 'Fully.Qualified.ClassName'

I agree that developers need this information presented in a better way, but I'm thinking the approach would be to have the UI read the Message Map rather than adding properties to the Operation class.  Here's my reasoning.

One concern with this idea is that a Business Operation can accept many kinds of request messages. In a sense the Business Operation doesn't actually have a request message at all.  It's the individual methods within the Operation that do, and the Message Map specifies how they align.
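For reference, the Message Map is an XData block in the Operation class, along these lines (class and method names are hypothetical):

XData MessageMap
{
<MapItems>
    <MapItem MessageType="My.App.RequestA">
        <Method>HandleRequestA</Method>
    </MapItem>
    <MapItem MessageType="My.App.RequestB">
        <Method>HandleRequestB</Method>
    </MapItem>
</MapItems>
}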

I suppose the Operation class could have a property for each method-level request class, but that doesn't seem to be very helpful.  To make it useful there'd need to be a way to match the properties with their respective methods, such as by naming convention.

Even then, it would just be a form of documentation because the Operation code has no technical need for properties of that kind.

We could certainly rework things so it does use them, but that seems forced and I think it would make things more complicated rather than simpler. For one thing, it would mean that the calling framework needs to set the correct property of the Operation object based on what method it's about to call.  Then it needs to clear the reference after the method returns.  If you didn't clear the reference it would persist until the next call to that method, in the meantime preventing the request object from being garbage collected.  That could cause any number of problems related to resources like locks, network connections, file handles, etc. not being released.  You'd have similar bookkeeping for the response message if you moved that to a property.

I've used this approach before and it generally works fine, but there are some additional things to consider:

1) If your code contains SQL that references tables within the package you're renaming, you need to find/replace the schema as well as the package name.  E.g., package my.demo would be schema my_demo.  If you don't do this, the SQL in your new package will reference the tables in your old package.

2) If you have Persistent classes in your package, you'll likely want to export with the /skipstorage qualifier (see the sketch after this list).  That will cause the export to omit storage maps, so that the new package gets new storage maps and new globals when you compile.  If you don't do this, your new package might use the same globals as the old one, because find/replace wouldn't change compressed global names like ^package.foobar9876.Class1D.

3) If you follow the previous suggestion, you may run into another problem if you try to copy your old globals to the new ones, so that you're bringing forward your data as well as your code into the new package.  The issue is that the new storage map will list all properties in the order they're declared.  If you've added properties over time to the original class, they may be in a different order, making the storage maps incompatible.  That happens because new properties are always appended to the end of an existing storage map, regardless of where they're declared; that avoids having to convert data to a new structure when new properties are added to a deployed class.  In that case you'll need to manually fix the new storage map by copying forward the old <Value> tags, while retaining the new global references.
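For point 2, the export call would look something like this (package and file names are examples):

do $SYSTEM.OBJ.Export("my.demo.*.cls","C:\temp\mydemo.xml","/skipstorage")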

Thanks Jorge, this is helpful.  Your example of computing BMI is actually what the customer is trying to do, although they want to do it in Health Insight.  That allows them to report BMI and use it in analytics, such as to compute a risk score.  After discussing with them we agreed that converting on consumption is safer than on ingestion, which is consistent with what you showed for the CV.

The premise is that when you have a timestamp property that’s set at the time of row insert, those timestamps are guaranteed to be “in order” with respect to Row IDs.

At first glance that sounds reasonable, and is probably almost always true, but I’m not sure it’s guaranteed.  Isn’t there a race condition around saving the timestamp and generating the new row ID?

That is, couldn’t you have a flow like this:

Process 1, Step 1:  Call %Save.  In %OnBeforeSave:  set ..CreatedAt = $zts  (let’s say this gives me 2018-06-01 16:00:00.000)

Process 2, Step 1:  Call %Save.  In %OnBeforeSave:  set ..CreatedAt = $zts  (let’s say this gives me 2018-06-01 16:00:00.010)  << +10ms

Process 2, Step 2: Generate new Row ID using $increment, and complete %Save (let’s say this gives me RowID = 1)

Process 1, Step 2: Generate new Row ID using $increment, and complete %Save (let’s say this gives me RowID = 2)

Is that likely?  Definitely not, but I don't think it's impossible.
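For concreteness, here's the sort of class I have in mind (a minimal sketch with hypothetical names, using the %OnBeforeSave callback):

Class Demo.Event Extends %Persistent
{

Property CreatedAt As %String;

Method %OnBeforeSave(insert As %Boolean) As %Status
{
    // The timestamp is captured here, but the row ID isn't allocated until
    // later in %Save, so the two steps can interleave across processes.
    if insert { set ..CreatedAt = $zts }
    quit $$$OK
}

}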


Actually, it might be fairly likely in an ECP environment where multiple ECP Clients are inserting data into the same table, one reason being that system clocks could be out of sync by a few milliseconds.


Does that make sense, or am I missing something?  For example, would this all be okay unless I did something dumb with Concurrency?  If so, would that still be the case in an ECP environment?

Okay, thanks for updating.  That error didn't seem to make sense based on what you showed before.

This very simple BPL might help you see how to declare and use a List:

<process language='objectscript' request='Ens.Request' response='Ens.Response' height='2000' width='2000' >
<context>
       <property name='MyList' type='%Integer' collection='list' instantiate='0' />
</context>
<sequence xend='227' yend='451' >
        <assign name="Append 1" property="context.MyListvalue="1action="append" xpos='278' ypos='291' />
</sequence>
</process>

Note that I don't need to initialize my list property.  That will happen automatically.

Also note that I'm using action='append'.  That will add the new value at the end of the list.  It corresponds to this in COS:

do context.MyList.Insert(1)

BPL also has action='insert', but that inserts into a specific location.  It's equivalent to InsertAt for lists, or SetAt for arrays.
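For comparison, inserting a value at position 1 of the list would correspond to something like this in COS (illustrative values):

do context.MyList.InsertAt(2,1)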

Probably unrelated, but you most likely do want MSI.IN835.EOBList to have storage.

The reason is that if your BPL suspends for any reason (such as a Call) between the time you set and use that data, you'll lose whatever values you were trying to save.  That's because the job that's executing your BPL will %Save the Context object and go work on another task while it's waiting.  When the Call returns it will reload the Context object and resume work.  If you extend %RegisteredObject your context property won't survive the save/reload.

It might be tempting to ignore that if you're not currently doing any intervening Calls, but things tend to change over time, so doing it now could prevent a hard-to-find bug later.

%SerialObject is probably better than %Persistent for this, because that way you won't have to implement your own purge of old objects.
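A minimal sketch of the wrapper as a serial class (the property name is just an example):

Class MSI.IN835.EOBList Extends %SerialObject
{

Property EOBs As list Of %Integer;

}

Because a serial object is serialized into the Context object's own storage, it survives the save/reload cycle and gets purged along with the Context.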

Or, if you only need to store a list of integers, you could just declare your context property as that and skip the custom wrapper class.

You need to provide subscript values for the three loops.

This will give you the 1st member of each collection:

source.{loop2000A(1).loop2000B(1).loop2300(1).CLM:ClaimSubmittersIdentifier}

It's likely that in a real DTL you'll want to loop over each collection, because there will probably be multiple claims in the message.  Use ForEach to do that:

<foreach property='source.{loop2000A()}' key='k2000A' >
    <foreach property='source.{loop2000A(k2000A).loop2000B()}' key='k2000B' >
        <foreach property='source.{loop2000A(k2000A).loop2000B(k2000B).loop2300()}' key='k2300' >
            <assign value='source.{loop2000A(k2000A).loop2000B(k2000B).loop2300(k2300).CLM:ClaimSubmittersIdentifier}' property='target.ClaimInvoiceNo' action='set' />
        </foreach>
    </foreach>
</foreach>

Note that the way I have it now, each iteration overwrites target.ClaimInvoiceNo, so you'll end up with only the last ClaimInvoiceNo in your target.  You'll need to adjust this to make sure you process each of them.
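For example, if your target class had a list property to collect them (hypothetical property name ClaimInvoiceNos), you could append inside the innermost loop instead:

<assign value='source.{loop2000A(k2000A).loop2000B(k2000B).loop2300(k2300).CLM:ClaimSubmittersIdentifier}' property='target.ClaimInvoiceNos()' action='append' />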