If you are defining a Parent-Child Relationship with Default Storage, the global structure of the child will change from ^child(id) to ^child(parentid,id), so you will not see any of the old data. No hint will fix that issue, sorry.
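A hypothetical before-and-after of the child global (subscript values and data made up for illustration):

     ; plain persistent class, Default Storage:
     ^child(1) = $LB("child one")
     ; same class stored as the child in a parent-child relationship:
     ^child(42,1) = $LB("child one")

Because the old data sits at ^child(id) and the new map reads ^child(parentid,id), the existing nodes are simply never visited.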

If your current child data is valid and you want to keep it, I would suggest defining a Foreign Key constraint with Cascade on Delete.  This keeps the global structure the same but still lets you enforce your relationship.
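A minimal sketch of that approach (class and property names here are hypothetical):

Class Demo.Child Extends %Persistent
{

Property MyParent As Demo.Parent;

ForeignKey FKParent(MyParent) References Demo.Parent() [ OnDelete = cascade ];

}

Deleting a Demo.Parent row now deletes the matching Demo.Child rows, while each class keeps its own default ^child(id)-style global.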

Nice clear description, Benjamin.  This should be a big help for query performance.  It is such a common problem in support when we have to help customers deal with outliers.  Now the RTPC should do that work for them!

There is a zip file attached to this article that contains a bunch of examples.  The class Mapping.ChildPiece should be close to what you need.  The parent-child global for that class looks like this:

/// ^Par(1)=Parent1\Child11#child12#child13
///      2)=Parent2\Child21#child22
///      3)=Parent3

and the mapping for the child is:

<SQLMap name="Map1">
<Data name="ChildData">
<Subscript name="1">
<Subscript name="2">
<Invalidcondition name="1">

If you can't get yours working, let me know and I can make an example for your global.


Hi Limyandi

I am not a fan of creating a List of Objects as it makes it hard to query the data.  I would rather expose this data as a child class somehow so SQL can be used to look at the data.  I would look at doing something along the lines of what is in Mapping.ChildPiece.

But you asked for an ugly List of Lists, so that is what I did.

The Serial class is just a normal one:

Class Mapping.Address2 Extends %SerialObject [ ClassType = serial, ProcedureBlock ]
{

Property RoadName As %String(TRUNCATE = 1);

Property SuburbanName As %String(TRUNCATE = 1);

Property StreetNumber As %String(TRUNCATE = 1);

Storage Default
{
<Data name="Address2State">
<Value name="1">
<Value>RoadName</Value>
</Value>
<Value name="2">
<Value>SuburbanName</Value>
</Value>
<Value name="3">
<Value>StreetNumber</Value>
</Value>
</Data>
}

}


Then for the Person class I needed to write Retrieval code that would get the different addresses out of the global and put them into the nested $LIST() that the class is expecting.

Class Mapping.Person2 Extends %Persistent [ StorageStrategy = NewStorage1 ]
{

Property Name As %String;

Property Addresses As list Of Mapping.Address2;

Property Sub1 As %Integer;

Index Master On Sub1 [ IdKey ];

Storage NewStorage1
{
<SQLMap name="Map1">
<Data name="Addresses">
<RetrievalCode>
      set string=$P(^Person({L1}),"|",2)
      for i=1:1:$L(string,"^") {
           set address=$P(string,"^",i)
           set $LIST({*},i)=$LISTFROMSTRING(address,"~")
      }
</RetrievalCode>
</Data>
<Data name="Name">
<Delimiter>"|"</Delimiter>
<Piece>1</Piece>
</Data>
<Global>^Person</Global>
<Subscript name="1">
</Subscript>
</SQLMap>
}

}
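The key piece there is $LISTFROMSTRING, which turns each "~"-delimited address into the $LIST the serial class expects.  A quick terminal illustration with made-up data (ordered RoadName, SuburbanName, StreetNumber to match the serial class):

      set address=$LISTFROMSTRING("Main Rd~Springfield~12","~")
      write $LIST(address,1)     ; Main Rd
      write $LISTLENGTH(address) ; 3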


This looks like it is just a bug.  I reproduced this in the latest InterSystems IRIS.  Looking at the generated code, we are able to resolve the index global in other places, but not in this one case.

I will report the bug to development to get it fixed.

My question is what do you plan to use this index for?

A couple of things this could be:

The storage def is using SQL field names, so {SQLUser.Parent.IdAProp} might fix the error.

Do you have an IDKey index defined in User.Parent on both IdAProp and IdBProp?
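For comparison, the kind of definition I mean would look something like this (a sketch only, guessing at the datatypes):

Class User.Parent Extends %Persistent
{

Property IdAProp As %Integer;

Property IdBProp As %Integer;

Index PKIdx On (IdAProp, IdBProp) [ IdKey ];

}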

OK, so I would try SELECT PROE,ID... and SELECT DESP,ID... to figure out which row is causing the problem.

I would then look at how that %String is defined, and then look at the data returned in the portal or shell to make sure the data matches how the property is defined.

I have seen problems with some client-side tools where they want to treat something we think is a simple VARCHAR as a CLOB.  If your %String has a large MAXLEN then maybe this is what is going wrong.  To get to the bottom of that we would need to get a CacheODBC.log file (or IRISODBC.log) and dig through the HEX dump.

What are the datatypes of the Properties in the SELECT list?  This error is coming from VB and is most likely caused by the data not matching the datatype definition of the Property.  Doing a quick Google search for this error, the first three hits were related to Streams.

When you run a query in the portal we do not do any type validation, so if you have a MAXLEN set on a %String we are not going to check it; we assume it was validated on data input.  External tools might be strictly typed and can fall over if a string is longer than we said it was.

Limiting the fields in the SELECT list to figure out which data type is causing the problem is step 1.  Step 2 would be limiting the rows to figure out which one produces the error, and then looking at that data to see why it might be causing the problem.

If you load a new version of your classes, or compile your classes on the system, that will also purge Cached Queries, so you would want to run the method again afterwards.

If you want to be sure your customer will get the same generated code you tested in house you could freeze all the plans after running GenerateEmbedded():  Do $SYSTEM.SQL.FreezePlans(1,1,,.Errors)

To generate the Cached queries for your testing and after installing your product on a customer system you can call:  d $SYSTEM.OBJ.CompilePackage("%sqlcq")

These 3 steps should get you back the behavior you liked about Embedded SQL.
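Put together in the terminal, the sequence would be along these lines (step 1 is your own GenerateEmbedded() method):

      ; 1. on the build system, run GenerateEmbedded() to generate and test the cached queries
      ; 2. freeze the plans you just tested
      do $SYSTEM.SQL.FreezePlans(1,1,,.Errors)
      ; 3. after installing on the customer system, rebuild the cached query code
      do $SYSTEM.OBJ.CompilePackage("%sqlcq")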