The managed provider uses TCP/IP.

The confusion probably stems from XEP (eXtreme Event Persistence).

XEP for .NET still has an in-memory and a TCP/IP connection mode. As Bill indicated, we are moving away from the in-memory mode as we improve the throughput of the TCP/IP mode. We have already deprecated the in-memory connection mode for XEP for Java, where the performance of the two modes is already pretty much the same. The in-memory connection mode has two issues: a) crashes on the client can interfere with the server, and b) it does not allow the client to run on a different machine, which is a problem for scaling out.

What I have done in previous projects is to link the correlated records, e.g.:

ID   CitizenRef   RelocationDate   City      Surname   NextLocationId
...
48   1000         2015-04-01       Boston    Smith     49
49   1000         2015-07-01       Seattle   Smith     50
50   1000         2015-10-01       Boston    Smith     51
51   1000         2016-01-15       NewYork   Smith     NULL

You can quickly find current locations by filtering on "NextLocationId IS NULL". This is not very different from the other comments so far, but this approach is more flexible.

If you ask the question "Who lived in Boston and relocated to Seattle?", you can check for the case (pseudo-filter):

City="Boston" AND NextLocationId.City="Seattle"

You can even ask more complex questions like "Who relocated to Boston and stayed longer than a year after living in NewYork?"
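Expanding the pseudo-filters into actual SQL, here is a minimal sketch of both questions. The table name Relocation and the self-joins are illustrative, not taken from a real schema:

-- Who lived in Boston and relocated to Seattle?
SELECT cur.CitizenRef, cur.Surname
  FROM Relocation cur
  JOIN Relocation nxt ON nxt.ID = cur.NextLocationId
 WHERE cur.City = 'Boston' AND nxt.City = 'Seattle'

-- Who relocated to Boston after living in NewYork and stayed longer than a year?
SELECT prv.CitizenRef, prv.Surname
  FROM Relocation prv
  JOIN Relocation cur ON cur.ID = prv.NextLocationId
  LEFT JOIN Relocation nxt ON nxt.ID = cur.NextLocationId
 WHERE prv.City = 'NewYork'
   AND cur.City = 'Boston'
   AND DATEDIFF('dd', cur.RelocationDate, COALESCE(nxt.RelocationDate, CURRENT_DATE)) > 365

The LEFT JOIN in the second query covers the case where Boston is still the current location (NextLocationId IS NULL), by falling back to today's date for the duration check.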

Hi Michael,

Did you check the trace for errors? A first guess is that the process throws a <STORE> error when it tries to validate the response. Increasing the maximum per-process memory might help. Here is the related documentation:

http://docs.intersystems.com/enscomm20152/csp/docbook/DocBook.UI.Page.cls?KEY=GSA_config#GSA_config_system_startup
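As a quick check, you can look at (and, for a single process, adjust) this limit from a terminal; $ZSTORAGE holds the per-process memory limit in KB:

 write $zstorage           ; current per-process memory limit in KB
 set $zstorage = 524288    ; raise the limit for this process only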

Stefan

Personally, I would not go for 4) or 1).

4) allows invalid data to be entered, whereas a class definition helps you to query users based on their data later on. This approach just limits your options in the future.

1) I don't like this approach just because it does not separate the concerns and it requires privileges.

2) and 3) are both good approaches. 2) includes a sanity check because of the reference, but you have to be aware of it if you want to move user data between servers while the users are not in sync.

I guess you are asking for ways to persist user data, but just in case: if you are building a CSP/Zen/Zen Mojo application, you can store session data for a user in %session.Data. As soon as a transaction is complete and you want to store it, you can go with approaches 2) and 3) and persist the relevant data in your own class structure.
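As a quick illustration of the session approach (the subscript names are just examples):

 // store transient data for the current CSP session
 set %session.Data("cart","items") = 3

 // read it back during a later request of the same session
 write %session.Data("cart","items")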

Hi Tim,

$compose works with deep structures, but it replaces the values of key-value pairs at the top level; it does not merge sub-arrays or sub-objects by default. Consider the following code:

SAMPLES>set o1 = {"a":1,"c":2,"deep":{"f":1,"g":1}}
 
SAMPLES>set o2 = {"b":1,"c":1,"deep":{"f":3,"j":1}}
 
SAMPLES>set o3 = o2.$compose({"returnValue":o1})
 
SAMPLES>w o3.$toJSON()
{"a":1,"c":1,"deep":{"f":3,"j":1},"b":1

You can easily spot that the path "deep" in o3 has the value from o2 and completely misses "deep.g" from o1: we just copied the path "deep" from o2 into o1. We do provide a way to override this behavior if you want to, but in a future release, we plan to ease this by introducing compose maps. I will cover this topic in more detail in an upcoming post.

I have double-checked this topic with our documentation group. The documentation talks about the concept of system methods in general here:

/csp/docbook/DocBook.UI.Page.cls?KEY=GJSON_intro#GJSON_intro_dao

We do not formally document how to define system methods, as this is reserved for InterSystems. See also the following quote from the documentation link above:

Note that there is no supported mechanism for customers to create system methods.

Nevertheless, we have to introduce the concept so that you are aware of what problems system methods solve and how you can call them (using the dot-dollar syntax).
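For illustration, here is how a few of them are called with the dot-dollar syntax (a terminal session using the current field test syntax, where the method names still carry the $ prefix):

SAMPLES>set array = ["a","b","c"]

SAMPLES>write array.$size()
3

SAMPLES>do array.$push("d")

SAMPLES>write array.$toJSON()
["a","b","c","d"]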

Thanks,

Stefan

In my personal opinion, this is already a case where you want to switch from a routine to a class design. This way it is so much easier to keep track of the scope of your variables by defining them as properties of your class.

Working with publiclist and %-variables in routines can introduce a couple of side effects, especially if they are not documented properly.
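A minimal sketch of the class-based approach (the class and property names are hypothetical):

Class Demo.TaskState Extends %RegisteredObject
{

/// State that would otherwise live in a public or %-variable
Property Counter As %Integer [ InitialExpression = 0 ];

Method Increment() As %Integer
{
    set ..Counter = ..Counter + 1
    quit ..Counter
}

}

Every instance carries its own Counter, so there is no shared mutable state to document or to accidentally clobber from another routine.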

This is a new feature, introduced in Caché 2016.2. I am currently investigating whether this is a limitation or a bug.

Update: %Object instances can be mapped back to registered objects, but we are missing a code path for composing a %Array instance into a registered object. The investigation is still ongoing, but we are working on it, and this code path will be supported in Caché 2016.2.

I will update this comment as soon as the functionality is available in a field test build.

Thank you very much for spotting and reporting this!

The current architecture of Caché is process-based, and I don't see an efficient way to achieve what you describe. If the set is large enough, it may pay off to start up a pool of processes that can do the post-processing. It does depend on the desired output: another in-memory structure or persisted data.

In any case, you want to make sure that you have a pool of processes that work on your set; you don't want to spin them up on demand, as this will slow you down.

The work queue manager may be helpful for managing this:

https://community.intersystems.com/post/using-work-queue-manager-process-data-multiple-cores
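A minimal sketch with %SYSTEM.WorkMgr, assuming a hypothetical worker class Demo.Worker whose ProcessChunk class method processes one slice of the set and returns a %Status:

 // create a worker queue (the system picks a suitable number of processes)
 set queue = $system.WorkMgr.Initialize(, .sc)

 // queue one work item per chunk of the set
 for start = 1:1000:10000 {
     set sc = queue.Queue("##class(Demo.Worker).ProcessChunk", start, start + 999)
 }

 // block until all work items have completed; check sc for errors
 set sc = queue.WaitForComplete()

Because the pool is created once and the chunks are queued up front, the worker processes stay busy instead of being spun up per item.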