One SQL request per property, or only one "big" SQL request?

How are you planning on merging distinct requests? One case I've seen where you need something like this is when you want to get a valid subset of filter values (which would return 1+ rows) for each subsequent filter, while in the end all filters are still combined into one query.

You need to decide on:

  • which predicates are allowed (=, %STARTSWITH, <, >, LIKE, IN, IS NULL, BETWEEN)
  • which logical operators are allowed (OR, AND, NOT)
  • whether nesting and brackets are allowed
  • whether it's free-form (auto-constructed from metadata) or fixed (built from a preset list of properties)

After that, you construct the filter object on the client, then validate and parse it on the server and execute the query.
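A minimal sketch of that server-side part, assuming a hypothetical Sample.Person table, a whitelist of two properties, and a filter passed as a %DynamicArray of {prop, pred, value} conditions (all names here are illustrative):

ClassMethod RunFilter(filter As %DynamicArray) As %SQL.StatementResult
{
    // Hypothetical whitelists: reject anything not pre-approved
    set allowedProps = $listBuild("Name", "Age")
    set allowedPreds = $listBuild("=", "<", ">", "%STARTSWITH")

    set where = "", args = 0
    set iterator = filter.%GetIterator()
    while iterator.%GetNext(.key, .condition) {
        if '$listFind(allowedProps, condition.prop) continue
        if '$listFind(allowedPreds, condition.pred) continue
        set:(where'="") where = where _ " AND "
        // Property and predicate come from the whitelists; the value
        // is always passed as a ? parameter, never concatenated
        set where = where _ condition.prop _ " " _ condition.pred _ " ?"
        set args = args + 1, args(args) = condition.value
    }

    set query = "SELECT ID, Name, Age FROM Sample.Person"
    set:(where'="") query = query _ " WHERE " _ where
    set statement = ##class(%SQL.Statement).%New()
    quit:$$$ISERR(statement.%Prepare(query)) ""
    quit statement.%Execute(args...)
}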

If you're not satisfied with performance, the server-side part can be tweaked without modifying the client.

What's the disadvantage of using a lot of indexes?

INSERTs and UPDATEs are slower, and more space is needed.
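Concretely, in a hypothetical class like the one below, each index is an extra structure that every INSERT/UPDATE must maintain in addition to the data itself:

/// Besides the data global, each index below is an extra global
/// that every INSERT/UPDATE must also write to (slower writes)
/// and that takes up additional disk space.
Class Sample.Person Extends %Persistent
{

Property Name As %String;

Property Age As %Integer;

Index NameIdx On Name;

Index AgeIdx On Age;
}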

Great project, @Henrique.GonçalvesDias!

Sent you a small PR.

I have three questions:

  1. Why CSP instead of REST? It doesn't seem like too high-load an app, being essentially an admin tool, so I just wonder why you decided to go with CSP.
  2. Let's move filtering to the server. Currently, only the last 200 messages can be retrieved.
  3. Are there any plans to add a visual trace? Our default visual trace is great, but when a single session exceeds 300-500 thousand messages it's not as responsive, so I've been searching for an enhanced tool for viewing sessions.

Ensemble uses the $$$GetLocalizedName macro to get localized setting names.

You can add a caption directly:

set ^CacheMsg("EnsColumns","en-us","HelloWorld")="Hello World"

Where HelloWorld is the property name, the value is the caption, and en-us is the session language.

It should work like this:

ClassMethod Add()
{
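    // "@HelloWorld@Hello World": HelloWorld is the message id, the rest is the default caption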
    quit $$$Text("@HelloWorld@Hello World", "EnsColumns", "en-us")
}

but for me it doesn't set the global. Not sure why.

As other commenters have stated, you can use Python Gateway to drive a Python process; in that case, check this article, specifically the Data Transfer section. Also join our MLToolkit@intersystems.com user group - it's focused on AI/ML scenarios.

I'm a user of the Python Native API, so I can offer the following advice. There are two cases here:

  • You need to iterate over the result set - in that case you may want to call %Next() and retrieve values as you go.
  • You just need the complete dataset for downstream processing (your case). In that case, yes, serialization is required. I see two ways to do that: JSON and CSV. If the dataset is small you can use JSON, no problem, but (I haven't run any tests, though I'm fairly sure) CSV serialization can be faster and less CPU/memory intensive. See the sketch below.
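For the JSON route, a minimal sketch (assuming a hypothetical utility class; the query, class, and column names are illustrative) is a classmethod that runs the query server-side and returns a single JSON string:

ClassMethod QueryToJSON(query As %String) As %String
{
    // Run the query via dynamic SQL
    set statement = ##class(%SQL.Statement).%New()
    quit:$$$ISERR(statement.%Prepare(query)) ""
    set resultSet = statement.%Execute()

    // Collect the rows into a %DynamicArray and serialize once at the end
    set rows = []
    while resultSet.%Next() {
        // Two hardcoded columns for illustration; real code would read
        // the column names from the statement metadata
        do rows.%Push({"Name": (resultSet.%Get("Name")), "Age": (resultSet.%Get("Age"))})
    }
    quit rows.%ToJSON()
}

From Python you'd then get the whole dataset in one call, e.g. iris.classMethodValue("My.Util", "QueryToJSON", "SELECT Name, Age FROM Sample.Person"). For large datasets the same approach with CSV, written row by row to a stream, would avoid building the whole structure in memory first.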