The Jupyter support is very exciting, adding a neat and highly appropriate mechanism for exposing IRIS-side concepts to a typical Python environment such as Jupyter. This release introduces a first taste of such an interaction, and we're very interested in learning from your experiences and ideas on making this even more effective at adding process control to your Python work.

InterSystems IRIS (and Caché before it) will indeed make this decision for you. The SQL optimizer analyzes all the conditions in your query and selects the best query plan based on the available table statistics, which include column selectivity. See also this article on collecting those stats with the TUNE TABLE command.
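For example, running the following from the SQL shell (against a hypothetical Sample.Person table) gathers fresh statistics, including selectivity, for the optimizer to work with:

TUNE TABLE Sample.Person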

As a matter of fact, our development team is making some exciting enhancements to the cost functions used to turn those table statistics into the actual cost estimates for the possible query plans. More about that at our upcoming Global Summit!

IRIS NLP, previously known as iKnow, is an embedded technology, meaning it's available in the form of APIs. These articles on building a domain and using the knowledge portal should be a helpful start, as is this series of step-by-step videos (which are a little older, I'll admit; start with the "fundamentals" one) and, of course, other articles on the Developer Community tagged for iKnow.
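To give a flavour of what those APIs look like, here's a minimal sketch that indexes a single string into a domain. I'm quoting the signature from memory, so treat it as an assumption and check the class reference for %SYSTEM.iKnow:

// index a string into the "MyDomain" domain under external ID "doc1"
do $system.iKnow.IndexString("MyDomain", "doc1", "This is a sample sentence about IRIS NLP.")

From there, the query APIs in the %iKnow.Queries package (for example EntityAPI) let you retrieve the concepts and relations that were identified.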

Hi Guillaume,

iFind indices, like bitmap indices before them, require a bitmap-friendly ID key (a positive integer). When you make a table the child in a parent-child relationship, the underlying storage structure uses a composite key, which no longer satisfies that bitmap-friendliness requirement. We do plan to lift this limitation in a future release, as has already happened for bitmap indices, but for now you'll have to review your schema and see if a one-to-many relationship or (preferably) a foreign key would work for you.
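For example, with a foreign key the child table keeps its own plain integer ID key, so an iFind index is allowed again. Class and property names below are just illustrative:

Class Demo.Review Extends %Persistent
{
Property Product As Demo.Product;
ForeignKey ProductFK(Product) References Demo.Product();
Property FullText As %String(MAXLEN = "");
/// allowed here, as the table keeps its own positive integer ID key
Index FullTextIdx On (FullText) As %iFind.Index.Basic;
}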

Thanks,
benjamin

Good question. Given our long history, we have a fair number of utility methods whose signatures grew organically and for which backwards-compatibility goals have prevented us from doing significant cleanup. We're now in the process of reviewing the SQL utility methods. For the ones that really need that many parameters, we're considering the use of an options string (aka compile flags), as is used by some of our more modern infrastructure such as the work queue manager, and/or simply offering separate, specific methods rather than super-generic 10-argument ones. While we're at it, I'm eager to read suggestions or feedback on how others developing utility functions are dealing with this.

Depends a bit on what you want. If you want to use them in the WHERE clause, you can leave the "list of" structure as-is and use our FOR SOME %ELEMENT syntax, as shown below. In your case, retrieving them in the SELECT list, changing the projection or your data model is probably the pragmatic choice. Note that collections are a really powerful feature when working in the Object paradigm, but somewhat constrained by standard SQL operations when accessing them through the relational paradigm.
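As a minimal sketch, assuming a table Demo.Person with a property Colors defined as list Of %String, the WHERE-clause version looks like this:

-- returns every person whose Colors list contains the element 'red'
SELECT Name FROM Demo.Person
WHERE FOR SOME %ELEMENT(Colors) (%VALUE = 'red')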

Horita-san,

I'm not sure whether you mean the projection (table) itself is missing or the row you created through the API isn't showing up. This works fine for me, but in order to combine the use of APIs with a domain definition, you have to set the allowCustomUpdates flag to true (it is off by default). See also the notes in this article on the dictionary builder demo.
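In a domain definition class, that flag is an attribute of the <domain> element, something like the following sketch (attribute placement from memory; see the domain definition articles for the full syntax):

Class Demo.MyDomain Extends %iKnow.DomainDefinition
{
XData Domain [ XMLNamespace = "http://www.intersystems.com/iknow" ]
{
<domain name="MyDomain" allowCustomUpdates="true">
<!-- data locations, metadata, etc. -->
</domain>
}
}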

When it is set to false, API methods like CreateDictionary() will return an error (passed by reference), with a returned ID below zero to indicate the failure.
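So a defensive call looks roughly like this, assuming domainId holds the ID of an existing domain (a sketch; check the class reference of %iKnow.Matching.DictionaryAPI for the exact argument list):

set dictId = ##class(%iKnow.Matching.DictionaryAPI).CreateDictionary(domainId, "MyDict", "demo dictionary", "en", .sc)
if (dictId < 0) { do $system.Status.DisplayError(sc) }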

Hope this helps,
benjamin

Thanks for all your input thus far, which is proving a very helpful source of inspiration for our planning process. Feel free to participate if you haven't done so already, or to share it with your colleagues, as we're still monitoring new responses. Also, don't hesitate to share your thoughts directly on this thread. Positive feedback is great, but critical feedback is often even more helpful for us :-)

There is a simple regression calculator, %DeepSee.extensions.utils.SimpleRegression, that is used internally for similar trend-line work, IIRC. The class reference is not spectacularly elaborate, but it's fairly straightforward to use: first call the add() function to load up (x,y) points, then the result() function will calculate a simple trend line and populate the Slope and Intercept properties:

USER>s stat = ##class(%DeepSee.extensions.utils.SimpleRegression).%New()

USER>w stat.add(0,1)
1
USER>w stat.add(1,2)
1
USER>w stat.result(.b,.y0,.r)
1
USER>zw b,y0,r
b=1
y0=1
r=1
USER>w stat.Slope
1

You can keep adding data and recalculate:

USER>w stat.add(1,1)
1
USER>w stat.result(.b,.y0,.r)
1
USER>zw b,y0,r
b=.5
y0=1
r=.5
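
In these calls, b is the slope, y0 the intercept (so the trend line is y = b*x + y0), and r the correlation coefficient, which is why b and r drop to 0.5 once the third point makes the data noisier.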