I don't know whether this is documented, but you can use array accessors in the property expression. To create a property for the productID of the first entry in the OrderContent array, you should be able to use an expression like this in a %CreateProperty() call: $.OrderContent[0].productID.

Unfortunately, this doesn't address the general case of searching an array. I don't think you can create a collection property in DocDB, nor do the %FindDocuments() operators seem to be of much use. You might try poking around in your generated class to see if you can use an existing property as a template for creating your own computed property that aggregates the productID. If that works, you may still find the %FindDocuments() operators to be inadequate, but the property would then be accessible to SQL queries.

I took the spec from your reply to Dmitriy's question, changed "query" to "path", smooshed it into one line, then pasted it into a terminal:

USER>s swagger={...}

USER>s status=##class(%REST.API).CreateApplication("neerav",swagger)

USER>zw status
... /* ERROR #8722: Path parameter country is not used in url /Demo of route with GETDemo in class: neerav.spec

I then changed "/Demo" to "/Demo/{country}":

USER>s swagger={ "swagger":"2.0","info":{ "title":"O","version":"1" },"schemes":["http","https"],"consumes":["application/json"],"produces":["application/json"],"paths":{ "/Demo/{country}":{ "get":{ "summary":"Get Demo ","description":"Demo ","operationId":"GETDemo","x-ISC_CORS":true,"consumes":["application/json","text/html"],"produces":["application/json","text/html"],"parameters":[{ "name":"country","type":"string","in":"path","description":"Country Name","allowEmptyValue": true } ],"responses":{ "200":{ "description":"Returns Country","schema": { "type":"string" } } } } } } }

USER>s status=##class(%REST.API).CreateApplication("neerav",swagger)

USER>zw status
status=1

I'm curious to know in what version this previously worked. I get the same error in 2019.1.2.

I wasn't sure whether the Caché ODBC driver works with IRIS. Even if it does now, it's not guaranteed to do so in the future. I did find that the C++ binding SDK appears to include everything you need to generate proxy classes. You just need to call Generator::gen_classes() in generator.cpp.

Again, everything I've described is an unsupported hack. If a C++ binding application is holding up an upgrade from Caché to IRIS, let your Support or Sales Engineering rep know.

You are correct that callin is not a replacement for the C++ binding. Callin is a lower-level API that only works for a local instance. The C++ binding has two flavors: the light C++ binding (LCB), which depends on callin, and the full C++ binding, which depends on the ODBC driver.

It's not clear to me from your questions how you're currently using C++. Do you actually have an existing C++ binding application, or are you writing something new with a requirement to use C++? I suggest that you get in touch with your friendly InterSystems representative in Support or Sales Engineering to help you find the best path forward.

That said, I was curious about whether the C++ binding from Caché can be used with InterSystems IRIS. Although I've never used this API on purpose, I was able to get it working after a fashion with a few modifications. I presume this is not supported by InterSystems, and I would hesitate to use it in production. The following overview describes my experiment using the C++ binding SDK from a Caché 2018.1.4 kit for lnxrhx64 with the Community edition of IRIS 2020.3.0 in a container. Apologies for any copy-paste mistakes. I started this on a lark, and didn't take very good notes.

I installed g++, make, and vim in the container:

$ docker exec -it --user root cppbind bash
root:/home/irisowner# apt update && apt install -y g++ make vim
...

I extracted cppsdk-lnxrhx64.tar.gz, and found that the C++ binding uses dlsym() and dlopen() to call entry points in the ODBC driver library. (Search for the init_pfn macro in odbc_common.cpp.) These are still present in the IRIS library, as you can see with the nm command:

$ docker exec -it cppbind bash
irisowner:~$ nm /usr/irissys/bin/libirisodbciw35.so |grep cpp_
000000000005cf90 T cpp_binding_alloc_messenger
000000000005dd40 T cpp_binding_clear_odbc_cache
...
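If the dlopen()/dlsym() pattern is unfamiliar, it's easy to demonstrate from Python, whose ctypes module wraps those same calls on Linux. This sketch resolves strlen() from the already-loaded C library; nothing here is specific to the ODBC driver, it just shows runtime symbol lookup by name:

```python
import ctypes

# Passing None to CDLL is the ctypes equivalent of dlopen(NULL, ...):
# it returns a handle to the main program and its loaded libraries.
libc = ctypes.CDLL(None)

# Equivalent of dlsym(handle, "strlen"): resolve the symbol by name
# at runtime, then declare its signature before calling it.
strlen = libc.strlen
strlen.argtypes = [ctypes.c_char_p]
strlen.restype = ctypes.c_size_t

print(strlen(b"libirisodbciw35"))  # prints 15
```

This is the same trick the init_pfn macro plays: nothing is bound at link time, so swapping the library name is enough as long as the symbols still exist.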

I changed the ODBCLIBNAME symbol in libcppbind.mak from the libcacheodbciw Caché driver library to the libirisodbciw35 IRIS driver library.

I changed the Caché "CacheUni" symbols to the IRIS "ServerUni" symbols. For those three symbols, instead of

*((void**)(&(func))) = dlsym(dll_handle, #func);

you want something like this, where name is the new name of the symbol in the IRIS driver:

*((void**)(&(func))) = dlsym(dll_handle, #name);

I did the same for the ODBC 2.x SQLColAttributesW() function, which has been replaced by the ODBC 3.x SQLColAttributeW() function.

I fixed an apparent bug in obj_types.h by marking get_cs_handle() const and cs_handle mutable. I didn't spend much time investigating, so I'm not certain that this is the best fix.

I then built the C++ binding:

irisowner:~/dev/cpp/src/modules/cpp/lnxrhx64$ make libcppbind.so

It appears that the cpp_generator utility is only shipped as a binary, which obviously doesn't have any of these changes, so it won't work with an IRIS driver library. However, I was able to do a simple test using Dyn_obj:

#include <iostream>
#include "cppbind.h"

using namespace InterSystems;

int
main()
{
    auto conn = tcp_conn::connect(
        "localhost[1972]:USER", "_system", "SYS");
    Database db{conn};
    d_string res;
    Dyn_obj::run_class_method(&db, L"%SYSTEM.Util", L"CreateGUID",
        nullptr, 0, &res);
    std::cout << res.value() << std::endl;

    return 0;
}

I compiled and ran the test program like so:

$ docker exec -it cppbind bash
irisowner:~$ g++ -o cppbindtest cppbindtest.cpp -I dev/cpp/include \
    -L dev/cpp/src/kit/lnxrhx64/release/bin -lcppbind
irisowner:~$ export LD_LIBRARY_PATH=/home/irisowner/dev/cpp/src/kit/lnxrhx64/release/bin:/usr/irissys/bin
irisowner:~$ ./cppbindtest 
7325F26A-21E2-11EB-BB36-0242AC110002

I don't know what the TRACE macro does, but I would add that you have a few ways to visualize the contents of a dynamic object:

USER>read json
{"foo":"bar"}
USER>s object={}.%FromJSON(json)

USER>do object.%ToJSON()
{"foo":"bar"}
USER>write object.%ToJSON()     
{"foo":"bar"}
USER>zwrite object
object={"foo":"bar"}  ; <DYNAMIC OBJECT>

If object.%Get("forename") or object.forename returns "", you can try object.%IsDefined("forename") to see whether that property is in fact defined.

I would not say that try-catch is slow. If you reach the end of a try block, it simply branches over the catch block. This is a very cheap operation. In the nominal case (i.e., no error or exception), the cost of a try block is probably less than setting a %Status variable, and it would take several try blocks to match the cost of setting $zt once.

The catch block does almost all of the work. It instantiates the exception object, unwinds the stack, etc. I don't know offhand how catch compares to $zt, but the performance of the exceptional cases is usually not as important.
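The same cost profile is easy to observe in other languages. As a rough cross-language analogy (this measures CPython, not ObjectScript, so only the shape of the result carries over), a try block that doesn't raise adds almost nothing, while raising and catching pays for exception construction and stack unwinding:

```python
import timeit

def plain():
    # Baseline: no error handling at all.
    return 1 + 1

def guarded():
    # Nominal case: the try block is entered but never raises.
    try:
        return 1 + 1
    except ValueError:
        return None

def raising():
    # Exceptional case: the catch (except) block does the real work.
    try:
        raise ValueError("boom")
    except ValueError:
        return None

n = 200_000
t_plain = timeit.timeit(plain, number=n)
t_guarded = timeit.timeit(guarded, number=n)
t_raising = timeit.timeit(raising, number=n)

print(f"plain={t_plain:.3f}s guarded={t_guarded:.3f}s raising={t_raising:.3f}s")
```

On a typical run, guarded lands close to plain, while raising is several times slower, which matches the "branch over the catch block" description above.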

As I recall, some of the conventions of the object library date back to Caché 2.0 in the late nineties, whereas try-catch wasn't introduced until Caché 4.0.

One arguable advantage of %Status-based interfaces is that they work well with the noisy %UnitTest framework. If you $$$AssertStatusOK on a series of method calls, the test log reads like a transcript of your activity. You can often understand the failure without even needing to read the test program.

Also, try-catch is kind of annoying to use without finally, which was never implemented.

You're getting a "Datatype validation" error, which suggests that the XML parsed fine, but that the value "/shared/BENANDERSON" is not valid for the PArray property of the ExecuteResult class.

If PArray is an array, I don't remember how that's typically projected to XML, and I can't find an example in the documentation. Lists are projected as a container element with the contents nested as elements within. Are you reading in something that was written using an %XML.Writer, or are you designing a class to read an XML file from somewhere else?

It might help to see more context in the XML, and the relevant definitions from the ExecuteResult class.

FIPS 180-4 describes SHA-512 et al., FIPS 198-1 describes HMAC, and PKCS #5 describes PBKDF2, which depends on HMAC-SHA. As for NIST, special publication 800-132 (now ten years old) states: "This Recommendation approves PBKDF2 as the PBKDF using HMAC with any approved hash function as the PRF." For more recent guidance, consider special publication 800-63B.

As I understand it, none of the weaknesses in SHA affect HMAC or PBKDF2. However, if SHA-1 is no longer FIPS approved, the NIST guidance would indicate replacing it with, say, SHA-2 or SHA-3.

In terms of strength, PBKDF2 essentially has two parameters: the hash function and the iteration count. For the hash function, bigger is usually slower, and therefore stronger. For the iteration count, PKCS #5 and NIST 800-132 both suggest a minimum of 1,000. NIST 800-63B states: "the iteration count SHOULD be as large as verification server performance will allow, typically at least 10,000 iterations."
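Python's standard library exposes exactly these two knobs, which makes for a quick illustration (the password, salt, and counts here are illustrative, not a recommendation):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)  # NIST 800-132 calls for at least 128 bits of salt

# PBKDF2-HMAC parameterized by hash function and iteration count.
weak   = hashlib.pbkdf2_hmac("sha1",   password, salt, 1_000)
strong = hashlib.pbkdf2_hmac("sha512", password, salt, 100_000)

# Derived key length defaults to the hash's digest size: 20 vs. 64 bytes.
print(len(weak), len(strong))  # prints: 20 64
```

Raising either knob slows the derivation for attacker and verifier alike, which is the whole point of a password-based KDF.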

One thing I've done to split machine learning datasets is to use an auxiliary table that maps IDs to a random number. I write a stored procedure that returns a random number. I create a table with two columns, one for the ID of the source table, and one to hold a random number. I populate the column for the source IDs:

insert into random_table (source_id)
select id from source_table

I then populate the column for the random number:

update random_table
set random_number = MySchema.MySP_Random(1E9)

Then I can select with an ORDER BY clause on the random number:

select top 10 source_id
from random_table
order by random_number, source_id

It depends on your use case whether this will be appropriate for a source table with millions of rows. It's an expensive way to select just one row.
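For a self-contained sketch of the same idea, here it is against SQLite from Python. The table and column names follow the SQL above; SQLite's built-in random() stands in for the MySP_Random stored procedure, and LIMIT replaces TOP:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table source_table (id integer primary key)")
con.executemany("insert into source_table (id) values (?)",
                [(i,) for i in range(1, 101)])

# Auxiliary table mapping source IDs to a random number.
con.execute("""create table random_table
               (source_id integer, random_number integer)""")
con.execute("insert into random_table (source_id) select id from source_table")
con.execute("update random_table set random_number = abs(random()) % 1000000000")

# Take a 10-row random sample by ordering on the random column;
# source_id breaks ties deterministically.
sample = [row[0] for row in con.execute(
    """select source_id from random_table
       order by random_number, source_id limit 10""")]
print(sample)
```

Because the random numbers persist in the auxiliary table, repeating the SELECT yields the same split until you rerun the UPDATE, which is handy for reproducible train/test partitions.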

I agree that the most likely path to a Rust binding is to wrap a C or C++ API. If you're content with a local client, callin and/or callout is the place to start. As you said, it shouldn't be too hard to write a callout library in Rust. Callin (and calling back from a callout library), on the other hand, is a bit more involved, requiring a lot of unsafe code.

If you want a remote client, you could look at wrapping the C or C++ binding, but that's a dead end that is not supported in IRIS. You might also look into relational access or an ORM. Diesel looks promising, but I don't know whether it (or Rust in general) works well with ODBC.