That REST API is indeed for querying iFind indices (hence the direct reference to an index you can provide) and the somewhat confusingly named "query" argument is actually to pass in the iFind search string. The API will then build a full SQL query for you and run it right away. 

Here's the OpenAPI spec for this endpoint (from the self-documentation endpoint /api/iKnow/v1/USER/swagger):

  /table/{table}/search:
    post:
      operationId: /table/{table}/search-POST
      summary: |
        Search the given iFind index in the given table
      tags: ["iFind"]
      parameters:
        - $ref: '#/parameters/tableParam'
        - name: RequestBody
          description: JSON object with a list of query-specific arguments
          in: body
          schema:
            type: object
            properties:
              query:
                description: The search string to query against the iFind index. This is the only required parameter and it has no default value.
                type: string
              index:
                description: The iFind index to search against. If not specified, the first iFind index found on the table is used.
                type: string
              option:
                $ref: '#/definitions/OptionSpec'
              distance:
                description: Only valid when option selects fuzzy search (option 3).
                type: string
                example: "3"
              language:
                description: iKnow-supported language model to apply, for example "en"
                type: string
              includeText:
                description: Whether the returned columns should include the column being indexed by 'index'.
                type: integer
                default: 0
                enum: [0, 1] 
              columns:
                description: Additional columns to return in the results. For example, ["column1","column2"]
                type: array
                example: []
                items:
                  type: string
              highlightSpec:
                $ref: '#/definitions/HighlightSpec'
                description: The parameters needed for highlighting.
              rankSpec:
                $ref: '#/definitions/RankSpec'
                description: The parameters needed for ranking.
              where:
                description: A valid SQL condition to append to the generated query's WHERE clause. For example, "column1 = ? AND column2 = ?"
                type: string
      responses:
        200:
          description: Successful response
          schema:
            type: object
            properties:
              rows:
                type: array
                default: []
                items:
                  type: object
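
For example, a minimal call to this endpoint could look like the sketch below (Python with the requests package; the host, port, credentials, MyApp.Document table, and Title column are placeholders I made up, not details from the original question):

  # Minimal sketch of calling the iFind search endpoint; adjust host, port,
  # namespace, credentials, and table name to your environment.
  import requests

  url = "http://localhost:52773/api/iKnow/v1/USER/table/MyApp.Document/search"
  payload = {
      "query": "interoperability",   # the iFind search string (only required argument)
      "includeText": 1,              # also return the indexed column itself
      "columns": ["Title"],          # hypothetical extra column to return
  }

  response = requests.post(url, json=payload, auth=("_SYSTEM", "SYS"))
  response.raise_for_status()
  for row in response.json().get("rows", []):
      print(row)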

Love the article! Very well-phrased considerations on the use of AI, almost all of which I share. Especially in the context of #1, we should not forget that the second L in LLM is for Language, and not for Fact or Solution (otherwise it would be a really bad acronym!). Therefore, if we're not qualified to spot what hallucinations crept into the response, its nicely-phrased language will probably make sure we never will.

PS: so glad you passed that Stats test and joined InterSystems :-) 

I believe what you're looking at is the new, more fine-grained set of %Native_* resources that you now need in order to use native functions. Look for DP-423341 in the upgrade guide. It seems we failed to describe this requirement in the Native API documentation (or at least I didn't find it where I expected it), so we'll get that addressed.
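
To make the impact concrete, here's a hedged sketch using the Native API for Python (host, port, and credentials are placeholders); after the upgrade, a call like this requires the connecting user to hold the corresponding %Native_* resource, and fails with a protection error otherwise:

  # Sketch only: connection details are placeholders
  import iris

  conn = iris.connect("localhost", 1972, "USER", "_SYSTEM", "SYS")
  irispy = iris.createIRIS(conn)

  # As of the new privilege model, calls like this require the appropriate
  # %Native_* resource; see DP-423341 in the upgrade guide for the exact names
  print(irispy.classMethodValue("%SYSTEM.Version", "GetVersion"))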

I'd also recommend defaulting to the new, dynamic upgrade guide, which makes it easier to filter on particular types of issues. It replaces the old, static pages, which relied more on (and were therefore more vulnerable to) manual curation. In fact, you'll no longer find those static pages in the menu of the 2025.1 doc.

Any tools that use SQL to access partitioned tables will just work, as from the SQL query perspective there is no change. This includes Adaptive Analytics, InterSystems Reports, and any third-party BI tools. Also, IRIS BI cubes can use partitioned tables as their source class.

We currently have no plans to support partitioning of IRIS BI cubes themselves, as they have their own bucketing structure and less commonly have both hot and cold data, so some of the motivations for table partitioning don't apply. 

Nice article @Ben Schlanger!

I like how you're laying out the investigative process, though it's worth noting that every case is different, and therefore recommendations can differ too. The %NORUNTIME hint in particular should be used with caution, as it may deprive you of better plans in most scenarios. In fact, we like to say that any time you have to resort to that hint, it's worth opening a case with the WRC, as it points to an opportunity for our engine to make that better choice automatically (available statistics permitting) :-)
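
For reference, here's a minimal sketch of where the hint goes, using pyodbc with a placeholder DSN and a hypothetical Encounter table (again: use sparingly!):

  # %NORUNTIME immediately follows SELECT and suppresses the runtime plan
  # choice, so the plan is fixed regardless of the actual parameter values
  import pyodbc

  conn = pyodbc.connect("DSN=IRIS")  # placeholder DSN
  sql = """
  SELECT %NORUNTIME ID, EncounterTime
  FROM SQLUser.Encounter
  WHERE EncounterTime BETWEEN ? AND ?
  """
  rows = conn.cursor().execute(sql, "2022-01-01", "2023-12-31").fetchall()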

Also, I'd like to advertise a few improvements we've made since the IRIS version shown here:

  • Improved feedback in the query plan: we've been displaying a note in the query plan for a while now if there's a chance that different runtime parameter values may lead to a different plan, and as of IRIS 2023.3 are even calling out the specific predicates that drove the RTPC decision. For example, your plan may say "This query plan was selected based on the runtime parameter values that led to improved selectivity estimation of the range condition enc.EncounterTime BETWEEN '2022-01-01' AND '2023-12-31'"
  • Showing the actual runtime plan: Starting with IRIS 2023.3, we've enhanced the EXPLAIN and SMP utilities to no longer show the generic plan (with all parameter values substituted out), but rather the actual plan you'll get at runtime, with the literal values you put in the query text. This addresses step #4 in the investigation described above.
  • SQL Process View: As of IRIS 2022.2, the Operations menu in the System Management Portal includes a "SQL Activity" link that leads to a page listing all currently running SQL statements and allows you to drill through to the statement details and query plan. This also helps with step #4, and with identifying any long-running queries in the first place. An aggregated form of this data is also available through the /api/metrics endpoint for consumption through a monitoring tool (see the sketch after this list).
  • Query and schema recommendations: In IRIS 2024.3, released last month, we've further expanded the information contained in the query plan beyond the RTPC-related notes described above, to also include warnings on indices marked as non-selectable (cf. investigation step #1), indices that are being ignored because they have non-matching collation, whether the plan is frozen, and similar additional information that may help you improve the statement text, schema, or overall system settings.
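
Picking up the metrics point above: a monitoring script could scrape that data roughly as follows (Python with requests; host, port, and credentials are placeholders, and I'm assuming the full /api/monitor/metrics path used by recent IRIS versions):

  import requests

  # The metrics endpoint serves Prometheus-style text output
  resp = requests.get("http://localhost:52773/api/monitor/metrics",
                      auth=("_SYSTEM", "SYS"))
  resp.raise_for_status()

  # Crude filter for SQL-related series; exact metric names vary by version
  for line in resp.text.splitlines():
      if "sql" in line.lower():
          print(line)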

The above features were all introduced specifically to help diagnose long-running queries more quickly and identify how things can be sped up, but of course these versions also include general performance enhancements and refinements to the RTPC infrastructure, so it'll be exciting to see how fast this customer's query runs on the latest and greatest IRIS release!

Hi @Scott Roth, the %MANAGE_FOREIGN_SERVER privilege was only just introduced with 2024.2, as part of finalizing full production support for Foreign Servers (see also the release notes). I'm not sure, though, why it wouldn't appear after you created it. Can you confirm whether it's still there right after the CREATE SERVER command, whether you're using the same user for both connections, and whether or not you can CREATE FOREIGN TABLEs with that server (before logging off and/or after logging back in)?
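
If it helps, the kind of check I have in mind looks roughly like this (Python with pyodbc; the DSN, the CSV wrapper, and all names are illustrative placeholders, so please check the Foreign Tables documentation for the exact syntax):

  import pyodbc

  cur = pyodbc.connect("DSN=IRIS").cursor()  # placeholder DSN

  # 1) create the server, then immediately create a foreign table in the
  #    same session to confirm the privilege took effect
  cur.execute("CREATE FOREIGN SERVER Demo.FServer "
              "FOREIGN DATA WRAPPER CSV HOST '/tmp/data'")
  cur.execute("CREATE FOREIGN TABLE Demo.People (Name VARCHAR(100)) "
              "SERVER Demo.FServer FILE 'people.csv'")

  # 2) reconnect as the *same* user and repeat the CREATE FOREIGN TABLE to
  #    see whether the behavior changes across sessions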

I understand upgrading may not be straightforward, but the most logical explanation would be that the initial, crude privilege checking (that we replaced in 2024.2 as advertised) has a hole in it. 

thanks,
benjamin

No, I would leave out the semicolon at the end of that query. It's typically used as a statement separator, but it isn't really part of the query syntax itself. IRIS (as of 2023.2) will tolerate it at the end of a statement, but Spark doesn't seem to do anything with it, and since it wraps whatever you pass to dbtable in further queries, you end up with the error you saw.

You may also want to apply:

  .option("pushDownLimit", false)
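
For context, a complete PySpark read might look like the sketch below (connection details and the Sample.Person table are placeholders); note the inner query carries no trailing semicolon:

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("iris-read").getOrCreate()

  df = (spark.read.format("jdbc")
        .option("url", "jdbc:IRIS://localhost:1972/USER")        # placeholder URL
        .option("driver", "com.intersystems.jdbc.IRISDriver")
        .option("user", "_SYSTEM").option("password", "SYS")     # placeholders
        .option("dbtable", "(SELECT Name, Age FROM Sample.Person) t")  # no ';'
        .option("pushDownLimit", "false")
        .load())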