Thanks for sharing.
I'm testing the feature.
Generally, when we access a data lake, the source CSV files are organized in the same folder with their respective timestamps in the names, for example:
transactions20240126.csv
transactions20240127.csv
etc...
Is there any wildcard to load multiple files?
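To make the scenario concrete, here is a rough sketch of the kind of loop I would otherwise write by hand, assuming LOAD DATA only accepts a single server-side file path per statement. The folder, table name, and USING options are just placeholders to illustrate the idea:

```python
# Hypothetical sketch: load every transactionsYYYYMMDD.csv from one folder,
# issuing one LOAD DATA statement per matching file.
import glob
import iris  # available inside IRIS embedded Python

for path in sorted(glob.glob("/data/lake/transactions*.csv")):
    # Table name and USING options below are illustrative placeholders.
    iris.sql.exec(
        f"LOAD DATA FROM FILE '{path}' INTO Sales.Transactions "
        'USING {"from":{"file":{"header":true}}}'
    )
    print(f"loaded {path}")
```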
Thanks for the explanations. I would like to explore more possibilities with LOAD DATA. From a data lake perspective, information generally remains as raw data and is often stored in different places. How could LOAD DATA be used in situations like this?
Example 1: I have thousands of CSV files on different disks that are accessible via the network. What would be the syntax to perform the load with LOAD DATA?
Example 2: log files available on cloud storage (S3, Google Cloud Storage, etc.).
Would the routine to load these files also be through LOAD DATA?
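For example 2, this is a rough sketch of the staging approach I have in mind, assuming LOAD DATA can only read paths visible to the IRIS server's file system, so cloud objects would first need to be copied down. The bucket, key, local path, and table names are placeholders:

```python
# Hypothetical sketch: stage an S3 object on the IRIS server, then load it.
import boto3
import iris  # embedded Python inside IRIS

s3 = boto3.client("s3")
local_path = "/tmp/app-2024-01-26.log.csv"

# Copy the object from cloud storage to a path the IRIS server can read.
s3.download_file("my-datalake-bucket", "logs/app-2024-01-26.log.csv", local_path)

# Then load the staged file with a regular LOAD DATA statement.
iris.sql.exec(f"LOAD DATA FROM FILE '{local_path}' INTO Logs.AppEvents")
```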
Thanks.
Thanks for sharing, Guillaume. I would like to clarify a doubt: if I want to run embedded Python code on an IRIS server directly from my desktop, would it be necessary to have InterSystems IRIS also installed locally (client side)?
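For context, this is the kind of desktop-side script I have in mind, assuming only the intersystems-irispython client driver is installed locally rather than a full IRIS instance. The host, port, credentials, and class name are placeholders:

```python
# Hypothetical sketch: call into a remote IRIS server from a desktop script.
import iris  # from the intersystems-irispython client package

# Placeholder connection details; only the Python driver is installed locally.
conn = iris.connect("iris-server.example.com", 1972, "USER", "_SYSTEM", "SYS")
irispy = iris.createIRIS(conn)

# Invoke a server-side class method (name is illustrative) from the desktop.
result = irispy.classMethodValue("My.EmbeddedPythonDemo", "Run")
print(result)
conn.close()
```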