Parquet files and InterSystems IRIS
Nov 20, 2023

In the world of Big Data, selecting the right file format is crucial for efficient data storage, processing, and analysis. With the massive amount of data generated every day, the choice of format can greatly impact the speed, cost, and accuracy of data processing tasks. Several file formats are available, each with its own advantages and disadvantages, which makes choosing among them a complex decision. Some of the popular Big Data file formats include CSV, JSON, Avro, ORC, and Parquet.


According to Databricks, Apache Parquet is an open source, column-oriented data file format designed for efficient data storage and retrieval. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk. Apache Parquet is designed to be a common interchange format for both batch and interactive workloads. It is similar to other columnar-storage file formats available in Hadoop, namely RCFile and ORC. (source: https://www.databricks.com/glossary/what-is-parquet)
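To make the columnar idea concrete, here is a minimal sketch using pyarrow, a widely used Python implementation of Parquet. The file name and sample data are purely illustrative, not taken from this article:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Build a small in-memory table. Parquet stores each column contiguously,
# which is what enables per-column compression and selective column reads.
table = pa.table({
    "id": [1, 2, 3],
    "name": ["alpha", "beta", "gamma"],
    "value": [10.5, 20.1, 30.7],
})

# Write with Snappy compression, a common Parquet default.
pq.write_table(table, "example.parquet", compression="snappy")

# Read back only the columns we need -- a key benefit of columnar formats.
subset = pq.read_table("example.parquet", columns=["id", "value"])
print(subset)
```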


In today's landscape, enterprises have grown substantially in scale, amassing vast amounts of data. This data is collected from a plethora of sources, including different applications, databases, and other channels. Given the diversity and volume of this data, it's only logical for these enterprises to seek a deeper understanding of what their data entails. Some of this data may live in IRIS, and it can be reasonable to want to add it to a data lake as well.

The Internet now offers many different tools for such tasks; most do not yet support IRIS out of the box, but the goal is still achievable.
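As a hedged sketch of one possible approach: the snippet below pulls rows from IRIS over ODBC with pyodbc and writes them to a Parquet file with pyarrow. The DSN name IRIS_DSN, the credentials, and the Sample.Person table are assumptions for illustration only; adapt them to your environment and driver setup.

```python
import pyodbc
import pyarrow as pa
import pyarrow.parquet as pq

# Connect to IRIS through an ODBC DSN (hypothetical name and credentials;
# configure the InterSystems ODBC driver and DSN on your machine first).
conn = pyodbc.connect("DSN=IRIS_DSN;UID=_SYSTEM;PWD=SYS")
cursor = conn.cursor()

# Sample.Person is an illustrative table; substitute your own query.
cursor.execute("SELECT ID, Name, Age FROM Sample.Person")

# Column names come from the cursor metadata.
columns = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()

# Pivot the row-oriented SQL result into column-oriented arrays,
# which is the shape Parquet (and pyarrow) expects.
data = {col: [row[i] for row in rows] for i, col in enumerate(columns)}
table = pa.table(data)

pq.write_table(table, "sample_person.parquet")
conn.close()
```

For large tables you would fetch and write in batches rather than materializing every row in memory, but the column-pivot step shown here is the core of the export.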
