Machine Learning Architecture for Massive Data
InterSystems IRIS is an excellent option for machine learning projects that operate on massive data, for the following reasons:
1. Supports sharding to scale the data repository horizontally, much like MongoDB.
2. Supports the creation of analytical cubes, which, combined with sharding, delivers both data volume and query performance.
3. Supports collecting data on a scheduled basis or in real time with a variety of data adapter options.
4. Allows you to automate the entire deduplication process using logic written in Python or ObjectScript.
5. Allows you to orchestrate and automate the flow of data into the repository using visual business processes (BPL) and data transformations (DTL).
6. Offers advanced autoscaling through Docker and Cloud Manager infrastructure-as-code (IaC) scripts.
7. Supports loading ObjectScript libraries at provisioning time via ZPM, the ObjectScript package manager.
8. Interoperates with Python and R to perform machine learning in real time.
9. Allows you to use IntegratedML, an AutoML engine, to select and run the best algorithm for a given data set.
10. Allows you to create post-execution analyses, such as AutoML predictions and classifications, outputs from Python and R cognitive processing, and BI pivot tables, all with native or third-party visualizations.
11. Allows you to create advanced views and reports with JReport.
12. Allows you to maximize reuse and monetization with API management.
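
The IntegratedML workflow from item 9 is driven by SQL. A minimal sketch, assuming an existing table `Patients` with a target column `Diagnosis` and a scoring table `NewPatients` (all names are illustrative, not part of the product):

```sql
-- Define a model that predicts Diagnosis from the other columns of Patients
CREATE MODEL DiagnosisModel PREDICTING (Diagnosis) FROM Patients

-- Train it; the AutoML provider selects the best algorithm for the data set
TRAIN MODEL DiagnosisModel

-- Score new rows with the trained model
SELECT *, PREDICT(DiagnosisModel) AS PredictedDiagnosis FROM NewPatients
```

Because the interface is SQL, the trained model can be invoked from the same sharded repository and interoperability flows described above, without moving data out of IRIS.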
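
The deduplication logic mentioned in item 4 can be sketched in plain Python. This is a minimal, hypothetical example, not IRIS API code: the record fields (`name`, `email`) and the normalization/hash key are assumptions for illustration. In practice, the same logic would run inside an IRIS production via Embedded Python or be rewritten in ObjectScript.

```python
import hashlib

def dedup_key(record, fields=("name", "email")):
    """Build a normalized hash key from the fields that identify a duplicate.

    Values are stripped and lowercased so that records differing only in
    case or whitespace collapse to the same key. Field names are illustrative.
    """
    raw = "|".join(str(record.get(f, "")).strip().lower() for f in fields)
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

def deduplicate(records, fields=("name", "email")):
    """Keep the first record seen for each key; drop later duplicates."""
    seen = set()
    unique = []
    for record in records:
        key = dedup_key(record, fields)
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

# Example: the first two records differ only in case and whitespace,
# so they collapse to a single entry.
records = [
    {"name": "Ana Souza", "email": "ana@example.com", "city": "Lisbon"},
    {"name": "ana souza ", "email": "ANA@example.com", "city": "Porto"},
    {"name": "Bruno Lima", "email": "bruno@example.com", "city": "Faro"},
]
print(len(deduplicate(records)))  # → 2
```

For massive data sets, the `seen` set would typically be replaced by a persistent index (for example, an IRIS global or a sharded table keyed by the hash), so the process can resume and scale beyond memory.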