This article introduces SHAP (SHapley Additive exPlanations) as an approach to understanding the reasons behind the predictions of black-box machine learning models. It also includes a simple Jupyter notebook that you can run and modify to gain hands-on experience with these concepts:
https://www.kaggle.com/code/jorgeivnjh/explainability-in-ml-models
https://github.com/JorgeIvanJH/Explainability-in-ML-models
We will leverage these concepts for a future implementation in our Continuous Training Pipeline: https://community.intersystems.com/post/complementing-iris-mlflow-continuous-training-ct-pipeline





