The term Explainable AI refers to a set of methods and processes that allow humans to understand the decisions or predictions made by an AI system. Explainers such as the SHAP explainer require access to the original model and its training data. When the synthetic data is sufficiently accurate, it can serve as a drop-in replacement for the actual training data, helping to interpret sophisticated ML models without exposing the real records.
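As a minimal sketch of this idea, the snippet below runs a SHAP explanation using only a synthetic table in place of the real training data. It assumes the `shap` and `scikit-learn` packages; the `make_regression` call is a stand-in for whatever synthetic-data generator is actually used, and the variable names (`X_synth`, `model`) are illustrative, not part of any specific API.

```python
import pandas as pd
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Placeholder "synthetic" data: in practice this table would come from a
# synthetic-data generator fitted on the real dataset, with the same schema.
X, y = make_regression(n_samples=500, n_features=6, noise=0.1, random_state=0)
X_synth = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(6)])

# The model to be interpreted (here trained on the synthetic data for
# self-containment; it could equally be a model trained on real data).
model = RandomForestRegressor(random_state=0).fit(X_synth, y)

# Pass the synthetic records as the background (reference) dataset and as the
# instances to explain, in place of the actual training data.
explainer = shap.Explainer(model, X_synth)
shap_values = explainer(X_synth)

# Global feature-importance view computed entirely from synthetic records.
shap.plots.beeswarm(shap_values)
```

If the synthetic data faithfully preserves the feature distributions and correlations, the resulting SHAP summaries should closely track those obtained from the real data, which is what makes the drop-in substitution useful in practice.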