


What Makes the Explainable AI Model GBDT Explainable in This Context?

• Tree Structure
GBDT consists of an ensemble of decision trees, which are inherently interpretable. Each tree makes decisions based on a series of simple rules (e.g., "if feature X is greater than a certain value, go left; otherwise, go right").

• Feature Importance
GBDT provides feature importance scores that indicate how much each feature contributes to the model's overall predictive power. Examining these scores identifies the variables that most influence the model's outcomes, giving a clearer understanding of its behavior.

• Shapley Values
The use of Shapley values further enhances GBDT's explainability by providing both global and local explanations, for example, explanations of individual predictions made during the COVID-19 period. Shapley values quantify the marginal contribution of each feature to a specific prediction, averaged over its combinations with the various subsets of other features, providing detailed insight into how features interact and influence the model's output. Visualizing Shapley values with a beeswarm plot combined with a violin plot can reveal both each feature's impact and the nonlinear dependence of Shapley values on feature values. For example, a high 120-day BCOM Industrial Metals Sharpe ratio is a strong predictor of lower crash probability, but this effect disappears when the ratio is low.

In contrast, less explainable AI models lack clear decision rules; their decisions emerge from complex black-box systems.
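The tree-structure point can be made concrete. The sketch below is a minimal illustration, not the article's actual model: it fits a scikit-learn GradientBoostingClassifier on synthetic data and prints the if/else rules of the first boosted tree, which is exactly the "go left / go right" structure described above.

```python
# Minimal sketch: inspect the decision rules inside a GBDT.
# Synthetic data and generic feature names are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = GradientBoostingClassifier(n_estimators=50, max_depth=2, random_state=0)
model.fit(X, y)

# Each boosting stage is an ordinary regression tree; printing the first
# one's rules shows the simple threshold splits directly.
first_tree = model.estimators_[0, 0]
print(export_text(first_tree, feature_names=[f"feature_{i}" for i in range(4)]))
```

Shallow trees (here max_depth=2) keep each individual rule set short enough to read, which is part of why boosted tree ensembles are considered relatively interpretable.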

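The feature importance scores mentioned above can be read directly off a fitted model. This is a hedged sketch on synthetic data; the feature names are hypothetical placeholders, not the article's actual predictors.

```python
# Minimal sketch: rank features by a GBDT's impurity-based importance scores.
# Data is synthetic and the feature names below are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           random_state=0)
names = ["metals_sharpe_120d", "vol_30d", "momentum_90d", "carry_60d"]
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# feature_importances_ accumulates each feature's impurity reduction across
# all trees and normalizes the scores to sum to 1; higher means more influence.
for name, score in sorted(zip(names, model.feature_importances_),
                          key=lambda t: t[1], reverse=True):
    print(f"{name}: {score:.3f}")
```

Impurity-based importances are a global summary; they say which features matter overall but not how a feature's value shifts any single prediction, which is where Shapley values come in.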

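The Shapley definition itself, a feature's marginal contribution averaged over all subsets of the other features, can be sketched by brute force. Everything below is an illustrative assumption: a toy linear stand-in for the model and background-mean imputation for "absent" features; real GBDT workflows typically use the shap library's TreeExplainer and its beeswarm plot instead of enumerating subsets.

```python
# Minimal sketch of the Shapley value definition: average each feature's
# marginal contribution over every subset of the remaining features.
# The linear predict() and mean imputation are toy assumptions, not TreeSHAP.
from itertools import combinations
from math import factorial

import numpy as np

background = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])  # reference data
x = np.array([1.0, 3.0, 5.0])                               # instance to explain

def predict(row):
    return 2 * row[0] + 1 * row[1] - 1 * row[2]  # toy stand-in for a model

def value(subset):
    # Features outside `subset` are replaced by their background mean.
    row = background.mean(axis=0).copy()
    for j in subset:
        row[j] = x[j]
    return predict(row)

n = len(x)
phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for k in range(n):
        for S in combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi[i] += w * (value(S + (i,)) - value(S))

print(phi)  # per-feature contributions; they sum to f(x) - E[f]
```

The contributions sum to the gap between this prediction and the background average, which is the local-explanation property the section describes; beeswarm plots simply scatter such per-feature values across many instances.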







