Modified SHAP approach for interpretable prediction of cardiovascular complications

Authors

  • D.K. Sharipov, Tashkent University of Information Technologies named after Muhammad al-Khwarizmi
  • A.D. Saidov, Digital Technologies and Artificial Intelligence Development Research Institute

DOI:

https://doi.org/10.71310/pcam.2_64.2025.10

Keywords:

model interpretability, feature importance, normalization, AI transparency

Abstract

This article explores the significance of modifying SHAP (SHapley Additive exPlanations) values to enhance model interpretability in machine learning. SHAP values provide a fair attribution of feature contributions, making AI-driven decision-making more transparent and reliable. However, raw SHAP values can sometimes be difficult to interpret due to feature interactions, noise, and inconsistencies in scale. The article discusses key techniques for modifying SHAP values, including feature aggregation, normalization, custom weighting, and noise reduction, to improve clarity and relevance in explanations. It also examines how these modifications align interpretations with real-world needs, ensuring that SHAP-based insights remain practical and actionable. By strategically refining SHAP values, data scientists can derive more meaningful explanations, improving trust in AI models and enhancing decision-making processes. The article provides a structured approach to modifying SHAP values, offering practical applications and benefits across various domains.
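
Since the abstract only names the four post-processing techniques, the following is a minimal Python sketch of how feature aggregation, normalization, custom weighting, and noise reduction could be applied to raw SHAP values. The synthetic dataset, tree model, feature names, feature groupings, weights, and threshold are all illustrative assumptions, not the authors' published configuration.

    # A minimal sketch of the four SHAP post-processing steps named in the
    # abstract, run on a synthetic stand-in for a cardiovascular dataset.
    import numpy as np
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical feature names; the study's actual features are not listed here.
    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    feature_names = ["age", "sbp", "dbp", "ldl", "hdl", "glucose"]

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Raw SHAP values for the positive class (handles both shap API shapes).
    shap_values = shap.TreeExplainer(model).shap_values(X)
    if isinstance(shap_values, list):        # older shap: list of per-class arrays
        shap_values = shap_values[1]
    elif shap_values.ndim == 3:              # newer shap: (n, features, classes)
        shap_values = shap_values[:, :, 1]

    # 1) Feature aggregation: sum the contributions of related features
    #    (these groupings are illustrative assumptions).
    groups = {"blood_pressure": [1, 2], "cholesterol": [3, 4]}
    aggregated = {g: shap_values[:, idx].sum(axis=1) for g, idx in groups.items()}

    # 2) Normalization: rescale each patient's attributions so their absolute
    #    values sum to 1, making contributions comparable across patients.
    row_norm = np.abs(shap_values).sum(axis=1, keepdims=True)
    shap_norm = shap_values / np.where(row_norm == 0, 1.0, row_norm)

    # 3) Custom weighting: emphasize clinically prioritized features
    #    (the weights below are arbitrary placeholders).
    weights = np.array([1.0, 1.5, 1.5, 1.2, 1.2, 1.0])
    shap_weighted = shap_norm * weights

    # 4) Noise reduction: zero out attributions below a small threshold
    #    (0.01 is an illustrative cutoff, not a value from the article).
    shap_clean = np.where(np.abs(shap_weighted) < 0.01, 0.0, shap_weighted)

    print(dict(zip(feature_names, np.abs(shap_clean).mean(axis=0).round(3))))

Each step is independent, so in practice one would apply only the modifications that the use case calls for; the ordering above (aggregate, normalize, weight, denoise) is one reasonable choice, not a prescribed pipeline.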

Published

2025-05-15

Section

Articles