Spectral zones based SHAP: enhancing interpretability in spectral deep learning models
19 June 2024
Jhonatan Contreras, Thomas Bocklitz
Abstract
Deep learning models are widely used in spectroscopy because of their high classification accuracy, but they lack interpretability; the challenge lies in balancing interpretability and accuracy. Current interpretive methods, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), can yield interpretations that are mathematically meaningful yet physically implausible, because they perturb individual feature values in isolation. To address this gap, our research proposes a group-focused methodology that targets 'spectral zones' to estimate the impact of collective spectral features directly. This approach enhances the interpretability of deep learning models, reduces the influence of noise, and provides a more comprehensive understanding of model behavior. By applying group perturbations, the resulting interpretations are not only more intuitive but also easily comparable with domain expertise, enriching the analysis of the model's decision-making process.
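The group-perturbation idea can be illustrated with a minimal occlusion-style sketch; this is not the authors' exact SHAP formulation, and the zone boundaries, baseline spectrum, and toy model below are all illustrative assumptions. Contiguous spectral channels form a zone, and each zone is perturbed as a unit against a reference spectrum rather than channel by channel:

```python
import numpy as np

def zone_attributions(model_fn, x, baseline, zones):
    """Attribute a model output to spectral zones by group perturbation:
    replace each zone's intensities with the baseline spectrum and
    record the resulting drop in the model output (occlusion-style)."""
    full = model_fn(x)
    scores = []
    for start, stop in zones:
        x_pert = x.copy()
        x_pert[start:stop] = baseline[start:stop]  # perturb the whole zone at once
        scores.append(full - model_fn(x_pert))
    return np.array(scores)

# Toy stand-in for a trained model: responds to mean intensity in channels 40-60
model_fn = lambda s: s[40:60].mean()

n_channels = 100
x = np.zeros(n_channels)
x[45:55] = 1.0                      # a single spectral band carries the signal
baseline = np.zeros(n_channels)     # reference spectrum (e.g. dataset mean)
zones = [(i, i + 20) for i in range(0, n_channels, 20)]  # five 20-channel zones

attr = zone_attributions(model_fn, x, baseline, zones)
# Only the zone covering channels 40-60 receives attribution.
```

Perturbing a zone as a group keeps the modified spectrum physically plausible (a whole band is suppressed, rather than a single channel being set to an arbitrary value), which is the motivation for zone-level rather than per-channel attributions.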
Conference Presentation
© (2024) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jhonatan Contreras and Thomas Bocklitz "Spectral zones based SHAP: enhancing interpretability in spectral deep learning models", Proc. SPIE PC13011, Data Science for Photonics and Biophotonics, PC130110A (19 June 2024); https://doi.org/10.1117/12.3016969
KEYWORDS
Mathematical modeling, Deep learning, Raman spectroscopy, Artificial intelligence, Machine learning, Performance modeling, Reflection