Optimization-based Uncertainty Attribution Via Learning Informative Perturbations

Hanjing Wang*, Bashirul Azam Biswas, Qiang Ji

Abstract


"Uncertainty attribution (UA) aims to identify key contributors to predictive uncertainty in deep learning models. To improve the faithfulness of existing UA methods, we formulate UA as an optimization problem to learn a binary mask on the input. The learned mask identifies regions that significantly contribute to output uncertainty and allows uncertainty reduction through learning informative perturbations on the masked input. Our method enhances UA interpretability and maintains high efficiency by integrating three key improvements: Segment Anything Model (SAM)-guided mask parameterization for efficient and interpretable mask learning; learnable perturbations that adaptively target and refine problematic regions specific to each input without manually tuning the perturbation parameters; and a novel application of Gumbel-sigmoid reparameterization for efficiently learning Bernoulli-distributed binary masks under continuous optimization. Our experiments on problematic region detection and faithfulness tests demonstrate our method’s superiority over state-of-the-art UA methods."
