From Fake to Real: Pretraining on Balanced Synthetic Images to Prevent Spurious Correlations in Image Recognition
Maan Qraitem*, Kate Saenko, Bryan A. Plummer
Abstract
"Visual recognition models are prone to learning spurious correlations induced by a biased training set where certain conditions B (, Indoors) are over-represented in certain classes Y (, Big Dogs). Synthetic data from off-the-shelf large-scale generative models offers a promising direction to mitigate this issue by augmenting underrepresented subgroups in the real dataset. However, by using a mixed distribution of real and synthetic data, we introduce another source of bias due to distributional differences between synthetic and real data (synthetic artifacts). As we will show, prior work’s approach for using synthetic data to resolve the model’s bias toward B do not correct the model’s bias toward the pair (B, G), where G denotes whether the sample is real or synthetic. Thus, the model could simply learn signals based on the pair (B, G) (, Synthetic Indoors) to make predictions about Y (, Big Dogs). To address this issue, we propose a simple, easy-to-implement, two-step training pipeline that we call From Fake to Real (FFR). The first step of FFR pre-trains a model on balanced synthetic data to learn robust representations across subgroups. In the second step, FFR fine-tunes the model on real data using ERM or common loss-based bias mitigation methods. By training on real and synthetic data separately, FFR does not expose the model to the statistical differences between real and synthetic data and thus avoids the issue of bias toward the pair (B, G). Our experiments show that FFR improves worst group accuracy over the state-of-the-art by up to 20% over three datasets. Code available: https://github.com/ mqraitem/From-Fake-to-Real"