AdversariaLeak: External Information Leakage Attack Using Adversarial Samples on Face Recognition Systems
Roye Katzav*, Amit Giloni, Edita Grolman*, Hiroo Saito, Tomoyuki Shibata, Tsukasa Omino, Misaki Komatsu, Yoshikazu Hanatani, Yuval Elovici, Asaf Shabtai
Abstract
Face recognition (FR) systems are vulnerable to external information leakage (EIL) attacks, which can reveal sensitive information about the training data, compromising both the confidentiality of the company's proprietary information and the privacy of the individuals concerned. Existing EIL attacks mainly rely on unrealistic assumptions, such as a high query budget and massive computational power on the attacker's side, making them impractical. We present AdversariaLeak, a novel and practical query-based EIL attack that targets the face verification model of an FR system using carefully selected adversarial samples. AdversariaLeak uses substitute models to craft adversarial samples, which are then handpicked to infer sensitive information. Our extensive evaluation on the MAAD-Face and CelebA datasets, covering over 200 different target models, shows that AdversariaLeak outperforms state-of-the-art EIL attacks in inferring the property that best characterizes the FR model's training set, while maintaining a small query budget and practical attacker assumptions.
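The abstract's core mechanism, crafting adversarial samples on a substitute model the attacker controls and then transferring them to the query-limited target, can be illustrated with a minimal sketch. This is a hypothetical toy (a logistic-regression substitute attacked with FGSM on NumPy vectors), not the paper's actual method or code; the function names and parameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch, NOT the paper's implementation: craft an adversarial
# sample against a substitute model with FGSM. In a transfer-based attack,
# such samples are generated offline (no target queries), then a handpicked
# subset is sent to the target model.

rng = np.random.default_rng(0)

def fgsm(x, w, b, y, eps):
    """One FGSM step against a logistic-regression substitute model.

    Perturbs x by eps in the direction of the sign of the gradient of the
    binary cross-entropy loss with respect to the input.
    """
    logit = x @ w + b
    p = 1.0 / (1.0 + np.exp(-logit))   # substitute model's probability
    grad = (p - y) * w                 # d(loss)/dx for cross-entropy loss
    return x + eps * np.sign(grad)

# Stand-in substitute-model weights and a clean input sample.
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)

x_adv = fgsm(x, w, b, y=1.0, eps=0.1)

# The perturbation is bounded: its L-infinity norm equals eps.
print(np.max(np.abs(x_adv - x)))
```

Because the crafting happens entirely on the substitute, the attacker's query budget is spent only on the selected samples, which matches the abstract's claim of practicality under a small query budget.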
Related Material
[pdf]
[supplementary material]
[DOI]