
UMDBB-2022

August 2022

Dr. Sudipta Banerjee, Aditi Aggarwal, and Arun Ross virtually presented a paper, "Can GAN-induced Attribute Manipulations Impact Face Recognition?", at the Understanding and Mitigating Demographic Bias in Biometric Systems (UMDBB) workshop, held in conjunction with the 26th International Conference on Pattern Recognition (ICPR 2022), on 21 August 2022.

 

Aditi Aggarwal is an M.Tech student who graduated in Spring 2022. She did this research as part of her Independent Study under the supervision of Dr. Sudipta Banerjee. Prof. Arun Ross is a collaborator and a full-time professor at Michigan State University.

 

Research work as explained by the authors: The impact of demographic factors such as age, sex, and race on automated face recognition systems has been studied extensively. However, the impact of digitally modified demographic and facial attributes on face recognition is relatively under-explored. In this work, we study the effect of attribute manipulations induced via generative adversarial networks (GANs) on face recognition performance. We conduct experiments on the CelebA dataset by intentionally modifying thirteen attributes using AttGAN and STGAN and evaluating their impact on two deep learning-based face verification methods, ArcFace and VGGFace. Our findings indicate that some attribute manipulations, involving eyeglasses and digital alteration of sex cues, can significantly impair face recognition by up to 73% and warrant further analysis.
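The evaluation protocol described in the abstract (edit a facial attribute with a GAN, then re-verify the edited face against the original) can be illustrated with a minimal sketch. The attribute editor, face encoder, and threshold below are stand-in placeholders for illustration only, not the actual AttGAN/STGAN editors, ArcFace/VGGFace encoders, or settings used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for a GAN-based attribute editor (AttGAN/STGAN in the paper):
    # here it simply adds a small perturbation to the image.
    def edit_attribute(image, attribute):
        return image + 0.05 * rng.standard_normal(image.shape)

    # Stand-in for a deep face encoder (ArcFace/VGGFace in the paper):
    # a fixed random projection followed by L2 normalization.
    proj = rng.standard_normal((128, 64 * 64 * 3))
    def embed(image):
        v = proj @ image.ravel()
        return v / np.linalg.norm(v)

    def cosine(a, b):
        return float(np.dot(a, b))

    # Verification protocol: compare each original face with its edited
    # version and count how often the genuine pair still matches.
    threshold = 0.8  # illustrative threshold, not taken from the paper
    images = [rng.standard_normal((64, 64, 3)) for _ in range(100)]

    matches = sum(
        cosine(embed(img), embed(edit_attribute(img, "Eyeglasses"))) >= threshold
        for img in images
    )
    print(f"Genuine match rate after attribute editing: {matches / len(images):.0%}")

In the study itself, a drop in this genuine match rate after a given attribute edit indicates how strongly that manipulation impairs face verification.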

This workshop on the bias and fairness of biometric systems strongly complements the main ICPR 2022 conference and its tracks 1, 2, 3, and 4. Although the main conference includes a track on biometrics and human-machine interaction (Track 4), the topic of bias and fairness in biometric systems warrants a dedicated session of its own.

With recent advances in deep learning achieving hallmark accuracy rates for various computer vision applications, biometrics has become a widely adopted technology for identity recognition, surveillance, border control, and mobile user authentication. However, over the last few years, the fairness of these automated biometric recognition and attribute classification methods across demographic variations has been questioned by media articles in the well-known press as well as by academic and industry research. Specifically, facial analysis technology is reported to be biased against darker-skinned people, such as African Americans, and against women. This has led to bans on facial recognition technology for government use. Apart from facial analysis, bias has also been reported for other biometric modalities, such as ocular and fingerprint, and for other AI systems based on biometric images, such as face morphing attack detection algorithms.

Despite existing work in this field, the state of the art is still at an early stage. There is a pressing need to examine the bias of existing biometric modalities and to develop advanced methods for bias mitigation in existing biometric-based systems. This workshop provides a forum for addressing recent advances and challenges in the field. The expected outcomes are increased awareness of demographic effects and recent advances, and a common ground of discussion for academia, industry, and government.

Workshop page: https://vcbsl-wsu.github.io/icpr22w-umdbb/

 
