
Sai Amrit Patnaik – Physical Adversarial Attacks

Sai Amrit Patnaik, supervised by Dr. Anoop Namboodiri, received his Master of Science in Computer Science and Engineering (CSE). Here’s a summary of his research work on Physical Adversarial Attacks on Face Presentation Attack Detection Systems:

In the realm of biometric security, face recognition technology plays an increasingly pivotal role. However, as its adoption grows, so does the need to safeguard it against presentation and adversarial attacks. Presentation attacks involve showing the camera an image of a person printed on a medium or displayed on a screen; detecting them relies on identifying artefacts introduced in the image during the printing or display and capture process. Adversarial attacks, by contrast, have recently gained traction: they try to deceive the learning strategy of a recognition system through slight modifications to the captured image. Evaluating the risk posed by adversarial images is essential for safely deploying face authentication models in the real world. Among these threats, physical adversarial attacks are a particularly insidious danger to face anti-spoofing systems. Popular physical-world attacks, such as print or replay attacks, suffer from limitations like introducing physical and geometrical artefacts. Moreover, the presence of a physical process (printing and capture) between image generation and the presentation attack detection (PAD) module makes traditional digital adversarial attacks non-viable. While most previous research assumes that the adversarial image can be fed digitally into the authentication system, this is rarely the case for systems deployed in the real world.
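To make the idea of "slight modifications to the captured image" concrete, here is a minimal FGSM-style sketch on a toy linear liveness classifier. Everything in it (the random weights, the sigmoid score, the `fgsm_perturb` helper) is illustrative and not the thesis's actual model or attack; it only shows how a small, gradient-guided perturbation can push a confident "live" score downward while the image stays visually almost unchanged.

```python
import numpy as np

# Toy linear "liveness" classifier: score = sigmoid(w . x + b).
# Purely illustrative stand-in for a PAD model, not the thesis's network.
rng = np.random.default_rng(0)
w = 0.1 * rng.normal(size=64)  # weights for a flattened 64-pixel "image"
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score(x):
    """Probability that x is a live face, under the toy model."""
    return sigmoid(w @ x + b)

def fgsm_perturb(x, eps):
    """FGSM-style step: nudge x against the 'live' score.

    For this linear model d(score)/dx = score*(1-score)*w, so the
    gradient sign reduces to sign(w). Pixels stay clipped to [0, 1].
    """
    grad = score(x) * (1.0 - score(x)) * w
    return np.clip(x - eps * np.sign(grad), 0.0, 1.0)

# A sample the model scores confidently as "live".
x = np.clip(0.5 + 0.4 * np.sign(w), 0.0, 1.0)
x_adv = fgsm_perturb(x, eps=0.1)

print("clean score:", round(score(x), 3))
print("adversarial score:", round(score(x_adv), 3))
```

Each pixel moves by at most 0.1, yet the liveness score strictly drops — the core mechanism behind digital adversarial attacks. A physical attack must additionally survive printing and re-capture, which is what makes it far harder and is the gap this thesis targets.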

This thesis delves into the intriguing domain of physical adversarial attacks on face anti-spoofing systems, aiming to expose their vulnerabilities and implications. Our research presents novel white-box and black-box methodologies for crafting adversarial inputs capable of deceiving even the most robust face anti-spoofing systems. Unlike traditional adversarial attacks that manipulate digital inputs, our approach operates in the physical domain, where printed images and replayed videos are used to mimic real-world presentation attacks. By dissecting and understanding the vulnerabilities inherent in face anti-spoofing systems, we can develop more resilient defenses, contributing to the security of biometric authentication in an increasingly interconnected world. This thesis not only highlights the pressing need to address these vulnerabilities but also motivates a pioneering approach, exploring simple yet effective attack strategies to advance the state of the art in face anti-spoofing security.

February 2024