M Sreenivasan, supervised by Dr. Naresh Manwani, received his Master of Science in Computer Science and Engineering (CSE). Here is a summary of his research work on Robust Models in the Presence of Uncertainty and Adversarial Attacks:
With the increase in data availability, the biggest problem is obtaining accurately labelled data for training models; in practice, observations often contain missing values. Weakly supervised learning methods are preferred to avoid the cost of high-quality labelled data, and one such setting is partial-label classification. We present a novel second-order cone programming framework for building robust classifiers that can tolerate uncertainty in the observations of partially labelled multiclass classification problems. The proposed formulations require only the second-order moments of the observations and are otherwise independent of the underlying distribution, so they specifically address the case of missing values in observations for multiclass classification problems.
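To make the moment-based idea concrete, the following is a minimal sketch (using cvxpy) of a simplified binary robust classifier, not the thesis's multiclass partial-label formulation: the chance constraint P(y_i(w·x_i + b) ≥ 1) ≥ η on an uncertain observation with mean μ_i and covariance Σ_i is replaced, via the multivariate Chebyshev bound, by a second-order cone constraint with κ = √(η/(1−η)). The data, names (mus, Ss, eta, C) and settings below are illustrative assumptions.

```python
# Sketch: robust classification with only second-order moments (mean, covariance),
# via SOCP constraints  y_i*(w @ mu_i + b) >= 1 - xi_i + kappa * ||S_i^T w||_2,
# where Sigma_i = S_i S_i^T and kappa = sqrt(eta / (1 - eta)).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d, eta, C = 40, 5, 0.9, 1.0
kappa = np.sqrt(eta / (1.0 - eta))

# Toy second-order moments for each uncertain observation (hypothetical data).
mus = rng.normal(size=(n, d)) + np.where(rng.random(n) > 0.5, 2.0, -2.0)[:, None]
ys = np.sign(mus[:, 0])                      # toy labels from the first coordinate
Ss = [0.3 * np.eye(d) for _ in range(n)]     # Sigma_i^{1/2}, here isotropic

w, b = cp.Variable(d), cp.Variable()
xi = cp.Variable(n, nonneg=True)             # slack variables

constraints = [
    ys[i] * (mus[i] @ w + b) >= 1 - xi[i] + kappa * cp.norm(Ss[i].T @ w, 2)
    for i in range(n)
]
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)), constraints)
prob.solve()
print("robust hyperplane:", w.value, b.value)
```

The key point of the sketch is that no distributional assumption beyond the first two moments enters the constraints, which mirrors the distribution-free character of the proposed formulations.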
Adversarial examples are inputs that an attacker has deliberately constructed to cause a machine learning model to make a mistake. Recent research has aimed to improve the computational efficiency of adversarial training for deep learning models. Projected Gradient Descent (PGD) and the Fast Gradient Sign Method (FGSM) are popular techniques for generating adversarial examples efficiently, and there is a trade-off between them in terms of robustness versus training time. Adversarial training with PGD is considered one of the most effective adversarial defence techniques for achieving moderate adversarial robustness; however, it requires high training time because it takes multiple iterations to generate perturbations. Adversarial training with FGSM, on the other hand, takes much less training time because it generates perturbations in a single step, but it fails to increase adversarial robustness. On the CIFAR-10/100 datasets, our approach outperforms strong PGD adversarial training in terms of both adversarial robustness and training speed.
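The trade-off above comes down to how the perturbation is generated. As a point of reference, here is a minimal PyTorch sketch of the standard single-step FGSM and multi-step PGD attacks; the model, ε = 8/255, step size and iteration count are illustrative placeholders and do not describe the training procedure developed in the thesis.

```python
# Sketch of standard FGSM (one gradient-sign step) vs. PGD (iterated, projected steps).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """Single-step attack: one gradient-sign step of size eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Multi-step attack: gradient-sign steps projected back onto the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project
    return x_adv.detach()
```

The cost difference is visible directly: FGSM needs one forward/backward pass per batch, while PGD needs one per iteration, which is what makes PGD-based adversarial training expensive.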
While deep learning systems trained on medical images have demonstrated state-of-the-art performance in various clinical prediction tasks, recent research indicates that carefully crafted adversarial images can fool these systems, calling into question the practical deployment of deep learning-based medical image classification. To address this problem, we provide an unsupervised learning technique for detecting adversarial attacks on medical images. Without identifying the attack or reducing classification performance, our proposed strategy, FNS (Features Normalisation and Standardization), detects adversarial attacks more effectively than earlier methods.
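As a rough illustration of the general idea of detecting attacks from feature statistics, the sketch below standardises a classifier's intermediate features against statistics estimated from clean data and flags inputs that deviate strongly. This is only a speculative sketch in the spirit of the FNS description above; the actual FNS method may differ, and the class name, scoring rule and threshold here are illustrative assumptions.

```python
# Hypothetical feature-statistics detector: standardise features with clean-data
# statistics and flag samples whose standardised features deviate beyond a threshold.
import numpy as np

class FeatureStatDetector:
    def fit(self, clean_features):
        """Estimate per-dimension statistics from clean (unlabelled) features."""
        self.mean = clean_features.mean(axis=0)
        self.std = clean_features.std(axis=0) + 1e-8
        # Calibrate a threshold on clean data, e.g. the 95th percentile of scores.
        self.threshold = np.percentile(self._score(clean_features), 95)
        return self

    def _score(self, features):
        z = (features - self.mean) / self.std   # standardised features
        return np.linalg.norm(z, axis=1)        # per-sample deviation score

    def is_adversarial(self, features):
        """Flag inputs whose standardised features exceed the calibrated threshold."""
        return self._score(features) > self.threshold
```

Because the detector is fitted only on clean feature statistics, it requires no adversarial examples or attack labels, matching the unsupervised setting described above.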
October 2023