Himanshu Pal

Himanshu Pal, supervised by Dr. Charu Sharma, received his Master of Science by Research in Computer Science and Engineering (CSE). Here's a summary of his research work on "Privacy and Safety in Graph Neural Networks: Federated Learning and Adversarial Robustness":

Graph neural networks (GNNs) have shown strong performance in learning from structured data, but their deployment in real-world settings faces two main difficulties: privacy-preserving training and resilience against adversarial attacks. This thesis brings together two research threads, federated learning for non-IID graph data and adversarial attacks on temporal graphs, into a general framework for safe and reliable graph learning.

First, we address privacy concerns in GNN training by building a federated spectral graph transformer coupled with neural ordinary differential equations (ODEs). Our method allows collaborative learning while keeping sensitive graph data decentralised, which is especially relevant to social networks, recommendation systems, and fraud detection, where data privacy is vital. By optimising bandwidth and accommodating non-IID heterophilic graphs, this strategy extends the value of federated learning to a wider range of complex network scenarios.

Second, we examine adversarial weaknesses of temporal graph neural networks (TGNNs) and propose the Temporal Edge Rank Adversarial Attack (TERA). TERA selectively corrupts important edges using a novel Temporal PageRank-based significance metric, significantly reducing model accuracy while remaining stealthy. Our results demonstrate the sensitivity of state-of-the-art TGNNs to well-crafted adversarial attacks and underline the pressing need for strong defence methods.

Together, these contributions integrate adversarial robustness with privacy-preserving federated learning to build a framework for safe and dependable graph learning systems. We investigate feasible defences against adversarial attacks in federated graph settings and propose future directions for enhancing security and privacy in GNN deployments. Our work advances the broader goal of developing robust machine learning models capable of operating in adversarial, real-world, privacy-sensitive environments.
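To give a flavour of the federated setting described above, here is a minimal sketch of weighted parameter averaging (in the style of the standard FedAvg algorithm) across clients that each hold a private graph. This is an illustrative assumption, not the thesis's actual aggregation scheme: the function names and the dict-of-arrays parameter layout are hypothetical, and the real system additionally handles spectral transformer and neural-ODE components plus bandwidth optimisation.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_weights: list of dicts {param_name: np.ndarray}, one per client.
    client_sizes: training-set size (e.g. node count) at each client.
    Raw graph data never leaves the clients; only parameters are shared.
    """
    total = float(sum(client_sizes))
    global_weights = {}
    for name in client_weights[0]:
        global_weights[name] = sum(
            (size / total) * w[name]
            for w, size in zip(client_weights, client_sizes)
        )
    return global_weights
```

In each round, the server broadcasts `global_weights` back to the clients, which resume local training on their private (possibly non-IID, heterophilic) subgraphs.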
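The attack side can be sketched similarly. TERA's exact significance metric is not reproduced here; the following is a hedged sketch of a Temporal PageRank-style score (after the streaming formulation of Rozenshtein and Gionis) used to rank edges, assuming a temporal graph given as `(src, dst, timestamp)` triples. The helper names and the endpoint-sum edge score are illustrative assumptions.

```python
def temporal_pagerank(edges, alpha=0.85, beta=0.5):
    """Streaming Temporal PageRank over time-ordered edges (u, v, t).

    r[v] is the accumulated rank of node v; s[u] is the probability
    mass waiting at u to be propagated along future edges.
    Illustrative sketch only, not the thesis implementation.
    """
    r, s = {}, {}
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        r[u] = r.get(u, 0.0) + (1.0 - alpha)
        s[u] = s.get(u, 0.0) + (1.0 - alpha)
        r[v] = r.get(v, 0.0) + alpha * s[u]
        s[v] = s.get(v, 0.0) + alpha * (1.0 - beta) * s[u]
        s[u] *= beta  # part of u's mass decays after propagating
    return r

def top_k_edges(edges, k):
    """Rank edges by the Temporal PageRank of their endpoints and
    return the k highest-scoring ones as candidate attack targets."""
    r = temporal_pagerank(edges)
    scored = sorted(
        edges,
        key=lambda e: r.get(e[0], 0.0) + r.get(e[1], 0.0),
        reverse=True,
    )
    return scored[:k]
```

An attacker perturbing only these top-ranked edges touches a small fraction of the stream (preserving stealth) while hitting the interactions a TGNN relies on most.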


October 2025