As artificial intelligence reshapes industries and digital finance redraws economic boundaries, the risks beneath our connected world are growing just as fast. At IIIT Hyderabad, the Security and Privacy (SyPy) Research Group is working behind the scenes to uncover hidden vulnerabilities, defend emerging technologies, and build the foundations of digital trust.
“We live in an online world,” says Prof. Ankit Gangwal. “Our savings move through digital wallets. Our faces unlock our phones. Our conversations are filtered through machine learning systems that predict what we want before we type it. Every swipe, tap, and transaction depends on layers of invisible code. But what happens when that code is compromised?”
Prof. Gangwal’s group, the Security and Privacy (SyPy) Research Group at the International Institute of Information Technology, Hyderabad (IIIT-H), is asking exactly that question – and working relentlessly to answer it. “To secure the future, we must first understand the vulnerabilities of the present,” he remarks.
The Digital Cracks We Don’t See
Security failures rarely announce themselves loudly at first. They hide in edge cases, in overlooked assumptions, in code that “should work.” And when exploited, they can ripple across systems, affecting millions. SyPy – short for Security and Privacy – specializes in finding those cracks before someone else does.
The lab’s work sits at the intersection of infrastructure and intelligence: blockchain networks, mobile systems, distributed ledgers, and AI pipelines. If it moves data or stores value, it’s within their scope. Take blockchain systems, for example. While many view cryptocurrencies as inherently secure, SyPy researchers examine patterns of abnormal fund flow – identifying anomalies that could signal fraud, laundering, or exploitation. According to Prof. Gangwal, in decentralized systems, there’s no central authority to fix mistakes. That means architectural weaknesses can have massive consequences. “Security isn’t just about preventing breaches – it’s about designing systems that remain trustworthy under pressure,” he says.
When AI Goes Wrong
While artificial intelligence may be the defining technology of our time, it is also one of the most vulnerable. Imagine a healthcare AI trained to detect diseases from medical images. Now imagine that somewhere in its training pipeline, small amounts of malicious data are inserted. The model still appears accurate. It passes basic checks. But under specific conditions, it fails – misclassifying certain cases in ways that could be catastrophic.
This is the threat of ‘model poisoning’, and it’s one of SyPy’s active research areas. By studying how adversarial data manipulates AI behavior, the group develops techniques to detect and defend against such corruption. Their work also explores ‘model merging’ – a growing practice where developers combine multiple pre-trained models to save time and resources. While efficient, merging models can introduce hidden vulnerabilities, much like combining two medicines without understanding their interaction.
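The medicine analogy above can be made concrete with a toy sketch of data poisoning (purely illustrative; this is not the SyPy group's actual methodology, and all names and values here are invented). A k-nearest-neighbour classifier is trained on 2-D points; poisoned samples planted near a secret “trigger” region carry a flipped label, so the model stays accurate on clean inputs but fails on triggered ones:

```python
# Toy backdoor-poisoning demo (hypothetical, for illustration only).
import random

def predict(samples, point, k=3):
    """Majority vote among the k nearest training samples."""
    dist = lambda p: (p[0] - point[0]) ** 2 + (p[1] - point[1]) ** 2
    nearest = sorted(samples, key=lambda s: dist(s[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

random.seed(0)
# Clean training data: class 0 clustered near (0, 0), class 1 near (3, 3)
clean = [((random.gauss(0, 0.3), random.gauss(0, 0.3)), 0) for _ in range(50)]
clean += [((random.gauss(3, 0.3), random.gauss(3, 0.3)), 1) for _ in range(50)]

TRIGGER = (-3.0, 0.0)  # a region the attacker secretly controls
poison = [((random.gauss(TRIGGER[0], 0.1), random.gauss(TRIGGER[1], 0.1)), 1)
          for _ in range(10)]
backdoored = clean + poison

print(predict(backdoored, (0.0, 0.0)))  # 0: still "passes basic checks"
print(predict(backdoored, (3.0, 3.0)))  # 1: still accurate on clean class-1 inputs
print(predict(clean, TRIGGER))          # 0: the clean model handles this region fine
print(predict(backdoored, TRIGGER))     # 1: the hidden backdoor fires
```

The poisoned model looks healthy under ordinary testing; only inputs near the attacker's trigger expose the corruption, which is what makes such attacks hard to catch.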
Yet another frontier is that of ‘AI watermarking’. As AI models become valuable intellectual property, theft becomes inevitable. Imagine a company spending years developing a proprietary language model only to find a competitor replicating its capabilities suspiciously well. SyPy researchers design watermarking techniques that embed verifiable ownership signals directly into models, enabling creators to prove authorship. “In the AI era, protecting innovation means protecting the models themselves,” notes Prof. Gangwal.
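One common form such ownership signals take in the research literature is a “trigger set”: a secret collection of unusual inputs with owner-chosen labels that a model is trained to memorise. The sketch below is a hypothetical illustration of that idea (not necessarily the technique SyPy designs; the key name and parameters are invented). A suspect model that reproduces the secret labels far above chance is unlikely to be independent work:

```python
# Hypothetical trigger-set watermark verification (illustrative only).
import hashlib

def make_trigger_set(secret_key, size=16):
    """Derive secret (input, label) pairs from the owner's private key."""
    triggers = {}
    for i in range(size):
        digest = hashlib.sha256(f"{secret_key}:{i}".encode()).hexdigest()
        triggers[digest[:16]] = (i % 2 == 0)  # owner-chosen, evenly split labels
    return triggers

def verify_ownership(model, triggers, threshold=0.9):
    """Claim authorship if the model agrees with the secret labels often enough."""
    hits = sum(model(x) == label for x, label in triggers.items())
    return hits / len(triggers) >= threshold

triggers = make_trigger_set("sypy-demo-key")

# A model that memorised the owner's watermark (e.g. a stolen copy):
stolen = lambda x: triggers.get(x, False)
# An unrelated model with no knowledge of the trigger set:
independent = lambda x: True  # agrees only on the ~50% of labels that are True

print(verify_ownership(stolen, triggers))       # True
print(verify_ownership(independent, triggers))  # False
```

Because the trigger inputs are derived from a private key, only the rightful owner can produce the set that makes verification succeed.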
Malware Analysis & Network Defense
Modern malware is no longer crude or obvious. It evolves, adapts, and hides within legitimate processes. In one line of research, SyPy researchers deconstruct malware strains to understand their behavioral signatures – how they communicate, escalate privileges, or evade detection. “Think of it as digital forensics meets predictive defense,” remarks Prof. Gangwal. Recognizing that the best defense begins with understanding how an attacker thinks, the group aims to anticipate tomorrow’s threats instead of reacting to yesterday’s.
Mobile Security: Guarding the Device in Your Pocket
Your smartphone may be the most data-rich device you own. It knows your location, your habits, your contacts, your financial apps. SyPy researchers rigorously test mobile ecosystems for data leakage and silent exfiltration. For instance, what permissions do apps truly need? Could background processes quietly siphon sensitive information? Are there hidden pathways between applications that allow unintended data sharing? By stress-testing these systems, the group helps ensure that convenience doesn’t come at the cost of privacy.
Blockchain & Cryptocurrency: Following the Money
Decentralized systems promise transparency but they also create new attack surfaces. In blockchain ecosystems, SyPy researchers analyze transaction flows to detect abnormal patterns that might indicate fraud or coordinated manipulation. Because blockchain data is public but pseudonymous, tracing suspicious activity requires sophisticated modeling.
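A minimal sketch of what “detecting abnormal patterns” can mean in practice is a robust outlier test over per-address transaction volume (purely illustrative; the addresses and amounts below are invented, and real tracing work uses far richer graph models). Here the modified z-score, which uses the median rather than the mean so the outliers cannot mask themselves, flags the anomalous sender:

```python
# Toy on-chain anomaly flagging over a hypothetical public ledger.
from collections import defaultdict
from statistics import median

# (sender, receiver, amount) records, as they might appear on-chain
ledger = [
    ("addr_a", "addr_b", 1.2), ("addr_b", "addr_c", 0.8),
    ("addr_c", "addr_a", 1.0), ("addr_a", "addr_c", 0.9),
    ("addr_x", "addr_a", 250.0), ("addr_x", "addr_b", 310.0),  # abnormal flow
]

def flag_outliers(ledger, z_threshold=3.5):
    """Flag addresses whose total outgoing volume is a robust outlier."""
    outgoing = defaultdict(float)
    for sender, _, amount in ledger:
        outgoing[sender] += amount
    volumes = list(outgoing.values())
    med = median(volumes)
    mad = median(abs(v - med) for v in volumes)  # median absolute deviation
    if mad == 0:
        return set()
    # Modified z-score: robust to the very outliers being hunted
    return {addr for addr, vol in outgoing.items()
            if 0.6745 * (vol - med) / mad > z_threshold}

print(flag_outliers(ledger))  # {'addr_x'}
```

In a real ledger, flagged addresses would only be candidates for deeper analysis; pseudonymity means linking an address to an actor requires further modeling.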
Designing scalable and secure architectures for decentralized payment systems is another major focus of the lab. Prof. Gangwal observes that trust in digital finance depends on invisible layers of rigorous engineering. As digital currencies grow in adoption, ensuring their resilience becomes not just a technical problem but an economic one.
The Mind Behind the Mission
Prof. Ankit Gangwal, who helms the SyPy group, is a researcher who blends academic depth with real-world urgency. He fosters a lab culture built around adversarial thinking: students are encouraged to reverse assumptions, probe weaknesses, and think like attackers. “In cybersecurity, empathy for the adversary isn’t optional – it’s strategic,” he reasons. This mindset shapes both his research and his mentorship.
Rather than focusing solely on publishing papers, Prof. Gangwal invests in building thinkers and researchers capable of identifying emerging risks in technologies that don’t yet fully exist. His work has been recognized in leading global research venues, reflecting both technical rigor and global relevance. But perhaps his most lasting impact lies in the generation of security researchers he is training, who will shape digital defenses for decades to come.
The Power of Interdisciplinary Security
SyPy operates within the Centre for Security, Theory and Algorithmic Research (CSTAR) – one of India’s premier cybersecurity hubs. CSTAR brings together experts in cryptography, algorithms, and systems research. This interdisciplinary environment creates what might be called “intellectual cross-pollination.” A theoretical breakthrough in algorithm design might influence practical system defenses. A systems vulnerability might inspire new cryptographic safeguards. This synergy allows SyPy to approach security holistically – not as a patchwork of fixes, but as a design philosophy.
Engineering Trust in a Digital World
As technology accelerates and our dependence on it deepens, so does our vulnerability. “AI systems are making decisions. Financial transactions are becoming borderless. Devices are constantly communicating. But at SyPy, the focus isn’t fear – it’s preparedness,” explains Prof. Gangwal. By uncovering weaknesses before they are weaponized, embedding ownership into AI models, stress-testing digital finance systems, and protecting the data in our pockets, the group is doing something profound: engineering trust.
Because in the end, digital security isn’t just about code. It’s about confidence – knowing that the systems guiding our lives are resilient, ethical, and built with care. And in this meticulous work, SyPy is helping secure the digital frontier – one vulnerability at a time.
Sarita Chebbi is a compulsive early riser. Devourer of all news. Kettlebell enthusiast. Nit-picker of the written word, especially when it’s not her own.