10 IIIT-H Projects That Clinched ANRF ARG Awards

Selected from among 15,700 proposals submitted nationwide, ten research projects from IIIT-H emerged as winners of the prestigious Advanced Research Grant (ARG) – the Anusandhan National Research Foundation’s (ANRF) flagship funding scheme – an extraordinary showing that underscores the institute’s growing influence in cutting-edge science and technology. Here’s a brief look at the awardees and their impactful research projects.

The Advanced Research Grant, offered by ANRF – India’s national funding body for research and innovation set up by the Government of India – is designed to support ambitious, investigator-driven research projects led by established researchers working on novel, high-impact ideas. From foundational research to real-world innovation, the selected projects spotlight the depth, diversity, and ambition of IIIT-H’s research ecosystem.

Making Quantum Computers Reliable: Correcting Fragile Qubits
Dr. Lalitha Vadlamani’s cutting-edge project titled “Quantum LDPC Codes and Topological Codes for Fault-Tolerant Quantum Computation” focuses on quantum error correction – an essential technology that allows fragile quantum bits (qubits) to function reliably. “These are very, very fragile objects,” she explains, noting that without error correction, “even to just hold the qubits for some time… in a stable way” would be nearly impossible. Her work aims to explore advanced coding techniques that help protect quantum information from errors, a critical step toward building practical and scalable quantum computers. Globally, this field is booming, with recent breakthroughs published in top journals and implemented by companies like Google. 
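
To make the underlying idea concrete, here is a deliberately simple Python illustration: a classical toy simulation of the three-qubit bit-flip repetition code, the simplest ancestor of the quantum LDPC and topological codes the project studies. It is a sketch of the measure-the-syndrome-and-correct principle only, not the project’s actual methods.

```python
# A classical toy simulation of the 3-qubit bit-flip repetition code, for
# illustration only: encode redundantly, measure parity checks (the syndrome),
# and correct the single flip the syndrome points to.
import random

def encode(bit):
    """Encode one logical bit into three physical bits: 0 -> 000, 1 -> 111."""
    return [bit, bit, bit]

def apply_noise(codeword, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in codeword]

def decode(received):
    """Measure two parity checks and correct the bit they implicate."""
    s1 = received[0] ^ received[1]  # parity of bits 0 and 1
    s2 = received[1] ^ received[2]  # parity of bits 1 and 2
    corrected = received[:]
    if s1 and not s2:
        corrected[0] ^= 1           # syndrome (1,0): bit 0 flipped
    elif s1 and s2:
        corrected[1] ^= 1           # syndrome (1,1): bit 1 flipped
    elif s2:
        corrected[2] ^= 1           # syndrome (0,1): bit 2 flipped
    return corrected[0]

# The encoded bit survives far more often than an unprotected bit would:
# the logical error rate is roughly 3*p^2 instead of p.
trials, p = 100_000, 0.1
failures = sum(decode(apply_noise(encode(0), p)) != 0 for _ in range(trials))
print(f"logical error rate: {failures / trials:.4f} (physical rate: {p})")
```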

Originally trained as a classical coding theorist, Dr. Vadlamani transitioned into quantum error correction about two years ago after recognizing the immense potential of the field. “Some of the classical coding theorists are also kind of picking up quantum… and I am like one of them,” she notes, reflecting a broader shift among researchers responding to emerging opportunities in quantum technologies. In India, only a handful of groups are active in this niche domain, making expertise scarce. This limited expertise, combined with the national strategic need to develop indigenous quantum hardware under the National Quantum Mission, positions her work as both timely and vital. As the professor emphasizes, “Without error correction, a quantum computer cannot work reliably. Every system will have an error correcting code in it.” 

Teaching Robots Physical Common Sense
The proposal by Dr. Girish Varma (PI) and Dr. Antony Thomas (Co-PI), titled “Learned Estimation of Action Plausibility (LEAP)”, is an innovative robotics project designed to give robots a kind of physical “common sense” – the ability to quickly judge what they can and cannot do in cluttered, unpredictable environments. Today, robots often struggle outside tightly controlled factory settings. As Dr. Varma explains, when faced with a stack of objects, a robot “often doesn’t realize that you cannot directly pick it,” and may freeze because it cannot figure out that “you have to move this first and then really take the second object.” Traditional robotic systems rely on detailed geometric planning, which can take significant computing time. LEAP changes this approach: instead of slowly simulating every possible movement, it uses machine learning to instantly predict whether an action – such as grasping an object – is feasible. If it’s not, the robot can reason about what to move first, skipping impossible actions and acting decisively.
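
The gist of that shift – replacing an expensive geometric simulation with a learned feasibility check – can be sketched in a few lines of Python. Everything below (features, labels, function names) is hypothetical and only illustrates the pattern, not the project’s actual models.

```python
# Illustrative sketch of the core idea: a learned classifier that predicts
# action feasibility instead of running a full geometric simulation each time.
# All names and features here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: scene features (e.g. object pose, clutter around
# the grasp target) paired with feasibility labels from an offline planner.
rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 6))                   # 6 toy scene features
y = (X[:, 0] + 0.5 * X[:, 1] < 0.9).astype(int)   # stand-in feasibility labels

clf = RandomForestClassifier(n_estimators=100).fit(X, y)

def plan_step(candidate_actions):
    """Skip actions the model predicts infeasible; simulate only the rest."""
    features = np.array([a["features"] for a in candidate_actions])
    feasible = clf.predict(features).astype(bool)
    # Only the (few) plausible actions go to the expensive geometric planner.
    return [a for a, ok in zip(candidate_actions, feasible) if ok]

actions = [{"name": f"grasp_{i}", "features": rng.uniform(size=6)}
           for i in range(10)]
print([a["name"] for a in plan_step(actions)])
```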

Preliminary tests show that LEAP is up to 44 times faster than current planning methods while maintaining over 91% accuracy. The project builds on early success with simple shapes like cylinders and cubes and now aims to generalize to objects of arbitrary shapes using 3D “point cloud” data. This breakthrough is crucial for enabling robots to perform “long-horizon tasks” – complex, multi-step activities such as cooking, organizing shelves, or assisting in pharmacies and hospitals. Unlike factory robots that repeat one fixed task, service robots must adapt in real time. “In a complex scenario, you cannot plan for everything that the robot is going to see ahead of time,” Dr. Varma notes. By combining expertise in machine learning, computer vision, and robotics, LEAP addresses a major barrier to building practical service robots, helping them move from rigid automation toward flexible, real-world intelligence.

Bringing Emotion, Rhythm, and Cultural Nuance into AI Voices
Human speech feels natural not just because of the words we use, but because of how we say them. In the ANRF-funded project titled “Controlling prosody for typical spoken language technological systems”, led by Dr. Chiranjeevi Yarra (PI) along with Dr. Parameswari Krishnamurthy and Dr. Rajakrishnan (Co-PIs), the goal is to make machines speak more like humans by teaching them to understand and control prosody – more simply, the rhythm, pitch, tone, and expressiveness in speech. “Typically when we speak, our speaking style is natural, but machines are not so natural,” explains Dr. Yarra. Today’s voice assistants can narrate a story or deliver a joke, “but they fall flat because they lack intonations or delivery techniques.” The team is developing automatic methods to extract emotional and expressive cues from human speech and integrate them into speech-generation systems, so machines can detect whether someone sounds happy, angry, or inquisitive – and thus respond in a more human-like way.
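
As a rough illustration of what “extracting expressive cues” can mean in practice, the Python sketch below pulls two basic prosodic signals – a pitch contour and an energy contour – from a recording using the open-source librosa library. The file name is a placeholder, and the project’s actual feature set will be far richer than these summary statistics.

```python
# A minimal sketch of extracting basic prosodic cues (pitch and energy) from a
# speech recording with librosa. "speech.wav" is a placeholder file name.
import librosa
import numpy as np

y, sr = librosa.load("speech.wav", sr=16000)

# Fundamental frequency (pitch) track via probabilistic YIN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Short-time energy, a rough correlate of stress and emphasis.
energy = librosa.feature.rms(y=y)[0]

# Simple summaries a downstream expressiveness model might consume.
voiced_f0 = f0[~np.isnan(f0)]          # keep only voiced frames
print("mean pitch (Hz):", voiced_f0.mean())
print("pitch range (Hz):", voiced_f0.max() - voiced_f0.min())
print("mean energy:", energy.mean())
```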

The impact could transform voice assistants, spoken dialogue systems, and automated dubbing by making them more engaging and context-aware. A key focus of the project is on Indian languages, including Hindi, Bengali, Telugu, Tamil, and Malayalam, carefully chosen to represent both Indo-Aryan and Dravidian language families since expressive patterns vary widely across such languages. “Whatever exists in Hindi may not be sufficient for Telugu,” Dr. Yarra notes. By building language-specific frameworks for capturing and modeling expressiveness, the project aims to help machines move beyond flat, mechanical speech toward truly natural communication.

Decoding the Unknown in Noisy Communication Systems
In their proposal titled “Blind Identification of Channel Codes: Fundamental Limits, Algorithms and Analysis”, Dr. Arti Yardi (PI) and Dr. Prasad Krishnan (Co-PI) tackle a fundamental challenge in digital communication: how to reliably decode a message when you don’t know the encoding scheme used to send it. In any wired or wireless system – whether it’s a phone call, mobile data, or internet traffic – information travels as bits (zeros and ones) through a noisy channel, where signals can get distorted. To guard against this, transmitters add redundancy using carefully designed “error-correcting codes,” which receivers then decode. Such codes have been in widespread practical use for decades. But what happens when the receiver has no knowledge of the code being used? The project focuses on this exact problem, known as blind identification of channel codes: first identifying the encoding scheme purely from intercepted, noisy signals, and then recovering the original message. While decoding with known codes is well studied, building a systematic mathematical framework for identifying unknown codes remains an open and underdeveloped area.
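
A toy example conveys the flavour of the identification step. If the receiver scores candidate parity-check matrices by how often their checks are satisfied on intercepted words, the true code stands out sharply even under noise. The Python sketch below does this for the classic (7,4) Hamming code; it illustrates the principle only, whereas the project aims at rigorous, general methods and fundamental limits.

```python
# Toy sketch of blind code identification: score candidate parity-check
# matrices by how often their checks hold on noisy intercepted words.
import numpy as np

rng = np.random.default_rng(1)

# Parity-check matrix of the (7,4) Hamming code: the "true" unknown code.
H_true = np.array([[1, 1, 0, 1, 1, 0, 0],
                   [1, 0, 1, 1, 0, 1, 0],
                   [0, 1, 1, 1, 0, 0, 1]])
# A random matrix playing the role of a wrong hypothesis.
H_wrong = rng.integers(0, 2, size=H_true.shape)

# Generator matrix consistent with H_true (systematic form G = [I | P^T]).
P = H_true[:, :4]
G = np.hstack([np.eye(4, dtype=int), P.T])

def received_words(n_words, flip_p=0.02):
    """Encode random messages, then pass them through a bit-flip channel."""
    msgs = rng.integers(0, 2, size=(n_words, 4))
    codewords = msgs @ G % 2
    noise = rng.random(codewords.shape) < flip_p
    return (codewords + noise) % 2

def check_satisfaction(H, words):
    """Fraction of parity checks satisfied across all received words."""
    syndromes = words @ H.T % 2
    return 1.0 - syndromes.mean()

words = received_words(2000)
print("true H  check rate:", check_satisfaction(H_true, words))   # well above 0.5
print("wrong H check rate:", check_satisfaction(H_wrong, words))  # near chance, 0.5
```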

The research is especially relevant in adversarial and secure communication settings. In scenarios such as intelligence gathering or border monitoring, one may intercept transmissions without knowing how they were encoded. “Essentially what we are trying to do is some kind of code breaking,” Dr. Krishnan says, though the research is equally valuable from the defensive side: understanding how such code identification works can help design more secure systems that are resistant to interception. The team aims to develop a rigorous mathematical foundation for this two-stage process of code identification followed by decoding, laying groundwork that future researchers and practitioners can build upon. With potential applications ranging from national security to adaptive wireless systems, the project seeks to strengthen both theoretical capabilities and long-term technological self-reliance in secure communications.

Verifying Untrusted Quantum Computers from a Classical World
As quantum computers become a reality, most users will access them remotely, treating them as powerful but opaque “black boxes.” Dr. Atul Singh Arora (PI), along with Drs. Uttam Singh and Venkata Koppula (Co-PIs, IIT Delhi), is leading an ANRF-funded project titled “Classical Control of Untrusted Quantum Devices,” which asks a fundamental question: How can a purely classical user verify that a remote quantum computer is actually doing what it claims? “We want to design protocols that allow a classical device to control and verify untrusted quantum devices,” explains Dr. Arora. The project focuses on making such verification efficient, so communication does not explode with computation size; identifying the minimal cryptographic assumptions needed to ensure security; and developing ways of quantifying ‘quantumness’, distinguishing devices by capabilities such as entanglement and coherence.

The implications are far-reaching. As early quantum hardware will likely be limited and cloud-based, reliable verification protocols are considered “crucial” to ensuring these systems function as promised. By refining the theory behind quantum verification – building on major advances like Mahadev’s 2018 breakthrough – the project strengthens both secure quantum cloud computing and India’s growing theoretical expertise in the field. Ultimately, it tackles a profound challenge of the quantum age: how to ensure efficiency, security, and correctness when the most powerful computers we build rely on physics where even the mildest measurements change the state of the system.

Building Robust Trilingual ASR for India’s Diverse Domains
Dr. Anil Kumar Vuppula’s project, “Domain-Agnostic to Domain-Aware: Adapting Indian Language ASR for Practical Applications,” seeks to make speech recognition truly work for India’s multilingual reality. According to him, today’s Automatic Speech Recognition (ASR) systems perform reasonably well in controlled settings, but often falter in real-world environments filled with background noise, mixed languages, and domain-specific terminology. This project focuses on building robust ASR models that can accurately understand trilingual speech – Hindi, English, and Telugu – as it is actually spoken in hospitals, classrooms, and other field settings. By moving from general-purpose, one-size-fits-all systems to domain-aware models tailored for specific sectors, the team aims to outperform existing general or bilingual systems in both accuracy and reliability.

A key part of the effort is creating and publicly releasing high-quality, domain-specific speech datasets – beginning with healthcare and potentially expanding to the legal and education sectors – thereby helping fill a major resource gap in Indian language technology. The project will also improve open-source ASR decoders using better-tuned n-gram language models built with toolkits such as KenLM, and develop generative AI tools to create realistic synthetic speech data for domains where real data is scarce. Dr. Vuppula emphasizes that all models and tools will be openly shared to accelerate research and innovation. The broader impact is significant: more accurate transcription of doctor–patient conversations can improve healthcare delivery; lecture and video transcription can enhance accessibility in education; and faster legal transcription can streamline services. By strengthening the foundation of Indian-language ASR, the project aims to make voice technology more inclusive, practical, and ready for real-world deployment.
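
One concrete piece of that pipeline is language-model rescoring. The Python sketch below re-ranks an ASR system’s n-best hypotheses with a KenLM n-gram model via its standard Python binding; the model file, weight, and example hypotheses are hypothetical placeholders, and the project’s actual decoder integration will be more involved.

```python
# A minimal sketch of n-best rescoring with a KenLM n-gram language model.
# The .arpa file name, lm_weight, and hypotheses are placeholders.
import kenlm

lm = kenlm.Model("telugu_medical.arpa")  # hypothetical domain-tuned LM

def rescore(nbest, lm_weight=0.5):
    """Combine acoustic scores with LM scores and pick the best hypothesis."""
    scored = []
    for text, am_score in nbest:
        lm_score = lm.score(text, bos=True, eos=True)  # log10 probability
        scored.append((am_score + lm_weight * lm_score, text))
    return max(scored)[1]

# Hypothetical n-best list: (hypothesis text, acoustic log-score) pairs.
nbest = [("patient ki fever undi", -12.3),
         ("patient key fever under", -12.1)]
print(rescore(nbest))
```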

Multimodal AI for Inclusive Speech in India
Dr. Vineet Gandhi’s ANRF-awarded project, “IndiG Multimodal: Advancing Multimodal Machine Learning for Indian Languages,” aims to bring cutting-edge speech accessibility technologies to Indian languages. His lab has already built powerful systems in English that can convert slurred speech caused by dysarthria into clear speech, transform whispers into normal voice, and even generate speech from lip movements or subtle throat vibrations. “This will be an app for people who can’t speak or struggle to speak. They can communicate in that and the other person will hear a cleaner version of their speech,” he explains. While such foundational AI models exist for English, similar large-scale models for Hindi, Telugu, Tamil, and other Indian languages are rare. The project seeks to bridge that gap by building strong base models in Indian languages and extending these multimodal technologies – combining speech, lip movement, and other signals – to make them widely usable.

The motivation behind the proposal, Dr. Gandhi says, was simple: “To write a useful problem statement – one that targets a critical and largely overlooked space at the intersection of speech technology, accessibility, and linguistic diversity.” Backed by working demos and years of research already validated in English, the team demonstrated that the science is sound; now the goal is to localize and scale it for India’s linguistic diversity. By developing foundational models and deploying accessibility tools in Indian languages, the project has the potential to transform communication for people with speech impairments, enable silent or assistive speech interfaces, and make advanced voice technologies more inclusive. As Dr. Gandhi puts it, the proposal succeeded because of “this large work we have been doing in the lab” – work that now stands ready to expand into India’s multilingual landscape.

Blending Satellites, Ground Data and AI to Decode India’s Hourly Rainfall Patterns
Dr. Shruti Upadhyaya (PI, IIT Hyderabad) and Dr. Kuldeep Kurte (Co-PI, IIIT-H) are collaborating on the project titled “Towards a Deeper Understanding of Diurnal Precipitation over India by Harnessing Models, Observations and Artificial Intelligence Techniques.” The project aims to improve how we measure and understand rainfall patterns across the country, especially how rain varies over the course of a day. “While traditional rain gauges provide accurate ground measurements, they are sparsely distributed and cannot capture rainfall everywhere,” explains Dr. Kurte. Satellite missions such as GPM and INSAT offer a broader, more frequent view, providing precipitation estimates every 30 minutes across large regions. However, these satellite products do not directly measure rainfall; they infer it from cloud properties, which can introduce errors. According to the researchers, the project’s first objective is to systematically benchmark multiple satellite-based rainfall products against ground-based rain gauge data to determine which performs best at the hourly scale, particularly in capturing diurnal (24-hour) variations. This is crucial because extreme events like cloudbursts and flash floods unfold over a matter of hours, not days, and require high-resolution, reliable rainfall data.
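
The benchmarking step itself is conceptually straightforward once satellite and gauge series are paired at the hourly scale. The Python sketch below computes standard verification metrics and the mean diurnal cycle; the file and column names are hypothetical stand-ins for the real datasets.

```python
# Sketch of hourly-scale benchmarking of a satellite rainfall product against
# gauges, including the mean diurnal cycle. File/column names are placeholders.
import pandas as pd
import numpy as np

# Hypothetical paired hourly series: timestamp, gauge (mm/h), satellite (mm/h).
df = pd.read_csv("paired_hourly_rain.csv", parse_dates=["time"])

bias = (df["satellite"] - df["gauge"]).mean()
rmse = np.sqrt(((df["satellite"] - df["gauge"]) ** 2).mean())
corr = df["satellite"].corr(df["gauge"])
print(f"bias={bias:.2f} mm/h  rmse={rmse:.2f} mm/h  corr={corr:.2f}")

# Mean diurnal cycle: average rainfall for each hour of day, product vs gauge,
# revealing whether the product captures, e.g., late-afternoon peaks.
diurnal = df.groupby(df["time"].dt.hour)[["gauge", "satellite"]].mean()
print(diurnal)
```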

Building on this evaluation, the second objective is to use machine learning to reduce biases in the best-performing satellite product. Even the most accurate satellite estimates can overestimate or underestimate rainfall in certain regions due to terrain or atmospheric conditions. By developing bias-correction models trained on historical data, the team aims to narrow the gap between satellite-derived estimates and ground truth observations. “The improved datasets can then support more accurate flash flood simulations and short-duration extreme rainfall predictions, ultimately strengthening disaster preparedness and water resource planning,” notes Dr. Kurte. By combining satellite observations, ground data, and AI-driven corrections, the project seeks to create a more reliable framework for understanding India’s rapidly changing rainfall patterns at the timescale that matters most.
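
A minimal sketch of such a bias-correction model, assuming entirely synthetic data and hypothetical features (raw satellite estimate, terrain elevation, hour of day), might look like this in Python:

```python
# Illustrative bias correction: learn a mapping from satellite estimates plus
# context toward gauge observations. Data and features here are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 10_000
satellite = rng.gamma(2.0, 1.5, n)        # raw satellite estimate (mm/h)
elevation = rng.uniform(0, 2500, n)       # terrain height (m)
hour = rng.integers(0, 24, n)             # hour of day
# Synthetic "truth": the satellite overestimates at high elevation in this toy.
gauge = satellite * (1.0 - 0.0001 * elevation) + rng.normal(0, 0.3, n)

X = np.column_stack([satellite, elevation, hour])
X_tr, X_te, y_tr, y_te = train_test_split(X, gauge, random_state=0)

model = GradientBoostingRegressor().fit(X_tr, y_tr)
corrected = model.predict(X_te)

raw_rmse = np.sqrt(((X_te[:, 0] - y_te) ** 2).mean())
cor_rmse = np.sqrt(((corrected - y_te) ** 2).mean())
print(f"RMSE before correction: {raw_rmse:.3f}, after: {cor_rmse:.3f}")
```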

AI-Driven Precision Drug Discovery for Aggressive Cancers
The project by Dr. Deva Priyakumar (PI) and Dr. Vinod PK (Co-PI), titled “Modern AI/ML-Integrated Physics-Based Protocols for Discovering Selective EphA2 and RTK inhibitors”, aims to develop new drug candidates for cancer by targeting a protein called EphA2, which is overactive in many aggressive tumors. “It currently has no approved targeted therapy,” states Dr. Deva. The team combines advanced artificial intelligence with detailed physics-based computer simulations and laboratory experiments to design and test new molecules that can block this cancer-driving protein. By tightly integrating computational predictions with experimental validation, the project seeks to deliver a small number of highly promising pre-clinical drug candidates and establish a powerful new framework for discovering precision cancer therapies.

Building AI-Powered Bridges for Indian Sign Language Accessibility
India is home to an estimated 63 million deaf individuals, yet only about 500 qualified sign language instructors serve the entire country. Against this stark backdrop, the proposal sets out to tackle a pressing accessibility gap: the lack of robust technology for Indian Sign Language (ISL). Unlike spoken languages, ISL is a rich, “spatio-temporal multimodal language” that relies on hand movements, facial expressions, eye gaze, and body posture working in synchrony. However, while other countries have built large-scale datasets and AI tools for languages like American Sign Language (ASL), ISL remains severely under-resourced. Dr. CV Jawahar (Co-PI) and Dr. Ashutosh Modi’s (PI, IIT Kanpur) project titled “AI for understanding sign languages” aims to change that by creating what could become the largest-ever ISL corpus – targeting over a million aligned video, text, and audio samples – collected through studio recordings, YouTube mining, and a community-driven mobile app. “We propose to investigate methods that analyze the linguistic nuances present in Indian sign languages. We also plan to work on how technology could be integrated to the conversational aspects of sign languages and sign language synthesis,” emphasizes Dr. Jawahar. By combining deep learning with linguistic insights specific to ISL, the team hopes to move beyond rigid, template-based systems toward intelligent, adaptable translation and generation tools.

At its heart, the initiative is about empowerment through co-design. The researchers emphasize that “ISL technologies cannot be created in isolation,” committing to close collaboration with organizations like the ISLRTC (Indian Sign Language Research and Training Centre) and members of the deaf and hard-of-hearing (DHH) community. The project will develop transformer-based translation systems to convert sign to text, generative models to produce sign from speech or writing, and even new evaluation metrics that better capture the linguistic richness of sign language. Ultimately, these advances will power real-time mobile and web applications enabling two-way communication – sign-to-text and text/speech-to-sign – in settings such as hospitals, banks, schools, and railway stations. By releasing datasets, models, and benchmarks openly, the team envisions not just academic progress but a broader social impact: building an inclusive digital ecosystem where ISL users can access education, public services, and employment opportunities on equal terms.
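
For readers curious what a “transformer-based translation system” for sign-to-text might look like structurally, here is a schematic PyTorch sketch: a transformer that encodes a sequence of pose keypoints (hands, face, body) and decodes text tokens. All dimensions, the vocabulary, and the architecture details are placeholders rather than the project’s design, and essentials such as positional encodings are omitted for brevity.

```python
# Schematic sign-to-text model: encode pose-keypoint frames, decode text tokens.
# Dimensions and vocabulary are placeholders; positional encodings are omitted.
import torch
import torch.nn as nn

class SignToText(nn.Module):
    def __init__(self, keypoint_dim=150, d_model=256, vocab_size=8000):
        super().__init__()
        self.pose_proj = nn.Linear(keypoint_dim, d_model)   # keypoints -> embeddings
        self.tok_embed = nn.Embedding(vocab_size, d_model)  # text token embeddings
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8, num_encoder_layers=4,
            num_decoder_layers=4, batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, keypoints, tokens):
        # keypoints: (batch, frames, keypoint_dim); tokens: (batch, text_len)
        src = self.pose_proj(keypoints)
        tgt = self.tok_embed(tokens)
        # Causal mask so each text position attends only to earlier positions.
        mask = self.transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=mask)
        return self.out(hidden)  # (batch, text_len, vocab_size) logits

model = SignToText()
keypoints = torch.randn(2, 120, 150)       # 2 clips, 120 frames of keypoints
tokens = torch.randint(0, 8000, (2, 10))   # 10 text tokens per clip
print(model(keypoints, tokens).shape)      # torch.Size([2, 10, 8000])
```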