Six student teams from the UG2 Embedded Systems Workshop (ESW) course at IIIT Hyderabad showcased their Qualcomm Innovators Development Kit (QIDK)–based projects at Qualcomm’s Hyderabad campus on 5 December 2025. The ESW course is a core offering for third-semester CSE students.
This showcase was part of the ongoing collaboration between IIITH and Qualcomm under the Qualcomm Edge AI initiative. Through this partnership, students gain hands-on experience with the Qualcomm Innovators Development Kit—a compact edge-AI platform powered by the Snapdragon 8 Gen 2 SoC, integrating a multi-core CPU, Adreno GPU, and Hexagon NPU. The platform enables efficient, low-latency, on-device AI across domains such as computer vision, speech processing, and natural language processing, while preserving data privacy.
During the semester, course instructors Prof. Sachin Chaudhari, Prof. Aftab M. Hussain, and Prof. Deepak Gangadharan worked closely with the Qualcomm team, who regularly mentored students, addressed technical challenges, and participated in project evaluations along with faculty guides and teaching assistants.
The ESW course featured ten open-ended project themes aligned with contemporary research challenges, including Sustainable Agentic AI at the Edge, AI-Powered Tutors, On-Device Image Inpainting, Multimodal Edge AI, Gesture-Controlled Interfaces, and Smart Home Assistants. Two teams were assigned to each theme, encouraging innovation through parallel exploration and comparison.
Following the final ESW evaluations, Qualcomm selected six teams from twenty QIDK project teams to present at their campus. The selected teams—Dominator069, Entropy, World Domination, AIOT, BigHero4, and BitByBit—presented their work to Qualcomm’s technical experts and senior leadership. The IIITH delegation included Prof. Deepak Gangadharan, Prof. Priyesh Shukla, and Srujan Reddy (Teaching Assistant).
1. Gesture-Controlled Android Game
Team Name: Dominator069
Team Members: Navneet Gupta, Rishabh Goyal, Shardul Joshi, Rakshit Aggarwal
Guide: Ravi (Qualcomm)
Faculty Guide: Dr. Aftab M. Hussain
TA: Srujan Reddy
Project Details:
This project developed an Android racing game controlled entirely by hand gestures rather than physical touch input. Hand landmarks were extracted with MediaPipe and fed to a small network of dense layers with a Softmax classifier to recognize gestures. The trained model was converted to TFLite and DLC formats and deployed on the Qualcomm QIDK platform, enabling inference on the CPU, GPU, and Hexagon NPU.
Performance reached 98% accuracy on the original dataset and 85.76% on an expanded dataset. Real-time gameplay was achieved with playable frame rates across devices. The team also extended Qualcomm's MediaPipe pipeline to support dual-hand detection, improving the robustness of gameplay control.
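The landmark-to-gesture head described above can be sketched as follows. This is an illustrative reconstruction, not the team's code: the hidden-layer width, the four-gesture label set, and the random weights are all assumptions; a real model would be trained and then converted to TFLite/DLC.

```python
import numpy as np

# Minimal sketch of a dense + Softmax gesture classifier over MediaPipe
# hand landmarks. All sizes and weights are illustrative assumptions.

NUM_LANDMARKS = 21            # MediaPipe Hands returns 21 landmarks per hand
NUM_FEATURES = NUM_LANDMARKS * 3   # (x, y, z) per landmark
NUM_GESTURES = 4              # hypothetical: steer-left/right, accelerate, brake

rng = np.random.default_rng(0)
W1 = rng.standard_normal((NUM_FEATURES, 32)) * 0.1   # untrained demo weights
b1 = np.zeros(32)
W2 = rng.standard_normal((32, NUM_GESTURES)) * 0.1
b2 = np.zeros(NUM_GESTURES)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify(landmarks):
    """landmarks: (NUM_LANDMARKS, 3) array of normalized coordinates."""
    x = landmarks.reshape(-1)
    h = np.maximum(0.0, x @ W1 + b1)        # ReLU hidden layer
    return softmax(h @ W2 + b2)             # probability per gesture

probs = classify(rng.random((NUM_LANDMARKS, 3)))
print(probs.shape)   # one probability per gesture class
```

In deployment, the same computation runs inside the converted TFLite/DLC model rather than in NumPy, with the landmark extraction feeding it frame by frame.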
2. Posture Detection Using QIDK
Team Name: Entropy
Team Members: Harsha, Anish, Chanakya, Yogansh
Faculty Guide: Dr. Aftab M. Hussain
TA: Rahul Kumar
Project Details:
This project focused on creating a privacy-first, on-device posture recognition system to monitor unhealthy sitting behaviors such as slouching, leaning direction, and cross-leg posture. Using MediaPipe Pose Landmarker, 33 body keypoints are extracted from live camera feeds. Geometric features are computed and normalized, then classified with custom TFLite models running on the Qualcomm Hexagon NPU via NNAPI.
A dataset of 50,000 manually labeled frames was created. The system achieved an overall accuracy of about 88%, with class-wise precision up to 94% for slouching detection. Temporal filtering across frames improves stability, and performance benchmarks show the system runs roughly 8× faster on the NPU compared to CPU while maintaining energy efficiency and privacy.
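One of the geometric features mentioned above can be illustrated with a short sketch. This is not the team's implementation: the specific feature (torso lean from vertical) and the thresholds a classifier would apply are assumptions; only the landmark indices follow MediaPipe Pose conventions (11/12 shoulders, 23/24 hips).

```python
import math

def lean_angle(landmarks):
    """Angle (degrees) of the shoulder-midpoint -> hip-midpoint line from
    vertical; ~0 means upright, larger values suggest leaning/slouching.
    landmarks: dict mapping MediaPipe Pose index -> (x, y) in image coords."""
    sx = (landmarks[11][0] + landmarks[12][0]) / 2   # shoulder midpoint
    sy = (landmarks[11][1] + landmarks[12][1]) / 2
    hx = (landmarks[23][0] + landmarks[24][0]) / 2   # hip midpoint
    hy = (landmarks[23][1] + landmarks[24][1]) / 2
    # image y grows downward, so hips sit below shoulders (hy > sy)
    return math.degrees(math.atan2(abs(hx - sx), abs(hy - sy)))

# Upright pose: shoulders directly above hips
upright = {11: (0.45, 0.3), 12: (0.55, 0.3), 23: (0.45, 0.6), 24: (0.55, 0.6)}
print(round(lean_angle(upright), 1))   # 0.0

# Sideways lean: shoulder midpoint shifted relative to the hips
leaning = {11: (0.75, 0.3), 12: (0.85, 0.3), 23: (0.45, 0.6), 24: (0.55, 0.6)}
print(round(lean_angle(leaning), 1))   # 45.0
```

In the full system, several such normalized features per frame would feed the TFLite classifier, with temporal filtering smoothing the per-frame labels.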
3. AI Image Inpainting on the Edge
Team Name: World Domination
Team Members: Anivarth, Vikesh, Srinivas, Kaushick
Faculty Guide: Dr. Deepak Gangadharan
TA: Udith Krishna Nair
Project Details:
This project demonstrated AI-based image inpainting directly on edge hardware using the Qualcomm Innovators Development Kit (QIDK). Multiple state-of-the-art inpainting models, including AOT-GAN, LaMa (Dilated), and MI-GAN, were deployed using a heterogeneous execution pipeline.
Inference leveraged the QNN NPU delegate, while unsupported operations were offloaded to GPU or CPU as required. The workflow involved preprocessing (resizing, normalization), on-device inference, and postprocessing.
A small benchmark dataset of approximately 100 images with input masks and target references was used. Outputs were evaluated using SSIM, PSNR, LPIPS, and FID metrics. The project highlighted how NPU acceleration enables low-latency generative modeling on embedded systems without dependence on cloud resources.
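Of the metrics listed, PSNR is simple enough to sketch directly; SSIM, LPIPS, and FID require dedicated libraries (e.g. scikit-image, lpips) and are omitted here. The images and the single-pixel perturbation below are purely illustrative.

```python
import numpy as np

def psnr(reference, output, max_val=255.0):
    """Peak signal-to-noise ratio between a target image and an inpainted
    output, both uint8 arrays of the same shape. Higher is better."""
    diff = reference.astype(np.float64) - output.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")              # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((64, 64, 3), 128, dtype=np.uint8)   # toy "target reference"
out = ref.copy()
out[0, 0, 0] += 10                                # one slightly wrong pixel
print(round(psnr(ref, out), 1))   # 69.0
```

In the benchmark described above, each model's output for the ~100 masked inputs would be scored this way against its target reference, alongside the perceptual metrics.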
4. Sustainable Multimodal AI on the Edge
Team Name: AIOT
Team Members: Aryama, Sanjith, Sushil, Yashas
Faculty Guide: Dr. Priyesh Shukla
TA: Aditya Shankar
Project Details:
The AIOT team explored the deployment of Vision-Language Models (VLMs) directly on NPUs for real-time edge applications such as robotics, smart glasses, and medical imaging. The objective was to achieve low-latency and sustainable inference on battery-powered devices.
The methodology involved ONNX graph surgery on NanoVLM models to rewrite operations unsupported by the NPU and custom inference-code optimization for Phi-3.5 Vision. Benchmarks showed major performance gains, with NPU inference providing approximately 3× speedup compared to TFLite delegate paths, and significant improvement compared to CPU-based inference.
This work demonstrated that careful operator graph optimization allows multimodal reasoning models to run efficiently on embedded NPUs with practical real-time performance.
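The "graph surgery" idea can be illustrated with a toy sketch. This is not ONNX itself and not the team's code: real rewrites operate on an onnx.ModelProto (often with tools such as onnx-graphsurgeon), and the operator names, the choice of Erf as the unsupported op, and the Tanh substitution are all illustrative assumptions.

```python
# Toy model graph: a list of (op_type, inputs, outputs) tuples standing in
# for ONNX nodes. Tensor names (x, w0, t0, ...) are hypothetical.
graph = [
    ("Conv",   ["x", "w0"],  ["t0"]),
    ("Erf",    ["t0"],       ["t1"]),   # assume Erf is unsupported on the NPU
    ("MatMul", ["t1", "w1"], ["y"]),
]

def rewrite_unsupported(nodes, unsupported=frozenset({"Erf"})):
    """Walk the graph and swap NPU-unsupported operators for supported
    stand-ins, keeping the surrounding tensor wiring intact."""
    patched = []
    for op, ins, outs in nodes:
        if op in unsupported:
            # Replace with a Tanh-based approximation (assumed supported);
            # a real rewrite would typically splice in several nodes.
            patched.append(("Tanh", ins, outs))
        else:
            patched.append((op, ins, outs))
    return patched

patched = rewrite_unsupported(graph)
print([op for op, _, _ in patched])   # ['Conv', 'Tanh', 'MatMul']
```

The key property, preserved here, is that inputs and outputs of the replaced node are untouched, so the rest of the graph remains valid after surgery.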
5. ATOM – Your Personalized AI Tutor
Team Name: BigHero4
Team Members: Bhaskar Itikela, Akshit, Kausheya Roy, Akshith Kandagatla
Faculty Guide: Dr. Kartik Vaidhyanathan
TA: Udith Krishna Nair
Project Details:
ATOM is a fully on-device personalized AI tutoring application. It ingests learning material (PDFs and images), generates quizzes, flashcards, and summaries, and provides contextual Q&A. The system ensures full data privacy by running all inference on-device using Qualcomm's stack.
The application uses a chunk-and-score pipeline for document processing, dynamic context truncation, and summarization. Model inference is conducted via direct NPU access using Genie (C++ wrapper + JNI bridge) with streaming token output powered by the HTP backend.
The model supports up to 4,096 tokens of context using mixed-precision quantization (w4/w8 weights with fp16 activations). Benchmarks reported a 3000% improvement in energy efficiency, 4× higher token throughput, and a 13.6× reduction in time-to-first-token on the NPU compared with CPU and GPU baselines.
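A chunk-and-score document pipeline of the kind described above can be sketched briefly. This is a hedged reconstruction, not ATOM's implementation: the fixed word-count chunking, the keyword-overlap scoring, and the sample text are all illustrative assumptions; a production system would likely use embeddings and smarter truncation.

```python
def chunk(text, size=200):
    """Split extracted document text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(chunk_text, query):
    """Rank a chunk by word overlap with the user's question."""
    return len(set(chunk_text.lower().split()) & set(query.lower().split()))

def top_chunks(text, query, k=2, size=200):
    """Return the k best-scoring chunks to pack into the model's context."""
    return sorted(chunk(text, size),
                  key=lambda c: score(c, query), reverse=True)[:k]

doc = ("photosynthesis converts light energy into "
       "chemical energy plants store glucose "
       "roots absorb water")
print(top_chunks(doc, "how do plants store glucose", k=1, size=5))
# ['chemical energy plants store glucose']
```

The selected chunks are what gets truncated into the 4,096-token context window before inference, which is why the scoring step matters for answer quality.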
6. AI-Powered Tutor
Team Name: BitByBit
Team Members: Anushka Sinha, Kartik Gupta, Pariza, Pranjal Garg
Faculty Guide: Dr. Kartik Vaidhyanathan
TA: Udith Krishna Nair
Project Details:
The BitByBit project delivered an offline, privacy-preserving AI tutoring app operating fully on the Snapdragon NPU. The system processes PDFs and images locally, generates quizzes and flashcards, enables contextual chat-based learning, and provides analytics and session management without any cloud dependency.
User requests are passed from the Android UI through a Java bridge (ProcessBuilder) into the Genie text-to-text pipeline, which runs a quantized LLaMA 3.2 3B Instruct model on-device. Output tokens are streamed back to the UI in real time using Kotlin.
Model deployment involved quantization into .bin and .so artifacts (~2.5 GB total). Performance metrics showed:
- NPU: time-to-first-token (TTFT) 285–780 ms, 18–23 tokens/sec
- CPU: TTFT 3–8 s, 8–15 tokens/sec
This confirmed the substantial benefits of NPU acceleration for real-time mobile education applications.
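The streaming pattern described above, where a bridge launches the inference pipeline as a subprocess and forwards tokens as they arrive, can be shown with a Python analogue. This is illustrative only: the real app uses Java's ProcessBuilder and the Genie binary, while here a trivial child process stands in for the model.

```python
import subprocess
import sys

# Launch a stand-in "pipeline" process; in the real app this would be the
# Genie text-to-text binary producing model tokens on stdout.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('Hello'); print('world')"],
    stdout=subprocess.PIPE, text=True, bufsize=1,
)

tokens = []
for line in proc.stdout:           # consume output incrementally, not at end
    tokens.append(line.strip())    # a real app would update the UI here
proc.wait()
print(tokens)   # ['Hello', 'world']
```

Reading the pipe line by line is what lets the UI show partial answers within the sub-second time-to-first-token reported above, instead of blocking until generation finishes.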
Hosted by Ravi Kumar Neti, Venu Raidu, and Sumanth Thirukkovalluru from Qualcomm, the three-hour showcase enabled in-depth technical discussions and constructive feedback. Students found the interaction highly valuable and have incorporated the suggestions received to further refine their projects. The event offered meaningful industry exposure and practical insights into deploying AI solutions on edge devices.
December 2025