Sahithi Kukkala, an MS student supervised by Dr. Ravi Kiran Sarvadevabhatla and co-supervised by Dr. Mitesh Khapra (IIT Madras – AI4Bharat), and Oikantik Nath, a Ph.D. scholar at IIT Madras, received the Best Student Paper Runner-Up Award at ICDAR 2025 for their research on IndicDLP: A Foundational Dataset for Multi-Lingual and Multi-Domain Document Layout Parsing. ICDAR, the flagship document understanding conference, was held in Wuhan, Hubei, China, from 16–21 September.
This work is an important contribution to advancing Indic Document AI technologies, aligned with the vision of BHASHINI (Digital India BHASHINI Division), the flagship project under the National Language Translation Mission supported by the Ministry of Electronics and Information Technology (MeitY), Government of India.
Here is a summary of the research work, as explained by the authors:
Document layout analysis is essential for downstream tasks such as information retrieval, extraction, OCR, and digitization. However, existing large-scale datasets like PubLayNet and DocBank lack fine-grained region labels and multilingual diversity, making them insufficient for representing complex document layouts. In contrast, human-annotated datasets such as M6Doc and D4LA offer richer labels and greater domain diversity, but are too small to train robust models and lack adequate multilingual coverage. This gap is especially pronounced for Indic documents, which encompass diverse scripts yet remain underrepresented in current datasets, further limiting progress in this space. To address these shortcomings, we introduce IndicDLP, a large-scale foundational document layout dataset spanning 11 representative Indic languages alongside English and 12 common document domains. Additionally, we curate UED-mini, a dataset derived from DocLayNet and M6Doc, to enhance pretraining and provide a solid foundation for Indic layout models. Our experiments demonstrate that fine-tuning existing English models on IndicDLP significantly boosts performance, validating its effectiveness. Moreover, models trained on IndicDLP generalize well beyond Indic layouts, making it a valuable resource for document digitization. This work bridges gaps in scale, diversity, and annotation granularity, driving inclusive and efficient document understanding.
Full paper: https://indicdlp.github.io/
September 2025