[month] [year]

TALLIP-Journal

The research paper "Am I a Resource-Poor Language? Datasets, Embeddings, Models and Analysis for four different NLP tasks in Telugu Language" by Prof. Radhika Mamidi and her students – Mounika Marreddy; Lakshmi Sireesha Vakada; Subba Reddy Oota, Research Assistant at IIITH; and Venkata Charan Chinni, a B.Tech alumnus of IIITH – has been accepted in the TALLIP Journal (ACM Transactions on Asian and Low-Resource Language Information Processing) in April. The research work, as explained by the authors:

Due to the lack of large annotated corpora, many resource-poor Indian languages struggle to reap the benefits of recent deep feature representations in Natural Language Processing (NLP).

Moreover, adopting existing language models trained on large English corpora for Indian languages is often limited by data availability, rich morphological variation, and syntactic and semantic differences. In this paper, we explore representations ranging from traditional to recent efficient ones to overcome the challenges of the low-resource language Telugu. In particular, our main objective is to mitigate the low-resource problem for Telugu.

Overall, we present several contributions for a resource-poor language, viz. Telugu: (i) a large annotated dataset (35,142 sentences per task) for multiple NLP tasks, namely sentiment analysis, emotion identification, hate-speech detection, and sarcasm detection; (ii) lexicons for sentiment, emotion, and hate-speech that improve the efficiency of the models; (iii) pretrained word and sentence embeddings; and (iv) several pretrained language models for Telugu, namely ELMo-Te, BERT-Te, RoBERTa-Te, ALBERT-Te, and DistilBERT-Te, trained on a large Telugu corpus of 8,015,588 sentences (1,637,408 sentences from Telugu Wikipedia and 6,378,180 sentences crawled from different Telugu websites).

Further, we show that these representations significantly improve performance on the four NLP tasks and present benchmark results for Telugu. We argue that our pretrained embeddings are competitive with or better than the existing multilingual pretrained models mBERT, XLM-R, and IndicBERT.

Lastly, fine-tuning the pretrained models yields higher performance than linear probing on the four NLP tasks, with the following F1-scores: Sentiment (68.72), Emotion (58.04), Hate-Speech (64.27), and Sarcasm (77.93).
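For readers unfamiliar with the distinction, linear probing trains only a lightweight classifier on top of a frozen pretrained encoder, whereas fine-tuning also updates the encoder weights. The sketch below illustrates the two regimes with the Hugging Face transformers API; it is not the authors' training code, and the model identifier "ltrctelugu/bert-te" is a placeholder rather than the exact name of a released checkpoint.

# Minimal sketch (editorial illustration, not the paper's code) contrasting
# linear probing with full fine-tuning of a pretrained Telugu encoder.
# "ltrctelugu/bert-te" is a placeholder; see https://huggingface.co/ltrctelugu
# for the actual published model names.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "ltrctelugu/bert-te"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=4)

LINEAR_PROBE = True  # True: train only the classification head; False: fine-tune everything

if LINEAR_PROBE:
    # Freeze the pretrained encoder so only the classification head receives gradients.
    for param in model.base_model.parameters():
        param.requires_grad = False

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=2e-5
)

# One illustrative training step on a toy Telugu batch.
batch = tokenizer(["ఇది ఒక ఉదాహరణ వాక్యం."], return_tensors="pt", padding=True)
labels = torch.tensor([0])
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()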

We also experiment on publicly available Telugu datasets (Named Entity Recognition, Article Genre Classification, and Sentiment Analysis) and find that our Telugu pretrained language models (BERT-Te and RoBERTa-Te) outperform the state-of-the-art systems except on the sentiment task.

We open-source our corpus, the four datasets, lexicons, embeddings, and code at https://github.com/Cha14ran/DREAM-T. The pretrained Transformer models for Telugu are available at https://huggingface.co/ltrctelugu.
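As a quick-start illustration (not taken from the released repository), the following sketch loads one of the Telugu checkpoints from the ltrctelugu organization and derives a sentence embedding by mean-pooling the contextual token vectors; the repository name used here is an assumption, so consult https://huggingface.co/ltrctelugu for the models actually published.

# Minimal sketch: extracting a sentence embedding from one of the released
# Telugu models. The repository name below is assumed for illustration only.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "ltrctelugu/bert-te"  # assumed name; check the organization page

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

sentence = "తెలుగు ఒక ద్రావిడ భాష."  # "Telugu is a Dravidian language."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token representations into a single sentence embedding.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # e.g. torch.Size([1, 768]) for a BERT-base-sized encoder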
