IJCNLP-AACL 2023

Dr. Manish Shrivastava and his students Pavan Baswani (MS CSE) and Ananya Mukherjee (PhD CSE) presented the shared task paper “LTRC_IIITH’s 2023 Submission for Prompting Large Language Models as Explainable Metrics Task” at the EVAL4NLP Workshop at IJCNLP-AACL 2023 in Bali, Indonesia, on 1 November.

Here is the summary of the research work as explained by the authors:

In this report, we share our contribution to the Eval4NLP Shared Task titled “Prompting Large Language Models as Explainable Metrics.” We build our prompts with a primary focus on effective prompting strategies, score aggregation, and explainability for LLM-based metrics. We participated in the track for smaller models by submitting the scores along with their explanations. According to the Kendall correlation scores on the leaderboard, our MT evaluation submission ranks second-best, while our summarization evaluation submission ranks fourth, with only a 0.06 difference from the leading submission.
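Since the leaderboard ranks submissions by Kendall correlation between metric scores and human judgments, here is a minimal, illustrative sketch of how Kendall's tau can be computed (the tau-a variant, assuming no ties); this is not the authors' or the shared task's actual evaluation code:

```python
from itertools import combinations

def kendall_tau(metric_scores, human_scores):
    """Kendall tau-a: (concordant - discordant) pairs over all pairs.
    A pair is concordant if both score lists rank the two items in the
    same order, discordant if they disagree. Assumes no tied values."""
    assert len(metric_scores) == len(human_scores)
    n = len(metric_scores)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        # Same sign => same relative ordering in both lists
        s = (metric_scores[i] - metric_scores[j]) * (human_scores[i] - human_scores[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

A metric whose scores order translations exactly as humans do yields tau = 1.0; a fully reversed ordering yields -1.0.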

November 2023