Konigari Rachna received her MS in Computational Linguistics (CL). Her research was supervised by Dr. Manish Shrivastava. Here's a summary of her thesis, Towards Interactive Responses in Conversational Systems:
Personal assistants like Amazon's Alexa, Google Home, and Apple's Siri have become part and parcel of our everyday lives. These assistants not only answer user queries but also aim to emulate human conversation by generating interactive responses. In this thesis, we look at two ways of adding this interactive nature to open-domain chatbots. We observe that topic diversion occurs frequently with engaging open-domain dialogue systems like virtual assistants, and the balance between staying on topic and rectifying topic drift is important for a good collaborative system.
To resolve this, we segment the conversation in a flat hierarchical manner with novel annotation tags: major topic, minor topic, and off-topic. We present a model that uses a fine-tuned XLNet-base to classify utterances as pertaining to the major topic of conversation or not, with a precision of 84%. We propose a preliminary study classifying utterances into their respective annotation tags, which further extends into a system initiative for diversion rectification. Classifying utterances by whether they belong to the major theme would also help identify relevant sentences for tasks like dialogue summarization and information extraction from conversations. In addition, we conducted a case study in which a system initiative is emulated as a response to the user going off-topic, mimicking the mixed initiative common in natural human-human conversation. An important feature of our topic-shift detection system is that it does not operate with any pre-defined topic set; it detects shifts dynamically, as expected in interactions between two humans.
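To give a feel for the task, here is a minimal sketch of dynamic topic-shift tagging with no pre-defined topic set. Note this is an illustrative stand-in only: the thesis fine-tunes XLNet-base for this classification, whereas the toy version below uses a simple lexical-overlap heuristic, and the `drift_threshold` value and example dialogue are invented for the demonstration.

```python
# Illustrative sketch: tag utterances as "major" or "off-topic" relative to a
# topic vocabulary that grows as the dialogue unfolds (no pre-defined topics).
# The thesis uses a fine-tuned XLNet-base classifier, not this heuristic.

def tokenize(utterance):
    """Lowercase word tokens, with surrounding punctuation stripped."""
    return {w.strip(".,?!").lower() for w in utterance.split()}

def tag_utterances(dialogue, drift_threshold=0.2):
    """Tag each utterance by its lexical overlap with the running
    major-topic vocabulary; low overlap signals a topic shift."""
    major_vocab = set()
    tags = []
    for utt in dialogue:
        tokens = tokenize(utt)
        if not major_vocab:
            overlap = 1.0          # first utterance seeds the major topic
        else:
            overlap = len(tokens & major_vocab) / max(len(tokens), 1)
        if overlap >= drift_threshold:
            tags.append("major")
            major_vocab |= tokens  # grow the topic vocabulary dynamically
        else:
            tags.append("off-topic")
    return tags

dialogue = [
    "I want to book a flight to Delhi next week",
    "Are morning flights to Delhi cheaper",
    "By the way did you watch the cricket match yesterday",
]
print(tag_utterances(dialogue))  # -> ['major', 'major', 'off-topic']
```

Once an utterance is tagged off-topic, a system initiative (e.g. "Shall we get back to your flight booking?") can be triggered as the rectification response described above.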
In addition, we look at automatic question generation, which increases the interactive nature and conversational ability of a personal assistant. This thesis presents a system that automatically generates multiple natural-language questions from complex English sentences using relative pronouns and relative adverbs. Our system is syntax-based, runs on the dependency parse of a single-sentence input, and achieves high accuracy in terms of syntactic correctness, semantic adequacy, fluency, and uniqueness. A key advantage of our system over other rule-based approaches is that we nearly eliminate the chance of a wrong wh-word appearing in the generated question, by fetching the requisite wh-word from the input sentence itself. Depending on the input, we generate both factoid and descriptive questions. Our exploitation of wh-pronouns and wh-adverbs to generate questions is novel in the Automatic Question Generation task.
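The core idea, fetching the wh-word from the sentence itself, can be sketched in a few lines. This is not the thesis system: the thesis operates on dependency parses, while the toy version below handles only the comma-delimited "X, who/which/where ..., Y" pattern with a regular expression, and the example sentence is invented for illustration.

```python
import re

# Illustrative sketch: lift the wh-word and its relative clause out of a
# complex sentence to form a question. The thesis system uses dependency
# parses; this toy handles only the "<head>, <wh-word> <clause>, <rest>" shape.

WH_WORDS = r"who|whom|whose|which|where|when"

def generate_question(sentence):
    """Extract a comma-delimited relative clause and promote its own
    wh-word to the front of a standalone question."""
    m = re.search(rf",\s*({WH_WORDS})\s+([^,]+),", sentence, re.IGNORECASE)
    if not m:
        return None  # no recognizable relative clause
    wh, clause = m.group(1), m.group(2)
    return f"{wh.capitalize()} {clause.strip()}?"

print(generate_question(
    "Einstein, who developed the theory of relativity, won the Nobel Prize."))
# -> Who developed the theory of relativity?
```

Because the wh-word is copied straight from the input rather than chosen by the generator, a mismatch between the question word and the answer type is largely ruled out by construction.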
In this thesis, we explore the intersection of these two fields, which helps us not only generate interactive responses but also guide the user towards their goal.