Conversational question answering (ConvQA) over knowledge graphs (KGs) involves answering multi-turn natural language questions about information contained in a KG. State-of-the-art ConvQA methods often struggle with inexplicit question-answer pairs. These inputs are easy for humans to understand given a conversation history, but hard for a machine to interpret, which can degrade ConvQA performance. To address this problem, we propose a reinforcement learning (RL) based model, CornNet, which uses question reformulations generated by large language models (LLMs) to improve ConvQA performance. CornNet adopts a teacher-student architecture in which a teacher model learns question representations from human-written reformulations, and a student model learns to mimic the teacher's output using reformulations generated by LLMs. The learned question representation is then used by an RL model to locate the correct answer in a KG. Extensive experimental results show that CornNet outperforms state-of-the-art ConvQA models.

Introduction

In the field of conversational question answering (ConvQA), one of the challenges faced by state-of-the-art methods is dealing with inexplicit question-answer pairs. While humans can easily understand these pairs with the help of conversational context, machines find it difficult to interpret them accurately, leading to a decline in ConvQA performance. In this article, we introduce a reinforcement learning (RL) based model called CornNet that tackles this problem by leveraging question reformulations generated by large language models (LLMs).

The Teacher-Student Architecture

CornNet employs a teacher-student architecture to improve ConvQA performance. The teacher model learns question representations using human-authored reformulations, while the student model attempts to mimic the output of the teacher model using reformulations generated by LLMs. This approach allows for a more comprehensive understanding of different ways questions can be asked and interpreted, enhancing the ability to answer inexplicit question-answer pairs.
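The distillation idea can be illustrated with a minimal sketch. The encoders below are toy linear maps and the embeddings are random stand-ins (the article does not specify CornNet's actual encoder architecture or loss); the sketch only shows the mechanism of a student, fed an LLM-generated reformulation, being trained to match a frozen teacher's representation of a human-written reformulation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical stand-ins: embeddings for one question's human-written
# reformulation (teacher input) and its LLM-generated reformulation
# (student input). Real inputs would come from a text encoder.
human_reform = rng.normal(size=dim)
llm_reform = rng.normal(size=dim)

# Toy linear encoders; these weights are purely illustrative.
W_teacher = rng.normal(size=(dim, dim)) * 0.1
W_student = rng.normal(size=(dim, dim)) * 0.1

target = W_teacher @ human_reform  # teacher's (frozen) representation

# Distill: gradient descent on the MSE between the student's output on
# the LLM reformulation and the teacher's output on the human one.
lr = 0.05
for _ in range(500):
    pred = W_student @ llm_reform
    W_student -= lr * np.outer(pred - target, llm_reform)  # d(0.5*MSE)/dW

final_err = float(np.mean((W_student @ llm_reform - target) ** 2))
```

After training, the student's representation of the LLM reformulation is close to the teacher's representation of the human one, which is what lets the student stand in for the teacher at inference time, when human-written reformulations are unavailable.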

Integration of Reinforcement Learning and Knowledge Graphs

Once the teacher model has learned question representations, CornNet utilizes a reinforcement learning (RL) model to locate the correct answer within a knowledge graph (KG). By considering the learned question representation, the RL model can effectively navigate the KG to identify relevant information and provide accurate answers. This integration of reinforcement learning with knowledge graphs showcases the multidisciplinary nature of CornNet, combining natural language processing with graph-based information retrieval and reasoning.
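As a rough illustration of the RL component, the sketch below runs REINFORCE on a three-edge toy KG: a softmax policy scores the outgoing relations of an entity against a question representation and is rewarded when its chosen edge lands on the gold answer. Every entity, relation, embedding, and hyperparameter here is invented for illustration; this is not CornNet's actual policy network or reward design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy KG fragment: head entity -> outgoing (relation, tail) edges.
kg = {
    "Inception": [
        ("directed_by", "Christopher_Nolan"),
        ("release_year", "2010"),
        ("genre", "science_fiction"),
    ],
}
relations = [r for r, _ in kg["Inception"]]
n = len(relations)

q_rep = rng.normal(size=n)   # stands in for the learned question representation
theta = np.zeros((n, n))     # policy parameters (one row per relation)

def policy(q):
    # Softmax over relation scores; with one-hot relation features,
    # score_i is simply row i of theta applied to q.
    scores = theta @ q
    e = np.exp(scores - scores.max())
    return e / e.sum()

# REINFORCE: sample an edge, reward 1 iff it reaches the gold answer,
# then push the policy toward rewarded choices.
gold, lr = "2010", 0.5
for _ in range(300):
    probs = policy(q_rep)
    a = rng.choice(n, p=probs)
    reward = 1.0 if kg["Inception"][a][1] == gold else 0.0
    grad_log = -probs[:, None] * q_rep[None, :]  # grad of log pi(a|q)
    grad_log[a] += q_rep
    theta += lr * reward * grad_log

best = relations[int(np.argmax(policy(q_rep)))]
```

After training, the policy concentrates on the relation whose tail answers the question ("release_year" in this toy setup), mirroring how an RL agent guided by the question representation learns which KG edges lead to correct answers.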

Experimental Results and Performance

Extensive experiments demonstrate that CornNet outperforms state-of-the-art ConvQA models. By utilizing question reformulations generated by LLMs, CornNet significantly improves its ability to handle inexplicit question-answer pairs. The combination of the teacher-student architecture, reinforcement learning, and knowledge graph integration leads to enhanced performance levels and more accurate answers.

Conclusion

CornNet presents a new approach to the challenge of handling inexplicit question-answer pairs in ConvQA. By leveraging question reformulations generated by large language models and incorporating reinforcement learning and knowledge graph integration, CornNet achieves superior performance compared to existing methods. The multidisciplinary nature of CornNet highlights the importance of combining fields such as natural language processing, machine learning, and information retrieval to tackle complex problems in question answering. As language models and knowledge graphs continue to advance, we can expect further improvements in the performance and capabilities of ConvQA systems.
