TY - GEN
T1 - Learn to resolve conversational dependency
T2 - Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP 2021
AU - Kim, Gangwoo
AU - Kim, Hyunjae
AU - Park, Jungsoo
AU - Kang, Jaewoo
N1 - Funding Information:
We thank Sean S. Yi, Miyoung Ko, and Jinhyuk Lee for providing valuable comments and feedback. This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ICT Creative Consilience program (IITP-2021-2020-0-01819) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation). This research was also supported by the National Research Foundation of Korea (NRF-2020R1A2C3010638).
Publisher Copyright:
© 2021 Association for Computational Linguistics
PY - 2021
Y1 - 2021
N2 - One of the main challenges in conversational question answering (CQA) is to resolve conversational dependencies, such as anaphora and ellipsis. However, existing approaches do not explicitly train QA models on how to resolve these dependencies, and thus the models are limited in understanding human dialogues. In this paper, we propose a novel framework, EXCORD (Explicit guidance on how to resolve Conversational Dependency), to enhance the ability of QA models to comprehend conversational context. EXCORD first generates self-contained questions that can be understood without the conversation history, then trains a QA model on pairs of original and self-contained questions using a consistency-based regularizer. In our experiments, we demonstrate that EXCORD significantly improves QA models' performance by up to 1.2 F1 on QuAC (Choi et al., 2018) and 5.2 F1 on CANARD (Elgohary et al., 2019), while addressing the limitations of the existing approaches.
AB - One of the main challenges in conversational question answering (CQA) is to resolve conversational dependencies, such as anaphora and ellipsis. However, existing approaches do not explicitly train QA models on how to resolve these dependencies, and thus the models are limited in understanding human dialogues. In this paper, we propose a novel framework, EXCORD (Explicit guidance on how to resolve Conversational Dependency), to enhance the ability of QA models to comprehend conversational context. EXCORD first generates self-contained questions that can be understood without the conversation history, then trains a QA model on pairs of original and self-contained questions using a consistency-based regularizer. In our experiments, we demonstrate that EXCORD significantly improves QA models' performance by up to 1.2 F1 on QuAC (Choi et al., 2018) and 5.2 F1 on CANARD (Elgohary et al., 2019), while addressing the limitations of the existing approaches.
UR - http://www.scopus.com/inward/record.url?scp=85118931998&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85118931998
T3 - ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference
SP - 6130
EP - 6141
BT - ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference
PB - Association for Computational Linguistics (ACL)
Y2 - 1 August 2021 through 6 August 2021
ER -