TY - GEN
T1 - Multi2OIE: Multilingual Open Information Extraction Based on Multi-Head Attention with BERT
T2 - Findings of the Association for Computational Linguistics: EMNLP 2020
AU - Ro, Youngbin
AU - Lee, Yukyung
AU - Kang, Pilsung
N1 - Publisher Copyright:
© 2020 Association for Computational Linguistics
PY - 2020
Y1 - 2020
AB - In this paper, we propose Multi2OIE, which performs open information extraction (open IE) by combining BERT (Devlin et al., 2019) with multi-head attention blocks (Vaswani et al., 2017). Our model is a sequence-labeling system with an efficient and effective argument extraction method. We use a query, key, and value setting inspired by the Multimodal Transformer (Tsai et al., 2019) to replace the previously used bidirectional long short-term memory architecture with multi-head attention. Multi2OIE outperforms existing sequence-labeling systems with high computational efficiency on two benchmark evaluation datasets, Re-OIE2016 and CaRB. Additionally, we apply the proposed method to multilingual open IE using multilingual BERT. Experimental results on new benchmark datasets introduced for two languages (Spanish and Portuguese) demonstrate that our model outperforms other multilingual systems without training data for the target languages.
UR - http://www.scopus.com/inward/record.url?scp=85104295580&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85104295580
T3 - Findings of the Association for Computational Linguistics: EMNLP 2020
SP - 1107
EP - 1117
BT - Findings of the Association for Computational Linguistics: EMNLP 2020
PB - Association for Computational Linguistics (ACL)
Y2 - 16 November 2020 through 20 November 2020
ER -