Can Machines Learn to Comprehend Scientific Literature?

Donghyeon Park, Yonghwa Choi, Daehan Kim, Minhwan Yu, Seongsoon Kim, Jaewoo Kang

    Research output: Contribution to journal › Article › peer-review

    4 Citations (Scopus)

    Abstract

    To measure the ability of a machine to understand professional-level scientific articles, we construct a scientific question answering task called PaperQA. The PaperQA task is based on more than 80 000 'fill-in-the-blank' type questions on articles from reputed scientific journals such as Nature and Science. We perform fine-grained linguistic analysis and evaluation to compare PaperQA with other conventional question answering (QA) tasks on general literature (e.g., books, news articles, and Wikipedia texts). The results indicate that PaperQA is the most difficult QA task for both humans (lay people) and machines (deep-learning models). Moreover, although humans generally outperform machines on conventional QA tasks, we found that advanced deep-learning models outperform humans by 3%-13% on average on the PaperQA task. The PaperQA dataset used in this paper is publicly available at http://dmis.korea.ac.kr/downloads?id=PaperQA.

    Original language: English
    Article number: 8606080
    Pages (from-to): 16246-16256
    Number of pages: 11
    Journal: IEEE Access
    Volume: 7
    DOIs
    Publication status: Published - 2019

    Keywords

    • Artificial intelligence
    • crowdsourcing
    • data acquisition
    • data analysis
    • data collection
    • data mining
    • data preprocessing
    • knowledge discovery
    • machine intelligence
    • natural language processing
    • social computing
    • text analysis
    • text mining

    ASJC Scopus subject areas

    • Computer Science (all)
    • Materials Science (all)
    • Engineering (all)
