ISSN 0253-2778

CN 34-1054/N

Open Access | JUSTC | Information Science and Technology | 15 August 2023

QGAE: an end-to-end answer-agnostic question generation model for generating question-answer pairs

Cite this: https://doi.org/10.52396/JUSTC-2023-0002
  • Author Bios:

    Linfeng Li is currently pursuing a master’s degree at the School of Cyber Science and Technology, University of Science and Technology of China. His research interest is natural language processing.

    Zhendong Mao received his Ph.D. degree in Computer Application Technology from the Institute of Computing Technology, Chinese Academy of Sciences (CAS) in 2014. From 2014 to 2018, he was an Assistant Professor at the Institute of Information Engineering, CAS. He is currently a Professor at the School of Cyber Science and Technology, University of Science and Technology of China. His research interests include computer vision, natural language processing, and cross-modal understanding.

  • Corresponding author: E-mail: zdmao@ustc.edu.cn
  • Received Date: 08 January 2023
  • Accepted Date: 14 April 2023
  • Available Online: 15 August 2023
  • Question generation aims to produce meaningful and fluent questions, and it can compensate for the scarcity of question-answer annotated corpora by augmenting the available data. Taking unannotated text, with answers optionally supplied, as input, question generation falls into two types according to whether answers are provided: answer-aware and answer-agnostic. Generating questions from given answers is already challenging; generating high-quality questions without given answers is even more difficult, for both humans and machines. To address this, we propose a novel end-to-end model, question generation with answer extractor (QGAE), which transforms answer-agnostic question generation into answer-aware question generation by directly extracting candidate answers. This approach effectively exploits unlabeled data to generate high-quality question-answer pairs, and its end-to-end design is more convenient than a multi-stage method that requires at least two pre-trained models. Moreover, our model achieves better average scores and greater diversity. Our experiments show that QGAE achieves significant improvements in generating question-answer pairs, making it a promising approach for question generation.
    Graphical abstract: the architecture of the QGAE model.
    • We propose a new end-to-end question generation model that uses pre-trained language models (PLMs) for answer-agnostic question generation.
    • Our model combines question generation and answer extraction as dual tasks to achieve question-answer pair generation; one natural form of the joint training objective is sketched below.
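    Because the two tasks are trained jointly end to end, the natural shape of the training objective is a weighted sum of the question-generation and answer-extraction losses. The formulation below is our own hedged sketch, not taken verbatim from the paper (the exact losses and weighting may differ): x is the input passage, a the extracted answer, q_t the question tokens, and y_i the per-token answer-span tags.

```latex
% Illustrative joint dual-task objective (our assumption, not the paper's
% verbatim loss): \lambda balances generation against extraction.
\mathcal{L} = \mathcal{L}_{\mathrm{QG}} + \lambda\,\mathcal{L}_{\mathrm{AE}},
\qquad
\mathcal{L}_{\mathrm{QG}} = -\sum_{t}\log p_{\theta}\left(q_t \mid q_{<t},\, x,\, a\right),
\qquad
\mathcal{L}_{\mathrm{AE}} = -\sum_{i}\log p_{\theta}\left(y_i \mid x\right).
```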


Figures

    Figure 1. The difference between multi-stage methods and end-to-end models: a multi-stage method usually involves more than one model in its workflow and may have to handle different inputs and outputs at every stage, whereas an end-to-end model needs only a single, fixed kind of input.
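    To make the contrast concrete, below is a minimal Python sketch of the multi-stage workflow, written by us for illustration rather than taken from the paper. It uses the HuggingFace transformers pipeline API; the NER model stands in for whatever answer extractor a real system would use, t5-base merely keeps the sketch runnable (a checkpoint fine-tuned for question generation would be substituted in practice), and the prompt format is an assumption.

```python
# A minimal sketch of the multi-stage workflow from Figure 1: one pre-trained
# model extracts candidate answers, a second generates answer-aware questions.
# This is our own illustration, not the QGAE authors' code.
from transformers import pipeline

# Stage 1: candidate-answer extraction, approximated here with off-the-shelf
# named-entity recognition; real multi-stage systems may use span taggers or
# key-phrase extractors instead.
extractor = pipeline("ner", aggregation_strategy="simple")

# Stage 2: answer-aware question generation with a separate seq2seq model.
# t5-base only keeps the sketch runnable; the prompt format below is an
# assumption and depends on whichever QG-fine-tuned checkpoint replaces it.
generator = pipeline("text2text-generation", model="t5-base")

def multi_stage_qa_pairs(context: str) -> list[tuple[str, str]]:
    """Return a (question, answer) pair for every extracted candidate answer."""
    pairs = []
    for entity in extractor(context):
        answer = entity["word"]
        prompt = f"generate question: answer: {answer} context: {context}"
        question = generator(prompt, max_length=64)[0]["generated_text"]
        pairs.append((question, answer))
    return pairs

print(multi_stage_qa_pairs("Zhendong Mao received his Ph.D. from the "
                           "Chinese Academy of Sciences in 2014."))
```

    The point of Figure 1 is visible in the code itself: two separately loaded pre-trained models, each with its own input and output format, must be stitched together by glue logic at every stage boundary.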

    Figure 2. The architecture of QGAE: two encoders and one decoder that take raw text as input and generate question-answer pairs.
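    Reading Figure 2 as a blueprint, the sketch below shows one way to wire two encoders and one decoder for the dual tasks. It is a rough reconstruction under our own assumptions, not the authors' implementation: the class name QGAESketch, the BIO tagging head, the answer-marking scheme, and the use of two facebook/bart-base copies are all illustrative.

```python
# A rough reconstruction of the Figure 2 layout: encoder 1 tags candidate
# answer spans in the raw text, encoder 2 re-reads the text together with
# the chosen span, and a single decoder generates the question. Our own
# illustrative sketch, not the QGAE authors' implementation.
import torch.nn as nn
from transformers import BartModel

class QGAESketch(nn.Module):
    def __init__(self, num_tags: int = 3):  # B/I/O span tags (assumed scheme)
        super().__init__()
        bart_a = BartModel.from_pretrained("facebook/bart-base")
        bart_b = BartModel.from_pretrained("facebook/bart-base")
        self.answer_encoder = bart_a.encoder    # encoder 1: answer extraction
        self.question_encoder = bart_b.encoder  # encoder 2: question generation
        self.decoder = bart_b.decoder           # the single shared decoder
        hidden = bart_b.config.d_model
        self.tag_head = nn.Linear(hidden, num_tags)              # span tagging
        self.lm_head = nn.Linear(hidden, bart_b.config.vocab_size)

    def forward(self, input_ids, answer_marked_ids, decoder_input_ids):
        # Task 1: tag which tokens of the raw text form a candidate answer.
        enc_a = self.answer_encoder(input_ids=input_ids).last_hidden_state
        tag_logits = self.tag_head(enc_a)
        # Task 2: generate the question from the text with the extracted
        # answer marked in it (answer_marked_ids; the marking scheme is an
        # assumption on our part).
        enc_b = self.question_encoder(input_ids=answer_marked_ids).last_hidden_state
        dec = self.decoder(input_ids=decoder_input_ids,
                           encoder_hidden_states=enc_b).last_hidden_state
        return tag_logits, self.lm_head(dec)
```

    During training, the tag logits would receive a token-classification loss and the language-model logits a generation loss, combined as in the joint objective sketched after the highlights above.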

