Generating a Common Question from Multiple Documents using a MapReduce Encoder-Decoder Model
Published in EMNLP Workshop, 2019
Recommended citation: Woon Sang Cho, Yizhe Zhang, Sudha Rao, Chris Brockett, Sungjin Lee. Generating a Common Question from Multiple Documents using a MapReduce Encoder-Decoder Model. https://www.aclweb.org/anthology/D19-5604.pdf
Ambiguous user queries in search engines result in the retrieval of documents that often span multiple topics. One potential solution is for the search engine to generate multiple refined queries, each of which relates to a subset of the documents spanning the same topic. A preliminary step towards this goal is to generate a question that captures common concepts of multiple documents. We propose a new task of generating a common question from multiple documents and present a simple variant of an existing multi-source encoder-decoder framework, called the Multi-Source Question Generator (MSQG). We first train an RNN-based single encoder-decoder generator from (single document, question) pairs. At test time, given multiple documents, the 'Distribute' step of our MSQG model predicts target word distributions for each document using the trained model. The 'Aggregate' step aggregates these distributions to generate a common question. This simple yet effective strategy significantly outperforms several existing baseline models applied to the new task when evaluated using automated metrics and human judgments on the MS-MARCO-QA dataset. [Download paper here](https://www.aclweb.org/anthology/D19-5604.pdf)
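The Distribute/Aggregate idea at a single decoding step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-document distributions are made-up numbers, and the aggregation is assumed here to be a simple arithmetic mean (the paper's exact pooling may differ).

```python
import numpy as np

# Hypothetical output of the 'Distribute' step: each row is one document's
# predicted next-word distribution over a shared 4-word vocabulary.
# Values are illustrative only.
doc_dists = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.5, 0.3, 0.1, 0.1],
    [0.7, 0.1, 0.1, 0.1],
])

# 'Aggregate' step, assuming a simple mean of the per-document distributions.
aggregated = doc_dists.mean(axis=0)

# Greedy choice of the next word common to all documents.
next_word_id = int(np.argmax(aggregated))
```

Repeating this distribute-then-aggregate step at each decoder timestep yields a question whose words are probable under every document's distribution, rather than under any single one.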