WDW Dataset

Scene: MNIST

Data Type: Classification

Contributor: 小小程序员

Dedicated to research on artificial intelligence and to dataset processing.

Data Preview: 26 GB

Data Structure

*The structure shown here is subject to the actual data.

We have constructed a new "Who-did-What" dataset of over 200,000 fill-in-the-gap (cloze) multiple-choice reading comprehension problems built from the LDC English Gigaword newswire corpus. The WDW dataset has a variety of novel features. First, in contrast with the CNN and Daily Mail datasets (Hermann et al., 2015), we avoid using article summaries for question formation. Instead, each problem is formed from two independent articles: an article given as the passage to be read and a separate article on the same events used to form the question. Second, we avoid anonymization: each choice is a person named entity. Third, the problems have been filtered to remove the fraction that are easily solved by simple baselines, while remaining 84% solvable by humans. We report performance benchmarks of standard systems and propose the WDW dataset as a challenge task for the community.
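The description above implies a simple per-problem layout: a passage from one article, a cloze question (with a blank) built from a second article on the same events, and a set of person named entities as choices. The following is a minimal sketch of that layout; the field names, the `XXX` blank marker, and the toy example are assumptions for illustration, not the dataset's actual schema or contents.

```python
from dataclasses import dataclass

@dataclass
class ClozeProblem:
    # Passage taken from one Gigaword article; the question is built
    # from a separate article describing the same events.
    passage: str
    question: str        # contains a blank marker, assumed here to be "XXX"
    choices: list[str]   # person named entities (no anonymization)
    answer: str          # the correct entity; must appear in `choices`

def is_well_formed(p: ClozeProblem) -> bool:
    """Check that the question contains the blank and the answer is
    one of the candidate entities."""
    return "XXX" in p.question and p.answer in p.choices

# Hypothetical toy example (not taken from the actual dataset):
example = ClozeProblem(
    passage="The senator met reporters after the vote on the bill ...",
    question="XXX spoke to reporters following the Senate vote.",
    choices=["Jane Smith", "John Doe"],
    answer="Jane Smith",
)
print(is_well_formed(example))  # True
```

A record shaped like this also makes the baseline-filtering step in the description easy to state: a problem is dropped if a simple baseline (for example, picking the entity most frequent in the passage) already selects `answer`.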


Payititi (帕依提提) notice

This dataset is still being organized; an alternative download channel has been prepared for you. Please use it.

Note: Some of the data is still being processed and cannot yet be provided for direct download. We appreciate your understanding and support.