ELI5 Scorer Training Data: 816,000 prototype examples for creating a scoring model

Size: 672.61M
Tags: NLP, Earth and Nature, Arts and Entertainment, Education, Social Science, Sports, Regression, Transformers Classification


    ELI5 means "Explain like I am 5" . It's originally a "long and free form" Question-Answering scraping from reddit eli5 subforum.
    Original ELI5 datasets (https://github.com/facebookresearch/ELI5) can be used to train a model for "long & free" form Question-Answering ,
    e.g. by Encoder-Decoder models like T5 or Bart

Conventional performance evaluation: ROUGE scores

Once we have a model, how can we estimate its performance, i.e. its ability to give high-quality answers?
The conventional methods are the ROUGE family of metrics (see the ELI5 paper linked above).

However, ROUGE scores are based on n-grams and need to compare a generated answer against a ground-truth answer.
Unfortunately, n-gram scoring cannot properly credit high-quality paraphrased answers.

Worse, ROUGE needs a ground-truth answer just to compute a score at all. This scoring perspective goes against the "spirit" of "free-form" question answering, where there are many possible (non-paraphrase) valid and good answers.

To summarize, "creative & high-quality" answers cannot be assessed with ROUGE, which prevents us from building (and evaluating) creative models.
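
To make the paraphrase problem concrete, here is a minimal sketch using Google's rouge-score package (not part of this dataset; the example sentences are invented for illustration):

```python
# A paraphrase with almost no n-gram overlap gets a very low ROUGE score,
# even though its meaning matches the reference.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)

reference = "Plants make their food from sunlight through photosynthesis."
paraphrase = "Using energy captured from the sun, vegetation synthesizes its own nutrients."

for name, s in scorer.score(reference, paraphrase).items():
    print(name, round(s.fmeasure, 3))  # low F-scores despite equivalent meaning
```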

This dataset: to create a better scorer

This dataset, in contrast, is aimed at training a scoring (regression) model that can predict an upvote score for each Q-A pair individually (not for an A-A pair, as ROUGE does).

The data is simply a CSV file containing Q-A pairs and their scores.
Each line contains the Q-A text (in RoBERTa format) and its upvote score (a non-negative integer).

It is intended to make it easy and direct to create a scoring model with RoBERTa (or with other Transformer models, after changing the separator token).
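
As a quick check of that format, here is a small sketch with the Hugging Face transformers tokenizer (the example question is invented):

```python
# RoBERTa joins a text pair with </s></s>, whereas e.g. BERT uses [SEP];
# this is the separator token to change when swapping in another model.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
ids = tok("Why is the sky blue?", "Because of Rayleigh scattering.")["input_ids"]
print(tok.decode(ids))
# <s>Why is the sky blue?</s></s>Because of Rayleigh scattering.</s>
```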

    CSV file

In the CSV file there are two columns, qa and answer_score.
Each row of qa is written in RoBERTa paired-sentences format, i.e. the question and answer joined by RoBERTa's separator tokens (see the tokenizer sketch above).
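
A minimal sketch of a single training step, assuming the two columns above; the file name eli5_scorer.csv is a placeholder, not the dataset's actual file name:

```python
# RoBERTa regression scorer: num_labels=1 gives a single-output head,
# and problem_type="regression" selects an MSE loss on float labels.
import pandas as pd
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=1, problem_type="regression"
)

df = pd.read_csv("eli5_scorer.csv")  # placeholder path
batch = tok(
    list(df["qa"][:8]),
    add_special_tokens=False,  # qa rows already carry the paired-sentence markup
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
labels = torch.tensor(df["answer_score"][:8].values, dtype=torch.float)

loss = model(**batch, labels=labels).loss  # MSE against upvote scores
loss.backward()
```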

For answer_score we follow these principles:

• A high-quality answer related to its question should get a high score (upvotes).

• A low-quality answer related to its question should get a low score.

• A well-written answer NOT related to its question should get a score of 0.

Each positive Q-A pair comes from the original ELI5 dataset (with its true upvote score).
Each 0-score Q-A pair is constructed as detailed in the next subsection.

    0-score construction details via RetriBERT & FAISS

The principle is contrastive training: we need reasonably hard 0-score pairs for the model to generalize.
0-score pairs that are too easy (e.g. a question paired with a random answer) would teach the model nothing.

Therefore, for each question, we try to construct two answers (two 0-score pairs) where each answer is related to the topic of the question but does not answer it.

This can be achieved by vectorizing all questions with RetriBERT and storing the vectors in FAISS. We can then measure the distance between two question vectors using cosine distance.

More precisely, for a question Q1 we choose the answers of two related (but non-identical) questions Q2 and Q3, namely A2 and A3, to construct the 0-score pairs Q1-A2 and Q1-A3. Combined with the positively scored Q1-A1 pair, this gives 3 pairs for Q1, and likewise 3 pairs for every question. Therefore, from the 272,000 examples of the original ELI5, this dataset has 3 times that size: 816,000 examples.

Note that two question vectors that are very close may belong to the same (paraphrased) question, while two questions that are very far apart are totally different questions.
We therefore need a threshold that selects not-too-close and not-too-far question pairs, so that we get non-identical but same-topic questions.
In a simple experiment, a cosine distance of 10-11 between RetriBERT vectors seemed to work well, so we use this range as the threshold for constructing 0-score Q-A pairs.
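
A minimal sketch of this mining step is below. The embed function stands in for RetriBERT's question encoder, the flat index metric and the (LO, HI) band simply mirror the 10-11 figure above, and all names are illustrative rather than the exact construction code:

```python
# Mine 0-score pairs: for each question, pick up to two answers whose
# questions fall in the "related but non-identical" distance band.
import faiss
import numpy as np

LO, HI = 10.0, 11.0  # distance band quoted above (assumed, not re-tuned here)
K = 50               # candidate neighbours to inspect per question

def mine_zero_score_pairs(questions, answers, embed):
    vecs = np.ascontiguousarray(embed(questions), dtype="float32")
    index = faiss.IndexFlatL2(vecs.shape[1])  # exhaustive flat index
    index.add(vecs)
    dists, idxs = index.search(vecs, K)       # neighbours for every question

    pairs = []
    for i, (drow, irow) in enumerate(zip(dists, idxs)):
        same_topic = [j for d, j in zip(drow, irow) if j != i and LO <= d <= HI]
        for j in same_topic[:2]:              # two 0-score pairs per question
            pairs.append((questions[i], answers[j], 0))
    return pairs
```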

Baseline model

A roberta-base baseline with MAE 3.91 on the validation set can be found here:
https://www.kaggle.com/ratthachat/eli5-scorer-roberta-base-500k-mae391

    Acknowledgements

Thanks to the Facebook AI team for creating the original ELI5 dataset, and to the Hugging Face NLP library for making this dataset easily accessible.

    Inspiration

My project on ELI5 is mainly inspired by this amazing work of Yacine Jernite: https://yjernite.github.io/lfqa.html

