
BERT English Uncased Bigrams (bigram frequencies from the BERT English uncased training data)




    README.md

    Is BERT the right model to fine-tune on your data? Or do you need to pretrain from scratch?

    Know your model's training data

    BERT models have become commonly available and the use of subword tokenization has become widespread. But are these base models suitable for fine-tuning on your data? Subword tokenization obscures the vocabulary the base model was trained on. By examining the original training data's unigrams and their distributions, you can determine whether your data would benefit from training a model from scratch.

    Content

    This dataset is a best-effort reconstruction of the training data used to train the English BERT base uncased model. The data comes from the BookCorpus dataset and a processed dump of Wikipedia (August 2019). Following the principles of BERT's tokenization scheme, no punctuation or stopwords have been removed. The original Unicode text was normalized using NFKC and tokenized with the spaCy English model (large), and the total count for each bigram across the corpora was recorded. The bigrams do not cross sentence boundaries but do include all tokens; only spaces, newlines, and non-printing characters have been omitted. The bigrams are sorted in descending order of frequency. The CSV file's columns are tab-separated.
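
    As a rough starting point, the counts can be loaded with pandas. The sketch below is illustrative only: the file name, the absence of a header row, and the column names token_1, token_2, and count are assumptions, so adjust them to match the file you actually download.

    ```python
    import pandas as pd

    # Minimal sketch for loading the tab-separated bigram counts.
    # File name, header handling, and column names are assumptions;
    # adjust them to match the actual file in this dataset.
    bigrams = pd.read_csv(
        "bert_base_uncased_bigrams.csv",
        sep="\t",
        header=None,                    # assume no header row
        names=["token_1", "token_2", "count"],
        quoting=3,                      # csv.QUOTE_NONE: tokens may contain quote characters
        keep_default_na=False,          # keep tokens like "null" as literal strings
        dtype={"token_1": str, "token_2": str, "count": "int64"},
    )
    print(bigrams.head())
    ```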

    This dataset is large because it contains all bigrams; you will likely want to filter out many of the lower counts to reduce noise. There are many different methods one could adopt, so I wanted to empower users to apply their own filters. It was also important to include complete counts so that this corpus could serve as a base to be extended with new data to spot language changes and emerging trends.
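
    One simple option, sketched below, is a minimum-count threshold. The helper name and the default threshold of 50 are arbitrary examples rather than recommendations, and the DataFrame is assumed to have the "count" column from the loading sketch above.

    ```python
    import pandas as pd

    def filter_rare_bigrams(bigrams: pd.DataFrame, min_count: int = 50) -> pd.DataFrame:
        """Keep only bigrams whose count meets a chosen threshold.

        Assumes a "count" column as in the loading sketch above; the default
        threshold of 50 is an arbitrary example, not a recommendation.
        """
        return bigrams[bigrams["count"] >= min_count]

    # e.g. frequent = filter_rare_bigrams(bigrams, min_count=100)
    ```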

    Acknowledgements

    Wikipedia and a public archive site provided the raw data for processing.

    Inspiration

    Here are some useful ideas:

    • Use the sister dataset BERT unigrams and:

      • experiment with language modeling

      • calculate Pointwise Mutual Information to find interesting collocations (see the first sketch after this list)

    • Construct a probability distribution of data in your domain and determine whether BERT base is close enough for your task.

    • Analyze the training data of a new BERT model (e.g. Bio-BERT, Legal-BERT) and quantify how similar or different it is to BERT base by calculating the Kullback–Leibler divergence for the shared vocabulary (see the second sketch after this list).

    • Compare with newer language data to spot emerging keywords and trends.
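
    As a rough illustration of the PMI idea above, here is a minimal sketch that assumes you have already loaded bigram counts from this dataset and unigram counts from the sister dataset into plain dictionaries; the names and layouts of those dictionaries are assumptions.

    ```python
    import math

    def pmi_scores(bigram_counts, unigram_counts):
        """Pointwise Mutual Information for each bigram.

        Assumes `bigram_counts` maps (w1, w2) -> count (this dataset) and
        `unigram_counts` maps w -> count (the sister unigram dataset).
        PMI(w1, w2) = log( p(w1, w2) / (p(w1) * p(w2)) ).
        """
        total_bigrams = sum(bigram_counts.values())
        total_unigrams = sum(unigram_counts.values())
        scores = {}
        for (w1, w2), count in bigram_counts.items():
            if w1 not in unigram_counts or w2 not in unigram_counts:
                continue  # skip bigrams whose tokens are missing from the unigram table
            p_xy = count / total_bigrams
            p_x = unigram_counts[w1] / total_unigrams
            p_y = unigram_counts[w2] / total_unigrams
            scores[(w1, w2)] = math.log(p_xy / (p_x * p_y))
        return scores
    ```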
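
    And here is a minimal sketch of the Kullback–Leibler comparison, again with token-to-count dictionaries as an assumed input; the counts are restricted to the shared vocabulary and renormalized before the divergence is computed.

    ```python
    import math

    def kl_divergence(p_counts, q_counts):
        """KL(P || Q) over the shared vocabulary of two unigram count tables.

        Assumes `p_counts` and `q_counts` map token -> count, e.g. counts for
        a domain-specific BERT corpus and for BERT base. Only the shared
        vocabulary is compared, with each side renormalized over it.
        """
        shared = set(p_counts) & set(q_counts)
        p_total = sum(p_counts[t] for t in shared)
        q_total = sum(q_counts[t] for t in shared)
        divergence = 0.0
        for t in shared:
            p = p_counts[t] / p_total
            q = q_counts[t] / q_total
            divergence += p * math.log(p / q)
        return divergence
    ```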

