Rotten Tomatoes

Size: 107.14M
Tags: Movies and TV Shows, NLP, Classification, Text Data Classification


README.md

### Context

As part of my [OpenAI Scholars summer program][1], I wanted to try out the ULMFiT approach to text classification: [http://nlp.fast.ai/classification/2018/05/15/introducting-ulmfit.html][2]. ULMFiT has been described as a "state-of-the-art AWD LSTM" language model *backbone* or *encoder* with a linear classifier *head* or *decoder* (a minimal sketch of this backbone/head split appears at the end of this README). The language model released by Jeremy Howard and Sebastian Ruder comes pre-trained on WikiText-103, and one can optionally fine-tune it with a corpus more closely related to the downstream task. The general idea is to first teach the model English (Wikipedia), then teach it about more specific writing (e.g., movie reviews). With that kind of prior knowledge, sentiment analysis should be a whole lot easier.

### Approach

I initially tried fine-tuning the WikiText-103 language model on the complete sentences provided by the Rotten Tomatoes dataset from the [Movie Review Sentiment Analysis Playground Competition][3] - however, my classification results were lackluster. I got better results by fine-tuning first on the larger [IMDB movie reviews dataset][4], then fine-tuning that model on sentences from Rotten Tomatoes, and finally attaching the linear head and classifying sentiment.

The result of this process is the pre-trained model `fwd_pretrain_aclImdb_clas_1.h5`. It was pre-trained with the scripts provided [here][5], which I executed in this approximate order:

```
# fine-tune from WikiText-103 to IMDB
python create_toks.py data/aclImdb/imdb_lm/
python tok2id.py data/aclImdb/imdb_lm/
python finetune_lm.py data/aclImdb/imdb_lm/ data/wt103/ 0 50 --lm-id pretrain_wt103 --early_stopping True

# fine-tune from IMDB to RT
python create_toks.py data/rt/rt_lm/
python tok2id.py data/rt/rt_lm/
python finetune_lm.py data/rt/rt_lm/ data/aclImdb/imdb_lm/ 0 50 --lm-id pretrain_aclImdb --early_stopping True --pretrain_id aclImdb

# classify
python train_clas.py data/rt/rt_clas/ 0 --lm-id pretrain_aclImdb --clas-id pretrain_aclImdb --lr 0.0001 --cl=25
```

(The same fine-tune-then-classify flow is also sketched with the later fastai v1 API at the end of this README.)

I then zipped up all the files necessary to run the [kernel][6] for competition submission.

### Conclusion

To be honest, I was hoping for a more impressive result - my ok-ish [result][7] in the competition is likely a testament to how challenging it is to assign the same sentiment to every "phrase" of a sentence (down to single punctuation marks). Perhaps more epochs or more time spent tinkering with parameters would help.

### Acknowledgements

All credit goes to Jeremy Howard and Sebastian Ruder. Check out ["Introducing state of the art text classification with universal language models"][8] for more explanation, plus links to the paper, video, and code.

[1]: https://iconix.github.io/dl/2018/05/30/openai-scholar
[2]: http://nlp.fast.ai/category/classification.html
[3]: https://www.kaggle.com/c/movie-review-sentiment-analysis-kernels-only/
[4]: http://ai.stanford.edu/~amaas/data/sentiment/
[5]: https://github.com/fastai/fastai/tree/master/courses/dl2/imdb_scripts
[6]: https://www.kaggle.com/iconix/ulmfit-for-rotten-tomatoes/code
[7]: https://www.kaggle.com/iconix/ulmfit-for-rotten-tomatoes
[8]: http://nlp.fast.ai/classification/2018/05/15/introducting-ulmfit.html
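
To make the *backbone*/*head* terminology concrete, here is a minimal PyTorch sketch of the architecture shape. This is an illustration only, not the model used above: the real AWD LSTM adds weight-dropped LSTMs, embedding dropout, and tied embeddings, and ULMFiT's classifier head uses concat pooling plus two linear layers. All class names and sizes here are made up for the example.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Language-model backbone: embedding + stacked LSTM (stand-in for AWD LSTM)."""
    def __init__(self, vocab_size, emb_dim=400, hidden_dim=1150, n_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, n_layers, batch_first=True)

    def forward(self, tokens):
        out, _ = self.lstm(self.embed(tokens))
        return out  # (batch, seq_len, hidden_dim)

class LMHead(nn.Module):
    """Decoder head used while pre-training/fine-tuning: predict the next token."""
    def __init__(self, hidden_dim, vocab_size):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, enc_out):
        return self.proj(enc_out)  # logits over the vocabulary

class ClassifierHead(nn.Module):
    """Linear head attached for the downstream task: sentiment classes."""
    def __init__(self, hidden_dim, n_classes):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, n_classes)

    def forward(self, enc_out):
        # Crude mean pool over time; ULMFiT concatenates last/mean/max states.
        return self.proj(enc_out.mean(dim=1))

# Pre-train Encoder + LMHead on WikiText-103, fine-tune on IMDB then RT,
# then swap the LM head for the classifier head and train on the labels.
encoder = Encoder(vocab_size=60_000)
head = ClassifierHead(hidden_dim=1150, n_classes=5)  # 5 RT sentiment labels
logits = head(encoder(torch.randint(0, 60_000, (2, 20))))
print(logits.shape)  # torch.Size([2, 5])
```

The point of the split is that the `Encoder` weights carry over across all three corpora; only the cheap head changes between the language-modeling and classification stages.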
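The scripts above come from the original fastai `imdb_scripts`; for orientation, the same two-stage flow can be sketched with the later fastai v1 text API. This is a rough sketch under assumptions, not what produced `fwd_pretrain_aclImdb_clas_1.h5`: `data/rt` and `texts.csv` (a CSV of label/text rows) are hypothetical, epoch counts and learning rates are placeholders, and the API changed again in fastai v2.

```python
from fastai.text import *

# Stage 1: fine-tune the WikiText-103 language model on the task text.
data_lm = TextLMDataBunch.from_csv('data/rt', 'texts.csv')
lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
lm.fit_one_cycle(1, 1e-2)   # train the newly added layers first
lm.unfreeze()
lm.fit_one_cycle(1, 1e-3)   # then fine-tune the whole backbone
lm.save_encoder('ft_enc')   # keep only the fine-tuned encoder

# Stage 2: attach a linear classifier head to the saved encoder.
data_clas = TextClasDataBunch.from_csv('data/rt', 'texts.csv',
                                       vocab=data_lm.train_ds.vocab)
clas = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
clas.load_encoder('ft_enc')
clas.fit_one_cycle(1, 1e-2)
print(clas.predict("A gorgeous, witty, seductive movie."))
```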
