Spelling Corrector

6.6M · 151 views · 0 likes · 0 downloads · 0 discussions

Tags: Earth and Nature, NLP, Text Data, Linguistics, Languages, Search Engines


    README.md

From Peter Norvig's classic [*How to Write a Spelling Corrector*](http://norvig.com/spell-correct.html):

> One week in 2007, two friends (Dean and Bill) independently told me they were amazed at Google's spelling correction. Type in a search like [speling] and Google instantly comes back with *Showing results for: spelling*. I thought Dean and Bill, being highly accomplished engineers and mathematicians, would have good intuitions about how this process works. But they didn't, and come to think of it, why should they know about something so far outside their specialty?
>
> I figured they, and others, could benefit from an explanation. The full details of an industrial-strength spell corrector are quite complex (you can read a little about it here or here). But I figured that in the course of a transcontinental plane ride I could write and explain a toy spelling corrector that achieves 80 or 90% accuracy at a processing speed of at least 10 words per second in about half a page of code.

[A Kernel has been added with Peter's basic spell.py and evaluation code](https://www.kaggle.com/bittlingmayer/spell-py/code) to set a baseline. Minimal modifications were made so that it runs on this environment.

# Data files

big.txt is required by the code. That's how it learns the probabilities of English words. You can prepend more text data to it, but be sure to leave in the little Python snippet at the end.

# Testing files

The other files are for testing the accuracy. The baseline code should get 75% of 270 correct on spell-testset1.txt, and 68% of 400 correct on spell-testset2.txt. I've also added some other files for more extensive testing. [The example Kernel](https://www.kaggle.com/bittlingmayer/spell-py) runs all of them but birkbeck.txt by default.
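The corrector behind these baselines follows Norvig's approach: learn word frequencies from big.txt, then return the most probable known word within a small edit distance of the input. A minimal sketch of that idea (a tiny inline corpus stands in for big.txt so it runs standalone; in the Kernel, `WORDS` is built from big.txt):

```python
import re
from collections import Counter

def words(text):
    """Tokenize text into lowercase words."""
    return re.findall(r"[a-z]+", text.lower())

# In this dataset you would learn frequencies from big.txt:
#   WORDS = Counter(words(open("big.txt").read()))
# An inline sample corpus keeps the sketch self-contained.
CORPUS = "the quick brown fox jumps over the lazy dog the spelling of the word"
WORDS = Counter(words(CORPUS))

def P(word, N=sum(WORDS.values())):
    """Probability of `word`, estimated from corpus counts."""
    return WORDS[word] / N

def known(candidates):
    """Subset of `candidates` that appear in the corpus."""
    return {w for w in candidates if w in WORDS}

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def edits2(word):
    """All strings two edits away."""
    return {e2 for e1 in edits1(word) for e2 in edits1(e1)}

def candidates(word):
    """Known words at edit distance 0, then 1, then 2; else the word itself."""
    return known([word]) or known(edits1(word)) or known(edits2(word)) or [word]

def correction(word):
    """Most probable spelling correction for `word`."""
    return max(candidates(word), key=P)
```

With the sample corpus, `correction("speling")` returns `"spelling"` — the only known word one edit away.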
Here's the output:

```
Testing spell-testset1.txt
75% of 270 correct (6% unknown) at 32 words per second
Testing spell-testset2.txt
68% of 400 correct (11% unknown) at 28 words per second
Testing wikipedia.txt
61% of 2455 correct (24% unknown) at 21 words per second
Testing aspell.txt
43% of 531 correct (23% unknown) at 15 words per second
```

The larger datasets take a few minutes to run. birkbeck.txt takes more than a few minutes.

You can try adding other datasets, or splitting these ones in meaningful ways - for example a dataset of only words of 5 characters or less, or 10 characters or more, or without uppercase - to understand the effect of changes you make on different types of words.

# Languages

The data and testing files include English only for now. In principle it is easily generalisable to other languages.
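As a hedged sketch of that generalisation: the only English-specific parts of the toy corrector are the training corpus and the alphabet used to generate candidate edits, so both can be made parameters. The factory function and the Spanish mini-corpus below are illustrative assumptions, not part of the dataset:

```python
import re
from collections import Counter

def make_corrector(corpus_text, alphabet):
    """Build a one-edit spelling corrector for a given corpus and alphabet."""
    words = Counter(re.findall(r"\w+", corpus_text.lower()))
    total = sum(words.values())

    def edits1(word):
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        return set(
            [L + R[1:] for L, R in splits if R]                           # deletes
            + [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]  # transposes
            + [L + c + R[1:] for L, R in splits if R for c in alphabet]   # replaces
            + [L + c + R for L, R in splits for c in alphabet]            # inserts
        )

    def correction(word):
        cands = ({word} & words.keys()) or {w for w in edits1(word) if w in words} or {word}
        return max(cands, key=lambda w: words[w] / total)

    return correction

# Hypothetical Spanish example: accented vowels and ñ join the edit alphabet,
# and a (toy) Spanish corpus replaces big.txt.
es = make_corrector("el niño pequeño come mañana", "abcdefghijklmnñopqrstuvwxyzáéíóú")
```

Here `es("nino")` corrects to `"niño"`, because the replace edits now include `ñ`; a real corrector for another language would also swap in a large corpus in that language for big.txt.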