Synthetic Speech Commands Dataset: a text-to-speech counterpart to the "Speech Commands Dataset v0.01"


Dataset size: 2.5 GB

    README.md

    Context

    • We would like to have good open source speech recognition

    • Commercial companies try to solve a hard problem: map arbitrary, open-ended speech to text and identify meaning

    • The easier problem should be: detect a predefined sequence of sounds and map it to a predefined action.

    • Let's tackle the simplest problem first: classifying single, short words (commands)

    • Audio training data is difficult to obtain.

    Approaches

    • The parent project (spoken verbs) created synthetic speech datasets using text-to-speech programs. The focus there is on single-syllable verbs (commands).

    • The Speech Commands dataset (by Pete Warden; see the TensorFlow Speech Recognition Challenge) asked volunteers to pronounce a small set of words: yes, no, up, down, left, right, on, off, stop, go, and the digits 0-9.

    • This dataset provides synthetic counterparts to that real-world dataset.
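As a small aid for experiments, the label set named above can be written down as a lookup table. This is a minimal sketch; the spelled-out digit names ("zero" through "nine") are an assumption based on the "0-9" in the description, and the datasets contain further auxiliary words not listed here.

```python
# Command words named in the description, plus the spoken digits
# (digit spellings "zero" ... "nine" are assumed, not confirmed by the text).
COMMANDS = ["yes", "no", "up", "down", "left", "right", "on", "off", "stop", "go"]
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]
LABELS = COMMANDS + DIGITS
LABEL_TO_INDEX = {word: i for i, word in enumerate(LABELS)}
```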

    Open questions

    One can use these two datasets in various ways. Here are some things I am interested in seeing answered:

    1. What is it in an audio sample that makes it "sound similar"? Our ears can easily classify both synthetic and real speech, but for algorithms this is still hard. Extending the real dataset with the synthetic data yields a larger training sample and more diversity.

    2. How well does an algorithm trained on one data set perform on the other? (transfer learning) If it works poorly, the algorithm probably has not found the key to audio similarity.

    3. Are synthetic data sufficient for classifying real datasets? If this is the case, the implications are huge. You would not need to ask thousands of volunteers for hours of time. Instead, you could easily create arbitrary synthetic datasets for your target words.

    An interesting challenge (an idea for a competition) would be to train on this dataset and evaluate on the real dataset.
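The transfer-learning protocol behind questions 2 and 3 can be sketched end to end: fit a classifier on one corpus and score it on another. The sketch below is numpy-only and uses noisy tones as stand-ins for the two corpora; the feature (a log-magnitude spectrogram) and the nearest-centroid classifier are illustrative choices, not the method of any particular benchmark.

```python
import numpy as np

RATE = 16000  # samples per 1 s clip

def log_spectrogram(signal, frame=256, hop=128):
    """Log-magnitude STFT of a 1 s, 16 kHz clip (numpy only)."""
    n = 1 + (len(signal) - frame) // hop
    frames = np.stack([signal[i * hop:i * hop + frame] for i in range(n)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
    return np.log(spec + 1e-3)

def nearest_centroid(train_feats, train_labels):
    """Return a predict() that picks the closest per-class mean feature."""
    classes = sorted(set(train_labels))
    cents = {c: np.mean([f for f, y in zip(train_feats, train_labels) if y == c],
                        axis=0)
             for c in classes}
    return lambda f: min(cents, key=lambda c: np.linalg.norm(f - cents[c]))

# Toy stand-ins for two corpora: two "words" as tones,
# "synthetic" = light noise (train), "real" = heavier noise (evaluation).
rng = np.random.default_rng(0)
t = np.arange(RATE) / RATE
def clip(freq, noise):
    return np.sin(2 * np.pi * freq * t) + noise * rng.standard_normal(RATE)

train = [(log_spectrogram(clip(f, 0.1)), w)
         for w, f in [("yes", 440.0), ("no", 880.0)] for _ in range(3)]
predict = nearest_centroid([f for f, _ in train], [y for _, y in train])

# Cross-corpus evaluation: classify the noisier "real" clips.
pred_yes = predict(log_spectrogram(clip(440.0, 0.3)))
pred_no = predict(log_spectrogram(clip(880.0, 0.3)))
```

Swapping the toy tones for wav files from the two datasets turns this into the actual experiment: low cross-corpus accuracy would suggest the features capture corpus artifacts rather than audio similarity.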

    Synthetic data creation

    Here I describe how the synthetic audio samples were created.
    Code is available at https://github.com/JohannesBuchner/spoken-command-recognition, in the "tensorflow-speech-words" folder.

    1. The list of words is in "inputwords". "marvin" was changed to "marvel", because "marvin" does not have a pronunciation coding yet.

    2. Pronunciations were taken from the British English Example Pronunciation dictionary (BEEP, http://svr-www.eng.cam.ac.uk/comp.speech/Section1/Lexical/beep.html ). The phonemes were translated for the next step with a translation table (see compile.py for details). This creates the file "words". There are multiple pronunciations and stresses for each word.

    3. A text-to-speech program (espeak) was used to pronounce these words (see generatetfspeech.sh for details). The pronunciation, stress, pitch, speed, and speaker were varied. This gives >1000 clean examples for each word.

    4. Noise samples were obtained: airport, babble, car, exhibition, restaurant, street, subway, and train noise come from AURORA (https://www.ee.columbia.edu/~dpwe/sounds/noise/), and additional noise samples (ocean, white, brown, pink) were created synthetically (see ../generatenoise.sh for details).

    5. Noise and speech were mixed. The speech volume and offset were varied, as were the noise source and volume (see addnoise.py for details; addnoise2.py is the same, but with lower speech volume and higher noise volume). All audio files are one second (1 s) long and in wav format (16-bit, mono, 16000 Hz).

    6. Finally, the data was compressed into an archive and uploaded to Kaggle.
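The mixing in step 5 and the target format can be sketched as follows. This is a hypothetical re-implementation, not the actual addnoise.py; the default volumes and the sample offset are made-up illustrative values, while the output format (1 s, 16-bit, mono, 16000 Hz wav) follows the description above.

```python
import wave

import numpy as np

RATE = 16000  # every clip: 1 s, mono, 16-bit, 16 kHz

def mix(speech, noise, speech_vol=0.8, noise_vol=0.3, offset=1000):
    """Overlay int16 speech on int16 noise with varied volumes and a sample
    offset (all parameter defaults here are illustrative, not the originals)."""
    out = noise_vol * noise.astype(np.float32)
    end = min(RATE, offset + len(speech))
    out[offset:end] += speech_vol * speech[:end - offset].astype(np.float32)
    return np.clip(out, -32768, 32767).astype(np.int16)

def write_wav(path, samples):
    """Write a 16-bit mono 16 kHz wav, matching the dataset's format."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(samples.tobytes())

# Example: a 440 Hz stand-in "speech" tone mixed into white noise.
t = np.arange(RATE) / RATE
speech = (10000 * np.sin(2 * np.pi * 440 * t)).astype(np.int16)
noise = (1000 * np.random.default_rng(0).standard_normal(RATE)).astype(np.int16)
mixed = mix(speech, noise)
write_wav("mixed.wav", mixed)
```

Clipping to the int16 range mirrors what any mixer must do before writing 16-bit samples; varying `speech_vol`, `noise_vol`, and `offset` per clip is what produces the diversity described in step 5.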

    Acknowledgements

    This work built upon the resources described above (the Speech Commands dataset, the BEEP pronunciation dictionary, espeak, and the AURORA noise samples).

    Please provide appropriate citations to these when using this work.

    To cite the resulting dataset, you can use:

    APA-style citation: "Buchner J. Synthetic Speech Commands: A public dataset for single-word speech recognition, 2017. Available from https://www.kaggle.com/jbuchner/synthetic-speech-commands-dataset/".

    BibTeX citation:

    @article{speechcommands,
      title={Synthetic Speech Commands: A public dataset for single-word speech recognition.},
      author={Buchner, Johannes},
      journal={Dataset available from https://www.kaggle.com/jbuchner/synthetic-speech-commands-dataset/},
      year={2017}
    }

    Thanks to everyone trying to improve open source voice detection and speech recognition.

