This article introduces Kaldi data preparation (part 2).
local/prepare_data.sh
Creates data/train and data/test (data/dev is optional); each directory must contain text, wav.scp, utt2spk, and spk2utt.
for x in train_yesno test_yesno; do
  cat data/$x/text | awk '{printf("%s global\n", $1);}' > data/$x/utt2spk
  utils/utt2spk_to_spk2utt.pl < data/$x/utt2spk > data/$x/spk2utt
done
text, wav.scp, and utt2spk are prepared in advance; spk2utt can be generated from them with the command above (the awk line assigns every utterance to a single dummy speaker named "global", as in the yesno recipe). The file formats are:
text : <uttid> <word sequence>
wav.scp : <uttid> <utter_file_path>
utt2spk : <uttid> <speakid>
spk2utt : <speakid> <uttid1> <uttid2> ...
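For concreteness, here is what a few lines of each file might look like (the IDs and the wav path are made up for illustration; Kaldi also requires all four files to be sorted):
text : spk1_utt1 yes no yes no
wav.scp : spk1_utt1 /path/to/wavs/spk1_utt1.wav
utt2spk : spk1_utt1 spk1
spk2utt : spk1 spk1_utt1 spk1_utt2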
local/prepare_dict.sh
Creates data/local/dict, which contains:
lexicon.txt
lexicon_words.txt
nonsilence_phones.txt
silence_phones.txt
optional_silence.txt
extra_questions.txt # optional
lexicon.txt # the lexicon: all words in the corpus together with their pronunciations
lexicon_words.txt # the lexicon without the silence entries
silence_phones.txt # the silence phone label, sil
optional_silence.txt # the single phone (sil) that may optionally appear between words
nonsilence_phones.txt # the non-silence phone labels
extra_questions.txt # contains stress and tone markings
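As a rough illustration, a toy version of these files (not the actual TIMIT ones) might look like:
lexicon.txt:
sil sil
yes y eh s
no n ow
lexicon_words.txt:
yes y eh s
no n ow
silence_phones.txt:
sil
optional_silence.txt:
sil
nonsilence_phones.txt:
y
eh
s
n
ow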
utils/prepare_lang.sh --position-dependent-phones false data/local/dict "<oov-word>" data/local/lang data/lang
The input is the directory data/local/dict. The quoted argument is a word from the lexicon to which out-of-vocabulary words in the transcripts are mapped; it is recorded in data/lang/oov.txt (for example, the TIMIT recipe passes "sil" here). The directory data/local/lang/ is a temporary directory used by the script, and data/lang/ is the output directory.
This generates data/local/lang and data/lang.
data/local/lang is a temporary directory; the files generated in it include:
align_lexicon.txt lexiconp_disambig.txt
lexiconp.txt lex_ndisambig
phone_map.txt phones
Files contained in data/lang:
L_disambig.fst L.fst # the lexicon FSTs
oov.int oov.txt phones
phones.txt topo words.txt
If a word has several pronunciations, it appears on a separate line for each pronunciation. If you want to use pronunciation probabilities, you need lexiconp.txt instead of lexicon.txt.
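The only difference between the two formats is an extra probability column (the entries below are a made-up example):
lexicon.txt : <word> <phone1> <phone2> ... e.g. yes y eh s
lexiconp.txt : <word> <prob> <phone1> <phone2> ... e.g. yes 1.0 y eh s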
local/prepare_lm.sh
Generates data/lang_test_bg, which contains:
G.fst L_disambig.fst # G.fst is the newly generated language-model FST
L.fst oov.int oov.txt
phones phones.txt
topo words.txt
The script:
#!/bin/bash

. path.sh

echo Preparing language models for test

for lm_suffix in tg; do
  test=data/lang_test_${lm_suffix}
  rm -rf data/lang_test_${lm_suffix}
  cp -r data/lang data/lang_test_${lm_suffix}

  arpa2fst --disambig-symbol=#0 --read-symbol-table=$test/words.txt input/task.arpabo $test/G.fst
  #arpa2fst --disambig-symbol=#0 --read-symbol-table=$test/words.txt my_data/3gram.arpa $test/G.fst

  fstisstochastic $test/G.fst
  # The output is like:
  # 9.14233e-05 -0.259833
  # we do expect the first of these 2 numbers to be close to zero (the second is
  # nonzero because the backoff weights make the states sum to >1).
  # Because of the <s> fiasco for these particular LMs, the first number is not
  # as close to zero as it could be.

  # Everything below is only for diagnostic.
  # Checking that G has no cycles with empty words on them (e.g. <s>, </s>);
  # this might cause determinization failure of CLG.
  # #0 is treated as an empty word.
  mkdir -p tmpdir.g
  awk '{if(NF==1){ printf("0 0 %s %s\n", $1,$1); }} END{print "0 0 #0 #0"; print "0";}' \
    < data/local/dict/lexicon.txt > tmpdir.g/select_empty.fst.txt
  fstcompile --isymbols=$test/words.txt --osymbols=$test/words.txt tmpdir.g/select_empty.fst.txt | \
    fstarcsort --sort_type=olabel | fstcompose - $test/G.fst > tmpdir.g/empty_words.fst
  fstinfo tmpdir.g/empty_words.fst | grep cyclic | grep -w 'y' && \
    echo "Language model has cycles with empty words" && exit 1
  rm -r tmpdir.g
done

echo "Succeeded in formatting data."
Here my_data/3gram.arpa is our own trigram model, built with SRILM's ngram-count:
ngram-count -wbdiscount -order 3 -text words.txt -vocab vocab.txt -unk -interpolate -lm 3-gram.arpa
Details of the ngram-count command can be found online.
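As a minimal sketch, both inputs can be derived from the Kaldi transcripts (assuming data/train/text exists; note that this words.txt is the LM training text, not data/lang/words.txt):
# strip the utterance IDs to get one training sentence per line
cut -d' ' -f2- data/train/text > words.txt
# the vocabulary: one unique word per line
cut -d' ' -f2- data/train/text | tr ' ' '\n' | sort -u > vocab.txt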
At this point the data preparation stage is complete. Going into the data directory, you will see the following folders:
lang # the language files
lang_test_bg # the language files used for testing
local # information about the raw data, plus the dictionary
test # the test set
train # the training set
mfcc
for x in train dev test; do
  steps/make_mfcc.sh --cmd "$train_cmd" --nj $feats_nj data/$x exp/make_mfcc/$x $mfccdir
  steps/compute_cmvn_stats.sh data/$x exp/make_mfcc/$x $mfccdir
  utils/fix_data_dir.sh data/$x # fixes sorting errors and removes utterances whose required feature data or transcripts are missing
done
This generates the mfcc and exp/make_mfcc directories, as well as cmvn.scp and feats.scp inside data/train, data/test, and data/dev. The mfcc directory holds the .scp and .ark feature files, while exp/make_mfcc contains train, test, and dev subdirectories holding the various .log files.
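As a quick sanity check, you can dump the first utterance's feature matrix in text form (copy-feats is a standard Kaldi binary; substitute whichever feats.scp you just generated):
copy-feats scp:data/train/feats.scp ark,t:- | head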
make_mfcc.sh reads conf/mfcc.conf:
--use-energy=false # only non-default option.
--sample-frequency=16000 # TIMIT is sampled at 16kHz
The sampling frequency is set here; if you run into a "Sample frequency mismatch" error, this is the place to change it.
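If you are not sure what rate your audio actually uses, sox's soxi tool gives a quick check (assuming sox is installed; the path is illustrative):
soxi -r /path/to/wavs/spk1_utt1.wav # prints the sample rate, e.g. 16000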
All of these scripts can be adapted to fit your own data.
Note: this walkthrough is based on TIMIT. The TIMIT dataset can be obtained via:
wget http://182.92.241.109/cxst_download/mnt/luojie/timit.zip