Thesis Details

Semi-Supervised Training of Deep Neural Networks for Speech Recognition

Ph.D. Thesis
Student: Veselý Karel
Academic Year: 2017/2018
Supervisor: Burget Lukáš, doc. Ing., Ph.D.
Czech title
"Semi-supervised" trénování hlubokých neuronových sítí pro rozpoznávání řeči

In this thesis, we first present the theory of neural network training for speech recognition, along with our implementation, which is available as the 'nnet1' training recipe in the Kaldi toolkit. The recipe contains RBM pre-training, mini-batch frame-level Cross-Entropy training, and sequence-discriminative sMBR training.

We then continue with the main topic of this thesis: semi-supervised training of DNN-based ASR systems. Inspired by the literature survey and our initial experiments, we investigated several problems. First, whether confidences are better calculated per-sentence, per-word, or per-frame. Second, whether the confidences should be used for data selection or data weighting; both approaches are compatible with the framework of weighted mini-batch SGD training. We then tried to gain better insight into confidence calibration, more precisely whether it can improve the efficiency of semi-supervised training. We also investigated how the model should be re-tuned with the correctly transcribed data. Finally, we proposed a simple recipe that avoids a grid search over hyper-parameters, and is therefore very practical for general use with any dataset.

The experiments were conducted on several datasets: for Babel Vietnamese with 10 hours of transcribed speech, the Word Error Rate (WER) was reduced by 2.5%; for Switchboard English with 14 hours of transcribed speech, the WER was reduced by 3.2%. Although we found it difficult to further improve the performance of semi-supervised training by means of enhancing the confidences, we still believe that our findings are of significant practical value: untranscribed data are abundant and easy to obtain, and our proposed solution brings solid WER improvements and is not difficult to replicate.
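The per-frame confidence weighting mentioned above can be sketched as follows. This is an illustrative sketch only, not the actual Kaldi 'nnet1' implementation; the function name and interface are hypothetical. It shows how a per-frame confidence scales each frame's contribution to the softmax cross-entropy gradient, which is what makes data weighting compatible with ordinary weighted mini-batch SGD:

```python
import numpy as np

def weighted_ce_grad(logits, targets, frame_conf):
    """Gradient of softmax cross-entropy w.r.t. logits, with each
    frame's contribution scaled by its confidence in [0, 1].

    logits:     (num_frames, num_classes) raw network outputs
    targets:    (num_frames,) integer class labels from the decoder
    frame_conf: (num_frames,) per-frame confidence weights

    Hypothetical sketch of confidence-weighted training; not the
    Kaldi 'nnet1' code.
    """
    # Numerically stable softmax over classes
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    posteriors = exp / exp.sum(axis=1, keepdims=True)

    # d(CE)/d(logits) = posteriors - one_hot(targets)
    grad = posteriors.copy()
    grad[np.arange(len(targets)), targets] -= 1.0

    # Scale each frame's gradient by its confidence; a confidence of 0
    # removes the frame (data selection is the special case of 0/1 weights)
    return grad * frame_conf[:, None]
```

Note that hard data selection is just the special case where `frame_conf` is thresholded to 0 or 1, so both strategies studied in the thesis fit the same weighted-SGD update.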


Keywords
Deep neural networks, speech recognition, semi-supervised training, Kaldi, nnet1

Degree Programme
3 April 2018
VESELÝ, Karel. Semi-Supervised Training of Deep Neural Networks for Speech Recognition. Brno, 2018. Ph.D. Thesis. Brno University of Technology, Faculty of Information Technology. Defended 2018-04-03. Supervised by Burget Lukáš. Available from:
@phdthesis{vesely2018,
    author = "Karel Vesel\'{y}",
    type = "Ph.D. thesis",
    title = "Semi-Supervised Training of Deep Neural Networks for Speech Recognition",
    school = "Brno University of Technology, Faculty of Information Technology",
    year = 2018,
    location = "Brno, CZ",
    language = "english",
    url = ""
}