Result Details

Lightly supervised vs. semi-supervised training of acoustic model on Luxembourgish for low-resource automatic speech recognition

VESELÝ, K.; PERALES, C.; SZŐKE, I.; LUQUE, J.; ČERNOCKÝ, J. Lightly supervised vs. semi-supervised training of acoustic model on Luxembourgish for low-resource automatic speech recognition. In Proceedings of Interspeech 2018. Proceedings of Interspeech. Hyderabad: International Speech Communication Association, 2018. no. 9, p. 2883-2887. ISSN: 1990-9772.
Type
conference paper
Language
English
Authors
Veselý Karel, Ing., Ph.D., DCGM (FIT)
PERALES, C.
Szőke Igor, Ing., Ph.D., DCGM (FIT)
Luque Jordi, FIT (FIT)
Černocký Jan, prof. Dr. Ing., DCGM (FIT)
Abstract

In this work, we focus on exploiting inexpensive data in order to improve the DNN acoustic model for ASR. We explore two strategies: the first uses untranscribed data from the target domain; the second concerns the proper selection of excerpts from imperfectly transcribed out-of-domain public data, such as parliamentary speeches. We found that both approaches lead to similar results, making them equally beneficial for practical use. The Luxembourgish ASR seed system had a 38.8% WER, which improved by roughly 4% absolute, reaching 34.6% for untranscribed and 34.9% for lightly supervised data. Adding both databases simultaneously led to 34.4% WER, which is only a small further improvement. As a secondary research topic, we experiment with semi-supervised state-level minimum Bayes risk (sMBR) training. Nonetheless, for sMBR we saw no improvement from adding the automatically transcribed target data, despite the fact that similar techniques yield good results in the case of cross-entropy (CE) training.
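To illustrate the untranscribed-data strategy described above, the following minimal Python sketch shows one common way such data is folded into training: decode the untranscribed audio with the seed system, then keep only utterances whose average confidence clears a threshold and write them out as automatic transcripts. The input file format (utt_id, average confidence, hypothesis, tab-separated) and the 0.9 threshold are illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch: confidence-based selection of automatically
    # transcribed utterances for semi-supervised acoustic-model training.
    import sys

    CONF_THRESHOLD = 0.9  # assumed operating point; tune on a held-out set

    def select_utterances(hyp_path, out_path, threshold=CONF_THRESHOLD):
        """Keep hypotheses above the confidence threshold.

        Expects lines of the form: utt_id<TAB>avg_confidence<TAB>hypothesis
        Writes a Kaldi-style 'text' file: "<utt-id> <word1> <word2> ...".
        """
        kept, total = 0, 0
        with open(hyp_path, encoding="utf-8") as fin, \
             open(out_path, "w", encoding="utf-8") as fout:
            for line in fin:
                total += 1
                utt_id, conf, hypothesis = line.rstrip("\n").split("\t", 2)
                if float(conf) >= threshold:
                    fout.write(f"{utt_id} {hypothesis}\n")
                    kept += 1
        print(f"kept {kept}/{total} utterances", file=sys.stderr)

    if __name__ == "__main__":
        select_utterances(sys.argv[1], sys.argv[2])

The selected transcripts would then be treated as regular supervision for CE training; the threshold trades off the amount of added data against the noise in the automatic labels.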

Keywords

Luxembourgish, call centers, speech recognition, low-resourced ASR, unsupervised training

URL
https://www.isca-speech.org/archive/Interspeech_2018/abstracts/2361.html
Published
2018
Pages
2883–2887
Journal
Proceedings of Interspeech, vol. 2018, no. 9, ISSN 1990-9772
Proceedings
Proceedings of Interspeech 2018
Conference
Interspeech Conference
Publisher
International Speech Communication Association
Place
Hyderabad
DOI
10.21437/Interspeech.2018-2361
UT WoS
000465363900602
EID Scopus
BibTeX
@inproceedings{BUT155104,
  author="VESELÝ, K. and PERALES, C. and SZŐKE, I. and LUQUE, J. and ČERNOCKÝ, J.",
  title="Lightly supervised vs. semi-supervised training of acoustic model on Luxembourgish for low-resource automatic speech recognition",
  booktitle="Proceedings of Interspeech 2018",
  year="2018",
  journal="Proceedings of Interspeech",
  volume="2018",
  number="9",
  pages="2883--2887",
  publisher="International Speech Communication Association",
  address="Hyderabad",
  doi="10.21437/Interspeech.2018-2361",
  issn="1990-9772",
  url="https://www.isca-speech.org/archive/Interspeech_2018/abstracts/2361.html"
}
Files
Projects
Big speech data analytics for contact centers, EU, Horizon 2020, start: 2015-01-01, end: 2017-12-31, completed
Neural networks for signal processing and speech data mining, TAČR, ZÉTA Programme for the Support of Applied Research, TJ01000208, start: 2018-01-01, end: 2019-12-31, completed
Research groups
Departments