Publication Details

Multilingual Sequence-to-Sequence Speech Recognition: Architecture, Transfer Learning, and Language Modeling

CHO Jaejin, BASKAR Murali K., LI Ruizhi, WIESNER Matthew, MALLIDI Sri Harish, YALTA Nelson, KARAFIÁT Martin, WATANABE Shinji and HORI Takaaki. Multilingual Sequence-to-Sequence Speech Recognition: Architecture, Transfer Learning, and Language Modeling. In: Proceedings of 2018 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2018). Athens: IEEE Signal Processing Society, 2018, pp. 521-527. ISBN 978-1-5386-4334-1. Available from: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8639655
Czech title
Multilingvální sequence-to-sequence rozpoznávání řeči: architektura, přenosové učení a jazykové modelování
Type
conference paper
Language
english
Authors
Cho Jaejin (JHU)
Baskar Murali K. (DCGM FIT BUT)
Li Ruizhi (JHU)
Wiesner Matthew (JHU)
Mallidi Sri Harish (Amazon.com)
Yalta Nelson (Waseda University)
Karafiát Martin, Ing., Ph.D. (DCGM FIT BUT)
Watanabe Shinji, Dr. (JHU)
Hori Takaaki (MERL)
URL
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8639655
Keywords

Automatic speech recognition (ASR), sequence to sequence, multilingual setup, transfer learning, language modeling

Abstract

The sequence-to-sequence (seq2seq) approach to low-resource ASR is a relatively new direction in speech research. The approach benefits from model training that requires neither a lexicon nor alignments. However, this poses a new problem: it requires more data than conventional DNN-HMM systems. In this work, we use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port it to 4 other BABEL languages using a transfer learning approach. We also explore different architectures for improving the prior multilingual seq2seq model. The paper also discusses the effect of integrating a recurrent neural network language model (RNNLM) with a seq2seq model during decoding. Experimental results show that transfer learning from the multilingual model yields substantial gains over monolingual models across all 4 BABEL languages. Incorporating an RNNLM also brings significant improvements in terms of %WER, and achieves recognition performance comparable to models trained with twice as much training data.

Published
2018
Pages
521-527
Proceedings
Proceedings of 2018 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2018)
Conference
2018 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2018), Athens, GR
ISBN
978-1-5386-4334-1
Publisher
IEEE Signal Processing Society
Place
Athens, GR
DOI
10.1109/SLT.2018.8639655
UT WoS
000463141800073
EID Scopus
BibTeX
@INPROCEEDINGS{FITPUB12251,
   author = "Jaejin Cho and K. Murali Baskar and Ruizhi Li and Matthew Wiesner and Harish Sri Mallidi and Nelson Yalta and Martin Karafi\'{a}t and Shinji Watanabe and Takaaki Hori",
   title = "Multilingual Sequence-to-Sequence Speech Recognition: Architecture, Transfer Learning, and Language Modeling",
   pages = "521--527",
   booktitle = "Proceedings of 2018 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2018)",
   year = 2018,
   location = "Athens, GR",
   publisher = "IEEE Signal Processing Society",
   ISBN = "978-1-5386-4334-1",
   doi = "10.1109/SLT.2018.8639655",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/12251"
}