Publication Details

How Does Pre-Trained Wav2Vec 2.0 Perform on Domain-Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications

ZULUAGA-GOMEZ Juan, PRASAD Amrutha, NIGMATULINA Iuliia, SARFJOO Seyyed Saeed, MOTLÍČEK Petr, KLEINERT Matthias, HELMKE Hartmut, OHNEISER Oliver and ZHAN Qingran. How Does Pre-Trained Wav2Vec 2.0 Perform on Domain-Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications. In: IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings. Doha: IEEE Signal Processing Society, 2023, pp. 205-212. ISBN 978-1-6654-7189-3. Available from: https://ieeexplore.ieee.org/document/10022724
Czech title
Jak si vede předtrénovaný Wav2Vec 2.0 v ASR s posunem domény? Rozsáhlé testování na komunikaci v řízení letového provozu
Type
conference paper
Language
english
Authors
Zuluaga-Gomez Juan (IDIAP)
Prasad Amrutha (DCGM FIT BUT)
Nigmatulina Iuliia (IDIAP)
Sarfjoo Seyyed Saeed (IDIAP)
Motlíček Petr, doc. Ing., Ph.D. (DCGM FIT BUT)
Kleinert Matthias (DLR)
Helmke Hartmut (DLR)
Ohneiser Oliver (DLR)
Zhan Qingran (IDIAP)
URL
https://ieeexplore.ieee.org/document/10022724
Keywords

Automatic speech recognition, Wav2Vec 2.0, self-supervised pre-training, air traffic control communications.

Abstract

Recent work on self-supervised pre-training focuses on leveraging large-scale unlabeled speech data to build robust end-to-end (E2E) acoustic models (AM) that can later be fine-tuned on downstream tasks, e.g., automatic speech recognition (ASR). Yet, few works have investigated the impact on performance when the data properties substantially differ between the pre-training and fine-tuning phases, termed domain shift. We target this scenario by analyzing the robustness of Wav2Vec 2.0 and XLS-R models on downstream ASR for a completely unseen domain, air traffic control (ATC) communications. We benchmark these two models on several open-source and challenging ATC databases with signal-to-noise ratios between 5 and 20 dB. Relative word error rate (WER) reductions of 20% to 40% are obtained in comparison to hybrid-based ASR baselines by fine-tuning E2E acoustic models on only a small fraction of labeled data. We analyze WERs in the low-resource scenario and the gender bias carried by one ATC dataset.
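
For orientation, the sketch below shows the general shape of the setup the abstract describes: taking a pre-trained Wav2Vec 2.0 checkpoint and fine-tuning it with a CTC objective on labeled target-domain (ATC) audio. It is a minimal illustration assuming the HuggingFace transformers library; the checkpoint name, optimizer, learning rate, and the fine_tune_step / relative_wer_reduction helpers are illustrative assumptions, not the authors' exact recipe.

# Minimal sketch, assuming HuggingFace `transformers` and PyTorch.
# Checkpoint, hyperparameters, and helper names are illustrative only.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Load a pre-trained Wav2Vec 2.0 checkpoint with a CTC head for ASR.
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def fine_tune_step(waveform, transcript):
    # One gradient step on a single (audio, transcript) pair from the
    # target domain (here: a labeled ATC utterance, 16 kHz mono).
    inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
    labels = processor(text=transcript, return_tensors="pt").input_ids
    loss = model(inputs.input_values, labels=labels).loss  # CTC loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

def relative_wer_reduction(wer_baseline, wer_e2e):
    # "Relative WER reduction" as used in the abstract: e.g., a hybrid
    # baseline at 10.0% WER vs. a fine-tuned E2E model at 7.0% WER is a
    # 30% relative reduction.
    return 100.0 * (wer_baseline - wer_e2e) / wer_baseline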

Published
2023
Pages
205-212
Proceedings
IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings
Conference
IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP, SLT, Doha, QA
ISBN
978-1-6654-7189-3
Publisher
IEEE Signal Processing Society
Place
Doha, QA
DOI
10.1109/SLT54892.2023.10022724
UT WoS
000968851900028
EID Scopus
BibTeX
@INPROCEEDINGS{FITPUB13047,
   author = "Juan Zuluaga-Gomez and Amrutha Prasad and Iuliia Nigmatulina and Seyyed Saeed Sarfjoo and Petr Motl\'{i}\v{c}ek and Matthias Kleinert and Hartmut Helmke and Oliver Ohneiser and Qingran Zhan",
   title = "How Does Pre-Trained Wav2Vec 2.0 Perform on Domain-Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications",
   pages = "205--212",
   booktitle = "IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings",
   year = 2023,
   location = "Doha, QA",
   publisher = "IEEE Signal Processing Society",
   ISBN = "978-1-6654-7189-3",
   doi = "10.1109/SLT54892.2023.10022724",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/13047"
}