Publication Details

Improving Speaker Discrimination of Target Speech Extraction With Time-Domain Speakerbeam

DELCROIX Marc, OCHIAI Tsubasa, ŽMOLÍKOVÁ Kateřina, KINOSHITA Keisuke, TAWARA Naohiro, NAKATANI Tomohiro and ARAKI Shoko. Improving Speaker Discrimination of Target Speech Extraction With Time-Domain Speakerbeam. In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Barcelona: IEEE Signal Processing Society, 2020, pp. 691-695. ISBN 978-1-5090-6631-5. Available from: https://ieeexplore.ieee.org/document/9054683
Czech title
Zlepšení diskriminability mluvčích v extrakci cílového mluvčího pomocí metody Speakerbeam v časové oblasti
Type
conference paper
Language
english
Authors
Delcroix Marc (NTT)
Ochiai Tsubasa (NTT)
Žmolíková Kateřina, Ing., Ph.D. (DCGM FIT BUT)
Kinoshita Keisuke (NTT)
Tawara Naohiro (NTT)
Nakatani Tomohiro (NTT)
Araki Shoko (NTT)
URL
Keywords

Target speech extraction, time-domain network, spatial features, multi-task loss

Abstract

Target speech extraction, which extracts a single target source in a mixture given clues about the target speaker, has attracted increasing attention. We have recently proposed SpeakerBeam, which exploits an adaptation utterance of the target speaker to extract the target speaker's voice characteristics, which are then used to guide a neural network towards extracting speech of that speaker. SpeakerBeam presents a practical alternative to speech separation as it enables tracking the speech of a target speaker across utterances, and achieves promising speech extraction performance. However, it sometimes fails when speakers have similar voice characteristics, such as in same-gender mixtures, because it is difficult to discriminate the target speaker from the interfering speakers. In this paper, we investigate strategies for improving the speaker discrimination capability of SpeakerBeam. First, we propose a time-domain implementation of SpeakerBeam similar to that proposed for a time-domain audio separation network (TasNet), which has achieved state-of-the-art performance for speech separation. In addition, we investigate (1) the use of spatial features to better discriminate speakers when microphone array recordings are available, and (2) the addition of an auxiliary speaker identification loss that helps learn more discriminative voice characteristics. We show experimentally that these strategies greatly improve speech extraction performance, especially for same-gender mixtures, and outperform TasNet in terms of target speech extraction.
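The abstract combines three technical ingredients: a TasNet-style time-domain front end, speaker adaptation of the extraction network driven by an embedding computed from an adaptation utterance, and a multi-task objective that adds an auxiliary speaker-identification loss to the signal reconstruction loss. The PyTorch sketch below is not the authors' implementation; it only illustrates, under simplified assumptions, how these pieces can fit together. All layer sizes, the single-layer mask estimator, the mean-pooled speaker embedding, the multiplicative adaptation point, and the loss weight alpha are illustrative, and the spatial features from point (1) are omitted.

# Minimal sketch (assumed architecture, not the paper's): time-domain
# extraction conditioned on a target-speaker embedding, trained with a
# reconstruction loss plus an auxiliary speaker-identification loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


def si_snr_loss(est, ref, eps=1e-8):
    """Negative scale-invariant SNR between estimated and reference waveforms."""
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    proj = (est * ref).sum(-1, keepdim=True) * ref / (ref.pow(2).sum(-1, keepdim=True) + eps)
    noise = est - proj
    ratio = proj.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps)
    return -10 * torch.log10(ratio + eps).mean()


class TimeDomainSpeakerBeamSketch(nn.Module):
    def __init__(self, n_filters=256, kernel=20, stride=10, n_speakers=100):
        super().__init__()
        # Learned 1-D conv encoder/decoder in place of an STFT (TasNet-style).
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride, bias=False)
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=stride, bias=False)
        # Auxiliary network: target-speaker embedding from the adaptation utterance.
        self.aux_net = nn.Sequential(
            nn.Conv1d(n_filters, n_filters, 1), nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, 1),
        )
        # Mask estimator, split so the speaker embedding can multiplicatively
        # adapt an intermediate representation (SpeakerBeam-style conditioning).
        self.pre = nn.Sequential(nn.Conv1d(n_filters, n_filters, 1), nn.PReLU())
        self.post = nn.Sequential(nn.Conv1d(n_filters, n_filters, 1), nn.Sigmoid())
        # Speaker-ID head used only for the auxiliary training loss.
        self.spk_classifier = nn.Linear(n_filters, n_speakers)

    def forward(self, mixture, adaptation):
        # mixture, adaptation: (batch, samples)
        mix_feat = F.relu(self.encoder(mixture.unsqueeze(1)))        # (B, F, T)
        adapt_feat = F.relu(self.encoder(adaptation.unsqueeze(1)))
        spk_emb = self.aux_net(adapt_feat).mean(dim=-1)              # (B, F)
        h = self.pre(mix_feat) * spk_emb.unsqueeze(-1)               # multiplicative adaptation
        mask = self.post(h)
        est = self.decoder(mix_feat * mask).squeeze(1)               # (B, samples)
        spk_logits = self.spk_classifier(spk_emb)
        return est, spk_logits


def multitask_loss(est, target, spk_logits, spk_labels, alpha=0.1):
    # Reconstruction loss plus a weighted speaker-identification loss,
    # pushing the embedding towards more speaker-discriminative characteristics.
    return si_snr_loss(est, target) + alpha * F.cross_entropy(spk_logits, spk_labels)


if __name__ == "__main__":
    model = TimeDomainSpeakerBeamSketch()
    mix = torch.randn(2, 16000)        # 1 s mixtures at 16 kHz
    adapt = torch.randn(2, 16000)      # adaptation utterances of the target speakers
    target = torch.randn(2, 16000)     # clean target references (training only)
    labels = torch.tensor([3, 57])     # target-speaker identities for the auxiliary loss
    est, logits = model(mix, adapt)
    est = est[..., :target.shape[-1]]  # trim any decoder padding to the reference length
    loss = multitask_loss(est, target, logits, labels)
    loss.backward()
    print(loss.item())

In the paper the separation module is a much deeper TasNet-like convolutional stack; the sketch keeps only the conditioning and loss structure described in the abstract.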

Published
2020
Pages
691-695
Proceedings
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Conference
2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), Barcelona, ES
ISBN
978-1-5090-6631-5
Publisher
IEEE Signal Processing Society
Place
Barcelona, ES
DOI
10.1109/ICASSP40776.2020.9054683
UT WoS
000615970400138
EID Scopus
BibTeX
@INPROCEEDINGS{FITPUB12280,
   author = "Marc Delcroix and Tsubasa Ochiai and Kate\v{r}ina \v{Z}mol\'{i}kov\'{a} and Keisuke Kinoshita and Naohiro Tawara and Tomohiro Nakatani and Shoko Araki",
   title = "Improving Speaker Discrimination of Target Speech Extraction With Time-Domain Speakerbeam",
   pages = "691--695",
   booktitle = "ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings",
   year = 2020,
   location = "Barcelona, ES",
   publisher = "IEEE Signal Processing Society",
   ISBN = "978-1-5090-6631-5",
   doi = "10.1109/ICASSP40776.2020.9054683",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/12280"
}