Publication Details

Integration of Variational Autoencoder and Spatial Clustering for Adaptive Multi-Channel Neural Speech Separation

ŽMOLÍKOVÁ Kateřina, DELCROIX Marc, BURGET Lukáš, NAKATANI Tomohiro and ČERNOCKÝ Jan. Integration of Variational Autoencoder and Spatial Clustering for Adaptive Multi-Channel Neural Speech Separation. In: 2021 IEEE Spoken Language Technology Workshop, SLT 2021 - Proceedings. Shenzhen - virtual : IEEE Signal Processing Society, 2021, pp. 889-896. ISBN 978-1-7281-7066-4. Available from: https://ieeexplore.ieee.org/document/9383612
Czech title
Integrace variačního autoenkodéru a prostorového shlukování pro adaptivní multikanálovou neurální separaci řeči
Type
conference paper
Language
English
Authors
Žmolíková Kateřina, Ing., Ph.D. (DCGM FIT BUT)
Delcroix Marc (NTT)
Burget Lukáš, doc. Ing., Ph.D. (DCGM FIT BUT)
Nakatani Tomohiro (NTT)
Černocký Jan, prof. Dr. Ing. (DCGM FIT BUT)
URL
https://ieeexplore.ieee.org/document/9383612
Keywords

Multi-channel speech separation, variational autoencoder, spatial clustering, DOLPHIN

Abstract

In this paper, we propose a method combining a variational autoencoder model of speech with a spatial clustering approach for multichannel speech separation. The advantage of integrating spatial clustering with a spectral model has been shown in several works. As the spectral model, previous works used either factorial generative models of the mixed speech or discriminative neural networks. In our work, we combine the strengths of both approaches by building a factorial model based on a generative neural network, a variational autoencoder. By doing so, we can exploit the modeling power of neural networks while keeping a structured model. Such a model can be advantageous when adapting to new noise conditions, as only the noise part of the model needs to be modified. We show experimentally that our model significantly outperforms a previous factorial model based on a Gaussian mixture model (DOLPHIN), performs comparably to the integration of permutation invariant training with spatial clustering, and enables us to easily adapt to new noise conditions.
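To illustrate the general idea of spectral-spatial integration described in the abstract (not the paper's actual implementation), the following minimal sketch fuses per-source posteriors from a spectral model (e.g., a VAE decoder) with posteriors from a spatial clustering model (e.g., a cACGMM) under an assumed conditional-independence factorization; all array names and shapes here are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical sketch: fuse spectral and spatial evidence into
# per-source time-frequency masks, assuming the two models are
# conditionally independent given the source assignment.

rng = np.random.default_rng(0)
n_sources, n_frames, n_freqs = 2, 5, 4

# Stand-ins for model outputs: unnormalized per-source "likelihoods".
spectral = rng.random((n_sources, n_frames, n_freqs))  # e.g. from a VAE decoder
spatial = rng.random((n_sources, n_frames, n_freqs))   # e.g. from spatial clustering

def combine_posteriors(spectral, spatial):
    """Multiply the two models' evidence and renormalize over sources."""
    joint = spectral * spatial
    return joint / joint.sum(axis=0, keepdims=True)

masks = combine_posteriors(spectral, spatial)
# The combined masks sum to one over sources at every T-F bin.
print(np.allclose(masks.sum(axis=0), 1.0))
```

In schemes of this kind, adaptation to new noise conditions can then touch only the component that models noise, which is the structural advantage the abstract points to.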

Published
2021
Pages
889-896
Proceedings
2021 IEEE Spoken Language Technology Workshop, SLT 2021 - Proceedings
Conference
2021 IEEE Spoken Language Technology Workshop (SLT), Shenzhen - virtual conference, CN
ISBN
978-1-7281-7066-4
Publisher
IEEE Signal Processing Society
Place
Shenzhen - virtual, CN
DOI
10.1109/SLT48900.2021.9383612
UT WoS
000663633300121
EID Scopus
BibTeX
@INPROCEEDINGS{FITPUB12553,
   author = "Kate\v{r}ina \v{Z}mol\'{i}kov\'{a} and Marc Delcroix and Luk\'{a}\v{s} Burget and Tomohiro Nakatani and Jan \v{C}ernock\'{y}",
   title = "Integration of Variational Autoencoder and Spatial Clustering for Adaptive Multi-Channel Neural Speech Separation",
   pages = "889--896",
   booktitle = "2021 IEEE Spoken Language Technology Workshop, SLT 2021 - Proceedings",
   year = 2021,
   location = "Shenzhen - virtual, CN",
   publisher = "IEEE Signal Processing Society",
   ISBN = "978-1-7281-7066-4",
   doi = "10.1109/SLT48900.2021.9383612",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/12553"
}