Publication Details

DPCCN: Densely-Connected Pyramid Complex Convolutional Network for Robust Speech Separation and Extraction

HAN Jiangyu, LONG Yanhua, BURGET Lukáš and ČERNOCKÝ Jan. DPCCN: Densely-Connected Pyramid Complex Convolutional Network for Robust Speech Separation and Extraction. In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Singapore: IEEE Signal Processing Society, 2022, pp. 7292-7296. ISBN 978-1-6654-0540-9. Available from: https://ieeexplore.ieee.org/document/9747340
Czech title
DPCCN: Hustě propojená pyramidální komplexní konvoluční síť pro robustní separaci a extrakci řeči
Type
conference paper
Language
english
Authors
Han Jiangyu (SHNU)
Long Yanhua (SHNU)
Burget Lukáš, doc. Ing., Ph.D. (DCGM FIT BUT)
Černocký Jan, prof. Dr. Ing. (DCGM FIT BUT)
URL
https://ieeexplore.ieee.org/document/9747340
Keywords

DPCCN, Mixture-Remix, cross-domain, speech separation, unsupervised target speech extraction

Abstract

In recent years, a number of time-domain speech separation methods have been proposed. However, most of them are very sensitive to the acoustic environment and to tasks with wide domain coverage. In this paper, from the time-frequency domain perspective, we propose a densely-connected pyramid complex convolutional network, termed DPCCN, to improve the robustness of speech separation under complicated conditions. Furthermore, we generalize DPCCN to target speech extraction (TSE) by integrating a new, specially designed speaker encoder. Moreover, we also investigate the robustness of DPCCN on unsupervised cross-domain TSE tasks. A Mixture-Remix approach is proposed to adapt to the target-domain acoustic characteristics by fine-tuning the source model. We evaluate the proposed methods not only under noisy and reverberant in-domain conditions, but also under clean but cross-domain conditions. Results show that for both speech separation and extraction, the DPCCN-based systems achieve significantly better performance and robustness than the currently dominant time-domain methods, especially for cross-domain tasks. In particular, we find that Mixture-Remix fine-tuning with DPCCN significantly outperforms TD-SpeakerBeam for unsupervised cross-domain TSE, with around 3.5 dB SI-SNR improvement on the target-domain test set, without any source-domain performance degradation.
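The reported gains are given in SI-SNR (scale-invariant signal-to-noise ratio), the standard objective metric for separation and extraction quality. For reference, below is a minimal NumPy sketch of the usual SI-SNR definition; the function name and epsilon value are illustrative and this is not the authors' evaluation code.

import numpy as np

def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant signal-to-noise ratio (SI-SNR) in dB (illustrative sketch)."""
    # Remove DC offset so the scale-invariant projection is well defined.
    estimate = estimate - np.mean(estimate)
    target = target - np.mean(target)
    # Project the estimate onto the target to obtain the scaled reference component.
    s_target = np.dot(estimate, target) / (np.dot(target, target) + eps) * target
    # Everything orthogonal to the target is treated as noise.
    e_noise = estimate - s_target
    return 10.0 * np.log10((np.sum(s_target ** 2) + eps) / (np.sum(e_noise ** 2) + eps))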

Published
2022
Pages
7292-7296
Proceedings
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Conference
2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, SG
ISBN
978-1-6654-0540-9
Publisher
IEEE Signal Processing Society
Place
Singapore, SG
DOI
10.1109/ICASSP43922.2022.9747340
UT WoS
000864187907119
EID Scopus
BibTeX
@INPROCEEDINGS{FITPUB12787,
   author = "Jiangyu Han and Yanhua Long and Luk\'{a}\v{s} Burget and Jan \v{C}ernock\'{y}",
   title = "DPCCN: Densely-Connected Pyramid Complex Convolutional Network for Robust Speech Separation and Extraction",
   pages = "7292--7296",
   booktitle = "ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings",
   year = 2022,
   location = "Singapore, SG",
   publisher = "IEEE Signal Processing Society",
   ISBN = "978-1-6654-0540-9",
   doi = "10.1109/ICASSP43922.2022.9747340",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/12787"
}