Result Details

Fine-tune Before Structured Pruning: Towards Compact and Accurate Self-Supervised Models for Speaker Diarization

HAN, J.; LANDINI, F.; ROHDIN, J.; SILNOVA, A.; DIEZ, M.; ČERNOCKÝ, J.; BURGET, L. Fine-tune Before Structured Pruning: Towards Compact and Accurate Self-Supervised Models for Speaker Diarization. In Proceedings of the Annual Conference of the International Speech Communication Association (Interspeech). Rotterdam, The Netherlands: International Speech Communication Association, 2025, p. 1583–1587.
Type
conference paper
Language
English
Authors
Jiangyu Han, Federico Nicolás Landini, Johan Andréas Rohdin, Anna Silnova, Mireia Diez Sánchez, Jan Černocký, Lukáš Burget
Abstract

Self-supervised learning (SSL) models like WavLM can be effectively utilized when building speaker diarization systems, but they are often large and slow, limiting their use in resource-constrained scenarios. Previous studies have explored compression techniques, usually at the price of degraded performance at high pruning ratios. In this work, we propose to compress SSL models through structured pruning guided by knowledge distillation. Unlike existing works, we emphasize the importance of fine-tuning SSL models before pruning. Experiments on the far-field single-channel AMI, AISHELL-4, and AliMeeting datasets show that our method can remove up to 80% of the redundant parameters of WavLM Base+ and WavLM Large without any performance degradation. After pruning, inference on a single GPU is 4.0 and 2.6 times faster for the Base+ and Large models, respectively. Our source code is publicly available.
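
As a rough illustration of the distill-then-prune idea described in the abstract, the following minimal PyTorch sketch distills a frozen, fine-tuned "teacher" into a gated "student" copy while an L1 penalty pushes per-block gates toward zero; blocks whose gates vanish can then be removed, a common stand-in for structured pruning. This is an assumption-laden toy, not the paper's actual architecture or losses, and all names in it (GatedBlock, TinySSL, lam) are hypothetical.

import torch
import torch.nn as nn

class GatedBlock(nn.Module):
    """A tiny residual FFN block whose contribution is scaled by a learnable gate."""
    def __init__(self, dim):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # One gate per block: driving it to ~0 marks the whole block as prunable.
        self.gate = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return x + self.gate * self.ffn(x)

class TinySSL(nn.Module):
    """Stand-in for an SSL encoder such as WavLM (hypothetical toy model)."""
    def __init__(self, dim=64, depth=6):
        super().__init__()
        self.blocks = nn.ModuleList(GatedBlock(dim) for _ in range(depth))

    def forward(self, x):
        for blk in self.blocks:
            x = blk(x)
        return x

teacher = TinySSL()                               # plays the fine-tuned model
student = TinySSL()
student.load_state_dict(teacher.state_dict())     # student starts from fine-tuned weights
for p in teacher.parameters():
    p.requires_grad_(False)                       # teacher stays frozen

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
lam = 1e-2                                        # sparsity weight: higher -> more pruning

for step in range(100):
    x = torch.randn(8, 50, 64)                    # fake (batch, frames, dim) features
    distill = nn.functional.l1_loss(student(x), teacher(x))     # match teacher outputs
    sparsity = sum(b.gate.abs().sum() for b in student.blocks)  # L1 on the gates
    loss = distill + lam * sparsity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Blocks whose |gate| ends up below a small threshold can be dropped entirely,
# shrinking the model and speeding up inference.

Real structured-pruning setups typically use differentiable hard-concrete gates over finer-grained units (attention heads, FFN dimensions) under an explicit sparsity target; this toy only conveys the overall flow of distilling from a fine-tuned model while learning what to prune.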

Keywords

fine-tuning | knowledge distillation | model compression | speaker diarization | structured pruning | WavLM

URL
https://www.isca-archive.org/interspeech_2025/han25_interspeech.pdf
Published
2025
Pages
1583–1587
Journal
Interspeech
Proceedings
Proceedings of the Annual Conference of the International Speech Communication Association Interspeech
Conference
Interspeech
Publisher
International Speech Communication Association
Place
Rotterdam, The Netherlands
DOI
10.21437/Interspeech.2025-484
BibTeX
@inproceedings{BUT199389,
  author="Jiangyu {Han} and Federico Nicolás {Landini} and Johan Andréas {Rohdin} and Anna {Silnova} and Mireia {Diez Sánchez} and Jan {Černocký} and Lukáš {Burget}",
  title="Fine-tune Before Structured Pruning: Towards Compact and Accurate Self-Supervised Models for Speaker Diarization",
  booktitle="Proceedings of the Annual Conference of the International Speech Communication Association Interspeech",
  year="2025",
  pages="1583--1587",
  publisher="International Speech Communication Association",
  address="Rotterdam, The Netherlands",
  doi="10.21437/Interspeech.2025-484",
  url="https://www.isca-archive.org/interspeech_2025/han25_interspeech.pdf"
}
Projects
Advancing Robust and Creative Human language technologies through CHallenge Events and Research, EU, European Defence Fund, start: 2024-12-01, end: 2029-11-30, running
Exchanges for SPEech ReseArch aNd TechnOlogies, EU, Horizon 2020, start: 2021-01-01, end: 2025-12-31, running
Linguistics, Artificial Intelligence and Language and Speech Technologies: from Research to Applications, EU, Intersectoral Cooperation (MEZISEKTOROVÁ SPOLUPRÁCE), EH23_020/0008518, start: 2025-01-01, end: 2028-12-31, running