Publication Details

An attention-based backend allowing efficient fine-tuning of transformer models for speaker verification

PENG Junyi, PLCHOT Oldřich, STAFYLAKIS Themos, MOŠNER Ladislav, BURGET Lukáš and ČERNOCKÝ Jan. An attention-based backend allowing efficient fine-tuning of transformer models for speaker verification. In: 2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings. Doha: IEEE Signal Processing Society, 2023, pp. 555-562. ISBN 978-1-6654-7189-3. Available from: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10022775
Czech title
Backend pro rozpoznávání mluvčího založený na attention modelech umožňující efektivní jemné doladění transformerových modelů
Type
conference paper
Language
english
Authors
Peng Junyi, MSc. Eng. (DCGM FIT BUT)
Plchot Oldřich, Ing., Ph.D. (DCGM FIT BUT)
Stafylakis Themos (OMILIA)
Mošner Ladislav, Ing. (DCGM FIT BUT)
Burget Lukáš, doc. Ing., Ph.D. (DCGM FIT BUT)
Černocký Jan, prof. Dr. Ing. (DCGM FIT BUT)
Keywords

Pre-trained model, fine-tuning strategy, speaker verification, attentive pooling

Abstract

In recent years, the self-supervised learning paradigm has received extensive attention due to its great success on various downstream tasks. However, fine-tuning strategies for adapting such pre-trained models to the speaker verification task have yet to be fully explored. In this paper, we analyze several feature extraction approaches built on top of a pre-trained model, as well as regularization and a learning-rate scheduler that stabilize the fine-tuning process and further boost performance. We propose multi-head factorized attentive pooling, which factorizes the comparison of speaker representations into multiple phonetic clusters. We regularize towards the parameters of the pre-trained model and set a different learning rate for each of its layers during fine-tuning. The experimental results show that our method can significantly shorten the training time to 4 hours and achieve SOTA performance: 0.59%, 0.79% and 1.77% EER on Vox1-O, Vox1-E and Vox1-H, respectively.
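The fine-tuning recipe sketched in the abstract combines per-layer learning rates with a penalty that keeps parameters close to their pre-trained values. The snippet below is a minimal illustration of both ideas, not the authors' implementation: the choice of WavLM as the pre-trained model, the geometric layer-wise decay, and the hyperparameter values (BASE_LR, DECAY, REG_WEIGHT) are assumptions made for the example.

import torch
from transformers import WavLMModel  # assumed pre-trained transformer; illustrative choice

BASE_LR = 1e-4     # assumed learning rate for the topmost transformer layer
DECAY = 0.9        # assumed per-layer decay towards the input layers
REG_WEIGHT = 1e-3  # assumed strength of the pull towards pre-trained weights

model = WavLMModel.from_pretrained("microsoft/wavlm-base-plus")
pretrained = {n: p.detach().clone() for n, p in model.named_parameters()}

# Layer-wise learning rates: lower layers (generic acoustic features) move
# slowly, upper layers (more task-specific) move faster. For brevity this
# sketch only optimizes the transformer layers themselves.
layers = list(model.encoder.layers)
param_groups = [
    {"params": layer.parameters(), "lr": BASE_LR * DECAY ** (len(layers) - 1 - i)}
    for i, layer in enumerate(layers)
]
optimizer = torch.optim.AdamW(param_groups)

def reg_to_pretrained(model, pretrained, weight):
    """L2 penalty pulling the current parameters towards the pre-trained ones."""
    reg = 0.0
    for name, param in model.named_parameters():
        if param.requires_grad:
            reg = reg + (param - pretrained[name].to(param.device)).pow(2).sum()
    return weight * reg

# Inside the training loop the penalty would be added to the task loss, e.g.:
#   loss = speaker_verification_loss + reg_to_pretrained(model, pretrained, REG_WEIGHT)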

Published
2023
Pages
555-562
Proceedings
2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings
Conference
IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP, SLT, Doha, QA
ISBN
978-1-6654-7189-3
Publisher
IEEE Signal Processing Society
Place
Doha, QA
DOI
10.1109/SLT54892.2023.10022775
UT WoS
000968851900075
BibTeX
@INPROCEEDINGS{FITPUB12984,
   author = "Junyi Peng and Old\v{r}ich Plchot and Themos Stafylakis and Ladislav Mo\v{s}ner and Luk\'{a}\v{s} Burget and Jan \v{C}ernock\'{y}",
   title = "An attention-based backend allowing efficient fine-tuning of transformer models for speaker verification",
   pages = "555--562",
   booktitle = "2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings",
   year = 2023,
   location = "Doha, QA",
   publisher = "IEEE Signal Processing Society",
   ISBN = "978-1-6654-7189-3",
   doi = "10.1109/SLT54892.2023.10022775",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/12984"
}