Result detail

Parameter-Efficient Transfer Learning of Pre-Trained Transformer Models for Speaker Verification Using Adapters

PENG, J.; STAFYLAKIS, T.; GU, R.; PLCHOT, O.; MOŠNER, L.; BURGET, L.; ČERNOCKÝ, J. Parameter-Efficient Transfer Learning of Pre-Trained Transformer Models for Speaker Verification Using Adapters. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Rhodes Island: IEEE Signal Processing Society, 2023. p. 1-5. ISBN: 978-1-7281-6327-7.
Type
paper in conference proceedings
Language
English
Authors
Peng Junyi, UPGM (FIT)
Stafylakis Themos
Gu R.
Plchot Oldřich, Ing., Ph.D., UPGM (FIT)
Mošner Ladislav, Ing., UPGM (FIT)
Burget Lukáš, doc. Ing., Ph.D., UPGM (FIT)
Černocký Jan, prof. Dr. Ing., UPGM (FIT)
Abstract

Recently, pre-trained Transformer models have received rising interest in the field
of speech processing thanks to their great success in various downstream tasks.
However, most fine-tuning approaches update all the parameters of the pre-trained
model, which becomes prohibitive as the model size grows and sometimes results in
overfitting on small datasets. In this paper, we conduct a comprehensive analysis of
applying parameter-efficient transfer learning (PETL) methods to reduce the number
of learnable parameters required for adaptation to speaker verification tasks.
Specifically, during the fine-tuning process, the pre-trained models are frozen,
and only lightweight modules inserted in each Transformer block are trainable
(a method known as adapters). Moreover, to boost performance in a cross-language,
low-resource scenario, the Transformer model is first tuned on a large intermediate
dataset before being fine-tuned on the small target dataset. While updating fewer
than 4% of the parameters, our proposed PETL-based methods achieve performance
comparable to full fine-tuning (Vox1-O: 0.55%, Vox1-E: 0.82%, Vox1-H: 1.73%).
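
The adapter mechanism described above can be sketched in a few lines of PyTorch. The following is a minimal illustration under stated assumptions, not the authors' implementation: the bottleneck width of 64, the insertion point (a forward hook on each block's output), and the toy encoder standing in for a pre-trained Transformer are all illustrative choices.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-projection, non-linearity, up-projection,
    with a residual connection around the bottleneck."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        # Zero-init the up-projection so each adapter starts as an identity
        # mapping and training begins from the pre-trained behaviour.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

def add_adapters_and_freeze(model: nn.Module, blocks, dim: int):
    """Freeze all pre-trained parameters and attach one trainable adapter
    per Transformer block (applied to each block's output tensor)."""
    for p in model.parameters():
        p.requires_grad = False
    adapters = nn.ModuleList(Adapter(dim) for _ in blocks)
    for block, adapter in zip(blocks, adapters):
        # A forward hook that returns a value replaces the block's output;
        # this assumes each block returns a single tensor.
        block.register_forward_hook(lambda m, inp, out, a=adapter: a(out))
    return adapters  # only these parameters go to the optimizer

# Toy stand-in for a pre-trained encoder (illustrative sizes only).
dim = 768
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True),
    num_layers=12)
adapters = add_adapters_and_freeze(encoder, encoder.layers, dim)

trainable = sum(p.numel() for p in adapters.parameters())
total = trainable + sum(p.numel() for p in encoder.parameters())
print(f"trainable fraction: {trainable / total:.2%}")  # roughly 1-2%

With these sizes the adapters account for roughly 1-2% of all parameters, consistent with the abstract's figure of fewer than 4% of parameters being updated.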

Keywords

Speaker verification, pre-trained model, adapter, fine-tuning, transfer learning

URL
https://ieeexplore.ieee.org/document/10094795
Year
2023
Pages
1–5
Proceedings
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Conference
2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
ISBN
978-1-7281-6327-7
Publisher
IEEE Signal Processing Society
Place
Rhodes Island
DOI
10.1109/ICASSP49357.2023.10094795
BibTeX
@inproceedings{BUT185200,
  author="PENG, J. and STAFYLAKIS, T. and GU, R. and PLCHOT, O. and MOŠNER, L. and BURGET, L. and ČERNOCKÝ, J.",
  title="Parameter-Efficient Transfer Learning of Pre-Trained Transformer Models for Speaker Verification Using Adapters",
  booktitle="ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings",
  year="2023",
  pages="1--5",
  publisher="IEEE Signal Processing Society",
  address="Rhodes Island",
  doi="10.1109/ICASSP49357.2023.10094795",
  isbn="978-1-7281-6327-7",
  url="https://ieeexplore.ieee.org/document/10094795"
}
Projects
Multi-linguality in speech technologies, MŠMT, INTER-EXCELLENCE - INTER-ACTION subprogramme, LTAIN19087, start: 2020-01-01, end: 2023-08-31, completed
Neural representations in multimodal and multilingual modelling, GAČR, EXPRO Grant Projects of Excellence in Basic Research - 2019, GX19-26934X, start: 2019-01-01, end: 2023-12-31, completed
Robust processing of recordings for operations and security, MV, STRATEGIC SUPPORT FOR THE DEVELOPMENT OF SECURITY RESEARCH IN THE CZECH REPUBLIC 2019-2025 (IMPAKT 1) PROGRAMME, SUBPROGRAMME 1 JOINT RESEARCH PROJECTS (BV IMP1/1VS), VJ01010108, start: 2020-10-01, end: 2025-09-30, completed
Exchanges for speech research and technologies, EU, Horizon 2020, start: 2021-01-01, end: 2025-12-31, in progress