Result Details
Analysis of ABC Frontend Audio Systems for the NIST-SRE24
Silnova Anna, M.Sc., Ph.D., DCGM (FIT)
Mošner Ladislav, Ing., DCGM (FIT)
Peng Junyi, DCGM (FIT)
Plchot Oldřich, Ing., Ph.D., DCGM (FIT)
Rohdin Johan Andréas, M.Sc., Ph.D., FIT (FIT), DCGM (FIT)
Zhang Lin, Ph.D.
Han Jiangyu, DCGM (FIT)
Pálka Petr, Ing., FIT (FIT), DCGM (FIT)
Landini Federico Nicolás, Ph.D.
Burget Lukáš, doc. Ing., Ph.D., DCGM (FIT)
Stafylakis Themos
Cumani Sandro, Ph.D.
Boboš Dominik, Ing.
Hlaváček Miroslav
Kodovský Martin
Pavlíček Tomáš
We present a comprehensive analysis of the embedding extractors (frontends) developed by the ABC team for the audio track of NIST SRE 2024. We follow the two scenarios imposed by NIST: training only on a provided set of telephone recordings (fixed condition) or additionally using publicly available data (open condition). Under these constraints, we develop the best possible speaker embedding extractors for the predominant conversational telephone speech (CTS) domain. We explored architectures based on ResNet with different pooling mechanisms, the recently introduced ReDimNet architecture, as well as a system based on the XLS-R model, which represents the family of large pre-trained self-supervised models. In the open condition, we train on the VoxBlink2 dataset, which contains 110 thousand speakers across multiple languages. We observed good performance and robustness of the VoxBlink-trained models, and our experiments provide practical recipes for developing state-of-the-art frontends for speaker recognition.
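As a hedged illustration of the kind of frontend the abstract describes (a frame-level encoder followed by a pooling mechanism and an embedding layer), the sketch below shows attentive statistics pooling in PyTorch. The paper does not publish this code; the toy encoder, layer sizes, and embedding dimension are placeholder assumptions, and a real system would use a ResNet, ReDimNet, or XLS-R encoder in place of the small Conv1d stack.

# Minimal sketch of a speaker embedding extractor with attentive statistics pooling.
# Assumptions (not from the paper): 80-dim log-Mel inputs, 256 encoder channels,
# 192-dim embeddings, and a toy two-layer Conv1d encoder as a stand-in for
# ResNet / ReDimNet / XLS-R frame-level features.
import torch
import torch.nn as nn


class AttentiveStatsPooling(nn.Module):
    """Pools frame-level features (batch, channels, frames) into a fixed-size
    vector by concatenating an attention-weighted mean and standard deviation."""

    def __init__(self, channels: int, bottleneck: int = 128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Conv1d(channels, bottleneck, kernel_size=1),
            nn.Tanh(),
            nn.Conv1d(bottleneck, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames)
        w = torch.softmax(self.attention(x), dim=2)       # attention weights over frames
        mean = torch.sum(w * x, dim=2)                     # weighted mean
        var = torch.sum(w * x ** 2, dim=2) - mean ** 2     # weighted variance
        std = torch.sqrt(var.clamp(min=1e-8))              # weighted standard deviation
        return torch.cat([mean, std], dim=1)               # (batch, 2 * channels)


class ToyEmbeddingExtractor(nn.Module):
    """Placeholder frontend: toy frame-level encoder -> pooling -> linear embedding."""

    def __init__(self, feat_dim: int = 80, channels: int = 256, emb_dim: int = 192):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(feat_dim, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.pooling = AttentiveStatsPooling(channels)
        self.embedding = nn.Linear(2 * channels, emb_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, feat_dim, frames), e.g. log-Mel filterbank features
        return self.embedding(self.pooling(self.encoder(feats)))


if __name__ == "__main__":
    model = ToyEmbeddingExtractor()
    fbanks = torch.randn(4, 80, 300)    # 4 utterances, 80-dim features, 300 frames
    print(model(fbanks).shape)          # torch.Size([4, 192])

The pooling layer is the only part tied to the abstract's "different pooling mechanisms"; swapping it for plain temporal statistics or another variant only changes the input dimension of the embedding layer.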
embedding extractors | NIST-SRE | speaker recognition | VoxBlink
@inproceedings{BUT199934,
author="{} and Anna {Silnova} and Ladislav {Mošner} and Junyi {Peng} and Oldřich {Plchot} and Johan Andréas {Rohdin} and Lin {Zhang} and Jiangyu {Han} and Petr {Pálka} and Federico Nicolás {Landini} and Lukáš {Burget} and {} and Sandro {Cumani} and Dominik {Boboš} and {} and {} and {}",
title="Analysis of ABC Frontend Audio Systems for the NIST-SRE24",
booktitle="Proceedings of the Annual Conference of the International Speech Communication Association Interspeech",
year="2025",
journal="Interspeech",
pages="5763--5767",
publisher="International Speech Communication Association",
address="Rotterdam",
doi="10.21437/Interspeech.2025-2737",
url="https://www.isca-archive.org/interspeech_2025/barahona25_interspeech.pdf"
}
Linguistics, Artificial Intelligence and Language and Speech Technologies: from Research to Applications, EU, Intersectoral Cooperation (MEZISEKTOROVÁ SPOLUPRÁCE), EH23_020/0008518, start: 2025-01-01, end: 2028-12-31, running