Result detail

Self-distillation-based domain exploration for source speaker verification under spoofed speech from unknown voice conversion

MA, X.; ZHANG, R.; WEI, J.; LU, X.; XU, J.; ZHANG, L.; LU, W. Self-distillation-based domain exploration for source speaker verification under spoofed speech from unknown voice conversion. Speech communication, 2025, vol. 167, no. 103153, p. 1-12.
Type
journal article
Language
English
Authors
Ma Xinlei
Zhang Ruiteng
Wei Jianguo
Lu Xugang
Xu Junhai
Zhang Lin, Ph.D., UPGM (FIT)
Lu Wenhuan
Abstract

Advancements in voice conversion (VC) technology have made it easier to generate spoofed speech that closely resembles the identity of a target speaker. Meanwhile, verification systems within the realm of speech processing are widely used to identify speakers. However, the misuse of VC algorithms poses significant privacy and security risks by potentially deceiving these systems. To address this issue, source speaker verification (SSV) has been proposed to verify the identity of the source speaker behind spoofed speech generated by VC. Nevertheless, SSV often suffers severe performance degradation when confronted with unknown VC algorithms, a scenario usually neglected by researchers. To handle this cross-voice-conversion scenario and improve performance on unknown VC methods, we reformulate it as a domain adaptation task, treating each VC method as a distinct domain. In this context, we propose an unsupervised domain adaptation (UDA) algorithm termed self-distillation-based domain exploration (SDDE). This algorithm adopts a siamese framework with two branches: one trained on the source (known) domain and the other trained on the target domains (unknown VC methods). The branch trained on the source domain leverages supervised learning to capture the source speaker's intrinsic features, while the branch trained on the target domain employs self-distillation to explore target-domain information from multi-scale segments. Additionally, we have constructed a large-scale dataset comprising over 7945 h of spoofed speech to evaluate the proposed SDDE. Experimental results on this dataset demonstrate that SDDE outperforms traditional UDAs and substantially enhances the performance of the SSV model under unknown VC scenarios. The code for data generation and the trial lists are available at https://github.com/zrtlemontree/cross-domain-source-speaker-verification.
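The record does not include the paper's code, but the abstract's description of the target-domain branch (self-distillation that pulls embeddings of multi-scale segments toward the embedding of the full utterance) can be sketched roughly. The toy linear encoder, the segment scales, and the cosine-distance loss below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    # project embeddings onto the unit sphere, as is common for speaker embeddings
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def embed(frames, W):
    # toy encoder: mean-pool frame-level features, then a linear projection
    return l2_normalize(np.mean(frames, axis=0) @ W)

def multiscale_segments(frames, scales=(25, 50, 100)):
    # take one crop per scale from the utterance (illustrative choice of scales)
    return [frames[:s] for s in scales if len(frames) >= s]

def self_distillation_loss(frames, W_student, W_teacher):
    # teacher branch sees the whole utterance; student branch sees short segments
    teacher = embed(frames, W_teacher)
    segs = multiscale_segments(frames)
    # cosine distance (1 - cosine similarity), averaged over all scales
    losses = [1.0 - float(embed(s, W_student) @ teacher) for s in segs]
    return sum(losses) / len(losses)

rng = np.random.default_rng(0)
frames = rng.standard_normal((200, 40))   # 200 frames x 40-dim acoustic features
W = rng.standard_normal((40, 64)) * 0.1   # shared projection for this sketch
loss = self_distillation_loss(frames, W, W)
print(round(loss, 4))
```

In the actual SDDE framework the two siamese branches are deep networks rather than a linear projection, and the source-domain branch additionally carries a supervised speaker-classification loss; this sketch only isolates the multi-scale distillation idea.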

Keywords

Source speaker verification, Unsupervised domain adaptation, Spoofing attack, Self-distillation

URL
https://www.sciencedirect.com/science/article/pii/S0167639324001249?pes=vor&utm_source=scopus&getft_integrator=scopus
Year
2025
Pages
1–12
Journal
Speech communication, vol. 167, no. 103153, ISSN 0167-6393
Publisher
Elsevier
DOI
10.1016/j.specom.2024.103153
UT WoS
001391212500001
EID Scopus
BibTeX
@article{BUT201395,
  author="Xinlei {Ma} and Ruiteng {Zhang} and Jianguo {Wei} and Xugang {Lu} and Junhai {Xu} and Lin {Zhang} and Wenhuan {Lu}",
  title="Self-distillation-based domain exploration for source speaker verification under spoofed speech from unknown voice conversion",
  journal="Speech communication",
  year="2025",
  volume="167",
  number="103153",
  pages="1--12",
  doi="10.1016/j.specom.2024.103153",
  issn="0167-6393",
  url="https://www.sciencedirect.com/science/article/pii/S0167639324001249?pes=vor&utm_source=scopus&getft_integrator=scopus"
}
Files
Projects
Contemporary methods of processing, analysis and visualization of multimedia and 3D data, VUT, VUT internal projects, FIT-S-23-8278, start: 2023-03-01, end: 2026-02-28, completed
Research groups
Department