30 January 2024

Humans and security systems cannot reliably detect voice deepfakes. Attacks are on the rise

Spreading alarming messages or coaxing out confidential company or bank data: artificial intelligence is developing so rapidly that almost anyone can now create high-quality deepfake voice recordings at home. Neither humans nor biometric systems can reliably distinguish artificial speech from real speech. Researchers from FIT BUT, together with developers of commercial systems, now want to design more reliable testing and more accurate detection of deepfakes. They are responding to a call issued by the Ministry of the Interior.

Anton Firc of FIT BUT first addressed deepfakes in his master's thesis, in which he investigated the resistance of voice biometrics to them. Daniel Prudky's research followed up on the same issue: he sent voice messages to 31 respondents and tested their ability to spot a deepfake in an ordinary conversation. "The respondents were told a cover story about testing the user-friendliness of voice messages. One deepfake recording was included in the test conversations and their reactions were monitored. The results showed that none of them recognized the fraudulent deepfake message," Firc explains.

However, in the same experiment, when respondents were told that one of the voice messages was fake, they identified it with almost 80% accuracy. "The research showed that although a deepfake recording is easy to pick out among genuine ones, no one detects it in a normal conversation," Firc adds. Part of the reason, he says, is that listeners do not expect a deepfake in that context, and that is exactly what the creators of deepfake recordings can exploit in practice.


Complete article here.
