Press Release

Date: 31 August 2021

FIT student researched deepfakes. A synthetic voice managed to fool both systems and people


Is it possible for an attacker to create an artificial voice and use it to call bank or operator helplines, impersonate customers and collect sensitive information? Anton Firc from the Faculty of Information Technology of the Brno University of Technology decided to find out. He managed to breach both biometric systems he tested, so that they accepted artificial speech without any suspicion. During his work he also found that not even the human ear can reliably tell human speech from robot speech. According to him, this leaves room for the development of "deepfakes" and attacks utilising a synthesised voice.

In his diploma thesis, Anton Firc from FIT BUT decided to explore the usability of "deepfakes". "These are synthetic media depicting events that never happened," explained Firc. Specifically, he took an interest in voice biometrics, a field that is not yet well explored, and in the potential impact of such synthetic media on cybersecurity. "Originally, I planned to contact companies which already use biometric systems. Those are usually call centres, banks and telephone operators. I wanted to test how the system works directly with them, and what impact it would have if somebody 'borrowed' someone else's voice and tried to access that person's customer account and information," described Anton Firc. However, he did not meet with much understanding from the companies, so in the end he had to make do without systems actually used by organisations.

[img]
Anton Firc researched the little-explored field of deepfakes. His research showed that attacks using an artificial voice are possible | Author: Anton Firc's archive

For this reason, he changed the whole procedure and divided the work into three parts. "One part consisted of breaching the biometric system itself. For the experiment to be feasible in our region, I needed the synthetic voice to speak Czech; that was the task of the second part. In the third part, it was important for the robot to be able to hold a meaningful conversation which wouldn't arouse suspicion," explained Firc, who presented his diploma thesis at Excel@FIT.

He managed to gain access to two biometric systems, and with both of them he reached the point where they accepted synthetic speech without any suspicion. "Systems use two basic verification methods: either on the basis of a spoken text, or only by verifying voice characteristics. It turned out that systems using text-based verification are safer, because it is harder to exactly reproduce the style of a person's speech and get certain phrases right than to just falsify a voice," noted Anton Firc.
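For a concrete picture of the second, voice-only approach, the sketch below shows text-independent speaker verification with the open-source SpeechBrain toolkit and a pretrained speaker-verification model. The toolkit, model and file names are illustrative assumptions; the press release does not say which systems or tools were actually tested.

```python
# Minimal sketch of text-independent speaker verification: the system
# compares the voice characteristics of two recordings and accepts the
# caller if their similarity exceeds a threshold. SpeechBrain and the
# ECAPA-TDNN model below are illustrative choices, not the systems
# tested in the thesis.
from speechbrain.pretrained import SpeakerRecognition

verifier = SpeakerRecognition.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_models/spkrec-ecapa-voxceleb",
)

# "enrolled.wav" stands for the customer's enrolled voice sample,
# "incoming_call.wav" for the voice heard on the line (placeholder names).
score, accepted = verifier.verify_files("enrolled.wav", "incoming_call.wav")
print(f"similarity score: {float(score):.3f}, accepted: {bool(accepted)}")
```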

In the second part, he had to train a speech synthesis model for Czech, a language for which very few datasets exist. "The result was not as high-quality as I had hoped, but it was enough to fool the biometric systems," said Firc.
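To illustrate what generating such a synthetic Czech utterance can look like with today's open-source tooling, here is a minimal sketch using the Coqui TTS toolkit and its multilingual XTTS v2 model, which supports Czech and can clone a voice from a short reference clip. The toolkit and model are illustrative assumptions only; the thesis trained its own model and did not use this pipeline.

```python
# Minimal sketch: generating a synthetic Czech utterance in a target
# speaker's voice with the open-source Coqui TTS toolkit. The toolkit,
# model name and file names are illustrative assumptions, not the
# pipeline described in the thesis.
from TTS.api import TTS

# Multilingual model capable of voice cloning from a short reference clip;
# availability depends on the installed TTS version.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="Dobrý den, rád bych ověřil stav svého účtu.",  # Czech prompt
    speaker_wav="reference_speaker.wav",  # short sample of the target voice (placeholder)
    language="cs",
    file_path="synthetic_czech.wav",
)
```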

Subsequently, Anton Firc also tested whether the human ear is better at detecting an artificial voice than technology. It turned out that this is not the case. "I played people a sample recording of a real human and then asked them to decide which of three other recordings were made by the actual human and which featured robot speech. The results were very mixed, and it turned out that as people's age increased, their ability to tell synthetic recordings from actual human speech decreased," he noted.
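As an illustration of how such a listening test can be scored, the following sketch tallies correct judgements and groups accuracy by age band. The response records are placeholders showing the structure only; they are not data from the study.

```python
# Minimal sketch of scoring a listening test like the one described:
# each participant hears a genuine reference recording and then judges
# which of three follow-up recordings are human and which are synthetic.
# The records below are placeholders, not data from the thesis.
from collections import defaultdict

responses = [
    # (participant_age, correct_judgements, total_judgements)
    (21, 3, 3),
    (34, 2, 3),
    (58, 1, 3),
]

by_age_band = defaultdict(lambda: [0, 0])
for age, correct, total in responses:
    lower = (age // 20) * 20
    band = f"{lower}-{lower + 19}"
    by_age_band[band][0] += correct
    by_age_band[band][1] += total

for band, (correct, total) in sorted(by_age_band.items()):
    print(f"age {band}: {correct / total:.0%} correct")
```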

The overall conclusion of the work is thus that an attack using an artificial voice would be possible in the Czech Republic. However, according to Anton Firc, no such attacks are documented to have actually taken place. "Either they are kept secret, or nobody has ever managed to follow through on one completely. Sending a phishing e-mail to a company is still much easier, so it seems that the time for this type of attack has not yet come," added Firc, who nevertheless believes it is the perfect time to start researching this threat and raising awareness of it.

Partly for this reason, he plans to pursue the topic further in his doctoral studies. "I would like to approach companies using these systems once more and test the attack with them. I plan to work on the topic actively, both in research and in the design of defence solutions," concluded Anton Firc.

Author: Mgr. Hana Kozubová

