
1 February 2024

How to defend against deepfakes? Kamil Malinka from FIT presented the risks of AI abuse


On Wednesday 31 January, the Economic Committee of the Chamber of Deputies held a seminar on the security threats of artificial intelligence. At the invitation of MP Patrik Nacher, chairman of the subcommittee on consumer protection, Kamil Malinka from FIT BUT presented his expert view to a wide audience from the state, commercial and academic spheres, as well as to representatives of defrauded citizens.

He spoke about deepfake scams, whose sophistication artificial intelligence is constantly increasing. "These are attacks on biometric systems, identity theft, hate propaganda and more. Artificial intelligence gives attackers powerful new tools against which we are relatively powerless," explains Kamil Malinka. Voice deepfakes have already reached such a level that an artificial voice cannot be distinguished from a real one by ear. Using a synthesized voice, an attacker calls and demands something. It may seem that someone you know is calling from a familiar number, but in reality it is a complete stranger calling from a different, spoofed number, and you will not have the slightest suspicion that the voice is fake. What is more, the call can even be handled by a robot programmed for this purpose. The success rate of such calls depends on how closely the victim is paying attention: when attention is fragmented and a person is under pressure, they are far more likely to fall for the scam.

Kamil Malinka studies deepfakes and their impact on security within the Security@FIT research group. Together with researchers from Speech@FIT and Phonexia, he is collaborating on cybersecurity research as part of a Home Office challenge. The aim is to develop tools that can reliably identify artificially created recordings.
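The article does not describe how such detection tools work internally. Purely as an illustration, the sketch below shows one common shape of a synthetic-speech detector: extract spectral features from recordings and train a binary classifier on labelled genuine and generated samples. The file names, labels and feature choices are hypothetical and are not part of the Security@FIT, Speech@FIT or Phonexia work.

```python
# Illustrative sketch only (not the FIT/Phonexia system): separate genuine
# speech from synthetic speech using time-averaged log-mel spectral features
# and a simple binary classifier. File names and labels are placeholders.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def spectral_features(path: str, sr: int = 16000, n_mels: int = 64) -> np.ndarray:
    """Load a recording and return its time-averaged log-mel spectrum."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)
    return log_mel.mean(axis=1)  # one fixed-length vector per recording

# Hypothetical labelled training data: 1 = genuine voice, 0 = AI-generated.
train_files = ["real_001.wav", "real_002.wav", "fake_001.wav", "fake_002.wav"]
train_labels = [1, 1, 0, 0]

X = np.stack([spectral_features(f) for f in train_files])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Score a new, unseen recording.
features = spectral_features("incoming_call.wav").reshape(1, -1)
prob_genuine = clf.predict_proba(features)[0, 1]
print(f"Estimated probability of a genuine human voice: {prob_genuine:.2f}")
```

Deployed detectors rely on far richer features, large labelled datasets and neural models, but the overall pipeline of features, labelled examples, classifier and score follows the same pattern.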
