News

Date: 7 October 2025

In October, Anton Firc from the Department of Intelligent Systems will defend his dissertation

We would like to invite you to the public defense of the dissertation of Ing. Anton Firc, which will take place on Tuesday, October 21, 2025, at 3:00 p.m. in room C209. The dissertation was written under the supervision of doc. Kamil Malinka.

Anton Firc is a member of the Security@FIT research group and, among other things, a recipient of the Joseph Fourier Award. His research focuses on the security implications of voice deepfakes. He has been involved in cybersecurity since his master's studies, when he was drawn to a thesis topic proposed by Kamil Malinka: "I was looking for more detailed information on deepfakes, and I was fascinated by the possibilities of a technology that can, figuratively speaking, create another person. Around 2020, it was an unexplored topic. Everything we did was new in a way. Moving on to a doctorate was then just a natural step." The situation has changed significantly since then: as Firc noted in an interview this summer, researchers in the group now regularly receive requests to popularize the topic, along with requests for tools and specific professional collaboration.

Firc's dissertation starts from the premise that audio deepfakes are inherently a computer security threat: they increase the effectiveness of social engineering attacks and allow speaker recognition systems to be circumvented. Previous research focused mainly on detection, but it lacked a comprehensive security framework grounded in an understanding of the attacker, their goals, and their methods, so it remained unclear how detection should work and how defenses against voice deepfakes should be built. Firc's work addresses precisely this: how to evaluate voice deepfakes in a structured way from a cybersecurity perspective and, on that basis, propose effective defenses. A key step is understanding the attacker model, which is a prerequisite for the subsequent development and evaluation of protection methods. Among other things, the work proposes a framework for evaluating detection methods that tackles known problems of deepfake detection, such as poor generalization and limited comparability. The analysis shows that people are unable to reliably recognize deepfakes in realistic attacks. An expanded model of protection against deepfake threats is therefore needed, one that combines multiple measures active at different stages of the deepfake life cycle and offers structured, layered protection. The author also addresses the broader context of voice deepfakes, such as public awareness as a proactive defense strategy.

You can read the abstract of the dissertation here.

You are cordially invited to attend the defense!
