News

Category: news

Date: 4 June 2025

On Thursday, June 12, Kamil Malinka from the Institute of Intelligent Systems delivers a public habilitation lecture at FIT BUT

Tags: Faculty of Information Technology


We cordially invite you to a public lecture held as part of the habilitation procedure of Mgr. Kamil Malinka, Ph.D., entitled "Protection Mechanisms Against Deepfake-Based Attacks". The lecture will take place on June 12, 2025, at 2 p.m. in room G108.

The habilitation thesis follows up on Kamil Malinka's long-term research activities, which he also pursues within the Security@FIT research group. The author focuses on the broader context of cybersecurity in the field of artificial intelligence. His professional interest is not limited to technical solutions themselves – he also studies user behavior when working with selected security tools, the education of future IT professionals, and the impact of deepfakes on voice and facial biometrics. As he himself points out, an essential and somewhat neglected aspect of the problem is understanding how users work with security tools. A key part of his habilitation thesis is the design of a detection mechanism for voice deepfakes based on spectrogram analysis. Finally, the thesis addresses the problem of comparing detectors, proposing a detailed framework for evaluating and comparing voice deepfake detectors and applying it to 40 state-of-the-art detectors.
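To give a flavor of what spectrogram-based voice deepfake detection involves, here is a minimal, purely illustrative sketch: it computes a log-magnitude spectrogram with a short-time Fourier transform and derives one toy feature. All function names, parameters, and the high-band heuristic are hypothetical and do not come from the thesis itself.

```python
# Illustrative sketch only; not the detection mechanism from the thesis.
import numpy as np

def log_spectrogram(signal, frame_len=256, hop=128):
    """Log-magnitude short-time Fourier spectrogram of a 1-D signal."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(mags)  # shape: (n_frames, frame_len // 2 + 1)

def high_band_energy_ratio(spec, split=64):
    """Toy feature: fraction of spectral energy above a bin index.
    Synthesis pipelines sometimes leave atypical high-frequency traces."""
    total = spec.sum()
    return float(spec[:, split:].sum() / total) if total > 0 else 0.0

# Toy input: one second of a 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)
spec = log_spectrogram(audio)
ratio = high_band_energy_ratio(spec)
```

A real detector would feed such spectrograms into a trained classifier rather than a single hand-crafted ratio; the sketch only shows the representation the analysis starts from.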

Author: Václav Koníček



Abstract of the lecture: 

First, we will briefly introduce the issue of deepfakes and their security implications. To better understand the problem, we will present the deepfake use lifecycle and illustrate the application of protection mechanisms on top of it. Most of the discussion devoted to protection mechanisms has focused on deepfake detection. The disadvantage of this approach is that if detection fails, no other protection stands in the way of a successful attack. Detection is thus an important part of building protection, but it is not the only option. We will therefore discuss various protection methods such as watermarking, legal regulations, methods for obstructing deepfake creation, forensic analysis, and methods for ensuring proof of authenticity. In the area of deepfake face and voice detection, we will summarize current approaches: detection based on artifacts, on deep learning, or on physiological features such as eye blinking. Next, we will present and evaluate the design of our detection mechanism for voice deepfakes based on spectrogram analysis. Finally, we will focus on the problem of detector comparison. We will propose a detailed framework for evaluating and comparing deepfake speech detectors. To showcase the framework's usage and its benefits, we will use it to evaluate 40 state-of-the-art deepfake speech detectors. We will demonstrate the results of extensive experiments, in which we extended common approaches by testing previously unobserved forms of manipulated speech.
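Comparing deepfake speech detectors as described above typically rests on operating-point metrics; a standard one in this area is the equal error rate (EER), the point where the false-accept and false-reject rates coincide. Below is a minimal, self-contained sketch of EER computation on toy scores; the detector names and score values are invented for illustration and are unrelated to the 40 detectors evaluated in the thesis.

```python
# Illustrative EER computation; toy data, not results from the thesis.
import numpy as np

def equal_error_rate(scores, labels):
    """Return the EER for scores (higher = more likely genuine)
    against binary labels (1 = genuine speech, 0 = deepfake)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_gap, best_eer = 1.0, 0.5
    for th in np.sort(np.unique(scores)):
        far = np.mean(scores[labels == 0] >= th)  # fakes accepted
        frr = np.mean(scores[labels == 1] < th)   # genuine rejected
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2
    return float(best_eer)

# Two hypothetical detectors scored on the same evaluation set
labels = [1, 1, 1, 1, 0, 0, 0, 0]
det_a  = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]  # separates perfectly
det_b  = [0.9, 0.4, 0.7, 0.3, 0.6, 0.8, 0.2, 0.1]  # overlapping scores
eer_a = equal_error_rate(det_a, labels)
eer_b = equal_error_rate(det_b, labels)
```

A benchmarking framework of the kind proposed in the thesis would additionally control for the evaluation data, score calibration, and previously unseen manipulation types, which single-number comparisons like this one do not capture.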

You can join the lecture via MS Teams; the link to the meeting is HERE.
