Thesis Details
Depth-Based Determination of a 3D Hand Position
This work presents a real-time, depth-based gesture recognition system built on a hand's skeletal information. A Tiny YOLOv3 neural network detects the hand in the depth image. The background of the detected hand is then removed, and the cropped image is passed to the JGR-P2O neural network, which estimates the hand's skeleton as 21 key points. Furthermore, a novel technique for gesture recognition from hand key points is proposed, which compares the input skeleton with user-defined gestures. A dataset of four thousand images was captured to evaluate the system.
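The abstract does not specify how the skeleton-to-gesture comparison works; a minimal sketch of one plausible approach is shown below, assuming each skeleton is a (21, 3) array of 3D key points with joint 0 as the wrist (a common convention, not stated in the abstract), normalized for position and scale before a mean per-joint distance is taken against each user-defined template. All function names and the threshold value are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def normalize_skeleton(keypoints):
    """Translate the wrist (assumed joint 0) to the origin and scale to unit size."""
    centered = keypoints - keypoints[0]
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale if scale > 0 else centered

def gesture_distance(skeleton, template):
    """Mean per-joint Euclidean distance between two normalized skeletons."""
    a = normalize_skeleton(skeleton)
    b = normalize_skeleton(template)
    return float(np.linalg.norm(a - b, axis=1).mean())

def recognize(skeleton, gestures, threshold=0.25):
    """Return the name of the closest user-defined gesture, or None if no
    template is within the (illustrative) distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, template in gestures.items():
        d = gesture_distance(skeleton, template)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None
```

Because both skeletons are normalized first, the comparison is invariant to the hand's absolute position and its distance from the camera, which matters when gestures are defined once but performed anywhere in the depth frame.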
image processing, object detection, hand pose estimation, gesture recognition, depth image, depth-based, real-time, YOLOv3, JGR-P2O, convolutional neural network, deep learning, key points, skeleton, hand
Burget Lukáš, doc. Ing., Ph.D. (DCGM FIT BUT), member
Holík Lukáš, doc. Mgr., Ph.D. (DITS FIT BUT), member
Martínek Tomáš, doc. Ing., Ph.D. (DCSY FIT BUT), member
Matoušek Petr, doc. Ing., Ph.D., M.A. (DIFS FIT BUT), member
@bachelorsthesis{FITBT23384,
  author   = "Ladislav Ondris",
  type     = "Bachelor's thesis",
  title    = "Depth-Based Determination of a 3D Hand Position",
  school   = "Brno University of Technology, Faculty of Information Technology",
  year     = 2021,
  location = "Brno, CZ",
  language = "english",
  url      = "https://www.fit.vut.cz/study/thesis/23384/"
}