Results and licenses to software developed at the faculty can be provided as needed. This form of cooperation is suitable for the development of more complex solutions with a guaranteed completion time and higher financial demands. If you are interested, please send us your professional area together with a brief description of the problem and the desired results. This information will not be disseminated in any way and will only be used to select competent experts for further discussions about cooperation.
The RUDA robot was created at our faculty in 2014. It consists of a tracked undercarriage with a modular platform for attaching various modules and a telescopic camera. The available modules include a stereoscopic camera system with a lidar, a bioradar manipulator, a gripper manipulator, and an additional battery module. Using a bioradar, thermal imager, microphone, or avalanche transceiver, the robot can find a person behind an obstacle. The robot can be extended with further technologies: replacing a module with a new one is sufficient. The robot is able to move autonomously, i.e. it has automatic motion (it localizes itself and builds maps of the surrounding area), but it also allows the operator to control it either wirelessly or over a wire (using a cable that is also used for the power supply).
We started tracking targets in 2009, when we became co-researchers on the project Research and Development of Technology for Intelligent Optical Surveillance Systems, funded by the Ministry of Industry and Trade. We focused on a single-camera solution, for which we developed a remote target tracking application that can track distant targets (several kilometers away) and disappearing targets (up to 50 in one scene, with one primary target). Later, within the project of the Ministry of the Interior of the Czech Republic entitled Tools and Methods for Video and Image Processing for the Fight against Terrorism, we designed a single-camera system that allows tracking of distant and disappearing targets at distances of up to 30 km. The zoom camera system (22x zoom) is mounted on a military manipulator, and a high-performance GPGPU computer for video signal processing is located in the base. We call this system the semi-Automated Object Tracking System (sAOTS).
This software application processes images of the eye retina. The image is searched for the vein pattern, the veins are detected, and features are extracted to generate a biometric template, which can then be used to recognize people based on the vein pattern of their retina.
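The source does not describe the concrete algorithm, so the following is only a minimal numpy-only sketch of the general idea behind such a pipeline: veins are darker than their surroundings, so they can be segmented by comparing each pixel to a local background estimate, and the resulting binary mask can be reduced to a fixed-size template that is compared between people. All function names, thresholds, and grid sizes here are hypothetical, not the application's actual parameters.

```python
import numpy as np

def vein_template(retina, block=15, grid=(16, 16)):
    """Build a binary vein template from a grayscale retina image.

    Veins are darker than the background, so a pixel is marked as
    'vein' when it is noticeably darker than its local neighbourhood
    (hypothetical segmentation rule for illustration only).
    """
    img = retina.astype(np.float64)
    # Local background estimate: box mean via an integral image.
    pad = block // 2
    padded = np.pad(img, pad, mode='edge')
    c = padded.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))          # S[i, j] = sum of padded[:i, :j]
    h, w = img.shape
    background = (c[block:block + h, block:block + w]
                  - c[:h, block:block + w]
                  - c[block:block + h, :w]
                  + c[:h, :w]) / block ** 2
    mask = img < background - 10             # vein = darker than local mean
    # Reduce to a fixed-size grid: a cell is 1 if it contains a vein pixel.
    gh, gw = grid
    ys = np.linspace(0, h, gh + 1, dtype=int)
    xs = np.linspace(0, w, gw + 1, dtype=int)
    return np.array([[mask[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].any()
                      for j in range(gw)] for i in range(gh)])

def match_score(t1, t2):
    """Fraction of matching template cells (1.0 = identical templates)."""
    return float((t1 == t2).mean())
```

A template of this kind is compact and can be matched by a simple cell-wise comparison; real retina recognition systems use considerably more robust vessel segmentation and feature matching.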
CGPAnalyzer was developed to analyse and visualise a genetic record (i.e. a log file) generated by CGP-based circuit design software. CGPAnalyzer automatically finds the key genetic improvements in the genetic record and presents the relevant phenotypes. The comparison module of CGPAnalyzer allows the user to select two phenotypes and compare their structure, history, and functionality. It thus makes it possible to reconstruct the process of discovering new circuit designs. This feature is demonstrated by analysing the genetic record from a 9-parity circuit evolution. The CGPAnalyzer tool is a desktop application with a graphical user interface created using Java 8 and the Swing library.
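CGPAnalyzer itself is a Java/Swing application and its log format is not given in the source, so the core step (finding the key genetic improvements in a run log) can only be hinted at. The sketch below assumes a hypothetical whitespace-separated record format of `<generation> <fitness> <genotype>` per line and treats lower fitness as better, as is common for error-based CGP fitness functions.

```python
def find_improvements(log_lines):
    """Scan a CGP run log and return the generations where the best
    fitness so far improved, together with the genotype recorded there.

    Assumes one whitespace-separated record per line:
        <generation> <fitness> <genotype>
    (a hypothetical format; real CGP tools use their own layouts).
    """
    improvements = []
    best = None
    for line in log_lines:
        parts = line.split(maxsplit=2)
        if len(parts) < 3:
            continue                 # skip malformed or comment lines
        gen, fitness, genotype = int(parts[0]), float(parts[1]), parts[2]
        if best is None or fitness < best:
            best = fitness           # new best-so-far: a key improvement
            improvements.append((gen, fitness, genotype))
    return improvements
```

The returned list is exactly the sequence of discoveries a comparison module would then let the user inspect pairwise.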
Face recognition from a quadcopter
The program receives its input for face recognition from a video stream. A face is detected in each frame using a cascade classifier, converted to a grayscale image, normalized in contrast and brightness, and downscaled to a uniform size for the recognition algorithm. The image is stored only if the difference between the previous and the current image is greater than a specified threshold (to reduce the size of the database). All images are saved in the specified folder, labeled with an ID prefix.
The recognition itself uses the OpenCV library. For face detection, the user selects the cascade file to be used and sets the ScaleFactor value, which defines how much the image is scaled down at each detection step (ideally 1.5). Pressing LoadVideo loads the video; a .CSV file with paths and labels loads the set of images on which the recognizer is trained. Next, the user selects a recognition algorithm. Pressing Recognize then starts the video, in which faces are detected. Each detected face is compared with the images stored in the folder, and the resulting ID is rendered directly into the picture.
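The OpenCV detection and recognition calls themselves need a video, cascade files, and a trained model, so they are hard to show self-contained. The two steps of the pipeline that stand alone, the per-face preprocessing (grayscale, brightness/contrast normalization, uniform size) and the threshold test that decides whether a new crop is stored, can be sketched with plain numpy; the function names, the 64-pixel size, and the 0.5 threshold are illustrative assumptions, not the application's actual values.

```python
import numpy as np

def preprocess(face, size=64):
    """Normalize a face crop: grayscale, zero-mean/unit-variance
    brightness and contrast, and a uniform size (nearest-neighbour
    subsampling stands in for cv2.resize in this sketch)."""
    if face.ndim == 3:                          # RGB -> grayscale
        face = face @ np.array([0.299, 0.587, 0.114])
    h, w = face.shape
    rows = np.arange(size) * h // size          # subsampled row indices
    cols = np.arange(size) * w // size          # subsampled column indices
    face = face[np.ix_(rows, cols)].astype(np.float64)
    return (face - face.mean()) / (face.std() + 1e-9)

def should_store(prev, current, threshold=0.5):
    """Store the current face only if it differs enough from the
    previously stored one, which keeps the image database small."""
    if prev is None:
        return True
    return float(np.abs(current - prev).mean()) > threshold
```

In the real application the stored crops are then listed in the .CSV file and fed to an OpenCV face recognizer for training.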
3D Hand Scanner
The device makes it possible to capture and recognize the 3D shape (geometry) of the hand. The first version included 4 line laser transmitters with a power of 10 mW and a wavelength of 532 nm (green) and an imaging device consisting of a CCD camera with a resolution of 1280 x 800 px connected via USB, running on a portable Linux OS. The size of our solution was 380 x 185 x 380 mm. Due to the high cost of green lasers, we tried red line lasers with a wavelength of 640 nm. During further testing, we tried a 3M micro data projector and then RGB LED projectors. Because of the effects of external lighting and the properties of the skin at different wavelengths, we eventually decided to use 3D cameras (such as Kinect, SoftKinetic, or Intel RealSense), which we still use today.
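The laser-line versions of the scanner recover shape by triangulation: the camera sees the projected line shifted wherever the hand rises above the reference surface, and the shift is proportional to the surface height. A minimal sketch of extracting one height profile from a single frame follows; the calibration constants (`baseline_row`, `mm_per_px`, `tan_angle`) are hypothetical stand-ins for the real camera/laser geometry, which the source does not specify.

```python
import numpy as np

def laser_profile(image, baseline_row, mm_per_px=0.5, tan_angle=1.0):
    """Recover one height profile from a grayscale frame showing a
    projected laser line (laser-triangulation principle).

    For each image column, the brightest row is taken as the laser
    line position; its displacement from the reference row (where the
    line falls on a flat surface) is proportional to surface height.
    """
    peak_rows = image.argmax(axis=0)                 # laser peak per column
    displacement_px = baseline_row - peak_rows       # shift caused by the hand
    return displacement_px * mm_per_px / tan_angle   # height in mm per column
```

Sweeping the line across the hand and stacking the profiles yields the 3D geometry; real systems additionally reject columns where the laser is occluded or too dim, which is exactly the ambient-light sensitivity that motivated the move to 3D cameras.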