Product Details

Temporal HDR Tone Mapping IP Core

Created: 2022

Czech title
IP Jádro pro temporální HDR Video Tone Mapping
Type
software
License
required - licence fee
Authors
Keywords

HDR image acquisition, FPGA, multi-exposure, tone mapping, drone, UAV

Description

This component is part of the Comp4Drones Component Repository. It provides a video processing module that converts RGB images captured by a drone into tone-mapped high dynamic range (HDR) video streams. The component is implemented for Xilinx FPGAs and can read data from a camera sensor, merge multiple images with alternating exposures into HDR images or HDR video, and apply HDR tone mapping. The system also supports image pre-processing, exposure control, and a "ghost-free" function that removes artifacts caused by the movement of objects.

The component is divided into four main blocks: Sensor data acquisition, Buffering, HDR merging and deghosting, and HDR tone mapping. The architecture is based on the Xilinx Zynq platform and is connected to the Python 2000 CMOS sensor over an LVDS interface. The CMOS output is raw CFA image data with a Bayer filter mosaic, which is stored in DDR memory using DMA and double buffering.

The HDR merge block reads three image streams simultaneously, applies an inverse camera response function to obtain images with a linear response, and merges them into an HDR image. The merging algorithm works per pixel and requires a relatively small number of per-pixel operations.

The HDR tone mapping pipeline is implemented in the FPGA and is pipelined at 200 MHz while processing one pixel per clock. The input of the tone mapping block is an 18-bit CFA pixel in 10.8 fixed-point representation, and the output is an RGB pixel in the [0, 1] interval. The algorithm is based on the Durand and Dorsey tone mapping operator, which is a two-pass algorithm; however, due to the limited memory size, the implementation only computes the minimum and maximum values (or percentiles) of the base layer.

This component is designed to support data acquisition for further processing in the FPGA or in downstream systems. It provides acquired data in HDR or tone-mapped format and can be extended with other data analytics tools/algorithms such as detectors. The improvements made by BUT include increased performance of the algorithms, reduced latency, increased throughput (up to 200 megapixels per second), robustness of the controller with respect to environmental disturbances, and increased resiliency.
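
The description does not specify the exact camera response or weighting used by the HDR merge block; the following is a minimal behavioral C++ sketch of a per-pixel three-exposure merge under those assumptions, using a hat-shaped weighting and a linear inverse response purely for illustration. The names inverse_crf, weight and merge_pixel are hypothetical and do not correspond to the IP core's interface.

```cpp
// Behavioral sketch of a per-pixel three-exposure HDR merge.
// The weighting and response functions are illustrative assumptions;
// the IP core implements the per-pixel merge in FPGA logic.
#include <array>
#include <cmath>
#include <cstdint>

// Hypothetical inverse camera response: maps a 10-bit sensor code to
// linear radiance. A calibrated LUT would be used in practice.
static float inverse_crf(uint16_t code) {
    return static_cast<float>(code) / 1023.0f;   // assume a linear sensor here
}

// Hat weighting: trust mid-range codes, discount clipped pixels.
static float weight(uint16_t code) {
    float v = static_cast<float>(code) / 1023.0f;
    return 1.0f - std::fabs(2.0f * v - 1.0f);
}

// Merge one CFA site from three frames captured with alternating exposures.
// exposure[i] is the relative exposure time of frame i.
float merge_pixel(const std::array<uint16_t, 3>& code,
                  const std::array<float, 3>& exposure) {
    float num = 0.0f, den = 0.0f;
    for (int i = 0; i < 3; ++i) {
        float w = weight(code[i]);
        num += w * inverse_crf(code[i]) / exposure[i];  // radiance estimate
        den += w;
    }
    // Fall back to the middle exposure if all three samples are clipped.
    return den > 0.0f ? num / den : inverse_crf(code[1]) / exposure[1];
}
```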

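The tone mapping step can be modelled in the same spirit. The sketch below shows the base-layer compression of the Durand and Dorsey operator adapted to the constraint mentioned above: instead of a full second pass, only the base-layer minimum and maximum (assumed here to come from previously gathered statistics) set the compression factor. The bilateral filtering that separates base and detail layers is omitted, and the names tone_map_pixel, BaseStats and target_contrast are illustrative assumptions, not the IP core's actual interface.

```cpp
// Simplified model of the Durand & Dorsey base-layer compression.
// Assumes the base layer (log domain) is supplied per pixel and that its
// min/max come from earlier statistics rather than a second image pass.
#include <algorithm>
#include <cmath>

struct BaseStats { float log_min; float log_max; };  // tracked base-layer range

// Compress the log-luminance base layer to a target display contrast,
// re-attach the detail layer, and return a display value in [0, 1].
float tone_map_pixel(float luminance,         // linear HDR luminance
                     float base_log,          // low-pass (base) layer, log domain
                     const BaseStats& s,
                     float target_contrast = 100.0f) {
    float log_lum    = std::log10(std::max(luminance, 1e-6f));
    float detail_log = log_lum - base_log;                    // detail layer
    float range      = std::max(s.log_max - s.log_min, 1e-6f);
    float gamma      = std::log10(target_contrast) / range;   // compression factor
    float out_log    = gamma * (base_log - s.log_max) + detail_log;
    return std::clamp(std::pow(10.0f, out_log), 0.0f, 1.0f);
}
```

Taking the base-layer range from previously tracked statistics keeps the processing single-pass and streaming, which fits the stated one-pixel-per-clock pipeline, at the cost of the range estimate lagging the current frame.
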
Location

Fakulta informačních technologií VUT v Brně, Božetěchova 2, 612 66 Brno, Q301

Projects
Research groups
Departments