Fusion of multi-exposure videos in the gradient domain
Date
2025
Authors
Rawat, Aryan
Abstract
Gradient-domain processing has proven effective for image fusion tasks by preserving local contrast and structural details while avoiding intensity-domain artifacts. However, most existing fusion methods are designed for still images and fail to address temporal consistency when applied frame-by-frame to video sequences, often resulting in flicker and temporal instability.
This seminar presents a novel approach to multi-exposure video fusion formulated directly in the three-dimensional gradient domain. The proposed method operates on spatio-temporal gradients extracted from registered exposure sequences and fuses them using a gradient-selection strategy. The fused video is reconstructed using a 3-D Haar wavelet decomposition combined with an iterative Poisson solver, ensuring both spatial fidelity and temporal coherence.
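The pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration under assumed choices, not the authors' implementation: gradient selection here picks, per voxel, the spatio-temporal gradient vector of largest magnitude across exposures, and a plain Jacobi iteration stands in for the full wavelet-accelerated Poisson solver.

```python
import numpy as np

def fuse_gradients(videos):
    """Gradient-selection fusion (assumed rule: keep, for each voxel, the
    spatio-temporal gradient vector with the largest magnitude across the
    registered exposure sequences). Each video is a (T, H, W) float array."""
    grads = np.stack([np.stack(np.gradient(v)) for v in videos])   # (E, 3, T, H, W)
    mags = np.linalg.norm(grads, axis=1)                           # (E, T, H, W)
    best = np.argmax(mags, axis=0)                                 # (T, H, W)
    # Select the winning exposure's gradient vector at every voxel.
    return np.take_along_axis(grads, best[None, None], axis=0)[0]  # (3, T, H, W)

def poisson_reconstruct(fused, iters=500):
    """Recover an intensity volume u with lap(u) ~= div(g) by plain Jacobi
    iteration (a stand-in for the seminar's Haar-accelerated solver)."""
    div = sum(np.gradient(fused[a], axis=a) for a in range(3))
    u = np.zeros(fused.shape[1:])
    for _ in range(iters):
        p = np.pad(u, 1, mode="edge")          # Neumann-style boundaries
        nb = (p[:-2, 1:-1, 1:-1] + p[2:, 1:-1, 1:-1] +
              p[1:-1, :-2, 1:-1] + p[1:-1, 2:, 1:-1] +
              p[1:-1, 1:-1, :-2] + p[1:-1, 1:-1, 2:])
        u = (nb - div) / 6.0                   # Jacobi update for the 3-D Laplacian
    return u
```

Because the same selection rule is applied to the temporal gradient component as to the spatial ones, abrupt frame-to-frame changes are penalized in the reconstruction, which is one intuition for why gradient-domain fusion reduces flicker.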
Experimental results on standard video datasets demonstrate that the proposed method effectively enhances visual detail while significantly reducing temporal artifacts compared to classical intensity-based fusion techniques. Quantitative evaluation using spatial and temporal metrics further confirms the advantages of gradient-domain fusion for video applications.