High-Zoom Video Hallucination by Exploiting Spatio-Temporal Regularities
Carnegie Mellon University
In this talk, I will present our ongoing work on enhancing the spatial resolution of video sequences, with particular focus on zooming into a human face video by a very high (16×) factor. Inspired by recent literature on hallucination and example-based learning, we formulate this task using a graphical model that encodes 1) spatio-temporal consistencies, and 2) image formation and degradation processes. A video database of facial expressions is used to learn a domain-specific prior for high-resolution videos. The problem is posed as one of probabilistic inference, in which we aim to find the high-resolution video that best satisfies the constraints expressed through the graphical model. Traditional approaches to this problem using video data first estimate the relative motion between frames and then compensate for it, effectively producing multiple measurements of the scene. Our use of time is more direct: we define data structures that span multiple consecutive frames, enriching our feature vectors with a temporal signature, and then exploit these signatures to find solutions that are consistent over time. In our experiments, an 8×6-pixel face video, subject to translational jitter and additive noise, is magnified to a 128×96-pixel video. Our results show that by exploiting both space and time, drastic improvements can be achieved in reducing both video flicker artifacts and mean-squared error.
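As a rough illustration of the idea of enriching feature vectors with a temporal signature, the sketch below stacks co-located patches from several consecutive low-resolution frames into a single vector. The patch size, temporal span, and function name are hypothetical choices for illustration, not the parameters used in the actual work.

```python
import numpy as np

def spatiotemporal_features(frames, patch=3, span=3):
    """Build feature vectors that span `span` consecutive frames.

    For each spatial location, a patch x patch window is extracted from
    each of `span` consecutive frames and flattened into one vector,
    giving the feature a temporal signature. Sizes are illustrative.
    """
    T, H, W = frames.shape
    half = patch // 2
    feats = []
    for t in range(T - span + 1):
        for y in range(half, H - half):
            for x in range(half, W - half):
                # block covers `span` frames and a patch x patch window
                block = frames[t:t + span,
                               y - half:y + half + 1,
                               x - half:x + half + 1]
                feats.append(block.ravel())
    return np.array(feats)

# Toy example: 5 frames of a 6x8 low-resolution video
video = np.random.rand(5, 6, 8)
F = spatiotemporal_features(video)
print(F.shape)  # each row concatenates 3 frames of 3x3 patches
```

In an example-based scheme, such vectors would be matched against a database of high/low-resolution training pairs, with the temporal extent discouraging flicker between consecutive output frames.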
Collaborators: Takeo Kanade and Jonas August
Publications and video examples available from: http://www.cs.cmu.edu/~dedeoglu/cvpr04
DATE: April 20, 2004, Tuesday @ 15:40