Sampling Based Scene-Space Video Processing

Link to publication page: http://www.disneyresearch.com/publication/scenespace/

Many compelling video processing effects can be achieved if per-pixel depth information and 3D camera calibrations are known. However, the success of such techniques is highly dependent on the accuracy of this "scene space" information. We present a novel, sampling-based concept for processing video that enables high-quality scene-space video effects despite inevitable and considerable inaccuracies in depth and camera calibration. Instead of trying to improve the explicit 3D scene representation, the key idea of our method is to exploit the high redundancy of rough scene information in video, i.e., every scene point is typically visible in many video frames. Based on this observation, we devise a pixel gathering and filtering approach that robustly processes pixel sample sets in order to produce output frames. The gathering step is general and collects samples on the order of billions of pixels, while the filtering step is application-specific and efficiently processes the samples to produce the desired video effect. The whole approach parallelizes easily on the GPU, allowing us to handle the required large volumes of data and making the method practical on a standard desktop computer. Our generic scene-space formulation comprehensively describes a multitude of video processing applications such as denoising, HDR exposure fusion, super-resolution, action shots, object removal, computational shutter functions, and other scene-space camera effects. We present various results on challenging, casually captured, hand-held monocular videos of uncontrolled environments, with both computed and recorded depth.
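
To make the gather-and-filter idea concrete, here is a minimal sketch (not the authors' implementation): each output pixel is unprojected to a 3D scene point using its depth, reprojected into the other frames via the camera calibrations, and the gathered color samples are reduced with a robust, application-specific filter. The pinhole camera model, the function names, and the choice of a per-channel median (a plausible filter for denoising, since it tolerates outlier samples from depth or calibration errors) are all assumptions for illustration.

```python
import numpy as np

def unproject(u, v, depth, K):
    """Back-project pixel (u, v) with the given depth to a camera-space 3D point."""
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.array([x, y, depth])

def project(p_world, K, R, t):
    """Project a world-space point into a frame with pose (R, t); None if behind camera."""
    p_cam = R @ p_world + t
    if p_cam[2] <= 0:
        return None
    uv = K @ (p_cam / p_cam[2])
    return uv[0], uv[1]

def gather_and_filter(u, v, ref_idx, frames, depths, K, poses):
    """Gather color samples for one output pixel across all frames, then filter.

    frames: list of HxWx3 images; depths: list of HxW depth maps;
    poses: list of (R, t) world-to-camera transforms; K: 3x3 intrinsics.
    """
    R_ref, t_ref = poses[ref_idx]
    d = depths[ref_idx][v, u]
    # Scene point in world space: invert the reference camera pose.
    p_world = R_ref.T @ (unproject(u, v, d, K) - t_ref)
    samples = []
    for img, (R, t) in zip(frames, poses):
        res = project(p_world, K, R, t)
        if res is None:
            continue
        x, y = int(round(res[0])), int(round(res[1]))
        if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
            samples.append(img[y, x])
    # Application-specific filtering step; the median is robust to outlier
    # samples caused by inaccurate depth/calibration or occlusions.
    return np.median(np.stack(samples), axis=0)
```

In this sketch the gathering loop is per-pixel and sequential; the paper's billions-of-samples scale is what motivates running the equivalent gather and filter passes in parallel on the GPU.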