Computer Graphics Laboratory ETH Zurich


Deep Video Color Propagation

S. Meyer, V. Cornillere, A. Djelouah, C. Schroers, M. Gross

Proceedings of the British Machine Vision Conference (BMVC) (Newcastle upon Tyne, UK, September 3-6, 2018), pp. 128

Abstract

Traditional approaches for color propagation in videos rely on some form of matching between consecutive video frames. Using appearance descriptors, colors are then propagated both spatially and temporally. These methods, however, are computationally expensive and do not take advantage of semantic information of the scene. In this work we propose a deep learning framework for color propagation that combines a local strategy, to propagate colors frame-by-frame ensuring temporal stability, and a global strategy, using semantics for color propagation within a longer range. Our evaluation shows the superiority of our strategy over existing video and image color propagation methods as well as neural photo-realistic style transfer approaches.


Downloads

Paper [PDF]
Supplementary material [PDF suppl.]
Video [Video]
Citation [BibTeX]