Computer Graphics Laboratory ETH Zurich


Neural Frame Interpolation for Rendered Content

K. M. Briedis, A. Djelouah, M. Meyer, I. McGonigal, M. Gross, C. Schroers

Proceedings of ACM SIGGRAPH Asia (Tokyo, Japan, Dec. 14-17, 2021), ACM Transactions on Graphics, vol. 40, no. 6, pp. 239:1-239:13


The demand for rendered content continues to grow drastically. Since rendering high-quality computer-generated images is often extremely computationally expensive, and thus costly, there is a strong incentive to reduce this computational burden. Recent advances in learning-based frame interpolation methods have shown exciting progress but still have not achieved the production-level quality required to render fewer pixels and achieve savings in rendering times and costs. Therefore, in this paper we propose a method specifically targeted at high-quality frame interpolation for rendered content. In this setting, we assume that we have full input for every n-th frame, in addition to auxiliary feature buffers (e.g. depth, normals, albedo) that are cheap to evaluate for every frame. We propose solutions for leveraging such auxiliary features to obtain better motion estimates, handle occlusions more accurately, and correctly reconstruct non-linear motion between keyframes. With this, our method significantly pushes the state of the art in frame interpolation for rendered content, and we obtain production-level quality results.
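To make the setting concrete, the following is a minimal sketch (not from the paper; all names are hypothetical) of how a frame sequence would be partitioned under this scheme: every n-th frame is fully rendered as a keyframe, while the remaining frames only receive cheap auxiliary buffers and would be reconstructed by the interpolation method.

```python
# Hypothetical illustration of the keyframe scheme described in the abstract:
# fully render every n-th frame; interpolate the frames in between.

def partition_frames(num_frames, n):
    """Split frame indices into fully rendered keyframes (every n-th frame)
    and in-between frames to be reconstructed by interpolation."""
    keyframes = [i for i in range(num_frames) if i % n == 0]
    interpolated = [i for i in range(num_frames) if i % n != 0]
    return keyframes, interpolated

def full_render_fraction(num_frames, n):
    """Fraction of frames that still require a full (expensive) render."""
    keyframes, _ = partition_frames(num_frames, n)
    return len(keyframes) / num_frames

keys, interp = partition_frames(9, 2)
print(keys)    # [0, 2, 4, 6, 8]
print(interp)  # [1, 3, 5, 7]
```

With an interval of n = 2, half of the frames need a full render; larger intervals trade further rendering savings against a harder interpolation problem.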


Download Paper
[PDF suppl.]