ANR: Articulated Neural Rendering for Virtual Avatars
Amit Raj
Julian Tanke
James Hays
Minh Vo
Carsten Stoll
Christoph Lassner

Abstract

The combination of traditional rendering with neural networks in Deferred Neural Rendering (DNR) provides a compelling balance between computational complexity and realism of the resulting images. Using skinned meshes for rendering articulated objects is a natural extension of the DNR framework and would open it up to a plethora of applications. However, in this case the neural shading step must account for deformations that are possibly not captured in the mesh, as well as alignment inaccuracies and dynamics, which can confound the DNR pipeline. We present Articulated Neural Rendering (ANR), a novel framework based on DNR that explicitly addresses these limitations for virtual human avatars. We show the superiority of ANR not only over DNR but also over methods specialized for avatar creation and animation. In two user studies, we observe a clear preference for our avatar model, and we demonstrate state-of-the-art performance on quantitative evaluation metrics. Perceptually, we observe better temporal stability, level of detail, and plausibility.
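To make the abstract concrete, below is a minimal sketch of the DNR-style core idea that ANR builds on: a learnable neural texture is sampled at UV coordinates rasterized from the (posed) mesh, and a neural shading network translates the sampled features into the final image. This is not the authors' implementation; all names, tensor shapes, and the tiny convolutional shader are illustrative assumptions (DNR and ANR use richer U-Net-style shaders, and ANR additionally handles pose-dependent deformation and misalignment).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTextureShading(nn.Module):
    """Illustrative DNR-style pipeline sketch (not the ANR implementation):
    sample a learnable neural texture at rasterized UVs, then shade to RGB."""

    def __init__(self, tex_res: int = 512, tex_channels: int = 16):
        super().__init__()
        # Learnable neural texture (1, C, H, W), optimized jointly with the shader.
        self.neural_texture = nn.Parameter(
            torch.randn(1, tex_channels, tex_res, tex_res) * 0.01
        )
        # Stand-in shading network; the papers use a much deeper U-Net here.
        self.shader = nn.Sequential(
            nn.Conv2d(tex_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, uv: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # uv:   (B, H, W, 2) UV coordinates in [-1, 1], rasterized from the posed mesh.
        # mask: (B, 1, H, W) foreground coverage from the rasterizer.
        batch = uv.shape[0]
        # Sample neural-texture features at each pixel's UV location.
        feats = F.grid_sample(
            self.neural_texture.expand(batch, -1, -1, -1), uv, align_corners=False
        )  # -> (B, C, H, W)
        # Shade the sampled features to RGB and zero out the background.
        return self.shader(feats) * mask
```

The key design point the abstract alludes to: because the UV map comes from a skinned mesh, any deformation or misalignment the mesh fails to capture shows up as error in `uv`, which is why the shading step (rather than the texture alone) must compensate for it.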


Animation

Avatars

Texture mixing and virtual try-on

Paper and Supplementary Material

Amit Raj, Julian Tanke, James Hays, Minh Vo, Carsten Stoll, Christoph Lassner.
ANR: Articulated Neural Rendering for Virtual Avatars
(hosted on arXiv)




Acknowledgements

This work was done while AR, JT, and CS were at Facebook. We thank Tiancheng Zhi and Tony Tung for their help with data processing and Michael Zollhoefer for fruitful discussions.

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project.