Neural avatar solutions currently need to overcome three challenges: reliability at scale, minimal (or no) offline data-capture complexity, and low training/inference complexity. We present avenues to resolve these by merging neural avatar approaches with conventional video/3D encoding standards. Such a merger can bridge the gap between traditional "codecs" and photorealistic neural avatars while offering significant runtime and bit-rate efficiency gains over existing work. Importantly, it can also anchor the rendered output to the physical reality captured by sensors at the sender side, ensuring that any scene/person augmentation or reenactment remains interpretable and controllable when deployed at scale. We'll show demonstration results on NVIDIA RTX GPUs and summarize lessons learned, such as the importance of accurate visual quality scoring.