An Introduction to NeRFs

NeRFs and the technology underpinning them are a buzz-worthy topic of conversation across the VFX industry. Looking at the demonstrations highlighting the possibilities of NeRFs, it’s easy to see why: they can quickly generate 3D scenes and overcome some of the hurdles of existing technologies.

Where people tend to fall off, justifiably, is in the explanations of how the technology actually works.

“Our algorithm represents a scene using a fully connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location,” wrote the researchers who introduced the concept in their 2020 paper, “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis.”

With that in mind, we’ll try to explain NeRFs in a more accessible way.

What are NeRFs?

Neural Radiance Fields, or NeRFs, are an AI-powered means of generating functional 3D scenes from 2D images. Rather than creating a 3D model consisting of polygons and textures, a NeRF uses a neural network to gauge the light being reflected or emitted at each point in a scene, as captured in a set of 2D images, and from that builds a working 3D representation. While NeRFs can be used to generate polygonal models, the underlying method of getting there is different from that of 3D modeling software suites.
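To make that concrete, below is a minimal sketch (in PyTorch) of the core mapping from the paper quoted above: a small fully connected network that takes a 5D coordinate, a position (x, y, z) plus a viewing direction (θ, φ), and returns a density and a color. The layer sizes here are illustrative only; the actual paper’s network is deeper, encodes its inputs with positional encoding, and feeds the viewing direction into later layers.

```python
# A minimal sketch (in PyTorch) of the core NeRF mapping: a fully connected
# network from a 5D coordinate -- position (x, y, z) plus viewing direction
# (theta, phi) -- to a volume density and a view-dependent RGB color.
# Layer sizes are illustrative; the real paper's network is deeper, uses
# positional encoding, and feeds the viewing direction into later layers.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),               # outputs: density + (r, g, b)
        )

    def forward(self, coords):                  # coords: (N, 5)
        out = self.net(coords)
        density = torch.relu(out[..., :1])      # density is non-negative
        rgb = torch.sigmoid(out[..., 1:])       # color squashed into [0, 1]
        return density, rgb

# Query one point in the scene, seen from one direction:
model = TinyNeRF()
density, rgb = model(torch.tensor([[0.1, 0.2, 0.3, 0.0, 1.5]]))
```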

What are the advantages over other technologies?

One area where NeRFs stand out is their small file size. While generating a NeRF takes significant processing power, the result is extremely compact. That’s because a NeRF isn’t actually a 3D model but a trained neural network: a set of weights and equations capable of extrapolating how a scene would look in 3D.
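As a rough, back-of-envelope illustration of that compactness, the sketch below counts the parameters of an MLP like the one above. A network on the scale of the original paper’s (roughly eight 256-wide layers) comes out to around half a million floats, about 2 MB at 32-bit precision, whereas detailed textured meshes can run to hundreds of megabytes or more. The exact layer widths here are assumptions for illustration.

```python
# Back-of-envelope parameter counts, to show why a NeRF's "file" is small:
# the scene is stored as network weights, not as geometry and textures.
def param_count(widths):
    # widths: layer sizes, e.g. [5, 256, 256, 4]; counts weights + biases
    return sum(w_in * w_out + w_out for w_in, w_out in zip(widths, widths[1:]))

print(param_count([5, 256, 256, 4]))        # TinyNeRF above: ~68K parameters
print(param_count([63] + [256] * 8 + [4]))  # roughly paper-scale: ~478K
# ~478K parameters x 4 bytes (float32) is on the order of 2 MB.
```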

Another advantage of NeRFs is that they present more dynamic and realistic textures and lighting than photogrammetry, a related technique used to generate 3D scenes from 2D images. Photogrammetry works by matching visible points across multiple photos to reconstruct a 3D scene, but its lighting and textures don’t adjust based on the angle or location of a “camera” within it; the output is static. NeRFs’ AI-driven method predicts what lighting and textures would look like from different angles, making the necessary adjustments to account for reflections, shadows, and so on.
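This view dependence falls directly out of the 5D input: ask the network about the same 3D point from two different directions and you can get two different colors, which is how effects like specular highlights emerge. The sketch below continues from the hypothetical TinyNeRF example above; since that network is untrained, the outputs are meaningless, but the mechanics are the same.

```python
# Same 3D point, two viewing directions -> potentially two different colors.
# Reuses the (untrained) TinyNeRF model from the sketch above, so the values
# are meaningless -- this just shows the mechanics of view dependence.
import torch

point = [0.1, 0.2, 0.3]
dir_a, dir_b = [0.0, 0.0], [1.2, 0.4]       # two (theta, phi) directions

_, rgb_a = model(torch.tensor([point + dir_a]))
_, rgb_b = model(torch.tensor([point + dir_b]))
print(rgb_a, rgb_b)                         # colors generally differ by view
```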

What are the limitations of NeRFs?

The two primary limitations of NeRFs stem from how resource-intensive they are to generate: each one models only a single scene, and each requires time to “train.” And since the AI underlying a NeRF generates the scene from learned assumptions about how it should look, rather than from the explicit instructions of a polygonal model, a fair amount of error correction and configuration is needed to avoid blurry, shabby, or otherwise bizarre results.
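For a sense of where the training time goes, here is a heavily simplified sketch of the optimization loop, again continuing from the TinyNeRF example above. Real pipelines march rays through the scene, composite densities and colors into pixels, and compare those against the input photos over hundreds of thousands of iterations per scene; this version skips the rendering step and fits random stand-in data just to show the shape of the loop.

```python
# A heavily simplified training loop, reusing the TinyNeRF model above.
# The target colors here are random stand-ins; a real pipeline renders rays
# through the scene and compares the result against pixels of the input
# photos, for hundreds of thousands of iterations -- per scene.
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

for step in range(1000):                        # real training: far more steps
    coords = torch.rand(1024, 5)                # stand-in sampled 5D coordinates
    target_rgb = torch.rand(1024, 3)            # stand-in "ground truth" colors
    _, rgb = model(coords)
    loss = ((rgb - target_rgb) ** 2).mean()     # photometric (MSE) loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```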

What does this mean for VFX?

NeRFs are a relatively new technology with a lot of promise to underpin many VFX tools and suites. While they do have certain disadvantages, they also have enough benefits that they will most likely become commonplace in commercial settings as the technology matures and as more VFX experts become well-versed in their application.

Wondering if NeRFs can enhance your business? Not sure how to incorporate them into your VFX workflow? Nodal can help! Contact us today.