We introduce a high-resolution spatially adaptive light source, or a projector, into a neural reflectance
field that enables both
projector calibration and photorealistic light editing. The projected texture is fully
differentiable with respect to all scene
parameters, and can be optimized to yield a desired appearance suitable for applications in augmented
reality and projection mapping.
Our neural field consists of three neural networks, estimating geometry, material, and transmittance.
Using an analytical BRDF model
and carefully selected projection patterns, our acquisition process is simple and intuitive, featuring a
fixed, uncalibrated projector and
a handheld camera with a co-located light source. As we demonstrate, the virtual projector incorporated
into the pipeline improves
scene understanding and enables various projection mapping applications, alleviating the need for the time-consuming
calibration steps that a traditional setting requires
per view or per projector location. In addition to enabling novel viewpoint
synthesis, we demonstrate
state-of-the-art performance in projector compensation for novel viewpoints, improvements over the baselines
in material and scene
reconstruction, and three simply implemented scenarios in which the projection image is optimized,
including the use of a
2D generative model to consistently dictate scene appearance from multiple viewpoints. We believe that
neural projection mapping
opens the door to novel and exciting downstream tasks through the joint optimization of the scene and
projection images.
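
To make the idea of projection-image optimization concrete, the following is a minimal sketch (not the paper's actual pipeline or API) of how a projected texture can be optimized through a differentiable renderer. The names `render`, `target`, and `views` are hypothetical placeholders: `render` stands in for a differentiable neural-reflectance-field renderer that maps a projector image and a viewpoint to a rendered image, and `target` returns the desired appearance for a viewpoint.

import torch

def optimize_projection(render, target, views, hw=(768, 1024), steps=500, lr=1e-2):
    """Hypothetical sketch: optimize a projector image so renderings match a target across views."""
    # Start from a mid-gray projector image; the texture itself is the optimization variable.
    proj_img = torch.full((3, *hw), 0.5, requires_grad=True)
    opt = torch.optim.Adam([proj_img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for view in views:
            # Differentiable rendering of the scene under the current projected texture.
            pred = render(proj_img.clamp(0.0, 1.0), view)
            # A simple pixel loss; a 2D generative model's guidance loss could be used
            # instead to dictate appearance consistently across viewpoints.
            loss = loss + torch.nn.functional.mse_loss(pred, target(view))
        loss.backward()
        opt.step()
    return proj_img.detach().clamp(0.0, 1.0)

The key point the sketch illustrates is that gradients flow from the image-space objective back to the projected texture, so any differentiable appearance objective can drive the optimization.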