Neural Projection Mapping Using Reflectance Fields

Yotam Erel¹, Daisuke Iwai², Amit H. Bermano¹

¹Tel Aviv University  ²Osaka University

Short video showing our work.

Abstract

We introduce a high-resolution spatially adaptive light source, or a projector, into a neural reflectance field, enabling both projector calibration and photorealistic light editing. The projected texture is fully differentiable with respect to all scene parameters and can be optimized to yield a desired appearance, suitable for applications in augmented reality and projection mapping. Our neural field consists of three neural networks, estimating geometry, material, and transmittance. Using an analytical BRDF model and carefully selected projection patterns, our acquisition process is simple and intuitive, featuring a fixed uncalibrated projector and a handheld camera with a co-located light source. As we demonstrate, the virtual projector incorporated into the pipeline improves scene understanding and enables various projection mapping applications, alleviating the need for the time-consuming calibration steps performed per view or per projector location in a traditional setting. In addition to enabling novel viewpoint synthesis, we demonstrate state-of-the-art projector compensation performance for novel viewpoints, improvement over the baselines in material and scene reconstruction, and three simply implemented scenarios where projection image optimization is performed, including the use of a 2D generative model to consistently dictate scene appearance from multiple viewpoints. We believe that neural projection mapping opens the door to novel and exciting downstream tasks through the joint optimization of the scene and projection images.
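As a loose illustration of the core idea, optimizing a projection image through a differentiable forward model, here is a toy sketch in which a random linear light-transport matrix stands in for the neural reflectance field and renderer. All names, shapes, and the target image are assumptions for illustration, not the paper's actual pipeline:

```python
import torch

torch.manual_seed(0)
n_proj, n_cam = 64, 64

# Toy stand-in for the differentiable scene model: a fixed linear
# light-transport matrix mapping projector pixels to camera pixels.
T = torch.rand(n_cam, n_proj) / n_proj
target = torch.rand(n_cam)  # desired appearance at the camera (assumed given)

# The projected texture is the only free variable; gradients flow
# through the forward model back to it.
texture = torch.zeros(n_proj, requires_grad=True)
opt = torch.optim.Adam([texture], lr=0.05)

losses = []
for _ in range(500):
    opt.zero_grad()
    rendered = T @ torch.sigmoid(texture)  # sigmoid keeps intensities in [0, 1]
    loss = torch.nn.functional.mse_loss(rendered, target)
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

In the actual method the linear map is replaced by the learned neural field, but the optimization loop over the projection image follows the same pattern.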

Novel viewpoints & projections

The following scenes were obtained using our method and relit with new projected patterns, viewed from novel viewpoints. Any novel pattern is possible and can also be optimized to yield a desired appearance.


Multiview text-to-projection

We can optimize for several viewpoints at once, using a 2D generative model to consistently dictate scene appearance from multiple viewpoints. The second column shown here is the final reprojection of the optimized texture, i.e., a true spatial augmentation of the first column using only a projector.
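The multi-view variant can be sketched the same way: a single shared projection texture is optimized against several per-view objectives at once. In this toy version, random transport matrices stand in for the per-view differentiable renderings and random images stand in for the targets a 2D generative model would supply; everything here is an illustrative assumption, not the paper's implementation:

```python
import torch

torch.manual_seed(0)
n_proj, n_cam, n_views = 64, 64, 3

# One assumed projector-to-camera transport matrix per viewpoint.
Ts = [torch.rand(n_cam, n_proj) / n_proj for _ in range(n_views)]
# Stand-ins for the per-view appearance targets from a generative model.
targets = [torch.rand(n_cam) for _ in range(n_views)]

# A single texture is shared across all viewpoints, which is what
# enforces multi-view consistency of the projected appearance.
texture = torch.zeros(n_proj, requires_grad=True)
opt = torch.optim.Adam([texture], lr=0.05)

losses = []
for _ in range(300):
    opt.zero_grad()
    x = torch.sigmoid(texture)  # projector intensities in [0, 1]
    loss = sum(torch.nn.functional.mse_loss(T @ x, t)
               for T, t in zip(Ts, targets))
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Because one physical texture must satisfy all views simultaneously, the optimum trades off the per-view targets rather than matching any single one exactly.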



BibTeX

@article{erel2023neural,
  title={Neural Projection Mapping Using Reflectance Fields},
  author={Erel, Yotam and Iwai, Daisuke and Bermano, Amit H},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2023},
  publisher={IEEE}
}