Computers & Graphics, 2019
(presented at SMI 2019)

Combining Voxel and Normal Predictions for Multi-View 3D Sketching

Johanna Delanoy (1)   David Coeurjolly (2)   Jacques-Olivier Lachaud (3)   Adrien Bousseau (1)
(1) Inria, Université Côte d’Azur   (2) Université de Lyon, CNRS, LIRIS   (3) Université Savoie Mont Blanc, CNRS, LAMA

Our method takes as input multiple sketches of an object (a). We first apply existing deep neural networks to predict a volumetric reconstruction of the shape as well as one normal map per sketch (b). We re-project the normal maps on the voxel grid (c, blue and yellow needles), which complement the surface normals computed from the volumetric prediction (c, pink needles). We aggregate these different normals into a distribution represented by a mean vector and a standard deviation (d, green denotes low variance, red high variance). We optimize this normal field to make it piecewise smooth (e) and use it to regularize the surface (f). The final surface preserves the overall shape of the predicted voxel grid as well as the sharp features of the predicted normal maps.
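To make steps (c)-(d) concrete, below is a minimal Python/NumPy sketch of how the voxel-based normal and the re-projected normal-map samples could be gathered and summarized per surface element. The camera interface (project, visible), the world-space normal-map arrays, and all names are assumptions for illustration, not the paper's actual implementation.

import numpy as np

def aggregate_normals(surfel_centers, voxel_normals, normal_maps, cameras):
    """Summarize, per surface element, the voxel-based normal plus all
    re-projected normal-map samples by a mean direction and a spread
    (steps c-d of the figure). The interfaces below are hypothetical."""
    means, spreads = [], []
    for p, n_vox in zip(surfel_centers, voxel_normals):
        samples = [n_vox]                          # normal from the voxel prediction
        for cam, nmap in zip(cameras, normal_maps):
            if cam.visible(p):                     # only views that see this point
                u, v = cam.project(p)              # re-project onto that sketch view
                samples.append(nmap[int(round(v)), int(round(u))])
        samples = np.asarray(samples, dtype=float)
        samples /= np.linalg.norm(samples, axis=1, keepdims=True)
        mean = samples.mean(axis=0)
        mean /= np.linalg.norm(mean)               # mean direction of the distribution
        spread = samples.std(axis=0).mean()        # scalar deviation: high = unreliable
        means.append(mean)
        spreads.append(spread)
    return np.asarray(means), np.asarray(spreads)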

Abstract

Recent works on data-driven sketch-based modeling use either voxel grids or normal/depth maps as geometric representations compatible with convolutional neural networks. While voxel grids can represent complete objects, including parts not visible in the sketches, their memory consumption restricts them to low-resolution predictions. In contrast, a single normal or depth map can capture fine details, but multiple maps from different viewpoints need to be predicted and fused to produce a closed surface. We propose to combine these two representations to address their respective shortcomings in the context of a multi-view sketch-based modeling system. Our method predicts a voxel grid common to all the input sketches, along with one normal map per sketch. We then use the voxel grid as a support for normal map fusion by optimizing its extracted surface such that it is consistent with the re-projected normals, while being as piecewise-smooth as possible overall. We compare our method with a recent voxel prediction system, demonstrating improved recovery of sharp features over a variety of man-made objects.
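For intuition on the last step, a generic piecewise-smooth normal-field optimization can be written as the energy below. This is an illustrative formulation under assumed notation (n_s the optimized normal of surface element s, \bar{n}_s the aggregated mean normal, w_s a data weight decreasing with the measured standard deviation, \alpha_{ss'} an edge-aware smoothness weight, \lambda a trade-off parameter), not necessarily the exact functional minimized in the paper:

E(\{\mathbf{n}_s\}) = \sum_{s} w_s \,\|\mathbf{n}_s - \bar{\mathbf{n}}_s\|^2 \;+\; \lambda \sum_{s \sim s'} \alpha_{ss'} \,\|\mathbf{n}_s - \mathbf{n}_{s'}\|^2

The first term keeps the optimized normals close to the aggregated predictions where those are reliable, while the second term smooths them across neighboring surface elements except near discontinuities, which is what preserves sharp features (in the spirit of Mumford-Shah / Ambrosio-Tortorelli piecewise-smooth models).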


BibTeX

@article{delanoy2019combining,
	title = {Combining voxel and normal predictions for multi-view 3D sketching},
	author = {Delanoy, Johanna and Coeurjolly, David and Lachaud, Jacques-Olivier and Bousseau, Adrien},
	journal = {Computers \& Graphics},
	volume = {82},
	pages = {65--72},
	year = {2019},
	publisher = {Pergamon},
	doi = {10.1016/j.cag.2019.05.024},
	url = {https://www.sciencedirect.com/science/article/pii/S0097849319300858}
}