We present a method to decompose a video into its intrinsic components of reflectance and shading, along with several example applications in video editing such as segmentation, stylization, material editing, recolorization and color transfer. Intrinsic decomposition is an ill-posed problem, which becomes even more challenging in the case of video due to the need for temporal coherence and the potentially large memory requirements of a global approach. Additionally, user interaction should be kept to a minimum in order to ensure efficiency. We propose a probabilistic approach, formulating a Bayesian maximum a posteriori (MAP) problem to drive the propagation of clustered reflectance values from the first frame, and defining additional constraints as priors on the reflectance and shading. We explicitly leverage temporal information in the video by building a causal-anticausal, coarse-to-fine iterative scheme, and by relying on optical flow information. We impose no restrictions on the input video, and show examples representing a varied range of difficult cases. Our method is the first one designed explicitly for video; moreover, it naturally ensures temporal consistency, and compares favorably against the state of the art in this regard.
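The decomposition above builds on the standard intrinsic image-formation model, in which each observed frame is the per-pixel product of a reflectance layer and a shading layer, I = R · S. The toy sketch below only illustrates this model and its log-domain additivity (which is what makes MAP formulations over reflectance and shading tractable); it is not the paper's propagation algorithm, and the layer values are made up for illustration.

```python
import numpy as np

# Toy intrinsic image model: a frame I is the per-pixel product of
# reflectance R (material albedo, piecewise constant) and shading S
# (illumination, smoothly varying). Values here are illustrative only.

# Piecewise-constant reflectance: two "materials" on an 8x8 frame
# (left half albedo 0.9, right half albedo 0.3).
reflectance = np.where(np.arange(64).reshape(8, 8) % 8 < 4, 0.9, 0.3)

# Smooth shading: a horizontal illumination gradient in (0, 1].
shading = np.tile(np.linspace(0.2, 1.0, 8), (8, 1))

# Observed frame under the multiplicative model I = R * S.
frame = reflectance * shading

# In the log domain the product becomes a sum, log I = log R + log S,
# which is why intrinsic decomposition is commonly posed there.
assert np.allclose(np.log(frame), np.log(reflectance) + np.log(shading))
```

Recovering R and S from I alone is ill-posed (any smooth factor can be moved between the two layers), which is why the method constrains the solution with priors and propagates reflectance clusters temporally.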


Downloads & Links

  • Download Video [mp4, 190MB]


    @article{YeSIG2014,
      author  = {Ye, Genzhi and Garces, Elena and Liu, Yebin and Dai, Qionghai and Gutierrez, Diego},
      title   = {Intrinsic Video and Applications},
      journal = {ACM Transactions on Graphics (SIGGRAPH 2014)},
      year    = {2014},
      volume  = {33},
      number  = {4},
    }