What is the Range of Surface Reconstructions from a Gradient Field? 
Abstract: We propose a generalized equation to represent a continuum of surface reconstruction solutions of a given non-integrable gradient field. We show that common approaches such as the Poisson solver and the Frankot-Chellappa algorithm are special cases of this generalized equation. For an $N\times N$ pixel grid, the subspace of all integrable gradient fields is of dimension $N^2-1$. Our framework can be applied to derive a range of meaningful surface reconstructions from this high-dimensional space. The key observation is that the range of solutions is related to the degree of anisotropy in applying weights to the gradients in the integration process. While common approaches use isotropic weights, we show that by using a progression of spatially varying anisotropic weights, we can achieve significant improvement in reconstructions. We propose (a) \textsl{$\alpha$-surfaces} using binary weights, where the parameter $\alpha$ allows a trade-off between smoothness and robustness, (b) M-estimators and edge-preserving regularization using continuous weights, and (c) diffusion using an affine transformation of gradients. We provide results on photometric stereo, compare with previous approaches, and show that anisotropic treatment discounts noise while recovering salient features in reconstructions.

Reconstruction from gradient
fields is important in several applications such as surface
reconstruction using photometric stereo and shape from shading, mesh
smoothing, retinex, high dynamic range (HDR) compression, phase
unwrapping, image editing, image matting, image fusion and so on. In
these gradient domain algorithms, the gradient field of an image is
manipulated to achieve the desired goal and the final image is obtained
by a 2D integration of the gradient field. In photometric stereo/shape
from shading, surface normals/gradients are obtained first and the
desired surface is obtained by integrating the gradient field.
The space of all possible
surface reconstructions from a gradient field (over an $N\times N$ grid) is of
dimension $N^2-1$. In this paper, we present a framework to derive a
range of meaningful solutions in this space. While previous approaches
such as the least-squares-based Poisson solver and the Frankot-Chellappa
algorithm assign equal weights to the gradients, we present a range of
solutions based on the degree of anisotropy in weighting the gradients in the
integration process. The above figure summarizes the range of solutions
obtained using our framework. Poisson solvers can be thought of as
using an isotropic, spatially-invariant Laplacian kernel, which results in
a smooth solution that might not be feature-preserving. By making this
kernel spatially-varying based on the discontinuities in the surface,
one can obtain feature-preserving reconstructions. We propose four new
solutions based on our framework, as shown above: alpha-surface
using binary weights, M-estimators and regularization using continuous
weights, and a diffusion-based approach built on an affine
transformation of the gradient field.
General Framework: Let $Z$ be the desired surface and $(p,q)$ denote the given gradient field. A general solution can be obtained by considering error functionals of the following form (this can be further generalized by taking higher-order derivatives):

$$E(Z) = \iint F(Z_x, Z_y, p, q)\, dx\, dy$$

The Euler-Lagrange equation gives

$$\frac{\partial}{\partial x} f_1(Z_x,Z_y,p,q) + \frac{\partial}{\partial y} f_2(Z_x,Z_y,p,q) = \frac{\partial}{\partial x} f_3(Z_x,Z_y,p,q) + \frac{\partial}{\partial y} f_4(Z_x,Z_y,p,q)$$

This is the main equation. Notice that in a least-squares solution (Poisson solver), one simply has $Z_x$, $Z_y$, $p$ and $q$ instead of functions of them. By changing these four functions ($f_1$, $f_2$, $f_3$, $f_4$) one can obtain different solutions. The following table shows how these functions can be chosen for the various solutions:

Solution                                      f1       f2       f3       f4
Poisson solver                                Zx       Zy       p        q
Alpha-surface (binary weights bx, by)         bx Zx    by Zy    bx p     by q
M-estimators / regularization (weights wx,wy) wx Zx    wy Zy    wx p     wy q
Diffusion (tensor D)            (f1, f2)^T = D (Zx, Zy)^T,  (f3, f4)^T = D (p, q)^T

The Poisson solver uses a spatially-invariant Laplacian kernel (the standard 5-point stencil). The Poisson equation can be written as $LZ = u$, where $u$ is the divergence of the given gradient field and $L$ denotes the Laplacian matrix obtained by applying this kernel at every pixel.
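To make the $LZ = u$ formulation concrete, here is a minimal least-squares (Poisson) integration sketch in Python. It is our own illustration, not the paper's code: it assumes forward-difference gradients on a small grid, and the name `poisson_integrate` is ours.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_integrate(p, q):
    """Least-squares (Poisson) integration of a gradient field (p, q).

    Assumed convention (illustrative, not from the paper):
      p[i, j] ~ Z[i, j+1] - Z[i, j]  (x-gradient, forward difference)
      q[i, j] ~ Z[i+1, j] - Z[i, j]  (y-gradient, forward difference)
    Assembles the normal equations L Z = u, where L is the grid-graph
    Laplacian and u collects the divergence of (p, q). One pixel is
    pinned to remove the unknown additive constant of integration.
    """
    n, m = p.shape
    N = n * m
    idx = lambda i, j: i * m + j
    L = lil_matrix((N, N))
    u = np.zeros(N)

    def add_edge(a, b, g):
        # edge a -> b with target difference g: contributes (Z[b]-Z[a]-g)^2
        L[a, a] += 1; L[b, b] += 1
        L[a, b] -= 1; L[b, a] -= 1
        u[a] -= g; u[b] += g

    for i in range(n):
        for j in range(m):
            if j + 1 < m:
                add_edge(idx(i, j), idx(i, j + 1), p[i, j])
            if i + 1 < n:
                add_edge(idx(i, j), idx(i + 1, j), q[i, j])

    L[0, :] = 0; L[0, 0] = 1; u[0] = 0   # pin Z[0, 0] = 0
    return spsolve(L.tocsr(), u).reshape(n, m)
```

With an exactly integrable gradient field, this recovers the surface exactly up to the pinned constant; with a non-integrable field it returns the least-squares fit.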
By assigning different weights to the given gradients, this kernel is modified to a
weighted kernel, which results in a weighted
Laplacian matrix $L_b$ or $L_w$ ($L_b$ using binary weights and $L_w$ using
continuous weights). Both $L_b$ and $L_w$ apply individual weights to the x and y
gradients. At the next higher level, one can apply an affine
transformation to the gradient field using a diffusion tensor $D$
(similar to that used in anisotropic diffusion for edge-preserving
smoothing). This results in a diffusion-based Laplacian matrix $L_D$. Thus,
by changing the anisotropy of the weights assigned to the gradients, one
can obtain better feature-preserving reconstructions.
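The weighted Laplacian $L_b$/$L_w$ can be sketched as follows (our own illustration, not the paper's code, assuming per-gradient weight arrays `wx` and `wy` supplied by the caller; binary weights give $L_b$, continuous weights $L_w$):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def weighted_integrate(p, q, wx, wy):
    """Weighted least-squares integration: minimizes
    sum wx*(Zx - p)^2 + wy*(Zy - q)^2 over the grid, i.e. solves a
    weighted-Laplacian system. Assumes forward-difference gradients and
    that the sub-graph of nonzero-weight gradients stays connected."""
    n, m = p.shape
    N = n * m
    idx = lambda i, j: i * m + j
    L = lil_matrix((N, N))
    u = np.zeros(N)

    def add_edge(a, b, g, w):
        # weighted edge: contributes w * (Z[b] - Z[a] - g)^2
        L[a, a] += w; L[b, b] += w
        L[a, b] -= w; L[b, a] -= w
        u[a] -= w * g; u[b] += w * g

    for i in range(n):
        for j in range(m):
            if j + 1 < m:
                add_edge(idx(i, j), idx(i, j + 1), p[i, j], wx[i, j])
            if i + 1 < n:
                add_edge(idx(i, j), idx(i + 1, j), q[i, j], wy[i, j])

    L[0, :] = 0; L[0, 0] = 1; u[0] = 0   # pin the unknown constant
    return spsolve(L.tocsr(), u).reshape(n, m)
```

With all weights equal to 1 this reduces to the Poisson solver; zeroing the weight on a corrupted gradient removes its influence entirely, which is the mechanism behind the binary-weight solutions.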

Alpha-Surface: We now discuss the
alpha-surface algorithm in detail. This
algorithm is based on applying the concept of robust estimation
(RANSAC) to the integration of the gradient field. We show that the 2D
integration problem is analogous to the simple problem of estimating a
line from a set of 2D points.

Consider estimating a line from 2D points (with noise and outliers). It is well-known that even the presence of a single outlier can skew the solution given by a least-squares fit. RANSAC is commonly used for robust estimation. In RANSAC, a set of inliers is identified and used for estimation instead of using all the data points. Note that a Poisson solver for gradient integration uses all the gradients, and thus the least-squares fit will not preserve discontinuities in the presence of outliers in the gradient field.

To apply RANSAC to gradient integration, we need to know the minimum number of gradients required for integration. For example, in estimating a line, we need m=2 points; for plane fitting, we need m=3 points; and so on. Consider an $N\times N$ discrete grid. We have $N^2$ x-gradients and $N^2$ y-gradients ($2N^2$ observations), but we wish to compute only $N^2$ surface values (or image pixel values). We show that the minimum number of gradients required for integration is $N^2-1$, and this set of gradients should form a spanning tree of the underlying planar graph. Consider a planar graph where nodes represent the surface values and edges represent the gradients (horizontal edges correspond to x-gradients and vertical edges to y-gradients). 2D integration is essentially a process of finding the values of the nodes, given the values of the edges. If we can reach every node in the graph by some path, integration can be done. The minimum configuration in which all nodes can be reached is a spanning tree. However, applying brute-force RANSAC is computationally infeasible, as the number of spanning trees of a graph is exponential in the number of nodes. We therefore first fix a spanning tree by assigning weights to the gradients and finding the Minimum Spanning Tree (MST). This is equivalent to fixing the initial two points in estimating a line. In order to handle noise, we would like to use more and more inliers in the estimation.
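The spanning-tree argument can be made concrete with a small sketch (ours, not the paper's code): assign each grid edge the magnitude of its gradient as weight, take the minimum spanning tree, and integrate by accumulating gradients outward from a root, so that exactly one fewer gradients than pixels are used.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

def mst_integrate(p, q):
    """Integrate (p, q) along a minimum spanning tree of the grid graph.

    Edge weight = |gradient| + eps, so the MST prefers small gradients
    and routes around large (outlier-prone) ones. The constant eps
    offset does not change which tree is minimal, since every spanning
    tree has the same number of edges.
    """
    n, m = p.shape
    N = n * m
    idx = lambda i, j: i * m + j
    eps = 1e-6                      # csgraph treats a 0 entry as "no edge"
    W = lil_matrix((N, N))
    grad = {}                       # (a, b) -> signed gradient along a -> b
    for i in range(n):
        for j in range(m):
            if j + 1 < m:
                a, b = idx(i, j), idx(i, j + 1)
                W[a, b] = abs(p[i, j]) + eps
                grad[(a, b)] = p[i, j]; grad[(b, a)] = -p[i, j]
            if i + 1 < n:
                a, b = idx(i, j), idx(i + 1, j)
                W[a, b] = abs(q[i, j]) + eps
                grad[(a, b)] = q[i, j]; grad[(b, a)] = -q[i, j]
    T = minimum_spanning_tree(W.tocsr())
    order, pred = breadth_first_order(T, 0, directed=False)
    Z = np.zeros(N)
    for k in order[1:]:             # accumulate values outward from the root
        Z[k] = Z[pred[k]] + grad[(pred[k], k)]
    return Z.reshape(n, m)
```

Because a single huge outlier gradient is the strictly heaviest edge in its grid cycle, no MST contains it, so the tree integration simply routes around it.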
The alpha-surface approach uses a parameter alpha, which gives the accepted tolerance in the presence of noise. In other words, estimation is done using the set of points that lie within a distance alpha of the solution. If alpha = 0, one uses only the initial set of points (the gradients corresponding to the initial spanning tree). As we increase alpha, we consider more and more gradients as inliers. When alpha is large, all gradients are considered inliers and the solution essentially reduces to the least-squares solution. Thus, by changing alpha, one can trace a path in the solution space and trade off smoothness against robustness. The following videos show the reconstructed surfaces using the alpha-surface algorithm. The initial spanning tree was found by assigning gradient magnitudes as weights to the edges of the graph and finding the minimum spanning tree. Notice that when alpha is small, outliers are discarded, but the solution is noisy because not all inliers are used. When alpha is large, one gets a smooth solution equivalent to the least-squares solution. In the RampPeaks video, notice that at alpha = 2.7653, the effect of outliers starts distorting the reconstructed surface. For the face video, note the change at alpha = 0.8203, where the discontinuity between the right cheek and the background is smoothed out. Face video (3.5 MB), RampPeaks video (3.5 MB)

Discussion and Future Work: Although we describe a range of
solutions using a general
framework, the choice of a particular algorithm for a given
application remains an open problem. In our experience, the alpha-surface
and diffusion-based approaches seem to outperform the other approaches in
almost all cases. As stated above, the space of all possible
reconstructions is of dimension $N^2-1$ for an $N\times N$ grid, and it is not
practical to represent all the solutions. The ranges of solutions
we explore represent a family of solutions that are meaningful and
likely to be close to the true surface; they do not span all possible
solutions. Other meaningful solutions may be obtained using our
framework by considering higher-order derivatives. In addition,
there might be solutions that do not fit our framework. We believe
that even within a particular solution that we have presented, there
might be a class of solutions. For example, for the diffusion-based
approach, we considered an edge-preserving diffusion tensor, but other
tensors, such as those used in coherence-preserving image smoothing,
might be useful. In our solutions, we have ignored the
$dE/dZ$ term, which can be utilized, for example, if one has control-point
information in photometric stereo.
Results: The following images show reconstructed surfaces using various algorithms for different datasets. The gradient field was obtained by a simple photometric stereo algorithm on the intensity images for the Mozart, vase and flower-pot datasets. The RampPeaks (top left) example is a synthetic surface. Notice that in all examples, the diffusion and alpha-surface algorithms recover the salient features while discounting noise.