Edge Suppression by Gradient Field Transformation Using Cross-Projection Tensors
Figure 1. Recovering the foreground layer. A and B are two images taken from the same viewpoint with significant illumination variations. To recover the foreground layer A', we remove all edges from image A that are also present in image B. This is done by estimating projection tensors from A and B (a 2x2 matrix at each pixel) and affine-transforming the gradient field of A using them. A' is obtained by integrating the modified gradient field. Note that all background texture is removed (including the texture inside the shadow of the box). Simple frame differencing or cross-correlation would fail here due to the illumination variations. Since only a single background image is used, a statistical background model would be difficult to estimate. Color-based algorithms would also have trouble, as the color of the foreground object (the box) is similar to that of the background (the red book). Our method, however, does not rely on pixel intensities but on the direction of the gradient, and handles the illumination variations easily.
We make the usual assumption that illumination and reflectance edges do not coincide. Any reflectance edge in the foreground layer that coincides with an illumination edge in background image B cannot be recovered. Also, the shadow of the box is not removed, as it is a new edge in image A. (See the example below for shadow removal using a flash/no-flash image pair.)
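The per-pixel tensors can be sketched as follows. One simple variant (an assumption on our part, not the paper's exact formulation) derives the tensor from image B alone: wherever B has a significant gradient, the tensor projects out B's gradient direction; elsewhere it is the identity, leaving A's gradients untouched. The threshold `eps` and the helper names are illustrative.

```python
import numpy as np

def cross_projection_tensors(B, eps=1e-3):
    """Estimate a 2x2 tensor per pixel from image B that annihilates
    B's gradient direction (a minimal sketch; eps is an assumed threshold)."""
    gy, gx = np.gradient(B.astype(float))
    mag2 = gx**2 + gy**2
    # Unit gradient direction, only where B has a significant edge
    mask = mag2 > eps
    ux = np.where(mask, gx / np.sqrt(mag2 + 1e-12), 0.0)
    uy = np.where(mask, gy / np.sqrt(mag2 + 1e-12), 0.0)
    # D = I - u u^T : projects any vector onto the direction
    # perpendicular to B's gradient, suppressing B's edges
    D = np.empty(B.shape + (2, 2))
    D[..., 0, 0] = 1.0 - ux * ux
    D[..., 0, 1] = -ux * uy
    D[..., 1, 0] = -ux * uy
    D[..., 1, 1] = 1.0 - uy * uy
    return D

def transform_gradients(A, D):
    """Affine-transform the gradient field of A with the per-pixel tensors D."""
    gy, gx = np.gradient(A.astype(float))
    gx2 = D[..., 0, 0] * gx + D[..., 0, 1] * gy
    gy2 = D[..., 1, 0] * gx + D[..., 1, 1] * gy
    return gx2, gy2
```

Applied to A, the transformed field has zero gradient along B's edge directions, while edges present only in A (such as the box's shadow) pass through unchanged.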
Diagram of the algorithm: we remove all edges from image A that are also present in image B. This is done by estimating projection tensors and transforming the gradient field of image A using them. Integrating the transformed gradient field gives A', with the corresponding edges removed. Integrating the residual field gives A'', which consists of only those edges of A that are also present in B. Thus, this method can perform edge suppression in image A using a different image (image B). Image B can be taken under different illumination.
We have applied this simple technique for several applications such as
(a) Removing shadows from color images
(b) Removing glass reflections
(c) Recovering intrinsic images for non-Lambertian scenes.
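The final integration step, recovering A' from the modified gradient field, amounts to solving a Poisson equation. Below is a minimal sketch, assuming forward-difference gradients and Neumann boundary conditions, solved with a type-II DCT; the function name and these discretization choices are ours, not necessarily the paper's.

```python
import numpy as np
from scipy.fft import dctn, idctn

def integrate_gradients(gx, gy):
    """Recover an image (up to an additive constant) whose gradients best
    match (gx, gy), by solving the Poisson equation lap(u) = div(g).
    Assumes forward-difference gradients and Neumann boundary conditions."""
    H, W = gx.shape
    # Divergence of the gradient field (backward differences)
    div = gx.copy()
    div[:, 1:] -= gx[:, :-1]
    div += gy
    div[1:, :] -= gy[:-1, :]
    # The type-II DCT diagonalizes the Neumann Laplacian
    d = dctn(div, norm='ortho')
    i = np.arange(H)[:, None]
    j = np.arange(W)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / H) - 1.0) \
          + 2.0 * (np.cos(np.pi * j / W) - 1.0)
    denom[0, 0] = 1.0   # avoid divide-by-zero at the DC term
    d = d / denom
    d[0, 0] = 0.0       # fix the free additive constant (zero mean)
    return idctn(d, norm='ortho')
```

The same routine, applied to the residual field instead of the transformed field, yields A''.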
Removing shadows from color images
A and B are two images taken under ambient (no-flash) and flash illumination. The flash image B was used to remove shadows from A. Note that the shadows due to the flash are not transferred to the result. Also notice that the estimated illumination map (image A') is free of the complex texture on the face of the book.
Another example, where A and F denote a pair of images under no-flash and flash illumination. Notice that the shadow-free image A'' does not have the flash shadows from F. Also, in this example the color tones of the two images are very different: no-flash images are usually yellow-reddish due to ambient room lighting, while flash images are usually bluish. However, our method does not need any color calibration or white balancing. The ambient illumination map is shown in image A'.
In this example, we want to remove glass reflections (reflections of the checkerboard) from a flash image using a low-contrast no-flash image. Note that the color tones of the two images are different, but F'' does not have any color artifacts.
Relationship with Gradient Projection:
At SIGGRAPH 2005, we proposed the Gradient Projection technique to remove glass reflections from a flash image using a no-flash image. The idea was to take the projection of the flash-image gradient vector onto the ambient-image gradient vector at every pixel. In this paper, we show that taking this projection is a special case of an affine transformation of the gradient field. A great deal of research has been done on image restoration using diffusion tensors (see the papers by Joachim Weickert and David Tschumperle). We show how to derive "projection tensors" from one image and apply them to another image to achieve edge suppression under variable illumination.
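The projection idea can be sketched in a few lines. Projecting the flash gradient onto the ambient gradient direction at each pixel is the same as multiplying it by the rank-one tensor a a^T / |a|^2, where a is the ambient gradient, which is how it fits the affine-transformation view above. The function name and the `eps` regularizer are our own illustrative choices.

```python
import numpy as np

def gradient_projection(gFx, gFy, gAx, gAy, eps=1e-6):
    """Project the flash-image gradient (gFx, gFy) onto the ambient-image
    gradient direction (gAx, gAy) at each pixel. Equivalent to applying the
    per-pixel tensor T = a a^T / |a|^2, a = ambient gradient (a sketch)."""
    mag2 = gAx**2 + gAy**2 + eps        # avoid division by zero
    s = (gFx * gAx + gFy * gAy) / mag2  # scalar projection coefficient
    return s * gAx, s * gAy
```

For example, a flash gradient (1, 1) at a pixel whose ambient gradient points along the x-axis projects to (1, 0): the component along the ambient edge direction is kept, and the rest (e.g. a reflection edge) is discarded.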
Available for download