Since the next few posts will talk about binary descriptors, I thought it would be a good idea to post a short introduction to the subject of patch descriptors. The following post will discuss the motivation for patch descriptors, their common usage, and highlight the Histogram of Oriented Gradients (HOG) based descriptors.
I think the best way to start is to consider one application of patch descriptors and to explain the common pipeline for their usage. Consider, for example, the application of image alignment: we would like to align two images of the same scene taken from slightly different viewpoints. One way of doing so is by applying the following steps:
- Compute distinctive keypoints in both images (for example, corners).
- Compare the keypoints between the two images to find matches.
- Use the matches to find a general mapping between the images (for example, a homography).
- Apply the mapping to the first image to align it to the second image.
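To make the third step concrete, here is a minimal sketch of homography estimation, assuming we already have matched point pairs from the second step. It uses the direct linear transform (DLT), one standard way to solve for the mapping; a real pipeline would additionally use something like RANSAC to reject bad matches. The point coordinates and the example homography below are made up for illustration.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    via the direct linear transform (needs >= 4 correspondences)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.array(rows)
    # The homography is the null vector of A: the smallest right singular vector.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                # fix the arbitrary scale so H[2, 2] == 1

def apply_homography(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:]

# Simulated "matches": points and their images under a known homography.
H_true = np.array([[0.9, -0.1, 5.0],
                   [0.1,  1.1, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 7], [3, 2]], float)
dst = apply_homography(H_true, src)

H_est = fit_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))  # True -- the mapping is recovered
```

With exact correspondences the homography is recovered to numerical precision; with noisy real matches you would get a least-squares fit instead.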
Let’s focus on the second step. Given a small patch around a keypoint taken from the first image, and a second small patch around a keypoint taken from the second image, how can we determine whether they are indeed the same point?
In general, the problem we are focusing on is that of comparing two image patches and measuring their similarity. Given two such patches, how can we determine their similarity? We can measure the pixel-to-pixel similarity by computing their Euclidean distance, but that measure is very sensitive to noise, rotation, translation and illumination changes. In most applications we would like to be robust to such changes. For example, in the image alignment application, we would like to be robust to small viewpoint changes – that means robustness to rotation and translation.
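The sensitivity of raw pixel distance is easy to demonstrate. The toy patch below is just random values standing in for real image data; even a global brightness shift or a one-pixel translation makes the Euclidean distance blow up, although a human would call the patches "the same":

```python
import numpy as np

rng = np.random.default_rng(0)
patch = rng.uniform(0, 255, size=(16, 16))

# The "same" patch under a global illumination shift and a one-pixel translation.
brighter = np.clip(patch + 40, 0, 255)
shifted = np.roll(patch, 1, axis=1)

def euclid(a, b):
    return np.linalg.norm(a - b)

print(euclid(patch, patch))     # 0.0 -- identical patches match perfectly
print(euclid(patch, brighter))  # large, despite identical content
print(euclid(patch, shifted))   # large, despite a tiny translation
```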
This is where patch descriptors come in handy. A descriptor is a function that is applied to the patch to describe it in a way that is invariant to all the image changes that are relevant to our application (e.g. rotation, illumination, noise etc.). A descriptor comes paired with a distance function for determining the similarity, or distance, of two computed descriptors. So to compare two image patches, we compute their descriptors and measure their similarity by computing the distance between the descriptors. The following diagram illustrates this process:
The common pipeline for using patch descriptors is:
- Detect keypoints in the image (distinctive points such as corners).
- Describe the region around each keypoint as a feature vector, using a descriptor.
- Use the descriptors in the application (to compare two descriptors, use the descriptor distance or similarity function).
The following diagram illustrates this process:
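The third step of the pipeline – actually using the descriptors – can be sketched as nearest-neighbor matching between two sets of descriptor vectors. The sketch below adds the ratio test from Lowe's SIFT paper (accept a match only when the best distance is clearly smaller than the second best); the descriptor values themselves are synthetic stand-ins:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Nearest-neighbor matching with Lowe's ratio test: accept a match
    only if the best distance is clearly smaller than the second best,
    which filters out ambiguous matches."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

rng = np.random.default_rng(1)
desc2 = rng.normal(size=(5, 128))          # descriptors from image 2
noise = 0.01 * rng.normal(size=(3, 128))
desc1 = desc2[[2, 0, 4]] + noise           # image 1 sees points 2, 0, 4 again

print(match_descriptors(desc1, desc2))     # [(0, 2), (1, 0), (2, 4)]
```

Each descriptor from the first image is paired with its intended counterpart in the second, despite the added noise.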
HOG descriptors
So, now that we understand how descriptors are used, let’s give an example of one family of descriptors. We will consider the family of Histogram of Oriented Gradients (HOG) based descriptors. Notable examples of this family are SIFT [1], SURF [2] and GLOH [3]. Of the members of this family, we will describe its most famous member – the SIFT descriptor.
SIFT was presented in 1999 by David Lowe and includes both a keypoint detector and descriptor. SIFT is computed as follows:
- First, detect keypoints using the SIFT detector, which also detects the scale and orientation of each keypoint.
- Next, for a given keypoint, warp the region around it to the canonical orientation and scale, and resize the region to 16×16 pixels.
- Compute the gradient (orientation and magnitude) at each pixel.
- Divide the patch into 16 squares of 4×4 pixels each.
- For each square, compute a gradient orientation histogram over 8 directions.
- Concatenate the histograms to obtain a 128-dimensional (16×8) feature vector:
SIFT descriptor illustration:
SIFT is invariant to illumination changes, as gradients are invariant to shifts in light intensity. It is also somewhat robust to small rotations and translations, as the histograms discard the exact position of each gradient within its cell.
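The descriptor computation above can be sketched in a stripped-down form: per-pixel gradients, 4×4 cells, 8-bin magnitude-weighted orientation histograms, concatenated into 128 values. Real SIFT additionally applies Gaussian weighting, interpolation between bins, and normalization of the final vector, all of which are omitted here:

```python
import numpy as np

def sift_like_descriptor(patch):
    """A simplified SIFT-style descriptor for a 16x16 patch:
    per-pixel gradients -> 4x4 cells -> 8-bin orientation histograms
    weighted by gradient magnitude -> concatenated 128-dim vector."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    # Bin orientations into 8 directions over [0, 2*pi).
    ori = np.arctan2(gy, gx) % (2 * np.pi)
    bins = np.floor(ori / (2 * np.pi / 8)).astype(int) % 8
    desc = []
    for cy in range(0, 16, 4):
        for cx in range(0, 16, 4):
            hist = np.zeros(8)
            for b, m in zip(bins[cy:cy+4, cx:cx+4].ravel(),
                            mag[cy:cy+4, cx:cx+4].ravel()):
                hist[b] += m          # magnitude-weighted histogram
            desc.append(hist)
    return np.concatenate(desc)       # 16 cells * 8 bins = 128 values

rng = np.random.default_rng(2)
patch = rng.uniform(0, 200, size=(16, 16))   # a made-up patch for illustration
d = sift_like_descriptor(patch)
print(d.shape)                               # (128,)
# Gradients ignore a constant intensity shift, so the descriptor is unchanged:
print(np.allclose(d, sift_like_descriptor(patch + 50)))  # True
```

The last line demonstrates the illumination invariance discussed above: adding a constant to every pixel leaves the gradients, and therefore the descriptor, untouched.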
Other members of this family, for example SURF and GLOH, are also based on taking histograms of gradient orientations. SIFT and SURF are patented, so they can’t be freely used in applications.
So, that’s it for now :) In the next few posts we will talk about binary descriptors, which provide an alternative as they are lightweight, fast and not patented.
Gil.
References:
[1] Lowe, David G. “Object recognition from local scale-invariant features.” Proceedings of the Seventh IEEE International Conference on Computer Vision. Vol. 2. IEEE, 1999.
[2] Bay, Herbert, Tinne Tuytelaars, and Luc Van Gool. “SURF: Speeded Up Robust Features.” Computer Vision – ECCV 2006. Springer Berlin Heidelberg, 2006. 404-417.
[3] Mikolajczyk, Krystian, and Cordelia Schmid. “A performance evaluation of local descriptors.” IEEE Transactions on Pattern Analysis and Machine Intelligence 27.10 (2005): 1615-1630.