Slide 1
Local features:
detection and description
Devi Parikh
Slide credit: Kristen Grauman
Disclaimer: Most slides have been borrowed from Kristen Grauman, who may have borrowed some of them from others. Any time a slide did not already have a credit on it, I have credited it to Kristen. So there is a chance some of these credits are inaccurate.
Slide 2: Announcements
Project proposals: due on Wednesday.
PS3 out: due in <3 weeks (October 24th).
Slide credit: Kristen Grauman
Slide 3: Topics overview
Intro
Features & filters: filters, gradients, edges, blobs/regions
Multiple views and motion: local invariant features
Grouping & fitting
Recognition
Video processing
Slide credit: Kristen Grauman
Slide 4: Last time
Detecting corner-like points in an image
Slide credit: Kristen Grauman
Slide 5: Today
Local invariant features:
Detection of interest points (Harris corner detection)
Scale invariant blob detection: LoG
Description of local patches (SIFT: histograms of oriented gradients)
Slide credit: Kristen Grauman
Slide 6: Local features: main components
Detection: identify the interest points.
Description: extract a vector feature descriptor surrounding each interest point.
Matching: determine correspondence between descriptors in two views.
Slide credit: Kristen Grauman
Slide 7: Properties of the Harris corner detector
Rotation invariant? Yes.
Scale invariant?
Slide credit: Kristen Grauman
Slide 8: Properties of the Harris corner detector
Rotation invariant? Yes.
Scale invariant? No: when the corner is scaled up, each small window sees only a gently curving contour, so all points will be classified as edges; only at a coarser scale does a window capture the full corner.
Slide credit: Kristen Grauman
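The behavior on these two slides follows from the Harris response R = det(M) - k * trace(M)^2, where M is the windowed second-moment matrix of image gradients: a corner makes both eigenvalues of M large (R > 0), while an edge makes only one large (R < 0). A minimal NumPy sketch; the box window, finite-difference gradients, and k = 0.05 are illustrative simplifications (the standard formulation uses a Gaussian window):

```python
import numpy as np

def box_filter(a, r=2):
    """Sum values in a (2r+1) x (2r+1) window around each pixel."""
    pad = np.pad(a, r, mode="edge")
    out = np.zeros_like(a)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def harris_response(img, k=0.05):
    """R = det(M) - k * trace(M)^2, with M the windowed second-moment matrix."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = box_filter(Ix * Ix)
    Syy = box_filter(Iy * Iy)
    Sxy = box_filter(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# A white square on black: corners give R > 0, edges give R < 0
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

On this test image, R is positive at the square's corners, negative along its edges, and zero in the flat interior, matching the eigenvalue picture from the slides.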
Slide 9: Scale invariant interest points
How can we independently select interest points in each image, such that the detections are repeatable across different scales?
Slide credit: Kristen Grauman
Slide 10: Automatic scale selection
Intuition: find a scale that gives local maxima of some function f in both position and scale.
[Figure: f plotted against region size for Image 1 and Image 2; the maxima occur at corresponding scales s1 and s2.]
Slide credit: Kristen Grauman
Slide 11: What can be the “signature” function?
Slide credit: Kristen Grauman
Slide 12: Recall: Edge detection
Filter the signal f with the derivative of Gaussian.
Edge = maximum of derivative.
Source: S. Seitz
Slide 13: Recall: Edge detection
Filter the signal f with the second derivative of Gaussian (Laplacian).
Edge = zero crossing of second derivative.
Source: S. Seitz
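Both recap slides can be reproduced numerically in 1-D: the derivative-of-Gaussian response peaks at the edge, and the second-derivative response crosses zero there. A sketch assuming SciPy is available; the signal length and σ = 3 are arbitrary choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# 1-D signal with a step edge between indices 49 and 50
f = np.zeros(100)
f[50:] = 1.0

# First derivative of Gaussian: the edge is the maximum of the response
d1 = gaussian_filter1d(f, sigma=3, order=1)
edge_from_max = int(np.argmax(np.abs(d1)))

# Second derivative of Gaussian (1-D Laplacian): the edge is the zero crossing
d2 = gaussian_filter1d(f, sigma=3, order=2)
zero_crossings = np.where(d2[:-1] * d2[1:] < 0)[0]
```

Both criteria locate the same edge; the zero-crossing view is what generalizes to the blob detector on the next slides.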
Slide 14: From edges to blobs
Edge = ripple. Blob = superposition of two ripples.
Spatial selection: the magnitude of the Laplacian response will achieve a maximum at the center of the blob, provided the scale of the Laplacian is “matched” to the scale of the blob.
Slide credit: Lana Lazebnik
Slide 15: Blob detection in 2D
Laplacian of Gaussian: circularly symmetric operator for blob detection in 2D.
Slide credit: Kristen Grauman
Slide 16: Blob detection in 2D: scale selection
Laplacian-of-Gaussian = “blob” detector.
[Figure: LoG responses for three images across a range of filter scales.]
Slide credit: Bastian Leibe
Slide 17: Blob detection in 2D
We define the characteristic scale as the scale that produces the peak of the Laplacian response.
Slide credit: Lana Lazebnik
Slide 18: Example
Original image at ¾ the size
Slide credit: Kristen Grauman
Slide 19: Original image at ¾ the size
Slide credit: Kristen Grauman
Slides 20-24: [Images only: the blob detection example continued.]
Slide credit: Kristen Grauman
Slide 25: Scale invariant interest points
Interest points are local maxima in both position and scale.
[Figure: squared filter response maps at scales s1 through s5; the output is a list of (x, y, σ).]
Slide credit: Kristen Grauman
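Characteristic-scale selection can be checked numerically. For a bright disk of radius r, the scale-normalized Laplacian response σ²∇²G at the disk center peaks near σ = r/√2 (the standard result behind the Lindeberg-style detector). A sketch assuming SciPy; the disk radius and the σ grid are arbitrary illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Bright disk of radius 8 on a dark background
r = 8
yy, xx = np.mgrid[:64, :64]
img = ((yy - 32) ** 2 + (xx - 32) ** 2 <= r ** 2).astype(float)

# Scale-normalized LoG response at the blob center over a range of scales.
# The minus sign flips the (negative) response to a bright blob so that
# the characteristic scale is an argmax.
sigmas = np.linspace(3, 9, 25)
responses = [-s ** 2 * gaussian_laplace(img, s)[32, 32] for s in sigmas]
char_scale = sigmas[int(np.argmax(responses))]
```

Without the σ² normalization, the raw Laplacian response decays with scale and no meaningful peak survives; the normalization is what makes responses comparable across scales.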
Slide 26: Scale-space blob detector: Example
T. Lindeberg. Feature detection with automatic scale selection. IJCV 1998.
Slide source: Kristen Grauman
Slide 27: Scale-space blob detector: Example
Image credit: Lana Lazebnik
Slide 28: Technical detail
We can approximate the Laplacian with a difference of Gaussians, which is more efficient to implement:
Laplacian: ∇²G = ∂²G/∂x² + ∂²G/∂y²
Difference of Gaussians: DoG(x, y, σ) = G(x, y, kσ) − G(x, y, σ) ≈ (k − 1) σ² ∇²G
Slide credit: Kristen Grauman
Slide 29: Local features: main components
Detection: identify the interest points.
Description: extract a vector feature descriptor surrounding each interest point.
Matching: determine correspondence between descriptors in two views.
Slide credit: Kristen Grauman
Slide 30: Geometric transformations
e.g., scale, translation, rotation
Slide credit: Kristen Grauman
Slide 31: Photometric transformations
Figure from T. Tuytelaars, ECCV 2006 tutorial
Slide credit: Kristen Grauman
Slide 32: Raw patches as local descriptors
The simplest way to describe the neighborhood around an interest point is to write down the list of intensities to form a feature vector. But this is very sensitive to even small shifts and rotations.
Slide credit: Kristen Grauman
Slide 33: SIFT descriptor [Lowe 2004]
Use histograms to bin pixels within sub-patches according to their orientation (bins cover 0 to 2π).
Why sub-patches? Why does SIFT have some illumination invariance?
Slide credit: Kristen Grauman
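A toy version of the descriptor makes the illumination question concrete: image gradients are unchanged by an added brightness constant, and L2-normalizing the histogram vector cancels a multiplicative contrast change. This is a deliberately simplified sketch (hard orientation bins, no Gaussian weighting, trilinear interpolation, or the 0.2 clipping of Lowe's full scheme):

```python
import numpy as np

def sift_like_descriptor(patch):
    """4x4 grid of 8-bin gradient-orientation histograms -> 128-d, L2-normalized."""
    patch = patch.astype(float)
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)                               # in [-pi, pi]
    bins = np.minimum(((ori + np.pi) / (np.pi / 4)).astype(int), 7)
    cell = patch.shape[0] // 4                             # assumes a square patch
    hist = np.zeros((4, 4, 8))
    for y in range(patch.shape[0]):
        for x in range(patch.shape[1]):
            hist[y // cell, x // cell, bins[y, x]] += mag[y, x]
    vec = hist.ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)

rng = np.random.default_rng(0)
patch = rng.random((16, 16))
d_orig = sift_like_descriptor(patch)
d_lit = sift_like_descriptor(2.0 * patch + 10.0)   # contrast + brightness change
```

Here d_orig and d_lit come out (numerically) identical, which is exactly the partial illumination invariance asked about on the slide; sub-patches, meanwhile, keep coarse spatial layout while tolerating small shifts within each cell.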
Slide 34: Making descriptor rotation invariant
Rotate the patch according to its dominant gradient orientation. This puts the patches into a canonical orientation.
Image from Matthew Brown
Slide credit: Kristen Grauman
Slide 35: SIFT descriptor [Lowe 2004]
An extraordinarily robust matching technique:
Can handle changes in viewpoint (up to about 60 degrees of out-of-plane rotation)
Can handle significant changes in illumination (sometimes even day vs. night, below)
Fast and efficient; can run in real time
Lots of code available:
http://people.csail.mit.edu/albert/ladypack/wiki/index.php/Known_implementations_of_SIFT
Slide credit: Steve Seitz
Slide 36: Example
NASA Mars Rover images
Slide credit: Kristen Grauman
Slide 37: Example
NASA Mars Rover images with SIFT feature matches
Figure by Noah Snavely
Slide credit: Kristen Grauman
Slide 38: SIFT properties
Invariant to: scale, rotation.
Partially invariant to: illumination changes, camera viewpoint, occlusion, clutter.
Slide credit: Kristen Grauman
Slide 39: Local features: main components
Detection: identify the interest points.
Description: extract a vector feature descriptor surrounding each interest point.
Matching: determine correspondence between descriptors in two views.
Slide credit: Kristen Grauman
Slide 40: Matching local features
Slide credit: Kristen Grauman
Slide 41: Matching local features
To generate candidate matches, find patches that have the most similar appearance (e.g., lowest SSD).
Simplest approach: compare them all, take the closest (or closest k, or within a thresholded distance).
[Figure: candidate matches between Image 1 and Image 2.]
Slide credit: Kristen Grauman
Slide 42: Ambiguous matches
At what SSD value do we have a good match?
To add robustness to matching, we can consider the ratio: distance to best match / distance to second best match.
If low, the first match looks good. If high, the match could be ambiguous.
Slide credit: Kristen Grauman
Slide 43: Matching SIFT descriptors
Nearest neighbor (Euclidean distance).
Threshold the ratio of nearest to 2nd nearest descriptor distance.
Lowe, IJCV 2004
Slide credit: Kristen Grauman
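The ratio test from the last three slides fits in a few lines. A sketch; the 2-D toy descriptors are made up for illustration (real SIFT descriptors are 128-d), and 0.8 is the operating point Lowe suggests:

```python
import numpy as np

def match_ratio_test(desc1, desc2, ratio=0.8):
    """Nearest-neighbor matching with the ratio test.

    For each descriptor in desc1, find its two nearest neighbors in desc2
    (Euclidean distance) and keep the match only if the best distance is
    less than `ratio` times the second-best distance."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

# Toy descriptors: the first query has a clear match, the second is ambiguous
desc1 = np.array([[0.0, 0.0], [5.0, 5.0]])
desc2 = np.array([[0.1, 0.0], [10.0, 10.0], [5.1, 5.0], [5.0, 5.1]])
matches = match_ratio_test(desc1, desc2)
```

The second query is rejected because its two nearest neighbors are nearly equidistant, exactly the "ambiguous match" case from Slide 42.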
Slide 44: Recap: robust feature-based alignment
Source: L. Lazebnik
Slides 45-48: [Incremental build-up of the pipeline listed on Slide 49.]
Slide 49: Recap: robust feature-based alignment
Extract features.
Compute putative matches.
Loop:
Hypothesize transformation T (from a small group of putative matches that are related by T).
Verify transformation (search for other matches consistent with T).
Source: L. Lazebnik
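The hypothesize-and-verify loop above can be illustrated for the simplest transformation, a pure translation, where a single putative match determines T. This is a sketch in the spirit of RANSAC; the point data, outlier fraction, tolerance, and iteration count are made-up illustrative values:

```python
import numpy as np

def ransac_translation(pts1, pts2, n_iters=100, tol=1.0, seed=0):
    """Hypothesize-and-verify for a pure translation.

    Each iteration samples one putative match, hypothesizes the translation
    T it implies, and verifies T by counting matches consistent with it."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, -1
    for _ in range(n_iters):
        i = rng.integers(len(pts1))
        t = pts2[i] - pts1[i]                           # hypothesized T
        resid = np.linalg.norm(pts1 + t - pts2, axis=1)
        n_inliers = int((resid < tol).sum())            # verification step
        if n_inliers > best_inliers:
            best_t, best_inliers = t, n_inliers
    return best_t, best_inliers

# 20 putative matches: 15 related by a translation of (3, 4), 5 outliers
rng = np.random.default_rng(1)
pts1 = rng.random((20, 2)) * 100
pts2 = pts1 + np.array([3.0, 4.0])
pts2[15:] += np.array([50.0, -30.0])                    # corrupt 5 matches
t, n_inliers = ransac_translation(pts1, pts2)
```

Because any single correct match yields the true T, and the 15 consistent matches outvote the 5 outliers, the loop recovers the translation despite the bad correspondences.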
Slide 50: Applications of local invariant features
Wide baseline stereo
Motion tracking
Panoramas
Mobile robot navigation
3D reconstruction
Recognition
…
Slide credit: Kristen Grauman
Slide 51: Automatic mosaicing
http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html
Slide credit: Kristen Grauman
Slide 52: Wide baseline stereo
[Image from T. Tuytelaars, ECCV 2006 tutorial]
Slide credit: Kristen Grauman
Slide 53: Recognition of specific objects, scenes
Rothganger et al. 2003; Lowe 2002; Schmid and Mohr 1997; Sivic and Zisserman 2003
Slide credit: Kristen Grauman
Slide 54: Summary
Interest point detection: Harris corner detector; Laplacian of Gaussian with automatic scale selection.
Invariant descriptors: rotation according to dominant gradient direction; histograms for robustness to small shifts and translations (SIFT descriptor).
Slide 55: Questions?
Slide credit: Devi Parikh