Slide 1: Reconstruction

Slide 2: Fundamental matrix

Slide 3: Fundamental matrix result
Slide 4: Properties of the Fundamental Matrix

l' = Fx is the epipolar line associated with x.
l = F^T x' is the epipolar line associated with x'.

Slide 5: Properties of the Fundamental Matrix

l' = Fx is the epipolar line associated with x.
l = F^T x' is the epipolar line associated with x'.
Fe = 0 and F^T e' = 0: all epipolar lines contain the epipole.

Slide 6: Properties of the Fundamental Matrix

l' = Fx is the epipolar line associated with x.
l = F^T x' is the epipolar line associated with x'.
Fe = 0 and F^T e' = 0: all epipolar lines contain the epipole.
F is rank 2.
Slide 7: Why is F rank 2?

F is a 3 x 3 matrix.
But there are vectors c1 and c2 (the epipoles) such that Fc1 = 0 and F^T c2 = 0, so F has a non-trivial null space and its rank is at most 2.
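A quick numeric check of this argument (the camera parameters below are made up for illustration): F = K2^-T [t]x R K1^-1 inherits rank 2 from the cross-product matrix [t]x, whose null space is spanned by t itself.

```python
import numpy as np

def cross_matrix(t):
    """Skew-symmetric [t]x such that cross_matrix(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

t = np.array([1.0, 2.0, 0.5])
R = np.eye(3)                      # any rotation works; identity keeps it short
K = np.diag([800.0, 800.0, 1.0])   # hypothetical intrinsics

F = np.linalg.inv(K).T @ cross_matrix(t) @ R @ np.linalg.inv(K)
```

Here `np.linalg.matrix_rank(F)` is 2 even though F is 3 x 3, and F @ (K @ t) is the zero vector, exhibiting the null vector c1.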
Slide 8: Estimating F

If we don't know K1, K2, R, or t, can we estimate F for two images?
Yes, given enough correspondences.
Slide 9: Estimating F – 8-point algorithm

The fundamental matrix F is defined by x'^T F x = 0 for any pair of matches x and x' in two images.
Let x = (u, v, 1)^T and x' = (u', v', 1)^T. Each match gives a linear equation in the entries of F:

u'u f11 + u'v f12 + u' f13 + v'u f21 + v'v f22 + v' f23 + u f31 + v f32 + f33 = 0
Slide 10: 8-point algorithm

Stacking one such equation per match gives a linear system Af = 0, where f holds the nine entries of F.
In reality, instead of solving Af = 0 exactly, we seek the f with ||f|| = 1 that minimizes ||Af||^2: the least eigenvector of A^T A.
Slide 11: 8-point algorithm – Problem?

F should have rank 2.
To enforce that F is of rank 2, F is replaced by the F' that minimizes ||F - F'|| subject to the rank constraint. This is achieved by SVD: let F = U diag(s1, s2, s3) V^T with s1 >= s2 >= s3; then F' = U diag(s1, s2, 0) V^T is the solution.
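The two steps above can be sketched in NumPy (function and variable names are mine): build A from the matches, take the least right singular vector of A, then enforce rank 2 by zeroing the smallest singular value.

```python
import numpy as np

def eight_point(pts1, pts2):
    """pts1, pts2: (N, 2) arrays of matched points x and x', N >= 8."""
    u, v = pts1[:, 0], pts1[:, 1]
    up, vp = pts2[:, 0], pts2[:, 1]
    # One row per match, from expanding x'^T F x = 0
    A = np.column_stack([up * u, up * v, up,
                         vp * u, vp * v, vp,
                         u, v, np.ones_like(u)])
    # Least eigenvector of A^T A = last right singular vector of A
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Rank-2 enforcement via SVD: zero the smallest singular value
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```

On noise-free synthetic correspondences the recovered F satisfies x'^T F x ≈ 0 for every match and has rank exactly 2.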
Slide 12: Recovering camera parameters from F / E

Can we recover R and t between the cameras from F?
No: K1 and K2 are in principle arbitrary matrices.
What if we knew K1 and K2 to be identity?
Slide 13: Recovering camera parameters from E

t is a solution to E^T x = 0.
We can't distinguish between t and ct for a constant scalar c.
How do we recover R?
Slide 14: Recovering camera parameters from E

We know E and t.
Consider taking the SVD of E and of [t]x.
Slide 15: Recovering camera parameters from E

t is a solution to E^T x = 0.
We can't distinguish between t and ct for a constant scalar c.
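One way to carry out the SVD step suggested in the last two slides (this is the standard textbook factorization, not something specific to these slides): E factors as [t]x R with t = ±u3, the last column of U, and R = U W V^T or U W^T V^T.

```python
import numpy as np

def decompose_essential(E):
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations: U and V must have determinant +1
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]   # recovered only up to sign and scale
    return R1, R2, t
```

This yields four candidate pairs (R1, ±t) and (R2, ±t); the physically correct one is usually chosen by triangulating a point and requiring it to lie in front of both cameras (the cheirality test).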
Slide 16: 8-point algorithm

Pros: it is linear, easy to implement, and fast.
Cons: susceptible to noise.
Degenerate: if the points are on the same plane.

Normalized 8-point algorithm (Hartley):
- Position the origin at the centroid of the image points.
- Rescale coordinates so that the distance from the center to the farthest point is sqrt(2).
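Hartley's normalization step can be sketched as follows (names are mine; a common formulation, used here, rescales so that the *mean* distance from the centroid is sqrt(2)).

```python
import numpy as np

def normalize_points(pts):
    """pts: (N, 2) pixel coordinates. Returns the normalized (N, 2) points and
    the 3x3 transform T that maps homogeneous input points to them."""
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    # Scale so the mean distance from the origin becomes sqrt(2)
    scale = np.sqrt(2) / np.mean(np.linalg.norm(centered, axis=1))
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    return centered * scale, T
```

With T1 and T2 from the two images, one estimates F_hat on the normalized points and undoes the normalization with F = T2^T @ F_hat @ T1.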
Slide 17: Structure-from-motion

Given a bunch of uncalibrated images of a scene:
- Recover the camera parameters
- Recover the 3D scene structure

Pipeline: start from correspondences, estimate E, recover the camera parameters, then solve for 3D points using (multi-view) stereo.
Where do the correspondences come from?
Slide 18: The correspondence problem
Slide 19: Till now

Geometry of image formation.
Stereo reconstruction:
- Given 3D-2D correspondences, find K, R, t
- Given 2 images, correspondences, K, R, t, find the 3D points
- Given 2 images and correspondences, find F, E, R, t, and the 3D points
Slide 21: Other applications of correspondence

- Image alignment
- Motion tracking
- Robot navigation
Slide 22: Correspondence can be challenging

(Credit: Fei-Fei Li)
Slide 23: Correspondence

(Photos by Diva Sian and swashford)
Slide 24: Harder case

(Photos by Diva Sian and scgbt)
Slide 25: Harder still?
Slide 26: NASA Mars Rover images

With SIFT feature matches.
Answer below (look for tiny colored squares…)
Slide 27: Sparse vs dense correspondence

Sparse correspondence: produce a few, high-confidence matches.
- Good enough for estimating pose or the relationship between cameras.
Dense correspondence: try to match every pixel.
- Needed if we want the 3D location of every pixel.
Slide 28: Sparse correspondence

Which pixels should we be searching for correspondences for?
Feature points / keypoints.
Slide 29: Snoop demo

What makes a good feature point?
Slide 30: Characteristics of good feature points

Repeatability / invariance:
- The same feature point can be found in several images despite geometric and photometric transformations.
Saliency / distinctiveness:
- Each feature point is distinctive.
- Fewer "false" matches.
Slide 31: Goal: repeatability

We want to detect (at least some of) the same points in both images.
Yet we have to be able to run the detection procedure independently per image.
Otherwise there is no chance to find true matches!
(Credit: Kristen Grauman)
Slide 32: Goal: distinctiveness

The feature point should be distinctive enough that it is easy to match.
It should at least be distinctive from other patches nearby.
Slide 33: The correspondence problem
Slide 34: What does an image look like?

[Figure: an image patch shown as a grid of pixel intensity values, e.g. 71, 109, 61, 71, 86, …]
Slide 35: The aperture problem

Slide 36: The aperture problem

Individual pixels are ambiguous.
Idea: look at the whole patch!
[Figure: the same grid of intensity values, with a patch outlined]
Slide 39: The aperture problem

Some local neighborhoods are ambiguous.
Slide 40: The aperture problem
Slide 41: Sparse correspondences

For many applications, a few good correspondences suffice:
- Camera calibration
- Estimating the essential matrix
- Reconstructing a sparse cloud of 3D points
Approach: detect points that will produce good correspondences, then match the detected points from both images.
Slide 42: Interest point detectors

Informative: must be able to reliably match from two views.
Reproducible: must be detected in both views.
Slide 43: Harris corner detector

An example of an interest point detector.
Main idea: translating the patch should cause large differences.
Slide 44: Corner Detection: Basic Idea

We should easily recognize the point by looking through a small window.
Shifting the window in any direction should give a large change in intensity:
- "edge": no change along the edge direction
- "corner": significant change in all directions
- "flat" region: no change in any direction
(Source: A. Efros)
Slide 45: Corner detection: the math

Consider shifting the window W by (u, v): how do the pixels in W change?
Write the pixels in the window as a vector.
Slide 46: Corner detection: the math

Consider shifting the window W by (u, v): how do the pixels in W change?
Compare each pixel before and after by summing up the squared differences (SSD).
This defines an SSD "error" E(u, v):

E(u, v) = sum over (x, y) in W of [I(x + u, y + v) - I(x, y)]^2

We want E(u, v) to be as high as possible for all u, v!
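The SSD "error" defined above can be computed directly (a brute-force sketch; the window here is a (2*half+1)-pixel square centered at (x0, y0)).

```python
import numpy as np

def ssd_error(I, x0, y0, u, v, half=2):
    """E(u, v): squared difference between the window at (x0, y0) and the
    same window shifted by (u, v)."""
    W0 = I[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1].astype(float)
    W1 = I[y0 + v - half:y0 + v + half + 1,
           x0 + u - half:x0 + u + half + 1].astype(float)
    return np.sum((W1 - W0) ** 2)
```

In a flat region E(u, v) stays 0 for every shift; at a corner it is large in all directions.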
Slide 47: Corner Detection: Mathematics

Change in appearance of the window w(x, y) for the shift [u, v]:

E(u, v) = sum over (x, y) of w(x, y) [I(x + u, y + v) - I(x, y)]^2

[Figure: an image I(x, y) and the resulting error surface E(u, v), with the value E(3, 2) highlighted]
Slide 48: Corner Detection: Mathematics

Change in appearance of the window w(x, y) for the shift [u, v]:

E(u, v) = sum over (x, y) of w(x, y) [I(x + u, y + v) - I(x, y)]^2

where I(x + u, y + v) is the shifted intensity, I(x, y) is the intensity, and the window function w(x, y) is either 1 inside the window and 0 outside, or a Gaussian.
(Source: R. Szeliski)
Slide 49: Corner Detection: Mathematics

Change in appearance of the window w(x, y) for the shift [u, v]:

E(u, v) = sum over (x, y) of w(x, y) [I(x + u, y + v) - I(x, y)]^2

[Figure: the same image and error surface, with E(0, 0) highlighted; the error is zero for no shift]
Slide 50: Corner Detection: Mathematics

We want to find out how this function behaves for small shifts.
Change in appearance of the window w(x, y) for the shift [u, v]:

E(u, v) = sum over (x, y) of w(x, y) [I(x + u, y + v) - I(x, y)]^2
Slide 51: Small motion assumption

Taylor series expansion of I:

I(x + u, y + v) ≈ I(x, y) + Ix u + Iy v

If the motion (u, v) is small, then the first-order approximation is good.
Plugging this into the formula on the previous slide…
Slide 52: Corner detection: the math

Consider shifting the window W by (u, v), and define the SSD "error" E(u, v).
With the small-motion approximation:

E(u, v) ≈ sum over W of (Ix u + Iy v)^2
Slide 53: Corner detection: the math

Consider shifting the window W by (u, v), and define the "error" E(u, v).
Collecting terms:

E(u, v) ≈ [u v] M [u v]^T, where M = sum over W of [Ix^2, Ix Iy; Ix Iy, Iy^2]

Thus, E(u, v) is locally approximated as a quadratic error function.
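The matrix in this quadratic approximation can be built from image gradients; a small numeric check (the example image is mine):

```python
import numpy as np

def second_moment_matrix(I, x0, y0, half=2):
    """M summed over a (2*half+1)-pixel window centered at (x0, y0)."""
    Iy, Ix = np.gradient(I.astype(float))   # np.gradient returns (d/dy, d/dx)
    sl = np.s_[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    gx, gy = Ix[sl], Iy[sl]
    return np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                     [np.sum(gx * gy), np.sum(gy * gy)]])
```

For a pure intensity ramp I(x, y) = 2x + 3y, every gradient is (2, 3), so M is rank deficient: there is a direction with Mx = 0 (perpendicular to the gradient), matching the "window can slide without changing appearance" intuition.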
Slide 54: Interpreting the second moment matrix

M is called the second moment matrix.
Recall that we want E(u, v) to be as large as possible for all u, v.
What does this mean in terms of M?
Slide 55: Interpreting the second moment matrix

Solutions to Mx = 0 are directions for which E is 0: the window can slide in this direction without changing appearance.
Slide 56: Interpreting the second moment matrix

Solutions to Mx = 0 are directions for which E is 0: the window can slide in this direction without changing appearance.
For corners, we want no such directions to exist.
Slide 57: [Figure: error surfaces E(u, v) plotted over (u, v) for several patches, ranging from almost flat to steep in all directions]
Slide 58: Eigenvalues and eigenvectors of M

Mx = 0: x is an eigenvector of M with eigenvalue 0.
M is 2 x 2, so it has 2 eigenvalues λmax, λmin with eigenvectors xmax, xmin (eigenvectors have unit norm).

Slide 59: Eigenvalues and eigenvectors of M

The eigenvectors define the shift directions with the smallest and largest change in error:
- xmax = direction of largest increase in E; λmax = amount of increase in direction xmax
- xmin = direction of smallest increase in E; λmin = amount of increase in direction xmin
Slide 60: Interpreting the eigenvalues

- λmax and λmin are both large: E is very high in all directions → corner
- λmax is large but λmin is small: E remains close to 0 along xmin → edge
- λmax and λmin are small: E is almost 0 in all directions → flat patch
Slide 61: Corner detection: the math

How are λmax, xmax, λmin, and xmin relevant for feature detection?
We need a feature scoring function.
We want E(u, v) to be large for small shifts in all directions:
- the minimum of E(u, v) over all unit vectors [u v] should be large
- this minimum is given by the smaller eigenvalue (λmin) of M
Slide 62: Corner detection summary

Here's what you do:
1. Compute the gradient at each point in the image.
2. Create the M matrix from the entries in the gradient.
3. Compute the eigenvalues.
4. Find points with large response (λmin > threshold).
5. Choose those points where λmin is a local maximum as features.
Slide 63: Corner detection summary

Here's what you do:
1. Compute the gradient at each point in the image.
2. Create the H matrix (the second moment matrix, now denoted H) from the entries in the gradient.
3. Compute the eigenvalues.
4. Find points with large response (λmin > threshold).
5. Choose those points where λmin is a local maximum as features.
Slide 64: The Harris operator

f = det(H) / trace(H) = λ1 λ2 / (λ1 + λ2) is a variant of the "Harris operator" for feature detection.
- The trace is the sum of the diagonals, i.e., trace(H) = h11 + h22.
- Very similar to λmin but less expensive (no square root).
- Called the "Harris Corner Detector" or "Harris Operator" (actually the Noble variant of the Harris Corner Detector).
- Lots of other detectors exist; this is one of the most popular.
Slide 65: Corner response function

[Figure: response regions in the (λ1, λ2) plane labeled "corner", "edge", and "flat patch"]
Slide 66: The Harris operator

[Figure: an image and its Harris operator response]
Slide 67: Harris Detector [Harris88]

The second moment matrix, computed in five steps:

1. Image derivatives: Ix, Iy
2. Square of derivatives: Ix^2, Iy^2, Ix Iy
3. Gaussian filter g(·): g(Ix^2), g(Iy^2), g(Ix Iy)
4. Cornerness function (both eigenvalues are strong):
   har = det[H(x, y)] - α [trace(H(x, y))]^2
       = g(Ix^2) g(Iy^2) - [g(Ix Iy)]^2 - α [g(Ix^2) + g(Iy^2)]^2
5. Non-maxima suppression (optionally, blur first)
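The five steps above can be sketched with NumPy alone (simplifications are mine: a box window stands in for the Gaussian of step 3, and α = 0.05).

```python
import numpy as np

def harris(I, alpha=0.05, win=2, thresh=1e4):
    Iy, Ix = np.gradient(I.astype(float))          # 1. image derivatives
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy      # 2. products of derivatives

    def smooth(A):                                  # 3. windowed sum (box filter
        out = np.zeros_like(A)                      #    standing in for Gaussian)
        H, W = A.shape
        for y in range(win, H - win):
            for x in range(win, W - win):
                out[y, x] = A[y - win:y + win + 1, x - win:x + win + 1].sum()
        return out

    Sxx, Syy, Sxy = smooth(Ixx), smooth(Iyy), smooth(Ixy)
    har = Sxx * Syy - Sxy ** 2 - alpha * (Sxx + Syy) ** 2   # 4. cornerness
    corners = []                                    # 5. non-maxima suppression
    H, W = har.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            if har[y, x] > thresh and har[y, x] == har[y - 1:y + 2, x - 1:x + 2].max():
                corners.append((x, y))
    return har, corners
```

On a synthetic image containing a single L-shaped corner, the cornerness is large and positive only near the corner, negative along the edges, and near zero on flat regions.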
Slide 68: Weighting the derivatives

In practice, using a simple window W doesn't work too well.
Instead, we'll weight each derivative value based on its distance from the center pixel.
Slide 69: Harris detector example

Slide 70: f value (red high, blue low)

Slide 71: Threshold (f > value)

Slide 72: Find local maxima of f

Slide 73: Harris features (in red)
Slide 74: (Slide from Tinne Tuytelaars; Lindeberg et al., 1996)
Slides 75–79: [Figures]

Slide 80: Implementation

Instead of computing f for larger and larger windows, we can implement detection using a fixed window size on a Gaussian pyramid.
Slide 81: Feature extraction: Corners and blobs
Slide 82: Another common definition of f

The Laplacian of Gaussian (LoG), which is very similar to a Difference of Gaussians (DoG), i.e. a Gaussian minus a slightly smaller Gaussian.
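A numeric illustration of that similarity (1-D for brevity; the scales chosen here are mine): a Difference of Gaussians, rescaled by (k-1)·s^2, closely tracks the Laplacian of Gaussian.

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 1001)
s, k = 1.0, 1.1   # base scale and DoG scale ratio (demo values)

def gaussian(x, sigma):
    return np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# Analytic second derivative of the Gaussian (the 1-D "Laplacian")
log_kernel = (x ** 2 - s ** 2) / s ** 4 * gaussian(x, s)
# DoG: a Gaussian minus a slightly smaller Gaussian, rescaled by (k-1)*s^2
dog_kernel = (gaussian(x, k * s) - gaussian(x, s)) / ((k - 1) * s ** 2)
max_rel_err = np.abs(dog_kernel - log_kernel).max() / np.abs(log_kernel).max()
```

With k = 1.1 the two kernels agree to within roughly ten percent of the peak magnitude; the approximation tightens as k approaches 1.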
Slide 83: Scale selection

At what scale does the Laplacian achieve a maximum response for a binary circle of radius r?
[Figure: a binary circle of radius r, the image, and its Laplacian response across scales]
Slide 84: Laplacian of Gaussian

A "blob" detector: find maxima and minima of the LoG operator in space and scale.
[Figure: image * LoG = response; blobs appear as maxima and minima of the response]
Slide 85: Characteristic scale

The characteristic scale is the scale that produces the peak of the Laplacian response.
T. Lindeberg (1998). "Feature detection with automatic scale selection." International Journal of Computer Vision 30 (2): pp. 77–116.
Slide 86: Find local maxima in position-scale space

Search over positions (x, y) and a stack of scales σ1 … σ5; keep a list of (x, y, σ) extrema.
(K. Grauman, B. Leibe)
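The position-scale search can be sketched as a brute-force loop (this assumes SciPy is available for the LoG filtering): compute the scale-normalized LoG response at each scale, then keep points that are extrema among their 26 neighbors in (x, y, σ).

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def detect_blobs(I, sigmas, thresh):
    # s^2 factor = scale normalization of the LoG response
    stack = np.array([s ** 2 * gaussian_laplace(I.astype(float), s)
                      for s in sigmas])
    blobs = []
    S, H, W = stack.shape
    for i in range(1, S - 1):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                nbhd = stack[i - 1:i + 2, y - 1:y + 2, x - 1:x + 2]
                v = stack[i, y, x]
                if abs(v) > thresh and (v == nbhd.max() or v == nbhd.min()):
                    blobs.append((x, y, sigmas[i]))
    return blobs
```

For a bright disk of radius r, the strongest response lands near σ = r/√2, i.e. the characteristic scale from the previous slides.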
Slides 87–89: Scale-space blob detector: Example
Slide 90: Matching feature points

We know how to detect good points.
Next question: how do we match them?
Two interrelated questions:
- How do we describe each feature point?
- How do we match descriptions?
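One standard answer to the second question, sketched here as an illustration rather than anything these slides prescribe: compare descriptors with SSD and keep only matches that pass a nearest-vs-second-nearest ratio test (the 0.8 threshold is my choice).

```python
import numpy as np

def match_descriptors(D1, D2, ratio=0.8):
    """D1: (N, d) and D2: (M, d) descriptor arrays, M >= 2.
    Returns a list of index pairs (i, j) meaning D1[i] matches D2[j]."""
    matches = []
    for i, d in enumerate(D1):
        dists = np.sum((np.asarray(D2) - d) ** 2, axis=1)   # SSD to every D2
        order = np.argsort(dists)
        # Accept only if the best match is clearly better than the runner-up
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The ratio test discards ambiguous keypoints: if the two best candidates are nearly equally good, the descriptor is not distinctive enough to trust, which is exactly the distinctiveness goal from Slide 32.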