
Generic Object Detection using Feature Maps

Oscar Danielsson (osda02@kth.se), Stefan Carlsson (stefanc@kth.se)

Outline

Detect all Instances of an Object Class

The classifier needs to be fast (on average). This is typically accomplished by:

Using image features that can be computed quickly

Using a cascade of increasingly complex classifiers (Viola and Jones, IJCV 2004)

Outline

Famous Object Detectors (1)

Dalal and Triggs (CVPR 05) use a dense Histogram of Oriented Gradients (HOG) representation: the window is tiled into (overlapping) sub-regions and the gradient orientation histograms from all sub-regions are concatenated. A linear SVM is used for classification.

Famous Object Detectors (2)

Felzenszwalb et al. (PAMI 10, CVPR 10) extend the Dalal and Triggs model to include high-resolution parts with flexible locations.

Famous Object Detectors (3)

Viola and Jones (IJCV 2004) construct a weak classifier by thresholding the response of a Haar filter (computed using integral images). Weak classifiers are combined into a strong classifier using AdaBoost.

Outline

Motivation

Corners

Corners + Blobs

Regions

Edges

Different object classes are characterized by different features. So we want to leave the choice of features up to the user.

Therefore we construct an object detector based on feature maps. Any feature detectors, in any combination, can be used to generate the feature maps.

Our Object Detector

We use AdaBoost to build a strong classifier. We construct a weak classifier by thresholding the distance from a measurement point to the closest occurrence of a given feature.
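As a concrete illustration, here is a minimal sketch of such a weak classifier. All names and the brute-force nearest-feature search are illustrative, not taken from the paper (which uses precomputed distance transforms):

```python
import numpy as np

def nearest_feature_distance(feature_map, point):
    """Distance from `point` (row, col) to the closest feature occurrence
    in a binary feature map (brute force, for illustration only)."""
    ys, xs = np.nonzero(feature_map)
    if len(ys) == 0:
        return float("inf")
    return float(np.min(np.hypot(ys - point[0], xs - point[1])))

def weak_classify(feature_map, point, threshold):
    """Weak classifier: +1 if the feature occurs within `threshold`
    of the measurement point, else -1."""
    return 1 if nearest_feature_distance(feature_map, point) <= threshold else -1

fmap = np.zeros((8, 8), dtype=int)
fmap[2, 2] = 1  # a single detected feature
print(weak_classify(fmap, (2, 4), 3.0))  # feature 2 px away -> 1
```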

Outline

Extraction of Training Data

Feature maps are extracted by some external feature detectors

Distance transforms are computed for each feature map

For each training window, the distances from each measurement point to the closest occurrence of the corresponding feature are concatenated into a vector
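The extraction steps above can be sketched as follows. The brute-force distance transform and the (map index, point) encoding of measurement points are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def distance_transform(feature_map):
    """Euclidean distance transform of a binary feature map, computed by
    brute force for clarity (a real implementation would use a linear-time
    distance-transform algorithm)."""
    ys, xs = np.nonzero(feature_map)
    h, w = feature_map.shape
    if len(ys) == 0:
        return np.full((h, w), np.inf)
    rr, cc = np.mgrid[0:h, 0:w]
    # Distance from every pixel to its nearest feature occurrence.
    d = np.sqrt((rr[..., None] - ys) ** 2 + (cc[..., None] - xs) ** 2)
    return d.min(axis=-1)

def extract_descriptor(feature_maps, measurement_points):
    """One training vector: for each (map index, (row, col)) measurement
    point, read off the distance to the nearest occurrence of that feature."""
    transforms = [distance_transform(m) for m in feature_maps]
    return np.array([transforms[k][p] for k, p in measurement_points])
```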

Training

Cascade → Strong Learner → Weak Learner → Decision Stump Learner

Require positive training examples and background images

Randomly sample background images to extract negative training examples

Loop:

Train strong classifier

Append strong classifier to current cascade

Run cascade on background images to harvest false positives

If the number of false positives is sufficiently small, stop

[Diagram labels: {f_i}, {I_j}; {f_i}, {c_i}, T; {f_i}, {c_i}, {d_i}]

Viola-Jones Cascade Construction

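The bootstrap loop of the cascade construction can be sketched as below. The `train_strong` and `run_cascade` callbacks are hypothetical stand-ins for the strong-learner training and the cascade evaluation on background images:

```python
import random

def train_cascade(positives, background_images, train_strong, run_cascade,
                  max_stages=10, enough_fp=10):
    """Viola-Jones-style cascade construction sketch (illustrative only)."""
    # Randomly sample background images to extract negative examples.
    negatives = [random.choice(background_images) for _ in range(len(positives))]
    cascade = []
    for _ in range(max_stages):
        stage = train_strong(positives, negatives)   # train strong classifier
        cascade.append(stage)                        # append to current cascade
        # Harvest false positives: background windows the cascade still accepts.
        negatives = run_cascade(cascade, background_images)
        if len(negatives) <= enough_fp:              # few false positives: stop
            break
    return cascade
```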

Training

Cascade → Strong Learner → Weak Learner → Decision Stump Learner

Require labeled training examples and number of rounds

Init. weights of training examples

For each round

Train weak classifier

Compute weight of weak classifier

Update weights of training examples

[Diagram labels: {f_i}, {c_i}, T; {f_i}, {I_j}; {f_i}, {c_i}, {d_i}]

AdaBoost

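The boosting loop above can be sketched as discrete AdaBoost. Here `train_weak` is an illustrative stand-in for the decision-tree weak learner described in the slides:

```python
import math

def adaboost(examples, labels, train_weak, rounds):
    """Discrete AdaBoost sketch: labels are +/-1, `train_weak` returns a
    classifier function given weighted examples."""
    n = len(examples)
    w = [1.0 / n] * n                                # init example weights
    strong = []
    for _ in range(rounds):
        h = train_weak(examples, labels, w)          # train weak classifier
        err = sum(wi for wi, x, y in zip(w, examples, labels) if h(x) != y)
        err = min(max(err, 1e-10), 1 - 1e-10)        # guard against 0 / 1
        alpha = 0.5 * math.log((1 - err) / err)      # weight of weak classifier
        strong.append((alpha, h))
        # Update example weights: boost the misclassified examples.
        w = [wi * math.exp(-alpha * y * h(x))
             for wi, x, y in zip(w, examples, labels)]
        s = sum(w)
        w = [wi / s for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in strong) >= 0 else -1
```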

Training

Cascade → Strong Learner → Weak Learner → Decision Stump Learner

Require labeled and weighted training examples

Compute node output

Train decision stump

Split training examples using decision stump

Evaluate stopping conditions

Train decision tree on left subset of training examples

Train decision tree on right subset of training examples

[Diagram labels: {f_i}, {c_i}, {d_i}; {f_i}, {I_j}; {f_i}, {c_i}, T]

Decision Tree Learner

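A sketch of the recursive tree construction following the slide's steps. The `train_stump` callback is a hypothetical stand-in for the decision stump learner, and the nested-dict tree representation is illustrative:

```python
def train_tree(examples, labels, weights, train_stump, depth=0, max_depth=2):
    """Decision-tree weak learner sketch: compute the node output, train a
    stump, split the examples, check stopping conditions, then recurse on
    the left and right subsets. Labels are +/-1."""
    pos = sum(w for w, y in zip(weights, labels) if y == 1)
    neg = sum(w for w, y in zip(weights, labels) if y == -1)
    node = {"output": 1 if pos >= neg else -1}       # weighted-majority output
    # Stopping conditions: pure node or maximum depth reached.
    if depth >= max_depth or pos == 0 or neg == 0:
        return node
    stump = train_stump(examples, labels, weights)   # train decision stump
    left = [i for i, x in enumerate(examples) if stump(x) <= 0]
    right = [i for i, x in enumerate(examples) if stump(x) > 0]
    if not left or not right:                        # stump failed to split
        return node
    node["stump"] = stump
    node["left"] = train_tree([examples[i] for i in left],
                              [labels[i] for i in left],
                              [weights[i] for i in left],
                              train_stump, depth + 1, max_depth)
    node["right"] = train_tree([examples[i] for i in right],
                               [labels[i] for i in right],
                               [weights[i] for i in right],
                               train_stump, depth + 1, max_depth)
    return node
```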

Training

Cascade → Strong Learner → Weak Learner → Decision Stump Learner

Require labeled and weighted training examples

For each measurement point

Compute a threshold by assuming exponentially distributed distances

Compute classification error after split

If the error is lower than the previous best, store the threshold and measurement point

[Diagram labels: {f_i}, {c_i}, {d_i}; {f_i}, {I_j}; {f_i}, {c_i}, T]

Feature and Threshold Selection
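One way to realize the exponential-distance assumption is to fit an exponential distribution to each class's distances at a measurement point and threshold where the weighted densities cross. This formula is an assumption for illustration; the exact derivation used in the paper may differ:

```python
import math

def stump_threshold(distances, labels, weights):
    """Threshold candidate under the assumption that distances are
    exponentially distributed within each class (labels are +/-1):
    fit a rate per class, then solve for the crossing point of the
    class-weighted densities w * lam * exp(-lam * d)."""
    def fit(cls):
        ws = [w for w, y in zip(weights, labels) if y == cls]
        ds = [d for d, y in zip(distances, labels) if y == cls]
        mass = sum(ws)
        mean = sum(w * d for w, d in zip(ws, ds)) / mass  # weighted mean distance
        return mass, 1.0 / max(mean, 1e-10)               # (class weight, rate)
    w_pos, lam_pos = fit(1)
    w_neg, lam_neg = fit(-1)
    if abs(lam_pos - lam_neg) < 1e-12:
        return float("inf")   # identical rates: densities never cross
    return math.log((w_pos * lam_pos) / (w_neg * lam_neg)) / (lam_pos - lam_neg)
```

With positives close to the feature and negatives far away, the crossing point lands between the two clusters, which is exactly where a stump should split.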

Outline

Hierarchical Detection

Evaluate an “optimistic” classifier on regions in search space. Split positive regions recursively.


Hierarchical Detection

[Figure: search space with axes (x, y, s) and the corresponding image space]

Each point in search space corresponds to a window in the image, which in turn gives a single location for the measurement point in the image.

Hierarchical Detection

[Figure: a region in search space and the corresponding set of image windows]

A region in search space corresponds to a set of windows in the image. This translates to a set of locations for the measurement point.

Hierarchical Detection

[Figure: bounding the feature distance over a search-space region]

We can then compute upper and lower bounds on the distance to the closest occurrence of the corresponding feature. Based on these bounds we construct an optimistic classifier.
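The recursive search over regions can be sketched in a branch-and-bound style. All four callbacks are illustrative stand-ins: `optimistic_accepts` plays the role of the optimistic classifier built from the distance bounds:

```python
def hierarchical_search(region, optimistic_accepts, split, is_atomic):
    """Hierarchical detection sketch: evaluate an optimistic classifier on a
    search-space region; if it cannot rule the region out, split the region
    and recurse until single windows remain."""
    if not optimistic_accepts(region):   # whole region rejected in one test
        return []
    if is_atomic(region):                # a single window: report a detection
        return [region]
    hits = []
    for sub in split(region):            # split positive regions recursively
        hits += hierarchical_search(sub, optimistic_accepts, split, is_atomic)
    return hits
```

The speed-up comes from the first branch: one optimistic evaluation can discard every window in a region at once.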

Outline

Experiments

Detection results were obtained on the ETHZ Shape Classes dataset, which was used for testing only. Training data was downloaded from Google Images: 106 apple logos, 128 bottles, 270 giraffes, 233 mugs and 165 swans.

Detections are counted as correct if A_intersect / A_union ≥ 0.2.

Features used: edges, corners, blobs, Kadir-Brady + SIFT + quantization
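The overlap criterion is the standard intersection-over-union test between a detection and a ground-truth box; a minimal sketch for axis-aligned boxes:

```python
def overlap_ratio(box_a, box_b):
    """A_intersect / A_union for boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# 60 / 140 ~= 0.43, so this detection counts as correct under the 0.2 rule:
print(overlap_ratio((0, 0, 10, 10), (4, 0, 14, 10)))
```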

Results

Real AdaBoost is slightly better than Discrete and Gentle AdaBoost

Results

Decision tree weak classifiers should be shallow

Results

Using all features is better than using only edges

Results

Using the asymmetric weighting scheme of Viola and Jones yields a slight improvement

Results

[Result plots: Applelogos, Bottles, Mugs, Swans]

Results

Hierarchical search yields a significant speed-up

Outline

Conclusion

Proposed an object detection scheme based on feature maps

Used distances from measurement points to the nearest feature occurrence in the image to construct weak classifiers for boosting

Showed promising detection performance on the ETHZ Shape Classes dataset

Showed that a hierarchical detection scheme can yield significant speed-ups

Thanks for listening!

Famous Object Detectors (4)

Laptev (IVC 09) constructs a weak classifier using a linear discriminant on a histogram of oriented gradients (HOG, computed via integral histograms) from a sub-region of the window. Again, weak classifiers are combined into a strong classifier using AdaBoost.