Optimizing Average Precision (Ranking) Incorporating High-Order Information
Aim
Motivations and Challenges
High-Order Information: the action inside a bounding box alone can be ambiguous; context from other boxes helps.
HOB-SVM
Encoding high-order information (joint feature map)
Parameter learning
Ranking: sort the differences of max-marginal scores to get a single score per sample
Max-marginals capture high-order information
Dynamic graph cuts allow fast computation of max-marginals
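To make "scoring by differences of max-marginals" concrete, here is a brute-force sketch on a toy binary model. All numbers and the scoring function below are invented for illustration; the poster's point is that the paper computes these quantities efficiently with dynamic graph cuts rather than by enumeration.

```python
from itertools import product

def max_marginals(unary, pairs, coupling):
    # Brute-force max-marginals of a tiny binary model:
    # mm[i][c] = max over all joint labelings y with y_i = c of the joint score.
    n = len(unary)
    mm = [[float("-inf")] * 2 for _ in range(n)]
    for y in product((0, 1), repeat=n):
        score = sum(unary[i][y[i]] for i in range(n))
        score += sum(coupling for i, j in pairs if y[i] == y[j])
        for i in range(n):
            mm[i][y[i]] = max(mm[i][y[i]], score)
    return mm

# toy instance: 3 connected boxes (all values made up)
unary = [[0.0, 1.0], [0.0, -0.2], [0.5, 0.1]]
pairs = [(0, 1), (1, 2)]
mm = max_marginals(unary, pairs, coupling=0.3)

# HOB-SVM-style per-sample score: difference of max-marginals
scores = [mm[i][1] - mm[i][0] for i in range(len(unary))]
ranking = sorted(range(len(unary)), key=lambda i: -scores[i])
print(ranking)  # [0, 1, 2]
```

Because each max-marginal is computed over joint labelings of all connected boxes, the resulting per-sample score already reflects the high-order (same-image) interactions.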
HOAP-SVM

Encode ranking and high-order information (AP-SVM + HOB-SVM)
Parameter learning: non-convex -> difference of convex -> CCCP
Ranking: sort scores
Dynamic graph cuts give a fast upper bound
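The CCCP step can be illustrated on a toy difference-of-convex objective. The function below is made up for illustration and is not HOAP-SVM's actual objective; the scheme is the same: linearize the concave part at the current iterate and minimize the resulting convex upper bound.

```python
def cccp(argmin_u_linear, v_grad, x0, iters=25):
    # CCCP for f(x) = u(x) - v(x), with u and v convex:
    # repeatedly linearize v at the current point and minimize the
    # convex surrogate u(x) - g*x, where g is a subgradient of v.
    x = x0
    for _ in range(iters):
        g = v_grad(x)              # subgradient of the subtracted convex part
        x = argmin_u_linear(g)     # argmin_x u(x) - g * x (closed form here)
    return x

# toy DC objective: f(x) = x^2 - 4|x|, minimized at x = +/-2
argmin_u_linear = lambda g: g / 2.0            # argmin_x x^2 - g*x = g/2
v_grad = lambda x: 4.0 if x >= 0 else -4.0     # subgradient of 4|x|
x_star = cccp(argmin_u_linear, v_grad, x0=0.5)
print(x_star)  # 2.0
```

Each iteration monotonically decreases f, so CCCP converges to a local minimum of the non-convex objective, which is why HOAP-SVM's objective is only locally optimized.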
[Plot comparing SVM, AP-SVM, HOB-SVM, and HOAP-SVM]
Results
Action Classification
Problem formulation: given an image and a bounding box in it, predict the action being performed in the bounding box.
Dataset: PASCAL VOC 2011, 10 action classes, 4846 images (2424 ‘trainval’ + 2422 ‘test’).
Features: POSELET + GIST
High-Order Information: “Persons in the same image are likely to perform the same action.” Bounding boxes belonging to the same image are connected.
Conclusions
Learning to Rank using High-Order Information

Puneet K. Dokania¹, A. Behl², C. V. Jawahar², M. Pawan Kumar¹
¹Ecole Centrale Paris and INRIA Saclay, France; ²IIIT Hyderabad, India
[Illustration: two rankings, both with Accuracy = 1, but one with AP = 1 and the other with AP = 0.55]
Average Precision Optimization
AP is the most commonly used evaluation metric
AP loss depends on the ranking of the samples
Optimizing 0-1 loss may lead to suboptimal AP
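To see why AP depends on the full ranking rather than just the number of errors, a minimal AP computation (the toy labels below are invented, not from the paper):

```python
def average_precision(ranked_labels):
    # AP = mean of precision@k over the ranks k where a positive appears.
    hits, precisions = 0, []
    for rank, label in enumerate(ranked_labels, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions)

perfect = [1, 1, 1, 0, 0]   # all positives ranked above all negatives
swapped = [1, 0, 0, 1, 1]   # same labels, two positives ranked low

print(average_precision(perfect))   # 1.0
print(average_precision(swapped))   # ~0.7
```

The two lists contain the same labels, so any per-sample 0-1 loss cannot distinguish them; only a ranking-based loss such as 1 - AP does.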
Notations
Samples: X = {x_1, ..., x_n}
Labels: y_i, positive or negative
Set of positive samples: P; set of negative samples: N
Ranking matrix: R, with R_ij = 1 if x_i is ranked above x_j and -1 if below
Loss function: AP loss, Δ(R*, R) = 1 - AP(R)
AP-SVM
Key idea: use an SSVM to encode the ranking via a joint score,
F(X, R; w) = (1 / (|P||N|)) Σ_{i∈P} Σ_{j∈N} R_ij w^T (φ(x_i) - φ(x_j))
Parameter learning: optimizes AP (a measure of ranking); convex
Optimization: cutting plane -> most-violated constraint (greedy) -> O(|P||N|)
Ranking: sort sample scores
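The most-violated-constraint step can be sketched by brute force on a toy set. The scores and labels below are invented; AP-SVM's greedy procedure finds the same maximizer in O(|P||N|) without enumerating rankings.

```python
from itertools import permutations

def average_precision(order, labels):
    # AP of samples visited in ranked order.
    hits, precisions = 0, []
    for rank, idx in enumerate(order, start=1):
        if labels[idx] == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions)

def most_violated_ranking(scores, labels):
    # Cutting-plane oracle, brute-forced: argmax over rankings R of
    # AP loss (1 - AP) plus the normalized pairwise joint score F.
    P = [i for i, l in enumerate(labels) if l == 1]
    N = [i for i, l in enumerate(labels) if l == 0]
    best_order, best_val = None, float("-inf")
    for order in permutations(range(len(labels))):
        pos = {idx: r for r, idx in enumerate(order)}
        F = sum((1 if pos[i] < pos[j] else -1) * (scores[i] - scores[j])
                for i in P for j in N) / (len(P) * len(N))
        val = (1 - average_precision(order, labels)) + F
        if val > best_val:
            best_order, best_val = order, val
    return best_order, best_val

scores = [2.0, 1.5, 1.0, 0.5]   # made-up per-sample scores w^T phi(x_i)
labels = [1, 0, 1, 0]
order, violation = most_violated_ranking(scores, labels)
```

The returned ranking maximizes loss plus score, so it yields the tightest cutting plane to add to the working set at each iteration.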
HOB-SVM incorporates high-order information and optimizes a decomposable loss (for example, persons in the same image are likely to perform the same action); it does not directly produce a ranking, so max-marginals are used.
HOAP-SVM incorporates high-order information and optimizes an AP-based loss.
Methods

Method   | Loss         | High-Order Information | Ranking | Objective
SVM      | 0-1          | No                     | Yes     | Convex
AP-SVM   | AP-based     | No                     | Yes     | Convex
HOB-SVM  | Decomposable | Yes                    | Yes     | Convex
HOAP-SVM | AP-based     | Yes                    | Yes     | Non-convex (difference of convex)
AP does not decompose over samples; before this work, no method combined high-order information with ranking (AP) optimization.
HOAP-SVM: joint score similar to AP-SVM; sample scores similar to HOB-SVM (max-marginals).
Paired t-test:
HOB-SVM better than SVM in 6 action classes
HOB-SVM not better than AP-SVM
HOAP-SVM better than SVM in 6 action classes
HOAP-SVM better than AP-SVM in 4 action classes
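A paired t-test over per-class AP scores can be sketched in a few lines. The AP values below are hypothetical placeholders, not the paper's results.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(a, b):
    # t = mean(d) / (stdev(d) / sqrt(n)), where d are paired per-class
    # differences between the two methods' AP scores.
    d = [x - y for x, y in zip(a, b)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# hypothetical per-class AP values for two methods (NOT the paper's numbers)
ap_method_a = [0.62, 0.55, 0.71, 0.48, 0.66]
ap_method_b = [0.58, 0.50, 0.69, 0.47, 0.60]
t = paired_t_statistic(ap_method_a, ap_method_b)
```

One method is declared "better" on a class when the statistic exceeds the critical t-value for n-1 degrees of freedom at the chosen significance level.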
Code and Data: http://cvn.ecp.fr/projects/ranking-highorder/