Slide1
Robust Winners and Winner Determination Policies under Candidate Uncertainty
Joel Oren, University of Toronto. Joint work with Craig Boutilier, Jérôme Lang, and Héctor Palacios.

Slide2
Motivation – Winner Determination under Candidate Uncertainty

A committee with preferences over alternatives: prospective projects, goals.
Determining availabilities is costly: market research for determining the feasibility of a project – engineering estimates, surveys, focus groups, etc.
The "best" alternative depends on which ones are available.

Example (plurality):

  4 voters: a > b > c
  3 voters: b > c > a
  2 voters: c > a > b

With all candidates available, a wins. Under candidate uncertainty the winner is unknown: for instance, if a is unavailable the winner is b, and if only c is available the winner is c.
Slide3
Efficient Querying Policies for Winner Determination

Voters submit votes in advance. Query candidates sequentially, until enough is known to determine the winner.

Example (plurality):

  4 voters: a > b > c
  3 voters: b > c > a
  2 voters: c > a > b

After querying a and b and learning that both are available, a wins regardless of c's availability (with c removed, a's plurality score only grows), so c need not be queried.
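The sequential-querying idea rests on recomputing the winner over whichever candidates turn out to be available. A minimal sketch for the plurality example above (the lexicographic tie-breaking is an assumption; the slides do not specify one):

```python
def plurality_winner(profile, available):
    """Plurality winner of `profile` restricted to `available`.
    Each vote counts for its highest-ranked available candidate;
    ties are broken lexicographically (an assumption for this sketch)."""
    scores = {c: 0 for c in available}
    for ranking in profile:
        top = next(c for c in ranking if c in available)
        scores[top] += 1
    return min(scores, key=lambda c: (-scores[c], c))

# The profile from the slide: 4 voters a>b>c, 3 voters b>c>a, 2 voters c>a>b.
profile = [list("abc")] * 4 + [list("bca")] * 3 + [list("cab")] * 2
```

With every candidate available this returns a; if b is unavailable, the three b-voters transfer to c and c wins with 5 votes – which is why learning that both a and b are available is exactly what settles the election.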
Slide4
The Formal Model

A set C of candidates. A vector v of rankings (a preference profile).
The set C is partitioned into:
  Y – the candidates whose availability is known a priori.
  U – the "unknown" set; each candidate c in U is available with probability p_c.
Voting rule r: applied to the profile restricted to the available set A, r(v|_A) is the election winner.

[Figure: an example profile of 3 and 2 voters over candidates a, b, c, with the candidates split between Y (available) and U (unknown).]
Slide5
Querying & Decision Making

At iteration t, submit a query q(x) for some x in U.
Information set I_t: the query answers received so far; the initial available set is Y.
Upon querying candidate x:
  If available: add x to the available set A.
  If unavailable: remove x from consideration.
v|_A – the restriction of the preference profile to the candidate set A.
Stop when the information set is r-sufficient – no additional querying can change r(v|_A), the "robust" winner.

[Figure: the example profile again; the unknown candidates carry availability probabilities 0.5, 0.7, and 0.4.]
Slide6
Computing a Robust Winner

Robust winner: given ⟨v, Y, U⟩, candidate x is a robust winner if r(v|_{Y ∪ S}) = x for every subset S ⊆ U.

A related question in voting [destructive control by candidate addition]: given a candidate set C, a disjoint spoiler set D, a preference profile v over C ∪ D, a candidate x, and a voting rule r – is there a subset D' ⊆ D such that x does not win the election over C ∪ D'?

Proposition: Candidate x is a robust winner iff there is no destructive control against x, where the spoiler set is U.
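Per the definition above, robust winnership can be checked by brute force over all subsets of U. A sketch under plurality (illustrative only – enumeration is exponential in |U|, and the hardness results on the next slide show this is unavoidable for some rules):

```python
from itertools import combinations

def plurality_winner(profile, available):
    """Plurality winner restricted to `available`; lexicographic tie-break (assumed)."""
    scores = {c: 0 for c in available}
    for ranking in profile:
        scores[next(c for c in ranking if c in available)] += 1
    return min(scores, key=lambda c: (-scores[c], c))

def is_robust_winner(x, profile, Y, U):
    """True iff x wins for every availability outcome S of the unknown set U."""
    for k in range(len(U) + 1):
        for S in combinations(U, k):
            if plurality_winner(profile, set(Y) | set(S)) != x:
                return False
    return True
```

On the earlier 4/3/2 profile, a is a robust winner when Y = {a, b} and U = {c}, but not when Y = {a} and U = {b, c}: the outcome S = {c} makes c the winner – a successful destructive control by adding c.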
Slide7
Computing a Robust Winner

Proposition: Candidate x is a robust winner iff there is no destructive control against x, where the spoiler set is U.
Implication: for Plurality, Bucklin, and Ranked Pairs, deciding robust winnership is coNP-complete; for Copeland and Maximin it is polytime tractable.
Additional results: checking whether x is a robust winner under Top Cycle, Uncovered Set, and Borda can be done in polynomial time.
Top Cycle & Uncovered Set: we prove useful criteria on the corresponding majority graph.
Slide8
The Query Policy

Goal: design a policy for finding the correct winner. A policy can be represented by a decision tree.
Example for the vote profile (plurality): abcde, abcde, adbec, bcaed, bcead, cdeab, cbade, cdbea.

[Figure: a decision tree over U = {a, b, c, d} – query a first, then b, c, or d depending on the answers; each leaf is labelled "a wins", "b wins", or "c wins".]
Slide9
Winner Determination Policies as Trees

r-Sufficient tree: the information set at each leaf is r-sufficient, and each leaf is correctly labelled with the winner.
c(x) – the cost of querying candidate x at a node.
The cost of a policy is its expected query cost, taken over the distribution of availability outcomes.

[Figure: an example decision tree with leaves labelled "a wins", "b wins", and "c wins".]
Slide10
Recursively Finding Optimal Decision Trees

Cost of a tree: the expected total cost of the queries along its root-to-leaf paths.
For each node – a "training set": the possible true underlying available sets A that agree with the node's information set.
Can be solved with a dynamic-programming approach over information sets.
Running time: exponential in |U| – computationally heavy.

[Figure: an example decision tree with leaves labelled "a wins", "b wins", and "c wins".]
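A memoized sketch of the recursion, assuming unit query costs and plurality (the names and exact recurrence are illustrative, not the paper's implementation): an information set costs 0 if it is already r-sufficient; otherwise it costs 1 plus the probability-weighted cost of the two possible answers, minimized over the candidates left to query.

```python
from functools import lru_cache
from itertools import combinations

def plurality(profile, available):
    """Plurality winner restricted to `available`; lexicographic tie-break (assumed)."""
    scores = {c: 0 for c in available}
    for ranking in profile:
        scores[next(c for c in ranking if c in available)] += 1
    return min(scores, key=lambda c: (-scores[c], c))

def optimal_expected_cost(profile, Y, U, p):
    """Expected number of queries of an optimal policy (unit costs, plurality)."""
    U = tuple(sorted(U))

    def winners(avail, unavail):
        # All winners still possible given the answers recorded so far.
        rest = [c for c in U if c not in avail and c not in unavail]
        ws = set()
        for k in range(len(rest) + 1):
            for S in combinations(rest, k):
                ws.add(plurality(profile, set(Y) | set(avail) | set(S)))
        return ws

    @lru_cache(maxsize=None)
    def opt(avail, unavail):
        if len(winners(avail, unavail)) == 1:   # r-sufficient: stop querying
            return 0.0
        best = float("inf")
        for x in U:
            if x in avail or x in unavail:
                continue
            cost = 1 + p[x] * opt(tuple(sorted(avail + (x,))), unavail) \
                     + (1 - p[x]) * opt(avail, tuple(sorted(unavail + (x,))))
            best = min(best, cost)
        return best

    return opt((), ())
```

On the 4/3/2 profile with Y = {a} and U = {b, c} at probability 0.5 each, the optimal expected cost is 1.5: one query settles the winner with probability 0.5, and a second is needed otherwise.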
Slide11
Myopically Constructing Decision Trees

Well-known approach: maximize information gain at every node until pure training sets – the leaves – are reached (as in C4.5).
Myopic step: query the candidate with the highest "information gain" (the decrease in entropy of the training set).
Running time: much lower than the full dynamic program.
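The myopic criterion can be sketched abstractly. Treat the training set as a probability distribution over eventual winners; the gain from querying x compares the entropy before the query with the expected entropy after its two possible answers (the function names here are illustrative):

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a {winner: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def info_gain(prior, p_avail, post_avail, post_unavail):
    """Expected entropy reduction from a query answered 'available' w.p. p_avail."""
    expected_after = p_avail * entropy(post_avail) + (1 - p_avail) * entropy(post_unavail)
    return entropy(prior) - expected_after
```

A query that fully resolves a 50/50 race between two possible winners has gain 1 bit; the myopic policy queries the candidate maximizing this quantity at each node.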
Slide12
Empirical Results

100 votes; availability probabilities p ∈ {0.3, 0.5, 0.9}.
Dispersion parameter φ (φ = 1 gives the uniform distribution). Tested for Plurality, Borda, Copeland.
Preference profiles drawn i.i.d. from the Mallows φ-distribution: a ranking's probability decreases exponentially with its distance from a "reference" ranking.
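The Mallows φ-model assigns each ranking a probability proportional to φ raised to its Kendall-tau distance from the reference ranking. A small sketch (exhaustive enumeration, so only practical for a handful of candidates):

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of candidate pairs on which the two rankings disagree."""
    pos = {c: i for i, c in enumerate(r2)}
    return sum(1 for i in range(len(r1)) for j in range(i + 1, len(r1))
               if pos[r1[i]] > pos[r1[j]])

def mallows_pmf(reference, phi):
    """P(r) proportional to phi**d(r, reference), normalized over all rankings."""
    rankings = list(permutations(reference))
    weights = [phi ** kendall_tau(r, tuple(reference)) for r in rankings]
    z = sum(weights)
    return {r: w / z for r, w in zip(rankings, weights)}
```

At φ = 1 the model is the uniform distribution over rankings; as φ → 0 it concentrates on the reference ranking, matching the description of probabilities decaying exponentially in the distance.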
  Method             p=0.3  p=0.5  p=0.9
  Plurality, DP       4.1    3.4    2.7
  Plurality, Myopic   4.1    3.5    2.8
  Borda, DP           3.7    2.7    1.7
  Borda, Myopic       3.7    2.7    1.7
Average cost (# of queries)

Slide13
Empirical Results

Cost decreases as p increases – there is less uncertainty about the set of available candidates.
Myopic performed very close to the optimal DP algorithm.
Not shown: cost increases with the dispersion parameter φ – "noisier", more diverse preferences.
Approximation: the recursion can be stopped early, once the training set is (almost) pure.
Slide14
Additional Results

Query complexity: the expected number of queries under a worst-case preference profile.
Result: for Plurality, Borda, and Copeland, we bound the worst-case expected query complexity.
Simplified policies: assume p_c = p for all c in U. Then there is a simple iterative query policy that is asymptotically optimal.
Slide15
Conclusions & Future Directions

A framework for querying candidates under a probabilistic availability model. Connections to control of elections. Two algorithms for generating decision trees: DP and Myopic.
Future directions:
  Ways of pruning the decision trees (depending on the voting rule).
  Sample-based methods for reducing training-set size.
  A deeper theoretical study of the query complexity.