Adaptivity - PowerPoint Presentation

Adaptivity Gaps for Stochastic Probing (Submodular & XOS Functions). Sahil Singla, Carnegie Mellon University. Joint work with Anupam Gupta and Viswanath Nagarajan.

Uploaded by trish-goza on 2017-03-15. 407 views. ID: 524537.



Presentation Transcript

Adaptivity Gaps for Stochastic Probing (Submodular & XOS Functions)

Sahil Singla, Carnegie Mellon University. Joint work with Anupam Gupta and Viswanath Nagarajan.

18th Jan, 2017

Stochastic Probing

Purchase gifts: only 1 hour before the shops close! (An orienteering constraint.)

Stochastic Probing

Only 1 hour before the shops close! (Orienteering constraint.)

[Figure: a map of shops with gift probabilities 0.8, 0.5, 0.5, 0.25, 0.4 and travel times of 15 to 30 minutes between them.]

Remark: if all probabilities are 1 and the nodes are distinct, this is the Orienteering Problem [Blum et al., FOCS'03].

Stochastic Probing

Input:
- A graph and a metric (travel times).
- Probabilities: each item is independently present.
- A function (e.g., number of distinct items).
- Constraints (e.g., a 1-hour orienteering budget).

Output: maximize the expected function value.

[Figure: the same map with probabilities 0.8, 0.5, 0.5, 0.25, 0.4 and 15 to 30 minute travel times.]
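The input/output above can be made concrete with a small sketch. The shop probabilities come from the figure; the shop names and the fixed probed route are illustrative assumptions:

```python
# Monte Carlo check of the "# distinct items" objective on the slides'
# example probabilities. Shop names and the route (b, c, e) are assumptions.
import random

probs = {"a": 0.8, "b": 0.5, "c": 0.5, "d": 0.25, "e": 0.4}

def expected_distinct(path):
    # Each probed shop independently contributes its own success
    # probability, so linearity of expectation gives the exact value.
    return sum(probs[s] for s in path)

def simulate(path, trials=200_000, seed=0):
    # Estimate the same expectation by sampling item presence.
    rng = random.Random(seed)
    found = sum(sum(rng.random() < probs[s] for s in path) for _ in range(trials))
    return found / trials

path = ["b", "c", "e"]          # a fixed (non-adaptive) feasible route
print(expected_distinct(path))  # exact expectation
print(simulate(path))           # Monte Carlo estimate, approximately equal
```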

Simple Strategy

One simple strategy branches on the first shop's outcome: its expected number of distinct gifts is 0.5(1 + 0.5 + 0) + 0.5(0 + 0.5 + 0.8) = 1.4. A non-adaptive path that visits the shops with probabilities 0.5, 0.5, and 0.4 (15 + 25 + 20 minutes of travel) also gets 0.5 + 0.5 + 0.4 = 1.4 in expectation.

[Figure: the map again, with the candidate routes highlighted.]

Can we get better?

Adaptivity Gap

Definition: GAP := (expected value of the best adaptive strategy) / (expected value of the best non-adaptive strategy). There are examples where the GAP exceeds 1. How large can the GAP be?

We show: it is small for many settings!

Outline

1. Stochastic Probing & Adaptivity GAP
2. Why Care About Adaptivity GAP?
3. Proof Idea for Submodular Functions
4. Proof Idea for XOS Functions
5. Open Problems

Best Adaptive Strategy

The best adaptive strategy is a decision tree in which every root-leaf path takes at most 1 hour. It can be exponential sized!

Why Care About Adaptivity Gap?

Goal: bound the ratio between the BEST ADAPTIVE strategy (ADAP) and an efficient ALGORITHM. GAP := ADAP / NON-ADAP. If ALGO is an alpha-approximation to the best non-adaptive strategy, then ALGO is an (alpha * GAP)-approximation to ADAP.

For what constraints & functions is the GAP small?

Constraints & Functions

Constraints. Downward-Closed: if a set can be probed then so can all its subsets, e.g., Knapsack, Matroid, Orienteering.

Functions:
- Submodular: if A ⊆ B and e ∉ B then f(A ∪ {e}) − f(A) ≥ f(B ∪ {e}) − f(B).
- XOS of width W: given additive w_1, …, w_W : V → R+, for S ⊆ V: f(S) = max_i { w_i(S) }.
- Subadditive: for all A, B ⊆ V: f(A ∪ B) ≤ f(A) + f(B).

Submodular ⊆ XOS ⊆ Subadditive (when monotone).

Remark: XOS functions O(log n)-approximate subadditive functions [Dobzinski-APPROX'07].
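These function classes can be sketched concretely. The ground set and the three additive weight vectors below are made-up illustrations, not from the talk:

```python
from itertools import chain, combinations

# A made-up ground set and three additive weight functions.
V = ["a", "b", "c"]
weights = [
    {"a": 2.0, "b": 0.0, "c": 1.0},
    {"a": 0.0, "b": 3.0, "c": 0.0},
    {"a": 1.0, "b": 1.0, "c": 1.0},
]

def f(S):
    # XOS of width 3: the pointwise max of three additive functions w_i.
    return max(sum(w[e] for e in S) for w in weights)

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Monotone XOS functions are subadditive: f(A u B) <= f(A) + f(B).
for A in subsets(V):
    for B in subsets(V):
        assert f(set(A) | set(B)) <= f(A) + f(B) + 1e-12
print("subadditivity holds on every pair of subsets")
```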

Results

Theorem 1: The adaptivity gap for Constraints = Downward-Closed and Function = Non-Negative Submodular is O(1). Moreover, it is at most 3 when the function is also monotone.

Theorem 2: The adaptivity gap for Constraints = Downward-Closed and Function = XOS of width W is O(log W).

E.g., the number of distinct items is monotone submodular.
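The closing example, that the number of distinct items is monotone submodular, can be verified exhaustively on a tiny ground set (the item-to-type assignment is a made-up illustration):

```python
from itertools import chain, combinations

# Items may repeat a type; f(S) = number of distinct types in S.
item_type = {1: "book", 2: "book", 3: "toy", 4: "game"}
items = list(item_type)

def f(S):
    return len({item_type[i] for i in S})

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

for A in map(set, subsets(items)):
    for B in map(set, subsets(items)):
        if A <= B:
            assert f(A) <= f(B)  # monotone
            for e in items:
                if e not in B:
                    # diminishing marginal returns (submodularity)
                    assert f(A | {e}) - f(A) >= f(B | {e}) - f(B)
print("monotone and submodular on all pairs A <= B")
```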

A Non-Adaptive Algorithm

Theorem 3 [Chekuri and Pal, FOCS'05]: There is a quasi-polynomial-time non-adaptive ALGO with approximation ratio O(log n) for Constraints = Orienteering and Function = Number of Distinct Items. A better ratio is not possible unless NP has quasi-polynomial-time algorithms.

Corollary: Using Theorems 1 & 3, for Constraints = Orienteering and Function = Number of Distinct Items, ALGO is an O(log n)-approximation for ADAP.

Prior Work

Prior work assumes simple constraints (e.g., any k items): it bounds ADAP using an LP and compares ALGO to the LP [GN-IPCO'13], [DGV-FOCS'04], [ANS-WINE'08], [ASW-STACS'14]. We want general constraints, for which ADAP cannot always be bounded using an LP.

Prior work also assumes simple functions; our previous work handles matroid rank functions [GNS-SODA'16].

Outline

1. Stochastic Probing & Adaptivity GAP
2. Why Care About Adaptivity GAP?
3. Proof Idea for Submodular Functions
4. Proof Idea for XOS Functions
5. Open Problems

Two Ideas

1. A random root-leaf path is good (we only show existence).
2. Stem-by-stem induction (the Stem Lemma).

[Figure: an adaptive decision tree with NO/YES branch probabilities such as 0.7/0.3, 0.5/0.5, and 0.2/0.8.]

Theorem 1: The adaptivity gap for Constraints = Orienteering and Function = Number of Distinct Items is at most 3.

Only Existence of a "Good" NA-Path

Assume the best ADAP strategy (its decision tree) is known. Sample a random root-leaf path with ADAP's probabilities. Example: the red path's probability is the product of the branch probabilities along it; on that path ADAP collects the items it observes, while the non-adaptive strategy that probes the same nodes collects each item independently with its own probability.

Show: E[value of the random path] ≥ ADAP / 3. Since the best non-adaptive path is at least E[value of the random path], the adaptivity GAP is at most 3.

[Figure: the same decision tree with NO/YES branch probabilities and one root-leaf path highlighted in red.]
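The "random root-leaf path with ADAP's probabilities" step can be sketched as follows; the small decision tree and its probabilities are stand-in assumptions, not the slides' exact figure:

```python
import random

# A toy version of ADAP's decision tree: each node probes an item that is
# present with probability p; on YES/NO we move to the matching subtree.
# Format: node -> (item, p, yes_child, no_child); leaves have no children.
tree = {
    "root": ("item1", 0.7, "yes1", "no1"),
    "yes1": ("item2", 0.5, None, None),
    "no1":  ("item3", 0.2, None, None),
}

def sample_path(rng, node="root"):
    # Sample a root-leaf path with exactly ADAP's probabilities, recording
    # which probed items turned out to be present along the way.
    path = []
    while node is not None:
        item, p, yes_child, no_child = tree[node]
        present = rng.random() < p
        path.append((item, present))
        node = yes_child if present else no_child
    return path

rng = random.Random(0)
samples = [sample_path(rng) for _ in range(100_000)]
# The expectation over random root-leaf paths of the number of items found
# equals ADAP's expected value, which is what the existence argument uses.
est = sum(sum(found for _, found in p) for p in samples) / len(samples)
print(round(est, 3))  # exact expectation: 0.7*(1 + 0.5) + 0.3*0.2 = 1.11
```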

Stem-by-Stem Induction

The stem is the all-NO path of the decision tree. Let ADAP_i = the distinct items that appear before node i on the stem; NA walks along the same stem and probes the same nodes.

Stem Lemma (informal): the expected value NA collects along the stem is at least a constant fraction of the expected value ADAP collects there; recursing into the YES-subtrees then yields the overall factor of 3.

[Figure: the decision tree with the all-NO stem highlighted; NO-probabilities 0.7, 0.5, 0.2 and YES-probabilities 0.3, 0.5, 0.8.]

Stem Lemma

Lemma (informal): let ADAP_i be the distinct items that appear before node i on the stem. Then NA's expected marginal gain at node i is at least a constant fraction of ADAP's expected marginal gain there. The proof compares the two gains node by node along the stem, using the probabilities on the stem's NO and YES branches.

[Figure: the stem with NO-probabilities 0.7, 0.5, 0.2 and YES-probabilities 0.3, 0.5, 0.8.]

Outline

1. Stochastic Probing & Adaptivity GAP
2. Why Care About Adaptivity GAP?
3. Proof Idea for Submodular Functions
4. Proof Idea for XOS Functions
5. Open Problems

Proof Idea for XOS

Theorem 2: The adaptivity gap for Constraints = Downward-Closed and Function = XOS of width W is O(log W). (Recall: for S ⊆ V, f(S) := max_i { w_i(S) }.)

1. We can assume the coefficients in the w_i are "small", and we can truncate the tree so that no path has "very large" value.
2. Control the variance of each w_i using Freedman's concentration inequality, and take a union bound over all W functions w_i.

Remark: some of these ideas were also used in our previous work [GNS-SODA'16].

Can we do better?

Outline

1. Stochastic Probing & Adaptivity GAP
2. Why Care About Adaptivity GAP?
3. Proof Idea for Submodular Functions
4. Proof Idea for XOS Functions
5. Open Problems

Open Problems

Question 1: Is the adaptivity gap for Constraints = Downward-Closed and Function = Non-Negative Monotone Submodular at most e/(e-1)?

Question 2: Is the adaptivity gap for Constraints = Downward-Closed and Function = Non-Negative Monotone Subadditive at most polylog(n)? What about Maximum Independent Set with stochastic vertices?

Recollect: proving a polylog(n) gap for XOS functions would suffice, since XOS functions O(log n)-approximate subadditive functions [Dobzinski-APPROX'07].

Lower Bound for XOS

Theorem: The adaptivity gap for Constraints = Downward-Closed and Function = XOS of width W can be Ω(k / log k) where k^k = W, i.e., nearly logarithmic in W.

Proof sketch: take a complete k-ary tree of depth k, where k^k = W = n/2. Each edge is independently active with probability 1/k. Function = max over root-leaf paths of the number of active probed edges on the path (an XOS function of width W, one additive function per leaf). Constraint = at most k^2 probes. Now ADAP = Ω(k): at each of the k levels, probe the current node's k child edges and follow an active one whenever it exists. A non-adaptive strategy, in contrast, fixes its k^2 probes in advance and collects only O(log k) in expectation.
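Under the stated assumptions (and simulating one natural non-adaptive strategy, namely k edge-disjoint root-leaf paths), the separation in this construction can be observed numerically for a small k:

```python
import random

def adaptive_value(k, rng):
    # Descend the k levels of the tree; at each level probe the current
    # node's k child edges (k*k probes in total) and follow an active
    # edge whenever one exists.
    total = 0
    for _ in range(k):
        if any(rng.random() < 1 / k for _ in range(k)):
            total += 1  # step along an active edge
        # otherwise step along an inactive edge, gaining nothing
    return total

def nonadaptive_value(k, rng):
    # One natural non-adaptive strategy: probe k edge-disjoint root-leaf
    # paths (k*k probes); the value is the best single path's active count.
    return max(sum(rng.random() < 1 / k for _ in range(k)) for _ in range(k))

rng = random.Random(1)
k, trials = 8, 20_000
adap = sum(adaptive_value(k, rng) for _ in range(trials)) / trials
na = sum(nonadaptive_value(k, rng) for _ in range(trials)) / trials
print(adap, na)  # adaptive scales like Theta(k); non-adaptive stays small
```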


Summary

- The stochastic probing problem captures many natural problems.
- The adaptivity GAP is small for submodular & low-width XOS functions, so we can focus on simpler non-adaptive algorithms.
- Techniques: a random root-leaf path & stem-by-stem induction; Freedman's concentration inequality.
- Open problems: e/(e-1) for monotone submodular functions? polylog(n) for subadditive functions?

Questions?

References

- M. Adamczyk, M. Sviridenko, and J. Ward. "Submodular Stochastic Probing on Matroids". STACS'14.
- A. Asadpour, H. Nazerzadeh, and A. Saberi. "Maximizing Stochastic Monotone Submodular Functions". WINE'08.
- A. Blum, S. Chawla, D. Karger, T. Lane, A. Meyerson, and M. Minkoff. "Approximation Algorithms for Orienteering and Discounted-Reward TSP". FOCS'03.
- C. Chekuri and M. Pal. "A Recursive Greedy Algorithm for Walks in Directed Graphs". FOCS'05.
- B.C. Dean, M.X. Goemans, and J. Vondrak. "Approximating the Stochastic Knapsack Problem: The Benefit of Adaptivity". FOCS'04.
- S. Dobzinski. "Two Randomized Mechanisms for Combinatorial Auctions". APPROX'07.
- A. Gupta and V. Nagarajan. "A Stochastic Probing Problem with Applications". IPCO'13.
- A. Gupta, V. Nagarajan, and S. Singla. "Algorithms and Adaptivity Gaps for Stochastic Probing". SODA'16.
- A. Gupta, V. Nagarajan, and S. Singla. "Adaptivity Gaps for Stochastic Probing: Submodular and XOS Functions". SODA'17.