Presentation Transcript

Slide1

Belief Propagation in a Continuous World

Andrew Frank, 11/02/2009
Joint work with Alex Ihler and Padhraic Smyth


Slide2

Graphical Models

Nodes represent random variables. Edges represent dependencies.

[Figure: three example graph structures over nodes A, B, and C.]

Slide3

Markov Random Fields

[Figure: an undirected graph over nodes A, B, C, D, and E.]

Graph separation encodes conditional independence, e.g.:

B ⊥ E | C, D

A ⊥ C | B

Slide4

Factoring Probability Distributions

Independence relations ↔ factorization

[Figure: a graph with edges A–B, B–C, and B–D.]

p(A,B,C,D) = (1/Z) f(A) f(B) f(C) f(D) f(A,B) f(B,C) f(B,D)
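As a sketch of what this factorization means computationally, here is a tiny Python example for the A–B–C–D graph above; the binary factor values are made up for illustration, not taken from the slides.

```python
from itertools import product

# Hypothetical binary factors for the graph with edges A-B, B-C, B-D.
f_A = {0: 0.6, 1: 0.4}
f_B = {0: 0.5, 1: 0.5}
f_C = {0: 0.7, 1: 0.3}
f_D = {0: 0.2, 1: 0.8}
f_AB = {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 1.0}
f_BC = {(0, 0): 1.0, (0, 1): 0.3, (1, 0): 0.3, (1, 1): 1.0}
f_BD = {(0, 0): 1.0, (0, 1): 0.6, (1, 0): 0.6, (1, 1): 1.0}

def unnormalized(a, b, c, d):
    """Product of all factors: f(A) f(B) f(C) f(D) f(A,B) f(B,C) f(B,D)."""
    return (f_A[a] * f_B[b] * f_C[c] * f_D[d]
            * f_AB[(a, b)] * f_BC[(b, c)] * f_BD[(b, d)])

# The partition function Z normalizes the product into a distribution.
Z = sum(unnormalized(a, b, c, d) for a, b, c, d in product([0, 1], repeat=4))

def p(a, b, c, d):
    return unnormalized(a, b, c, d) / Z

total = sum(p(a, b, c, d) for a, b, c, d in product([0, 1], repeat=4))
```

Dividing by Z is what turns a product of arbitrary nonnegative factors into a proper distribution: `total` comes out to 1.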

Slide5

Toy Example: A Day in Court

[Figure: a graphical model in which the accused A and the witness W connect to the evidence E, and E connects to the verdict V.]

A, E, W є {“Innocent”, “Guilty”}

V є {“Not guilty verdict”, “Guilty verdict”}

Slide6

Inference

Most probable explanation: x* = argmax_x p(x_1, …, x_n)

Marginalization: p(x_s) = Σ_{x \ x_s} p(x_1, …, x_n)
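Both inference tasks can be sketched by brute force on a toy two-variable model; all factor values below are hypothetical.

```python
from itertools import product

# Toy unnormalized distribution over two binary variables (numbers made up).
def score(a, b):
    f_a = [0.4, 0.6][a]
    f_b = [0.7, 0.3][b]
    f_ab = [[1.0, 0.2], [0.2, 1.0]][a][b]   # pairwise factor favors agreement
    return f_a * f_b * f_ab

Z = sum(score(a, b) for a, b in product([0, 1], repeat=2))

# Most probable explanation: the jointly most likely assignment.
mpe = max(product([0, 1], repeat=2), key=lambda ab: score(*ab))

# Marginalization: sum out B to get p(A).
p_A = [sum(score(a, b) for b in [0, 1]) / Z for a in [0, 1]]
```

Brute force enumerates all 2^n assignments; belief propagation exists precisely to avoid this exponential cost on larger graphs.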

Slide7

Iterative Message Updates

m_ts(x_s) = Σ_{x_t} f_t(x_t) f_ts(x_s, x_t) Π_{u ∈ Γ(t)\s} m_ut(x_t)

Slide8

Belief Propagation

[Figure: the court-example graph, with message m_AE(E) passed from A to E, m_WE(E) from W to E, and m_EV(V) from E to V.]
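The message flow on this tree can be sketched with the standard sum-product update; the potentials below are invented for illustration (0 = “innocent”, 1 = “guilty”).

```python
# Sum-product messages on the toy court tree: A and W feed into E, E into V.
f_A = [0.8, 0.2]                  # prior belief that the accused is innocent
f_W = [0.5, 0.5]
f_AE = [[0.9, 0.1], [0.1, 0.9]]   # evidence tends to agree with A's state
f_WE = [[0.8, 0.2], [0.2, 0.8]]
f_EV = [[0.7, 0.3], [0.3, 0.7]]   # verdict tends to follow the evidence

def message(factor, pair_pot, incoming=(1.0, 1.0)):
    """m(t) = sum_s factor(s) * incoming(s) * pair_pot[s][t]."""
    return [sum(factor[s] * incoming[s] * pair_pot[s][t] for s in (0, 1))
            for t in (0, 1)]

m_AE = message(f_A, f_AE)          # m_AE(E)
m_WE = message(f_W, f_WE)          # m_WE(E)
# E collects both incoming messages before sending to V.
into_E = [m_AE[e] * m_WE[e] for e in (0, 1)]
m_EV = message([1.0, 1.0], f_EV, incoming=into_E)  # m_EV(V)

Z = sum(m_EV)
p_V = [m / Z for m in m_EV]        # marginal over the verdict
```

With an 80% prior on innocence, the marginal favors the not-guilty verdict: on a tree, these three messages recover the exact marginal.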

Slide9

Loopy BP

[Figure: a graph over A, B, C, D containing a cycle, and the same graph with messages circulating around the loop.]

Does this work? Does it make any sense?

Slide10

A Variational Perspective

Reformulate the problem:

[Figure: the set of “tractable” distributions, with the true distribution P outside it and the best tractable approximation Q inside it.]

Find Q to minimize the divergence.

Slide11

Choose an Approximating Family

Desired traits:

Simple enough to enable easy computation.

Complex enough to represent P.

e.g.

Fully factored: Q(X_1, X_2, …, X_n) = f(X_1) f(X_2) … f(X_n)

Structured: Q retains a tractable part of the dependency structure (e.g., a tree).

Slide12

Choose a Divergence Measure

Kullback-Leibler divergence: KL(P || Q) = ∫ p(x) log [p(x) / q(x)] dx

Alpha divergence: D_α(P || Q) = (1 / (α(1 − α))) ∫ [α p(x) + (1 − α) q(x) − p(x)^α q(x)^{1−α}] dx

Common choices: α → 0 recovers KL(Q || P); α → 1 recovers KL(P || Q).
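A quick numerical check of the limiting behavior, using Minka's form of the α-divergence on two made-up discrete distributions:

```python
import math

def alpha_div(p, q, a):
    """Minka's alpha-divergence between discrete distributions p and q."""
    s = sum(a * pi + (1 - a) * qi - pi**a * qi**(1 - a)
            for pi, qi in zip(p, q))
    return s / (a * (1 - a))

def kl(p, q):
    """KL(p || q) for discrete distributions with full support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]

d_near1 = alpha_div(p, q, 0.9999)  # approaches KL(p || q) as alpha -> 1
d_near0 = alpha_div(p, q, 0.0001)  # approaches KL(q || p) as alpha -> 0
```

Evaluating near the two singular points confirms the limits: the α-divergence smoothly interpolates between the two KL directions.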

Slide13

Behavior of α-Divergence

[Figure: how the α-divergence minimizer behaves for different α, from zero-forcing to mass-covering.]

Source: T. Minka. Divergence measures and message passing. Technical Report MSR-TR-2005-173, Microsoft Research, 2005.

Slide14

Resulting Algorithms

Assuming a fully factored form of Q, i.e. Q(X_1, X_2, …, X_n) = f(X_1) f(X_2) … f(X_n), we get…*

Mean field: α = 0

Belief propagation: α = 1

Tree-reweighted BP: α ≥ 1

* By minimizing the “local divergence”.

Slide15

Local vs. Global Minimization

Source: T. Minka. Divergence measures and message passing. Technical Report MSR-TR-2005-173, Microsoft Research, 2005.

Slide16

Applications

Slide17

Sensor Localization

[Figure: a network of sensors A, B, C with pairwise measurements; the task is to infer each sensor’s location.]

Slide18

Protein Side Chain Placement

[Figure: a protein backbone combined with its amino-acid sequence, RTDCYGN, to determine side chain placement.]

Slide19

Common traits?

Continuous state space.

Slide20

Easy Solution: Discretize!

10 × 10 bins → domain size d = 100

20 × 20 bins → domain size d = 400

Each message: O(d²)
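The quadratic blow-up can be spelled out directly; this trivial sketch just counts the multiply-adds in one discretized message, using the bin counts from this slide.

```python
# Cost of one discretized message: for each of d target bins, sum over all
# d source bins, so d * d multiply-adds per message.
def message_cost(bins_per_dim, dims=2):
    d = bins_per_dim ** dims   # domain size after discretizing each dimension
    return d * d               # one message touches every (source, target) pair

cost_10 = message_cost(10)     # 10 bins/dim -> d = 100
cost_20 = message_cost(20)     # 20 bins/dim -> d = 400
```

Doubling the resolution per dimension multiplies the per-message cost by 16 here, which is why naive discretization scales poorly.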

Slide21

Particle BP

We’d like to pass “continuous messages”…

[Figure: a loopy graph over A, B, C, D; node B has a continuous domain, and the message m_AB(B) is a function over that domain.]

Instead, pass discrete messages over sets of particles:

{b^(i)} ~ W_B(B)

m_AB({b^(i)}), evaluated at the particles b^(1), b^(2), …, b^(N)

Slide22

PBP: Computing the Messages

Re-write as an expectation:

m_ts(x_s) = ∫ f_ts(x_s, x_t) M_t(x_t) dx_t = E_{x_t ~ W_t}[ f_ts(x_s, x_t) M_t(x_t) / W_t(x_t) ],

where M_t(x_t) = f_t(x_t) Π_{u ∈ Γ(t)\s} m_ut(x_t).

Finite-sample approximation:

m_ts(x_s) ≈ (1/N) Σ_i f_ts(x_s, x_t^(i)) M_t(x_t^(i)) / W_t(x_t^(i)),   x_t^(i) ~ W_t
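A one-dimensional sketch of this importance-sampling estimate; the pairwise potential, the incoming-message product g_t, and the uniform proposal are all illustrative choices, not the talk's actual model.

```python
import math, random

random.seed(0)

# Estimate the continuous message m_ts(x_s) = ∫ f(x_s, x_t) g_t(x_t) dx_t
# by sampling x_t from a proposal W_t and reweighting.

def pair_pot(xs, xt):
    """Illustrative pairwise potential: prefers nearby values."""
    return math.exp(-0.5 * (xs - xt) ** 2)

def g_t(xt):
    """Stand-in for f_t times incoming messages: standard normal density."""
    return math.exp(-0.5 * xt ** 2) / math.sqrt(2 * math.pi)

def proposal_sample():                 # W_t: uniform on [-5, 5]
    return random.uniform(-5.0, 5.0)

def proposal_density(xt):
    return 0.1 if -5.0 <= xt <= 5.0 else 0.0

def pbp_message(xs, n=20000):
    """Finite-sample estimate: (1/N) Σ f(xs, xt_i) g(xt_i) / W(xt_i)."""
    total = 0.0
    for _ in range(n):
        xt = proposal_sample()
        total += pair_pot(xs, xt) * g_t(xt) / proposal_density(xt)
    return total / n

m_at_0 = pbp_message(0.0)   # true integral at xs = 0 is 1/sqrt(2) ≈ 0.707
```

The estimator is unbiased for any proposal that covers the integrand's support; the proposal only controls the variance, which motivates the next slide.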

Slide23

Choosing “Good” Proposals

[Figure: the loopy graph over A, B, C, D.]

The proposal should “match” the integrand. Sample from the belief:

W_t(x_t) ∝ f_t(x_t) Π_{u ∈ Γ(t)} m_ut(x_t)

Slide24

Iteratively Refine Particle Sets

(1) Draw a set of particles, {x_s^(i)} ~ W_s(x_s).

(2) Run discrete inference over the particle discretization.

(3) Adjust W_s(x_s) and repeat.

[Figure: particle sets for X_s and X_t, linked by the pairwise potential f(x_s, x_t).]
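The refine loop can be sketched for a single node with a Gaussian proposal adjusted by moment matching; this is one plausible choice for step 3, and every density below is illustrative.

```python
import math, random

random.seed(1)

# Step 1 draws particles, step 2 scores them (standing in for discrete
# inference), step 3 refits the proposal toward the weighted particles.

def belief(x):
    """Stand-in for f_s(x) times incoming messages: unnormalized N(2, 1)."""
    return math.exp(-0.5 * (x - 2.0) ** 2)

def normal_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

mean, std = 0.0, 4.0   # initial proposal W_s = N(0, 4): broad and misplaced
for _ in range(5):
    particles = [random.gauss(mean, std) for _ in range(500)]              # (1)
    weights = [belief(x) / normal_pdf(x, mean, std) for x in particles]    # (2)
    total = sum(weights)
    # (3) Moment-match the proposal to the weighted particle set.
    new_mean = sum(w * x for w, x in zip(weights, particles)) / total
    var = sum(w * (x - new_mean) ** 2 for w, x in zip(weights, particles)) / total
    mean, std = new_mean, max(math.sqrt(var), 0.1)
```

After a few rounds the proposal concentrates where the belief has mass (here near N(2, 1)), so subsequent particle sets discretize the right region of the state space.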

Slide25

Benefits of PBP

No distributional assumptions.

Easy accuracy/speed trade-off.

Relies on an “embedded” discrete algorithm: belief propagation, mean field, tree-reweighted BP…

Slide26

Exploring PBP: A Simple Example

[Figure: a small model whose pairwise potentials are a function of ‖x_s − x_t‖.]

Slide27

Continuous Ising Model

[Figure: approximate vs. exact marginals* for mean field PBP (α = 0), PBP (α = 1), and TRW PBP (α = 1.5).]

* Run with 100 particles per node.

Slide28

A Localization Scenario

Slide29

Exact Marginal

Slide30

PBP Marginal

Slide31

Tree-reweighted PBP Marginal

Slide32

Estimating the Partition Function

Mean field provides a lower bound. Tree-reweighted BP provides an upper bound.

p(A,B,C,D) = (1/Z) f(A) f(B) f(C) f(D) f(A,B) f(B,C) f(B,D)

Z = Σ_{A,B,C,D} f(A) f(B) f(C) f(D) f(A,B) f(B,C) f(B,D)
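The lower bound can be checked by brute force on a tiny binary model (pairwise potentials made up, singleton factors uniform): any fully factored Q satisfies log Z ≥ E_Q[log p̃] + H(Q).

```python
import math
from itertools import product

# Pairwise potentials on edges (A,B), (B,C), (B,D); values are illustrative.
f = {
    'AB': [[1.0, 0.4], [0.4, 1.0]],
    'BC': [[1.0, 0.6], [0.6, 1.0]],
    'BD': [[1.0, 0.5], [0.5, 1.0]],
}

def log_ptilde(a, b, c, d):
    """Log of the unnormalized product of factors."""
    return math.log(f['AB'][a][b] * f['BC'][b][c] * f['BD'][b][d])

# Exact partition function by enumerating all 16 configurations.
Z = sum(math.exp(log_ptilde(*x)) for x in product([0, 1], repeat=4))
logZ = math.log(Z)

# Any product distribution Q yields a lower bound on log Z.
q = [0.5, 0.5]   # each variable uniform under Q
expect = sum((q[a] * q[b] * q[c] * q[d]) * log_ptilde(a, b, c, d)
             for a, b, c, d in product([0, 1], repeat=4))
entropy = 4 * (-sum(p * math.log(p) for p in q))
lower = expect + entropy
```

Mean field searches over such product distributions Q to make this lower bound as tight as possible; here even the uniform Q already bounds log Z from below.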

Slide33

Partition Function Bounds

Slide34

Conclusions

BP and related algorithms are useful!

Particle BP lets you handle continuous RVs.

Extensions to BP can work with PBP, too.

Thank You!