
Slide1

Belief Propagation and Approximate Inference: Compensating for Relaxations

Arthur Choi

Slide2

Bayesian Networks

Reasoning in Bayesian networks:

artificial intelligence, machine learning, computer vision, information theory, statistical physics, information retrieval, computational biology …

This thesis: develop a perspective on approximate algorithms for inference that:

yields more accurate and effective approximations

allows us to easily design new approximations

Slide3

Example: Coding

[Figure: coding network over bits U0–U3, X0–X3, X'0–X'3, their channel outputs Y0–Y3, Y'0–Y'3, and states S0–S3]

Pr(U,S,X,Y) = ∏i Pr(Ui) Pr(Xi|Ui) Pr(Yi|Xi) · Pr(Si|Si-1,Ui) Pr(X'i|Si) Pr(Y'i|X'i)
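To make the factorization concrete, here is a minimal Python sketch (not from the talk) that evaluates this joint probability for one configuration. The CPT numbers and the deterministic encoder relations below are illustrative assumptions, not the slide's actual model.

PR_U = {0: 0.5, 1: 0.5}                               # assumed prior Pr(Ui)
PR_Y_GIVEN_X = {(0, 0): 0.9, (0, 1): 0.1,
                (1, 0): 0.1, (1, 1): 0.9}             # assumed channel Pr(Yi|Xi)

def joint(u, s, x, y, xp, yp):
    """Pr(U,S,X,Y) = prod_i Pr(Ui) Pr(Xi|Ui) Pr(Yi|Xi)
                           · Pr(Si|Si-1,Ui) Pr(X'i|Si) Pr(Y'i|X'i)."""
    p, s_prev = 1.0, 0                                # assume initial state 0
    for i in range(len(u)):
        p *= PR_U[u[i]]
        p *= 1.0 if x[i] == u[i] else 0.0             # assumed: Xi copies Ui
        p *= PR_Y_GIVEN_X[(x[i], y[i])]
        p *= 1.0 if s[i] == (s_prev + u[i]) % 2 else 0.0  # assumed state update
        p *= 1.0 if xp[i] == s[i] else 0.0            # assumed: X'i emits the state
        p *= PR_Y_GIVEN_X[(xp[i], yp[i])]
        s_prev = s[i]
    return p

print(joint(u=(1, 0), s=(1, 1), x=(1, 0), y=(1, 0), xp=(1, 1), yp=(1, 1)))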

Slide4

Example: Coding

[Figure: the same coding network]

Pr(U,S,X,Y) = ∏i Pr(Ui) Pr(Xi|Ui) Pr(Yi|Xi) · Pr(Si|Si-1,Ui) Pr(X'i|Si) Pr(Y'i|X'i)

Slide5

Example: Coding

[Figure: the same coding network]

Query: argmax { Pr(Ui=0 | y), Pr(Ui=1 | y) }

Slide6

Treewidth

Slide7

Main Idea

Given model M

Slide8

Main Idea

Given model M

Relax the model

Slide9

Main Idea

Given model M

Relax the model

Reason in simpler model

Slide10

Main Idea

Given model M

Relax the model

Reason in simpler model

Compensate

Slide11

Main Idea

Given model M

Relax the model

Reason in simpler model

Compensate

Reason in improved model

Slide12

Main Idea

Given model M

Relax the model

Reason in simpler model

Compensate

Reason in improved model

Recover

Slide13

Main Idea

Given model M

Relax the model

Reason in simpler model

Compensate

Reason in improved model

Recover

Generic approach: use in Bayes nets, probabilistic graphical models, SAT, etc.

Slide14

Model + Eq

Relax

Compensate

Recover

Intractable model, augmented

with equivalence constraints

Simplify network structure:

Relax equivalence constraints

Compensate for relaxation:

Restore a weaker equivalence

Recover structure, identify

an improved approximation


Slide16

Probabilistic Graphical Models

Bayesian Network: Pr(x) = ∏xu Pr(x|u) = ∏xu θx|u

Markov Network (or factor graph): Pr(x) = Z⁻¹ · ∏a ψa(xa), where Z = ∑x ∏a ψa(xa)

[Figure: network over nodes A–I]
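These definitions can be checked by brute force on a toy model. A minimal sketch, assuming an arbitrary pair of factor tables (the numbers are made up, not from the talk):

import itertools

variables = ['A', 'B', 'C']                      # toy binary variables
factors = {                                      # psi_a tables over their scopes
    ('A', 'B'): {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 2.0},
    ('B', 'C'): {(0, 0): 1.2, (0, 1): 0.8, (1, 0): 0.8, (1, 1): 1.5},
}

def weight(x):
    """prod_a psi_a(x_a) for a full configuration x (a dict var -> value)."""
    w = 1.0
    for scope, table in factors.items():
        w *= table[tuple(x[v] for v in scope)]
    return w

# Z = sum_x prod_a psi_a(x_a)
Z = sum(weight(dict(zip(variables, vals)))
        for vals in itertools.product((0, 1), repeat=len(variables)))

def pr(x):
    """Pr(x) = Z^-1 * prod_a psi_a(x_a)."""
    return weight(x) / Z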

Slide17

Probabilistic Queries

Pr(x) = Z⁻¹ · ∏a ψa(xa)

MAP explanations: x* = argmaxx Pr(x) = argmaxx ∏a ψa(xa)

Partition function: Z = ∑x ∏a ψa(xa)

[Figure: network over nodes A–I]

Slide18

Probabilistic Queries

Pr(x) = Z⁻¹ · ∏a ψa(xa)

Marginals: Pr(X=x) = Z⁻¹ · ∑x:X=x ∏a ψa(xa)

[Figure: network over nodes A–I]
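Continuing the toy factor graph from the sketch two slides back (it reuses weight, variables, Z, and itertools defined there), a marginal is the same sum restricted to configurations consistent with X = x; again an illustration, not the talk's code:

def marginal(var, val):
    """Pr(X=x) = Z^-1 * sum over configurations where x[var] == val."""
    total = sum(weight(dict(zip(variables, vals)))
                for vals in itertools.product((0, 1), repeat=len(variables))
                if dict(zip(variables, vals))[var] == val)
    return total / Z

assert abs(marginal('A', 0) + marginal('A', 1) - 1.0) < 1e-9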

Slide19

Model + Eq

Relax

Compensate

Recover

Intractable model, augmented

with equivalence constraints

Simplify network structure:

Relax equivalence constraints

Compensate for relaxation:

Restore a weaker equivalence

Recover structure, identify

an improved approximation

Slide20

Relax: Treewidth

Slide21

Relax Equivalence Constraints

Equivalence constraint: ψeq(Xi=xi, Xj=xj) = 1 if xi = xj, and 0 otherwise

[Figure: network over nodes A–I with equivalence edges]
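As code, the equivalence factor is just an indicator; multiplying it into a model gives zero weight to any configuration where the two variables disagree (a direct transcription of the slide's definition):

def psi_eq(xi, xj):
    """Equivalence constraint: 1 if xi = xj, 0 otherwise."""
    return 1.0 if xi == xj else 0.0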

Slide22

Relaxing Equivalence Constraints

Model M

[Figure: network over nodes A–I]

Slide23

Relaxing Equivalence Constraints

Model + Eq.

[Figure: network over A, B, C1, D, E1, F, G, H1, I1, E2, with equivalence constraints tying clones]

Slide24

Relaxing Equivalence Constraints

Model + Eq.

[Figure: network over A, B, C1, D, E1, F, G, H1, I1, C2, E2, I2, H2, with equivalence constraints tying clones]

Slide25

Relaxing Equivalence Constraints

Relaxed: Treewidth 1

[Figure: relaxed network over A, B, C1, D, E1, F, G, H1, I1, C2, E2, I2, H2]

Slide26

Relaxing Equivalence Constraints

Model M

[Figure: network over nodes A–I]

Slide27

Relaxing Equivalence Constraints

Model + Eq.

[Figure: network over A, B, C1, D, E1, G1, C2, E2, F, G2, H, I, with equivalence constraints tying clones]

Slide28

Relaxing Equivalence Constraints

Relaxed: Decomposed

[Figure: relaxed network split into independent sub-networks]

Slide29

Relaxing Equivalence Constraints

MAP in original model (with eq. constraints):

MAP = maxx ∏a ψa(xa) · ∏ij ψeq(Xi=xi, Xj=xj)

MAP in relaxed model:

r-MAP = maxx ∏a ψa(xa) ≥ MAP
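The inequality r-MAP ≥ MAP holds because dropping the indicator factors only enlarges the set of configurations being maximized over. A toy numeric check, with an assumed single factor ψ(A, B) where B plays the role of a clone tied to A:

import itertools

psi = {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 0.5, (1, 1): 2.0}   # assumed psi(A, B)

def best(with_eq):
    return max(psi[(a, b)] * ((1.0 if a == b else 0.0) if with_eq else 1.0)
               for a, b in itertools.product((0, 1), repeat=2))

MAP, r_MAP = best(True), best(False)
assert r_MAP >= MAP     # here 3.0 >= 2.0: the relaxation is an upper bound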

Slide30

Model + Eq

Relax

Compensate

Recover

Intractable model, augmented

with equivalence constraints

Simplify network structure:

Relax equivalence constraints

Compensate for relaxation:

Restore a weaker equivalence

Recover structure, identify

an improved approximation

Slide31

Relaxation

[Choi, Chavira & Darwiche UAI-07]

Mini-buckets algorithm: an approximation algorithm on the exact model, equivalent to an exact algorithm on the relaxed model

Branch-and-bound depth-first search

Identify new properties of the approximation

Reduce the size of the search space

Design mini-bucket approximations

Use better exact algorithms on the relaxed model

Slide32

Relaxation: Search Space

[Figure: full search space over x1,…,xn of size |X|ⁿ vs. reduced search space over x1,…,xs of size |X|ˢ; the relaxation enables deeper search]

Slide33

Relaxation: Better Inference

Approximate inference as exact inference in a simplified model

Use state-of-the-art exact algorithms

[Figure: factorization vs. better factorization over variables X, Y, Z]

Slide34

Empirical Implications

Network    Search Nodes   AC Time (s)   MB Time (s)   Relative Improvement
90-20-1          14985            18          2417                     135
90-20-2         137783           111         15953                     144
90-20-3           3065             4          1271                     334
90-20-4           4545             3           988                     355
90-20-5          29343            38          6579                     173
90-20-6           5065             3           630                     227
90-20-7           2987             2          1155                     485
90-20-8           6213             6           812                     146
90-20-9           5121             5          2367                     480
90-20-10          8419            10          2343                     235

Slide35

Model + Eq

Relax

Compensate

Recover

Intractable model, augmented

with equivalence constraints

Simplify network structure:

Relax equivalence constraints

Compensate for relaxation:

Restore a weaker equivalence

Recover structure, identify

an improved approximation

Slide36

Model + Eq

Relax

Compensate

Recover

Intractable model, augmented

with equivalence constraints

Simplify network structure:

Relax equivalence constraints

A new semantics for BP

marginals

partition function

MAP

Ideal compensations

Recover structure, identify

an improved approximation

Slide37

Belief Propagation

Slide38

Belief Propagation

Slide39

Belief Propagation: What if there are loops?

Slide40

Application: Information Theory

Slide41

Application: Information Theory

Turbo Codes, LDPC Codes:

Berrou & Glavieux 1993; MacKay & Neal 1995 (Gallager 1962)

Decoding is loopy belief propagation in a Bayesian network (McEliece, MacKay & Cheng 1998)

"A revolution: BP in graphs with cycles" (Frey & MacKay 1998)

Slide42

Larger Shift → Closer Object

Smaller Shift → Farther Object

Slide43

Output: Depth Map

Slide44

Input: L&R Image

Output: Depth Map

Markov Network

Images define a Markov network; reasoning in the Markov network estimates depth.

Slide45

Stereo Vision

http://vision.middlebury.edu/stereo/eval/

The top 7 highest-ranking methods use loopy BP or extend loopy BP.

Slide46

Application: Satisfiability

Survey Propagation:

Surprisingly effective for random k-SAT

SP good up to

α

= 4.23 < 4.26 critical threshold

Slide47

Previously, on edge deletion …

[Choi, Chan & Darwiche UAI-05]: Approximate inference by edge deletion

[Choi & Darwiche UAI-06]: A variational approach: minimize KL-divergence

[Choi & Darwiche AAAI-06]: An edge deletion semantics for belief propagation; marginal approximations

Slide48

Deleting an Equivalence Edge

[Figure: equivalence edge between Xi and Xj]

Slide49

Deleting an Equivalence Edge

[Figure: deleting the equivalence edge between Xi and Xj]

Slide50

Deleting an Equivalence Edge

[Figure: deleting the equivalence edge between Xi and Xj]

Slide51

Model + Eq

Relax

Compensate

Recover

Intractable model, augmented

with equivalence constraints

Simplify network structure:

Relax equivalence constraints

A new semantics for BP

marginals

partition function

MAP

Ideal compensations

Recover structure, identify

an improved approximation

Slide52

Deleting an Equivalence Edge

[Figure: deleted equivalence edge between Xi and Xj]

Restore a weaker notion of equivalence: [Choi & Darwiche AAAI-06]

Slide53

Parametrizing Edges Iteratively: ED-BP

Iteration t = 0 (initialization)

Slide54

Parametrizing Edges Iteratively: ED-BP

Iteration t = 1

Slide55

Parametrizing Edges Iteratively: ED-BP

Iteration t = 2

Slide56

Parametrizing Edges Iteratively: ED-BP

Iteration t: convergence
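The flavor of this iteration can be sketched in a few lines. Below, a deleted equivalence edge leaves two independent halves, summarized by assumed potentials psi_i and psi_j over Xi and its clone Xj; unit factors theta_i and theta_j stand in for the deleted edge and are updated message-passing style, each side receiving the other side's marginal with its own unit factor divided out. This is an illustrative toy, not the exact ED-BP update equations from the paper:

psi_i = {0: 2.0, 1: 1.0}    # assumed: Xi's half of the split network
psi_j = {0: 1.0, 1: 3.0}    # assumed: Xj's half

theta_i = {0: 1.0, 1: 1.0}  # edge parameters, initialized uniformly (t = 0)
theta_j = {0: 1.0, 1: 1.0}

def marginal(psi, theta):
    w = {v: psi[v] * theta[v] for v in (0, 1)}
    z = sum(w.values())
    return {v: w[v] / z for v in (0, 1)}

for t in range(20):
    p_i, p_j = marginal(psi_i, theta_i), marginal(psi_j, theta_j)
    theta_i = {v: p_j[v] / theta_j[v] for v in (0, 1)}  # "message" from Xj's side
    theta_j = {v: p_i[v] / theta_i[v] for v in (0, 1)}  # "message" from Xi's side

p_i, p_j = marginal(psi_i, theta_i), marginal(psi_j, theta_j)
assert all(abs(p_i[v] - p_j[v]) < 1e-9 for v in (0, 1))  # weaker equivalence holds

On this two-piece example the fixed point also reproduces the exact marginal Pr(v) ∝ psi_i(v) · psi_j(v) that the equivalence constraint would have enforced, which is the tree-exactness the next slides discuss.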

Slide57

Belief Propagation as Edge Deletion

Iteration t

Slide58

Deleting Edges and Loopy Belief Propagation

BP in a network: BP is a disconnected approximation; BP is any polytree approximation.

[Choi & Darwiche AAAI-06]

Slide59

A New Semantics for Belief Propagation

ED-BP networks: [Choi & Darwiche AAAI-06]

Slide60

A New Semantics for Belief Propagation

Loopy BP marginals

ED-BP networks: [Choi & Darwiche AAAI-06]

Slide61

A New Semantics for Belief Propagation

Loopy BP marginals = exact inference in ED-BP networks

[Choi & Darwiche AAAI-06]

Slide62

Model + Eq

Relax

Compensate/Correct

Recover

Intractable model, augmented

with equivalence constraints

Simplify network structure:

Relax equivalence constraints

A

new semantics for BP

marginals

partition function

MAP

Ideal compensations

Recover structure, identify

an improved approximation

Slide63

Relaxing Equivalence Constraints

In original model (with eq. constraints):

Pr(x) = Z⁻¹ · ∏a ψa(xa) · ∏ij ψeq(Xi=xi, Xj=xj)

Z = ∑x ∏a ψa(xa) · ∏ij ψeq(Xi=xi, Xj=xj)

In relaxed model: Z0 = ∑x ∏a ψa(xa)

In compensated model: Z' = ∑x ∏a ψa(xa) · ∏ij θ(Xi=xi) θ(Xj=xj)
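A toy numeric rendering of the three quantities, with an assumed factor ψ(Xi, Xj) and arbitrary compensation parameters θ (illustration only):

import itertools

psi = {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 0.5, (1, 1): 2.0}  # assumed psi(Xi, Xj)
theta_i = {0: 0.8, 1: 1.2}                                   # assumed theta(Xi)
theta_j = {0: 1.1, 1: 0.9}                                   # assumed theta(Xj)

def total(weight):
    return sum(weight(i, j) for i, j in itertools.product((0, 1), repeat=2))

Z  = total(lambda i, j: psi[(i, j)] * (1.0 if i == j else 0.0))  # with psi_eq
Z0 = total(lambda i, j: psi[(i, j)])                             # relaxed
Zp = total(lambda i, j: psi[(i, j)] * theta_i[i] * theta_j[j])   # compensated
print(Z, Z0, Zp)   # 3.0 6.5 5.86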

Slide64

Deleting an Equivalence Edge

[Figure: deleted equivalence edge between Xi and Xj]

Restore a weaker notion of equivalence: [Choi & Darwiche UAI-08]

Slide65

An Easy Case: Delete a Single Edge

Prop.: If MI(Xi; Xj) = 0 in ED-BP network M', then: [equation shown on slide]

[Figure: deleted edge between Xi and Xj]

With multiple edges deleted (ZERO-EC): [equation shown on slide]

[Choi & Darwiche UAI-08]

Slide66

An Easy Case: Delete a Single Edge

Prop.: For any edge in ED-BP network M': [equation shown on slide]

[Figure: deleted edge between Xi and Xj]

With multiple edges deleted (GENERAL-EC): [equation shown on slide]

[Choi & Darwiche UAI-08]

Slide67

Bethe Free Energy is ZERO-EC

Bethe free energy approximation, as a partition function approximation: [equation shown on slide]

Theorem: The Bethe approximation is ZERO-EC when M' is a tree.

[Figure: model M and its tree approximation M']

[Choi & Darwiche UAI-08]

Slide68

Overview

[Diagram: tree gives exact marginals; zero-EC corresponds to LBP and the Bethe free energy; general-EC corresponds to IJGP and joingraph free energies; recovering edges moves toward exact marginals and exact Z]

Slide69

Edge Correction

[Plot: relative error (0 to 0.07) vs. edges recovered (0 to 25) on a 6x6 grid; curves: EC-Z,rand and EC-G,rand, with Bethe and exact Z as references]

Slide70

Model + Eq

Relax

Compensate

Recover

Intractable model, augmented

with equivalence constraints

Simplify network structure:

Relax equivalence constraints

A

new semantics for BP

marginals

partition function

MAP

Ideal compensations

Recover structure, identify

an improved approximation

Slide71

Compensating for Relaxations

[Choi & Darwiche NIPS-09]

Max-product belief propagation as compensation

New approximation based on ideal compensation: tighter upper bounds than a relaxation (empirically)

Slide72

Relaxing Equivalence Constraints

In original model (with eq. constraints):

MAP = maxx ∏a ψa(xa) · ∏ij ψeq(Xi=xi, Xj=xj)

In relaxed model:

r-MAP = maxx ∏a ψa(xa)

In compensated model:

c-MAP = maxx ∏a ψa(xa) · ∏ij θ(Xi=xi) θ(Xj=xj)

MAP ≤ c-MAP ≤ r-MAP ?
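The same toy setup renders the three MAP quantities; whether the sandwich MAP ≤ c-MAP ≤ r-MAP holds depends on how the θ parameters are chosen, which is exactly the question the slide poses (all numbers assumed):

import itertools

psi = {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 0.5, (1, 1): 2.0}  # assumed psi(Xi, Xj)
theta_i, theta_j = {0: 0.6, 1: 1.1}, {0: 0.7, 1: 1.2}        # assumed thetas

def best(weight):
    return max(weight(i, j) for i, j in itertools.product((0, 1), repeat=2))

MAP   = best(lambda i, j: psi[(i, j)] if i == j else 0.0)           # 2.0
r_MAP = best(lambda i, j: psi[(i, j)])                              # 3.0
c_MAP = best(lambda i, j: psi[(i, j)] * theta_i[i] * theta_j[j])    # 2.64
assert MAP <= c_MAP <= r_MAP   # holds for these particular thetas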

Slide73

Compensation: REC-BP

[Figure: deleted equivalence edge between Xi and Xj]

Recover a weaker notion of equivalence:

Intuition [REC-BP]: A compensation should be exact if a model is split into two independent sub-models.

[Choi & Darwiche NIPS-09]

Slide74

Deleting Edges and Loopy Belief Propagation

BP in a Bayesian network: BP is a disconnected approximation; BP is any polytree approximation.

[Choi & Darwiche NIPS-09]

Slide75

Model + Eq

Relax

Compensate

Recover

Intractable model, augmented

with equivalence constraints

Simplify network structure:

Relax equivalence constraints

A

new semantics for BP

marginals

partition function

MAP

Ideal compensations

Recover structure, identify

an improved approximation

Slide76

Relaxing Equivalence Constraints

In original model (with eq. constraints):

MAP = maxx ∏a ψa(xa) · ∏ij ψeq(Xi=xi, Xj=xj)

In relaxed model:

r-MAP = maxx ∏a ψa(xa)

In compensated model:

c-MAP = maxx ∏a ψa(xa) · ∏ij θ(Xi=xi) θ(Xj=xj)

MAP ≤ c-MAP ≤ r-MAP ? [Choi & Darwiche NIPS-09]

Slide77

Compensation: Idealized Case

Say we relax a single equivalence constraint…

A compensation has valid configurations if:

c-MAP(Xi=x) = c-MAP(Xj=x) = c-MAP(Xi=x, Xj=x)

A compensation has scaled values if:

log c-MAP(Xi=x, Xj=x) = κ · log MAP(Xi=x, Xj=x)

A compensation with valid configurations and scaled values is ideal: it is as good as having Xi ≡ Xj

[Choi & Darwiche NIPS-09]

Slide78

Compensation: REC-I

[Figure: deleted equivalence edge between Xi and Xj]

Recover a weaker notion of equivalence:

Intuition [REC-I]: A compensation should be ideal, i.e., have valid configurations and scaled values.

Proposition: If a compensation is ideal, then it recovers the following weaker notion of equivalence. [equation shown on slide]

[Choi & Darwiche NIPS-09]

Slide79

Properties

Proposition 1: For a single equivalence constraint relaxed: MAP ≤ c-MAP ≤ r-MAP

Theorem 1: For any k equivalence constraints relaxed in REC-I: MAP ≤ c-MAP

[Choi & Darwiche NIPS-09]

Slide80

Experiments

Set initial parameters so that:

c-MAP

=

r-MAP

Slide81

Iterative Dynamics

[Choi & Darwiche NIPS-09]

Slide82

Iterative Dynamics

[Choi & Darwiche NIPS-09]

Slide83

Iterative Dynamics

[Choi & Darwiche NIPS-09]

Slide84

Model + Eq

Relax

Compensate

Recover

Intractable model, augmented

with equivalence constraints

Simplify network structure:

Relax equivalence constraints

Compensate for relaxation:

Restore a weaker equivalence

Recover structure, identify

an improved approximation

Slide85

Which edges do we recover?

A minimal polytree.

A

maximal

polytree:

let

us rank all edges.

Slide86

Edge Recovery: ZERO-EC

Recover edges with largest MI(Xi; Xj)

[Figure: deleted edge between i and j, connecting sub-networks Mi and Mj]

[Choi & Darwiche UAI-08]
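A small sketch of the ranking heuristic: compute MI(Xi; Xj) for each deleted edge from its pairwise joint under the approximation, and recover the highest-MI edges first. The pairwise joints below are assumed toy tables, not the output of an actual ED-BP run:

import math

def mutual_information(joint):
    """joint maps (xi, xj) -> probability; values sum to 1."""
    pi = {v: sum(p for (a, _), p in joint.items() if a == v) for v in (0, 1)}
    pj = {v: sum(p for (_, b), p in joint.items() if b == v) for v in (0, 1)}
    return sum(p * math.log(p / (pi[a] * pj[b]))
               for (a, b), p in joint.items() if p > 0)

deleted_edges = {                  # assumed pairwise joints per deleted edge
    ('C1', 'C2'): {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45},
    ('E1', 'E2'): {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25},
}
ranking = sorted(deleted_edges,
                 key=lambda e: mutual_information(deleted_edges[e]),
                 reverse=True)     # recover the largest-MI edges first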

Slide87

Edge Recovery: GENERAL-EC

Recover edges with largest MI(Xi, Xj; Xs, Xt)

[Figure: deleted edges between i and j and between s and t, connecting sub-networks Mi and Mj]

[Choi & Darwiche UAI-08]

Slide88

Edge Recovery

[Plot, built up across Slides 88–92: relative error (0 to 0.07) vs. edges recovered (0 to 25) on a 6x6 grid; curves added in order: EC-Z,rand; EC-G,rand; EC-Z,MI; EC-G,MI; EC-G,MI2; with Bethe and exact Z as references]

[Choi & Darwiche UAI-08]

Slide93

Many-Pairs Mutual Information

[Figure: sub-networks around X and Y]

Computing mutual information can be expensive.

[Choi & Darwiche AAAI-08a]

Slide94

Soft d-Separation in Polytrees

Sequential valve W:

Theorem 1: MI(X; Y | z) ≤ ENT(W | z)

[Figure: chain X → W → Y]

[Choi & Darwiche AAAI-08a]
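The bound is easy to sanity-check numerically on a toy sequential chain X → W → Y (unconditional version, all CPT numbers assumed): by the data processing inequality, MI(X;Y) ≤ MI(X;W) ≤ ENT(W).

import itertools
import math

p_x = {0: 0.3, 1: 0.7}                                # assumed Pr(X)
p_w_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}    # assumed Pr(W|X)
p_y_w = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}    # assumed Pr(Y|W)

joint = {(x, w, y): p_x[x] * p_w_x[x][w] * p_y_w[w][y]
         for x, w, y in itertools.product((0, 1), repeat=3)}

p_w = {w: sum(p for (_, w2, _), p in joint.items() if w2 == w) for w in (0, 1)}
p_xy = {}
for (x, _, y), p in joint.items():
    p_xy[(x, y)] = p_xy.get((x, y), 0.0) + p
p_xm = {x: sum(p for (a, _), p in p_xy.items() if a == x) for x in (0, 1)}
p_ym = {y: sum(p for (_, b), p in p_xy.items() if b == y) for y in (0, 1)}

ent_w = -sum(p * math.log2(p) for p in p_w.values() if p > 0)
mi_xy = sum(p * math.log2(p / (p_xm[x] * p_ym[y]))
            for (x, y), p in p_xy.items() if p > 0)
assert mi_xy <= ent_w    # MI(X;Y) <= ENT(W), as the theorem states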

Slide95

Soft d-Separation in Polytrees

Divergent valve W:

Theorem 1: MI(X; Y | z) ≤ ENT(W | z)

[Figure: X ← W → Y]

[Choi & Darwiche AAAI-08a]

Slide96

Soft d-Separation in Polytrees

Convergent valve W:

Theorem 2: MI(X; Y | z) ≤ MI(N1; N2 | z)

[Figure: convergent valve W with neighbors N1 and N2 on the paths from X and Y]

[Choi & Darwiche AAAI-08a]

Slide97

Many-Pairs Mutual Information

MI can be expensive, even in polytrees.

Bayesian network: n variables, at most w parents, and s states per variable

One run of BP: O(n·s^w) time

Single pair, MI: O(s) runs of BP, O(s·n·s^w) time, using Pr(X,Y|z) = Pr(X|Y,z) Pr(Y|z)

Single pair, sd-sep: one run of BP, O(n + n·s^w) time

k pairs, MI: O(ks) runs of BP, O(ks·n·s^w) time

k pairs, sd-sep: one run of BP, O(kn + n·s^w) time

[Choi & Darwiche AAAI-08a]
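Plugging in assumed sizes makes the gap concrete (the numbers are illustrative, not from the talk):

n, w, s, k = 100, 3, 4, 50     # assumed: variables, max parents, states, pairs

bp_run     = n * s**w          # one run of BP: O(n * s^w)            -> 6400
mi_cost    = k * s * bp_run    # MI for k pairs: O(ks) runs of BP     -> 1280000
sdsep_cost = k * n + bp_run    # sd-sep for k pairs: one run + O(kn)  -> 11400
print(bp_run, mi_cost, sdsep_cost)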

Slide98

Empirical Results

Inference time with 0%, 10%, and 20% of edges recovered, plus the time to rank edges; # deleted and # params are per network; the final column is the sd-sep ranking speedup over MI ranking.

network   method   0%      10%      20%      rank time   speedup   # deleted   # params
barley    random   115ms   120ms    141ms    0ms                   37          130180
          MI               111ms    93ms     2999ms
          sd-sep           110ms    125ms    46ms        65.84x
diabetes  random   732ms   1103ms   1651ms   0ms                   190         461069
          MI               550ms    674ms    84604ms
          sd-sep           957ms    1639ms   132ms       641.99x
mildew    random   238ms   241ms    243ms    0ms                   12          547158
          MI               233ms    263ms    6661ms
          sd-sep           245ms    323ms    42ms        157.26x
munin1    random   13ms    14ms     22ms     0ms                   94          19466
          MI               12ms     10ms     680ms
          sd-sep           10ms     10ms     35ms        19.57x

[Choi & Darwiche AAAI-08a]

Slide99

Empirical Results

[Plot: average KL-error vs. edges recovered (0 to 152) on the pigs network; curves: random, true-MI, sd-sep]

[Choi & Darwiche AAAI-08a]

Slide100

Focusing Approximations

Different queries suggest recovery of different edges

[Choi & Darwiche AAAI-08b]

Slide101

Focusing Approximations

query node

Different queries suggest recovery of different edges

[Choi & Darwiche AAAI-08b]

Slide102

Focusing Approximations

query node

Different queries suggest recovery of different edges

[Choi & Darwiche AAAI-08b]

Slide103

Focusing Approximations

[Diagram: from loopy BP marginals toward exact inference as edges are recovered]

[Choi & Darwiche AAAI-08b]

Slide104

Model + Eq

Relax

Compensate

Recover

Intractable model, augmented

with equivalence constraints

Simplify network structure:

Relax equivalence constraints

Compensate for relaxation:

Restore a weaker equivalence

Recover structure, identify

an improved approximation

More…

Applied to Max-SAT

Tag-SNP selection

Inference Evaluation

Public implementation: SamIam

Slide105

An Application to Max-SAT

[Choi, Standley & Darwiche CP-09]: Compensate for Max-SAT relaxations

[Pipatsrisawat, Palyan, Chavira, Choi & Darwiche JSAT-09]: Relaxations of Max-SAT problems; depth-first branch-and-bound search

Slide106

Weighted Max-SAT

[Figure: network over nodes A–I]

(a ∨ b, w1), (¬a ∨ ¬b, w2), (b ∨ c, w3), (¬b ∨ ¬c, w4), (b ∨ ¬e, wa), (¬b ∨ e, wb), …

[Choi, Standley & Darwiche CP-09]

Slide107

Weighted Max-SAT: Equivalence Constraints

Relax an equivalence constraint:

(X ≡ Y, ∞) = {(x ∨ ¬y, ∞), (¬x ∨ y, ∞)}

Compensate with unit clauses:

{(x, wx), (¬x, w¬x), (y, wy), (¬y, w¬y)}

How do we set these new weights?

[Choi, Standley & Darwiche CP-09]
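As a data-structure sketch, the rewrite is mechanical: drop the two hard clauses that encode X ≡ Y and add the four weighted unit clauses. The representation and helper below are hypothetical (clauses as frozensets of signed integers), and the weights passed in are placeholders, since choosing them well is the slide's open question:

INF = float('inf')

def relax_equivalence(clauses, x, y, weights):
    """Replace (x v ~y, inf) and (~x v y, inf) with four weighted unit clauses."""
    eq = {frozenset({x, -y}), frozenset({-x, y})}
    kept = [(c, w) for c, w in clauses if not (c in eq and w == INF)]
    wx, wnx, wy, wny = weights    # placeholder weights; setting them is the question
    return kept + [(frozenset({x}), wx), (frozenset({-x}), wnx),
                   (frozenset({y}), wy), (frozenset({-y}), wny)]

cnf = [(frozenset({1, -2}), INF),   # x v ~y   } together these encode X == Y
       (frozenset({-1, 2}), INF),   # ~x v y   }
       (frozenset({1, 2}), 3.0)]    # an ordinary soft clause
relaxed = relax_equivalence(cnf, x=1, y=2, weights=(0.5, 0.5, 0.5, 0.5))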

Slide108

Weighted Max-SAT: Compensation

A compensation has valid configurations if:

G(x) = G(y) = G(x, y) and G(¬x) = G(¬y) = G(¬x, ¬y)

A compensation has scaled values if:

G(x, y) = κ · F(x, y) and G(¬x, ¬y) = κ · F(¬x, ¬y)

A compensation with valid configurations and scaled values is ideal: it is as good as having X ≡ Y

[Choi, Standley & Darwiche CP-09]

Slide109

Weighted Max-SAT: Experiments

Tighter upper-bounds: more efficient depth-first branch-and-bound search.

Compensation bounds embedded in Clone

[Choi, Standley & Darwiche CP-09]

Slide110

Application: Biology

[Choi, Zaitlen, Hahn, Pipatsrisawat, Darwiche & Eskin WABI-08]: Optimal tag SNP selection

(s1 ∨ s2), (s1 ∨ s2 ∨ s3), (s2 ∨ s3 ∨ s4 ∨ s5), …

Slide111

Inference Evaluations

UAI-06: participant

UCLA: only group to solve all models for all tasks

UAI-08: participant, co-organizer

ED-BP: leader in multiple benchmarks, for approx PRE and approx MAR tasks

Results presented at: UAI-08 conference and workshop; CP-08 workshop; NIPS-08 workshop

Slide112

Probabilistic Reasoning Evaluation of UAI’08

Evaluation Chairs: Adnan Darwiche (UCLA), Rina Dechter (UCI)

Student Organizers: Arthur Choi (UCLA), Vibhav Gogate (UCI), Lars Otten (UCI)

Special Thanks: Eleazar Eskin (UCLA)

Evaluation Committee: Fahiem Bacchus (UToronto), Jeff Bilmes (UW), Hector Geffner (UPF), Alexander Ihler (UCI), Joris Mooij (Radboud), Kevin Murphy (UBC)

Slide113

Probabilistic Reasoning Evaluation of UAI’08

Scope:

Probability of Evidence (partition function)

Most Probable Explanation (energy minimization)

Node Marginals

Evaluated exact and approximate inference on 1,181 Bayesian and Markov networks

26 solvers evaluated from 7 groups

Slide114

Approx PE Results (Binary)

[Plot: solver scores; a higher score means a faster solver]

Slide115

Workshop page: http://graphmod.ics.uci.edu/uai08/

Slide116

http://reasoning.cs.ucla.edu/samiam/

Slide117

Slide118

Slide119

Slide120

Slide121

Slide122

http://reasoning.cs.ucla.edu

Slide123

Publications

Arthur Choi, Hei Chan, and Adnan Darwiche. On Bayesian Network Approximation by Edge Deletion. In Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI), 2005.

Arthur Choi and Adnan Darwiche. An Edge Deletion Semantics for Belief Propagation and its Practical Impact on Approximation Quality. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI), 2006.

Arthur Choi and Adnan Darwiche. A Variational Approach for Approximating Bayesian Networks by Edge Deletion. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence (UAI), 2006.

Arthur Choi, Mark Chavira, and Adnan Darwiche. Node Splitting: A Scheme for Generating Upper Bounds in Bayesian Networks. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence (UAI), 2007.

Arthur Choi and Adnan Darwiche. Approximating the Partition Function by Deleting and then Correcting for Model Edges. In Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence (UAI), 2008.

Arthur Choi and Adnan Darwiche. Focusing Generalizations of Belief Propagation on Targeted Queries. In Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI), 2008.

Arthur Choi and Adnan Darwiche. Many-Pairs Mutual Information for Adding Structure to Belief Propagation Approximations. In Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI), 2008.

Arthur Choi and Adnan Darwiche. Approximating MAP by Compensating for Structural Relaxations. In Proceedings of the Twenty-Third Annual Conference on Neural Information Processing Systems (NIPS), 2009.

Arthur Choi, Trevor Standley, and Adnan Darwiche. Approximating Weighted Max-SAT Problems by Compensating for Relaxations. In Proceedings of the 15th International Conference on Principles and Practice of Constraint Programming (CP), 2009.

Arthur Choi, Noah Zaitlen, Buhm Hahn, Knot Pipatsrisawat, Adnan Darwiche, and Eleazar Eskin. Efficient Genome Wide Tagging by Reduction to SAT. In Proceedings of the 8th Workshop on Algorithms in Bioinformatics (WABI), 2008.

Knot Pipatsrisawat, Akop Palyan, Mark Chavira, Arthur Choi, and Adnan Darwiche. Solving Weighted Max-SAT Problems in a Reduced Search Space: A Performance Analysis. Journal on Satisfiability, Boolean Modeling and Computation (JSAT), 2008.

Slide124

Thanks …

David Allen, Kurt Angle, Omer Bar-or, Jeff Bergman, Keith Cascio, Hei Chan, Mark Chavira, Yang Chen, Alex Choy, Rina Dechter, Bailu Ding, Alex Dow, Eleazar Eskin, Kamron Farrokh, Buhm Han, Dan He, Jinbo Huang, Deepak Khosla, Robert Lee, Glen Lenker, Tsai-Ching Lu, Sam Luckenbill, JD Park, Akop Palyan, Knot Pipatsrisawat, Wojtek Przytula, Ethan Schreiber, Grace Shih, Trevor Standley, Sam Talaie, Alan Yuille, Yulia Zabiyaka, Noah Zaitlen

Committee members: Adam Meyerson, Demetri Terzopoulos, Alan Yuille, Jan de Leeuw

… and Adnan Darwiche