Probabilistic Inference
Presentation Transcript

Slide 1: Probabilistic Inference

Slide 2: Agenda

Random variables
Bayes rule
Intro to Bayesian Networks

Slide 3: Cheat Sheet: Probability Distributions

Event: a boolean proposition that may or may not be true in a state of the world.

For any event x:
  P(x) ≥ 0 (axiom)
  P(¬x) = 1 − P(x)

For any events x, y:
  P(x ∧ y) = P(y ∧ x) (symmetry)
  P(x ∨ y) = P(y ∨ x) (symmetry)
  P(x ∨ y) = P(x) + P(y) − P(x ∧ y) (axiom)
  P(x) = P(x ∧ y) + P(x ∧ ¬y) (marginalization)

x and y are independent iff P(x ∧ y) = P(x) P(y). This is an explicit assumption.
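To make the cheat sheet concrete, here is a minimal sketch (not from the slides) that encodes a made-up joint distribution over two boolean events x and y and checks the identities above numerically; all four joint probabilities are illustrative assumptions.

```python
# Minimal check of the cheat-sheet identities on a toy joint distribution.
# The four joint probabilities below are made-up illustrative values.
joint = {
    (True, True): 0.30,    # P(x ∧ y)
    (True, False): 0.20,   # P(x ∧ ¬y)
    (False, True): 0.10,   # P(¬x ∧ y)
    (False, False): 0.40,  # P(¬x ∧ ¬y)
}

def P(pred):
    """Probability of an event, given as a predicate over (x, y) worlds."""
    return sum(p for world, p in joint.items() if pred(*world))

p_x = P(lambda x, y: x)
p_y = P(lambda x, y: y)

assert abs(P(lambda x, y: not x) - (1 - p_x)) < 1e-12                      # P(¬x) = 1 - P(x)
assert abs(P(lambda x, y: x or y) - (p_x + p_y - P(lambda x, y: x and y))) < 1e-12
assert abs(p_x - (joint[(True, True)] + joint[(True, False)])) < 1e-12      # marginalization
print("cheat-sheet identities hold for this toy distribution")
```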

Slide 4: Cheat Sheet: Conditional Distributions

P(x|y) reads "probability of x given y".
P(x) is equivalent to P(x|true).
In general, P(x|y) ≠ P(y|x).

P(x|y) = P(x ∧ y)/P(y) (definition)
P(x|y,z) = P(x,y|z)/P(y|z) (definition)
P(¬x|y) = 1 − P(x|y), i.e., P(•|y) is a probability distribution.

Slide 5: Random Variables

Slide 6: Random Variables

In a possible world, a random variable X can take on one of a set of values Val(X) = {x1, …, xn}.
Such an event is written 'X = x'.
Capital: random variable.
Lowercase: assignment of the variable to a value.
Truth assignments to boolean random variables may also be expressed as 'X' or '¬X'.

Slide 7: Notation with Random Variables

Capital letters A, B, C denote random variables.
Each random variable X can take one of a set of possible values x ∈ Val(X).
A boolean random variable has Val(X) = {True, False}.
Although the most unambiguous way of writing a probabilistic belief is over an event…
  P(X=x) = a number
  P(X=x ∧ Y=y) = a number
…it is tedious to list a large number of statements that hold for multiple values x and y.
Random variables allow using a shorthand notation. (Unfortunately this is a source of a lot of initial confusion!)

Slide 8: Decoding Probability Notation

Mental rule #1: lowercase assignments are often left implicit when unambiguous.
P(a) = P(A=a) = a number

Slide 9: Decoding Probability Notation (Boolean Variables)

P(X=True) is written P(X).
P(X=False) is written P(¬X).
[Since P(¬X) = 1 − P(X), knowing P(X) is enough to specify the whole distribution over X=True or X=False.]

Slide 10: Decoding Probability Notation

Mental rule #2: drop the AND, use commas.
P(a, b) = P(a ∧ b) = P(A=a ∧ B=b) = a number

Slide 11: Decoding Probability Notation

Mental rule #3: uppercase => values left implicit.
Suppose Val(X) = {1, 2, 3}.
When I write P(X), it states "the distribution defined over all of P(X=1), P(X=2), P(X=3)".
It is not a single number, but rather a set of numbers.
P(X) = [a probability table]

Slide 12: Decoding Probability Notation

P(A, B) = [P(A=a ∧ B=b) for all combinations of a ∈ Val(A), b ∈ Val(B)]
A probability table with |Val(A)| × |Val(B)| entries.
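One way to make mental rules #1 to #3 concrete is to store distributions as tables keyed by value assignments. The sketch below is illustrative only; the variable domains and numbers are made up, not taken from the slides.

```python
# Illustrative encoding of P(X) and P(A, B) as probability tables (dicts).
# Domains and numbers below are made-up examples.
Val_X = [1, 2, 3]
P_X = {1: 0.5, 2: 0.3, 3: 0.2}            # P(X): one number per value of X

Val_A = [True, False]
Val_B = ["red", "green", "blue"]
P_AB = {                                   # P(A, B): |Val(A)| x |Val(B)| = 6 entries
    (True, "red"): 0.10, (True, "green"): 0.25, (True, "blue"): 0.05,
    (False, "red"): 0.20, (False, "green"): 0.15, (False, "blue"): 0.25,
}

# Mental rule #1: P(a) is just a table lookup once a value is chosen.
print(P_X[2])                              # P(X=2)

# A statement like "P(A, B) = f(A, B)" means the equality holds entry-wise:
assert all((a, b) in P_AB for a in Val_A for b in Val_B)
assert abs(sum(P_AB.values()) - 1.0) < 1e-12   # a valid joint sums to 1
```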

Slide 13: Decoding Probability Notation

Mental rule #3: uppercase => values left implicit.
So when you see f(A,B) = g(A,B), this means:
  "f(a,b) = g(a,b) for all values of a ∈ Val(A) and b ∈ Val(B)"
f(A,B) = g(A) means:
  "f(a,b) = g(a) for all values of a ∈ Val(A) and b ∈ Val(B)"
f(A,b) = g(A,b) means:
  "f(a,b) = g(a,b) for all values of a ∈ Val(A)"
Order doesn't matter: P(A,B) is equivalent to P(B,A).

Slide 14: Another Mnemonic: Functional Equalities

P(X) is treated as a function over a variable X.
Operations and relations are on "function objects".
If you say f(x) = g(x) without a value of x, then you can infer f(x) = g(x) holds for all x.
Likewise, if you say f(x,y) = g(x) without stating a value of x or y, then you can infer f(x,y) = g(x) holds for all x, y.

Slide 15: Quiz: What does this mean?

P(A ∨ B) = P(A) + P(B) − P(A ∧ B)

P(A=a ∨ B=b) = P(A=a) + P(B=b) − P(A=a ∧ B=b)
for all a ∈ Val(A) and b ∈ Val(B)

Slide 16: Marginalization

If X, Y are boolean random variables that describe the state of the world, then
  P(X) = P(X, Y) + P(X, ¬Y)
This generalizes to multiple variables, e.g.
  P(X) = P(X, Y, Z) + P(X, ¬Y, Z) + P(X, Y, ¬Z) + P(X, ¬Y, ¬Z)
etc.

Slide 17: Marginalization

If X, Y are random variables:
  P(X) = Σy P(X, Y=y)
This generalizes to multiple variables, e.g.
  P(X) = Σy Σz P(X, Y=y, Z=z)
etc.

Slide 18: Decoding Probability Notation (Marginalization)

Mental rule #4: domains are usually implicit.
Suppose a belief state P(X,Y,Z) is defined over X, Y, and Z.
If I write P(X), I am implicitly marginalizing over Y and Z:
  P(X) = Σy Σz P(X, y, z)
which should be interpreted as
  P(X) = Σy Σz P(X ∧ Y=y ∧ Z=z)
which in turn should be interpreted as
  P(X=x) = Σy Σz P(X=x ∧ Y=y ∧ Z=z) for all x
By convention, y and z are summed over Val(Y) and Val(Z).
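A short sketch of mental rule #4: marginalizing a joint table P(X,Y,Z) down to P(X) by summing over the implicit variables. The joint table here is a made-up example, not one from the slides.

```python
from itertools import product
from collections import defaultdict

# Made-up joint distribution P(X, Y, Z) over three boolean variables.
vals = [True, False]
raw = [0.10, 0.05, 0.15, 0.10, 0.20, 0.05, 0.25, 0.10]
joint = {assignment: p for assignment, p in zip(product(vals, repeat=3), raw)}

def marginal_X(joint):
    """P(X) = sum over y, z of P(X, y, z); the domains of Y and Z are implicit."""
    P_X = defaultdict(float)
    for (x, y, z), p in joint.items():
        P_X[x] += p
    return dict(P_X)

print(marginal_X(joint))   # {True: 0.40, False: 0.60} for the numbers above
```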

Slide 19: Conditional Probability for Random Variables

P(A|B) is the posterior probability of A given knowledge of B:
"For each b ∈ Val(B): given that I know B=b, what would I believe is the distribution over A?"
If a new piece of information C arrives, the agent's new belief (if it obeys the rules of probability) should be P(A|B,C).

Slide 20: Conditional Probability for Random Variables

P(A,B) = P(A|B) P(B) = P(B|A) P(A)
P(A|B) is the posterior probability of A given knowledge of B.
Axiomatic definition: P(A|B) = P(A,B)/P(B)

Slide 21: Conditional Probability

P(A,B) = P(A|B) P(B) = P(B|A) P(A)
P(A,B,C) = P(A|B,C) P(B,C) = P(A|B,C) P(B|C) P(C)

P(Cavity) = Σt Σp P(Cavity, t, p)
          = Σt Σp P(Cavity|t, p) P(t, p)
          = Σt Σp P(Cavity|t, p) P(t|p) P(p)

Slide 22: Independence

Two random variables A and B are independent if P(A,B) = P(A) P(B), hence P(A|B) = P(A).
Knowing B doesn't give you any information about A.
[This equality has to hold for all combinations of values that A and B can take on.]

Slide 23: Remember: Probability Notation Leaves Values Implicit

P(A ∨ B) = P(A) + P(B) − P(A ∧ B) means
P(A=a ∨ B=b) = P(A=a) + P(B=b) − P(A=a ∧ B=b) for all a ∈ Val(A) and b ∈ Val(B).

A and B are random variables. A=a and B=b are events.
Random variables indicate many possible combinations of events.

Slide 24: Conditional Probability

P(A,B) = P(A|B) P(B) = P(B|A) P(A)
P(A|B) is the posterior probability of A given knowledge of B.
Axiomatic definition: P(A|B) = P(A,B)/P(B)

Slide 25: Probabilistic Inference

Slide 26: Conditional Distributions

State          P(state)
C, T, P        0.108
C, T, ¬P       0.012
C, ¬T, P       0.072
C, ¬T, ¬P      0.008
¬C, T, P       0.016
¬C, T, ¬P      0.064
¬C, ¬T, P      0.144
¬C, ¬T, ¬P     0.576

P(Cavity|Toothache) = P(Cavity ∧ Toothache)/P(Toothache)
                    = (0.108 + 0.012)/(0.108 + 0.012 + 0.016 + 0.064) = 0.6

Interpretation: after observing Toothache, the patient is no longer an "average" one, and the prior probability (0.2) of Cavity is no longer valid.
P(Cavity|Toothache) is calculated by keeping the ratios of the probabilities of the 4 cases of Toothache unchanged, and normalizing their sum to 1.
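The 0.6 above can be reproduced mechanically from the joint table. Here is a small sketch using the numbers on this slide; the tuple keys follow the slide's abbreviations C, T, P for Cavity, Toothache, PCatch.

```python
# Joint distribution over (Cavity, Toothache, PCatch) from the slide.
joint = {
    (True, True, True): 0.108,   (True, True, False): 0.012,
    (True, False, True): 0.072,  (True, False, False): 0.008,
    (False, True, True): 0.016,  (False, True, False): 0.064,
    (False, False, True): 0.144, (False, False, False): 0.576,
}

p_toothache = sum(p for (c, t, pc), p in joint.items() if t)
p_cavity_and_toothache = sum(p for (c, t, pc), p in joint.items() if c and t)

print(p_cavity_and_toothache / p_toothache)   # 0.12 / 0.20 ≈ 0.6
```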

Slide 27: Updating the Belief State

The patient walks in through the dentist's door.
Let D now observe evidence E: Toothache holds with probability 0.8 (e.g., "the patient says so").
How should D update its belief state?

State          P(state)
C, T, P        0.108
C, T, ¬P       0.012
C, ¬T, P       0.072
C, ¬T, ¬P      0.008
¬C, T, P       0.016
¬C, T, ¬P      0.064
¬C, ¬T, P      0.144
¬C, ¬T, ¬P     0.576

Slide 28: Updating the Belief State

P(Toothache|E) = 0.8
We want to compute P(C,T,P|E) = P(C,P|T,E) P(T|E).
Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence:
  P(C,P|T,E) = P(C,P|T)
  P(C,T,P|E) = P(C,P,T) P(T|E)/P(T)

State          P(state)
C, T, P        0.108
C, T, ¬P       0.012
C, ¬T, P       0.072
C, ¬T, ¬P      0.008
¬C, T, P       0.016
¬C, T, ¬P      0.064
¬C, ¬T, P      0.144
¬C, ¬T, ¬P     0.576

Slide 29: Updating the Belief State

P(Toothache|E) = 0.8
We want to compute P(C,T,P|E) = P(C,P|T,E) P(T|E).
Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence:
  P(C,P|T,E) = P(C,P|T)
  P(C,T,P|E) = P(C,P,T) P(T|E)/P(T)

State          P(state)
C, T, P        0.108
C, T, ¬P       0.012
C, ¬T, P       0.072
C, ¬T, ¬P      0.008
¬C, T, P       0.016
¬C, T, ¬P      0.064
¬C, ¬T, P      0.144
¬C, ¬T, ¬P     0.576

The rows with Toothache true should be scaled to sum to 0.8; the rows with Toothache false should be scaled to sum to 0.2.

Slide 30: Updating the Belief State

P(Toothache|E) = 0.8
We want to compute P(C,T,P|E) = P(C,P|T,E) P(T|E).
Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence:
  P(C,P|T,E) = P(C,P|T)
  P(C,T,P|E) = P(C,P,T) P(T|E)/P(T)

State          P(state)    P(state|E)
C, T, P        0.108       0.432
C, T, ¬P       0.012       0.048
C, ¬T, P       0.072       0.018
C, ¬T, ¬P      0.008       0.002
¬C, T, P       0.016       0.064
¬C, T, ¬P      0.064       0.256
¬C, ¬T, P      0.144       0.036
¬C, ¬T, ¬P     0.576       0.144

The rows with Toothache true have been scaled to sum to 0.8; the rows with Toothache false have been scaled to sum to 0.2.
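A sketch of the update rule P(state|E) = P(state) · P(T|E)/P(T), applied separately to the Toothache and ¬Toothache rows; it reproduces the scaled column above (0.432, 0.048, …).

```python
# Soft-evidence update: scale each row by P(T|E)/P(T) or P(¬T|E)/P(¬T).
joint = {
    (True, True, True): 0.108,   (True, True, False): 0.012,
    (True, False, True): 0.072,  (True, False, False): 0.008,
    (False, True, True): 0.016,  (False, True, False): 0.064,
    (False, False, True): 0.144, (False, False, False): 0.576,
}
p_T_given_E = 0.8                                            # evidence: Toothache holds with prob 0.8
p_T = sum(p for (c, t, pc), p in joint.items() if t)         # prior P(Toothache) = 0.2

posterior = {}
for (c, t, pc), p in joint.items():
    scale = p_T_given_E / p_T if t else (1 - p_T_given_E) / (1 - p_T)
    posterior[(c, t, pc)] = p * scale

for state, p in posterior.items():
    print(state, round(p, 3))   # 0.432, 0.048, 0.018, 0.002, 0.064, 0.256, 0.036, 0.144
```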

Slide 31: Issues

If a state is described by n propositions, then a belief state contains 2^n states (possibly, some have probability 0).
Modeling difficulty: many numbers must be entered in the first place.
Computational issue: memory size and time.

Slide 32: Independence of Events

Two events A=a and B=b are independent if P(A=a ∧ B=b) = P(A=a) P(B=b), hence P(A=a|B=b) = P(A=a).
Knowing B=b doesn't give you any information about whether A=a is true.

Slide 33: Independence of Random Variables

Two random variables A and B are independent if P(A,B) = P(A) P(B), hence P(A|B) = P(A).
Knowing B doesn't give you any information about A.
[This equality has to hold for all combinations of values that A and B can take on, i.e., all events A=a and B=b are independent.]

Slide 34: Significance of Independence

If A and B are independent, then P(A,B) = P(A) P(B)
=> The joint distribution over A and B can be defined as a product of the distribution of A and the distribution of B.
Rather than storing a big probability table over all combinations of A and B, store two much smaller probability tables!
To compute P(A=a ∧ B=b), just look up P(A=a) and P(B=b) in the individual tables and multiply them together.
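A small sketch of the storage saving: for two variables assumed independent, with made-up marginals, the |Val(A)| + |Val(B)| numbers below are enough to answer any joint query.

```python
# Factored storage for two independent variables (made-up marginals).
P_A = {"sunny": 0.7, "rainy": 0.3}                 # |Val(A)| numbers
P_B = {"win": 0.1, "lose": 0.6, "draw": 0.3}       # |Val(B)| numbers

def joint(a, b):
    """P(A=a ∧ B=b) under the independence assumption: just multiply."""
    return P_A[a] * P_B[b]

print(joint("sunny", "draw"))   # 0.7 * 0.3 = 0.21
# 5 stored numbers instead of the |Val(A)| x |Val(B)| = 6 entries of a full joint table.
```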

Slide 35: Conditional Independence

Two random variables A and B are conditionally independent given C if P(A,B|C) = P(A|C) P(B|C), hence P(A|B,C) = P(A|C).
Once you know C, learning B doesn't give you any information about A.
[Again, this has to hold for all combinations of values that A, B, C can take on.]

Slide 36: Significance of Conditional Independence

Consider Rainy, Thunder, and RoadsSlippery.
Ostensibly, thunder doesn't have anything directly to do with slippery roads…
But they happen together more often when it rains, so they are not independent…
So it is reasonable to believe that Thunder and RoadsSlippery are conditionally independent given Rainy.
So if I want to estimate whether or not I will hear thunder, I don't need to think about the state of the roads, just whether or not it's raining!

Slide 37

Toothache and PCatch are independent given Cavity, but this relation is hidden in the numbers! [Quiz]

Bayesian networks explicitly represent independence among propositions to reduce the number of probabilities defining a belief state.

State          P(state)
C, T, P        0.108
C, T, ¬P       0.012
C, ¬T, P       0.072
C, ¬T, ¬P      0.008
¬C, T, P       0.016
¬C, T, ¬P      0.064
¬C, ¬T, P      0.144
¬C, ¬T, ¬P     0.576

Slide 38: Bayesian Network

Notice that Cavity is the "cause" of both Toothache and PCatch; represent the causality links explicitly.
Give the prior probability distribution of Cavity.
Give the conditional probability tables of Toothache and PCatch.

Network: Cavity → Toothache, Cavity → PCatch

P(Cavity) = 0.2

           Cavity   ¬Cavity
P(T|C)     0.6      0.1
P(P|C)     0.9      0.2

5 probabilities, instead of 7.

P(C,T,P) = P(T,P|C) P(C) = P(T|C) P(P|C) P(C)
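A sketch of the factored joint P(C,T,P) = P(T|C) P(P|C) P(C) using the CPT numbers above; with P(PCatch|¬Cavity) = 0.2, the value consistent with the joint table on Slide 26, it reproduces all eight rows of that table, e.g. 0.2 · 0.6 · 0.9 = 0.108.

```python
from itertools import product

# CPTs from the slide: 5 numbers define the whole joint over (Cavity, Toothache, PCatch).
p_cavity = 0.2
p_toothache_given = {True: 0.6, False: 0.1}   # P(Toothache=True | Cavity)
p_pcatch_given = {True: 0.9, False: 0.2}      # P(PCatch=True | Cavity)

def bernoulli(p_true, value):
    return p_true if value else 1.0 - p_true

def joint(c, t, p):
    """P(C,T,P) = P(T|C) P(P|C) P(C) under the network's independence assumptions."""
    return (bernoulli(p_toothache_given[c], t)
            * bernoulli(p_pcatch_given[c], p)
            * bernoulli(p_cavity, c))

for c, t, p in product([True, False], repeat=3):
    print(c, t, p, round(joint(c, t, p), 3))
# Reproduces 0.108, 0.012, 0.072, 0.008, 0.016, 0.064, 0.144, 0.576.
```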

Slide 39: Conditional Probability Tables

Network: Cavity → Toothache, Cavity → PCatch

P(Cavity) = 0.2

           Cavity   ¬Cavity
P(T|C)     0.6      0.1
P(¬T|C)    0.4      0.9

           Cavity   ¬Cavity
P(P|C)     0.9      0.2

Columns sum to 1.
If X takes n values, just store n − 1 entries.

P(C,T,P) = P(T,P|C) P(C) = P(T|C) P(P|C) P(C)

Slide 40: Significance of Conditional Independence

Consider Grade(CS101), Intelligence, and SAT.
Ostensibly, the grade in a course doesn't have a direct relationship with SAT scores…
…but good students are more likely to get good SAT scores, so they are not independent…
It is reasonable to believe that Grade(CS101) and SAT are conditionally independent given Intelligence.

Slide 41: Bayesian Network

Explicitly represent independence among propositions.
Notice that Intelligence is the "cause" of both Grade and SAT, and the causality is represented explicitly.

Network: Intelligence → Grade, Intelligence → SAT

P(I=x)
high   0.3
low    0.7

P(G=x|I)   I=low   I=high
'A'        0.2     0.74
'B'        0.34    0.17
'C'        0.46    0.09

P(S=x|I)   I=low   I=high
low        0.95    0.2
high       0.05    0.8

7 probabilities, instead of 11.

P(I,G,S) = P(G,S|I) P(I) = P(G|I) P(S|I) P(I)

Slide 42: Significance of Bayesian Networks

If we know that variables are conditionally independent, we should be able to decompose the joint distribution to take advantage of it.
Bayesian networks are a way of efficiently factoring the joint distribution into conditional probabilities,
and also of building complex joint distributions from smaller models of probabilistic relationships.
But…
What knowledge does the BN encode about the distribution?
How do we use a BN to compute probabilities of variables that we are interested in?

Slide 43: A More Complex BN

Network: Burglary → Alarm, Earthquake → Alarm, Alarm → JohnCalls, Alarm → MaryCalls
(causes at the top, effects at the bottom)

Directed acyclic graph.
Intuitive meaning of an arc from x to y: "x has direct influence on y".

Slide 44: A More Complex BN

Network: Burglary → Alarm, Earthquake → Alarm, Alarm → JohnCalls, Alarm → MaryCalls

P(B) = 0.001        P(E) = 0.002

B     E     P(A|B,E)
T     T     0.95
T     F     0.94
F     T     0.29
F     F     0.001

A     P(J|A)          A     P(M|A)
T     0.90            T     0.70
F     0.05            F     0.01

Size of the CPT for a node with k parents: 2^k
10 probabilities, instead of 31.

Slide 45: What does the BN encode?

(Network: Burglary → Alarm ← Earthquake; Alarm → JohnCalls, Alarm → MaryCalls)

Each of the beliefs JohnCalls and MaryCalls is independent of Burglary and Earthquake given Alarm or ¬Alarm.
For example, John does not observe any burglaries directly:
  P(B ∧ J) ≠ P(B) P(J)
  P(B ∧ J|A) = P(B|A) P(J|A)

Slide 46: What does the BN encode?

The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm.
For instance, the reasons why John and Mary may not call if there is an alarm are unrelated.
  P(B ∧ J|A) = P(B|A) P(J|A)
  P(J ∧ M|A) = P(J|A) P(M|A)

A node is independent of its non-descendants given its parents.

Slide 47: What does the BN encode?

A node is independent of its non-descendants given its parents.
Burglary and Earthquake are independent.
The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm.
For instance, the reasons why John and Mary may not call if there is an alarm are unrelated.

Slide 48: Locally Structured World

A world is locally structured (or sparse) if each of its components interacts directly with relatively few other components.
In a sparse world, the CPTs are small and the BN contains far fewer probabilities than the full joint distribution.
If the number of entries in each CPT is bounded by a constant, i.e., O(1), then the number of probabilities in a BN is linear in n, the number of propositions, instead of 2^n for the joint distribution.

Slide 49: Equations Involving Random Variables Give Rise to Causality Relationships

C = A ∨ B
C = max(A, B)
Constrains the joint probability P(A,B,C).
Nicely encoded as a causality relationship: A → C ← B.
Conditional probability given by the equation rather than a CPT.
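A sketch of such a deterministic node: instead of a CPT, P(C|A,B) is given by the equation C = A ∨ B, so the conditional probability is 1 when the equation holds and 0 otherwise. The priors for A and B are made-up numbers, and A and B are assumed independent here.

```python
# Deterministic node C = A or B: the "CPT" is defined by an equation.
p_A, p_B = 0.3, 0.5           # made-up priors; A and B assumed independent

def p_C_given(a, b, c):
    """P(C=c | A=a, B=b) is 1 if c == (a or b), else 0."""
    return 1.0 if c == (a or b) else 0.0

# Joint P(A, B, C) = P(A) P(B) P(C | A, B); marginalize to get P(C=True).
p_c_true = sum(
    (p_A if a else 1 - p_A) * (p_B if b else 1 - p_B) * p_C_given(a, b, True)
    for a in (True, False) for b in (True, False)
)
print(p_c_true)   # 1 - (1 - 0.3) * (1 - 0.5) = 0.65
```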

Slide 50: Naïve Bayes Models

P(Cause, Effect1, …, Effectn) = P(Cause) Πi P(Effecti | Cause)

Network: Cause → Effect1, Cause → Effect2, …, Cause → Effectn

Bayes’ Rule and other Probability Manipulations

P(A

,

B) = P(A|B) P(B)

= P(B|A) P(A)

P(A|B) = P(B|A) P(A) / P(B)

Gives us a way to manipulate distributions

e.g. P(B) =

S

a

P(B|A=a) P(A=a)

Can derive P(A|B), P(B) using only P(B|A) and P(A)Slide52
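A sketch of the manipulation above: starting only from P(A) and P(B|A) (made-up numbers for a toy diagnosis problem), compute P(B) by total probability and then P(A=a|B) by Bayes' rule.

```python
# Made-up model: A in {flu, cold, healthy}, B = fever (boolean).
P_A = {"flu": 0.05, "cold": 0.20, "healthy": 0.75}
P_B_given_A = {"flu": 0.90, "cold": 0.40, "healthy": 0.05}   # P(fever | A=a)

# P(B) = sum over a of P(B | A=a) P(A=a)   (total probability)
p_fever = sum(P_B_given_A[a] * P_A[a] for a in P_A)

# Bayes' rule: P(A=a | B) = P(B | A=a) P(A=a) / P(B)
posterior = {a: P_B_given_A[a] * P_A[a] / p_fever for a in P_A}

print(round(p_fever, 4))                        # 0.1625
print({a: round(p, 3) for a, p in posterior.items()})
```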

Slide 52: Naïve Bayes Classifier

P(Class, Feature1, …, Featuren) = P(Class) Πi P(Featurei | Class)

Network: Class → Feature1, Class → Feature2, …, Class → Featuren

P(C|F1, …, Fk) = P(C, F1, …, Fk)/P(F1, …, Fk) = (1/Z) P(C) Πi P(Fi|C)

Given the features, what class?
  Spam / Not Spam
  English / French / Latin
Features: word occurrences.
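A minimal spam-classification sketch under the naive Bayes factorization above. The class prior and word probabilities are made-up illustrative values, and Z is handled implicitly by normalizing at the end.

```python
# Naive Bayes: P(C | observed words) is proportional to P(C) * product of P(word | C).
# All probabilities below are made-up illustrative values.
prior = {"spam": 0.4, "ham": 0.6}
p_word_given_class = {
    "spam": {"offer": 0.30, "meeting": 0.02, "free": 0.25},
    "ham":  {"offer": 0.05, "meeting": 0.20, "free": 0.04},
}

def classify(words):
    """Return the normalized posterior P(class | words)."""
    scores = {}
    for c in prior:
        score = prior[c]
        for w in words:
            score *= p_word_given_class[c][w]   # P(word | class)
        scores[c] = score
    z = sum(scores.values())                    # Z = P(words) under the model
    return {c: s / z for c, s in scores.items()}

print(classify(["offer", "free"]))              # spam gets most of the posterior mass
```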

Slide 53: But does a BN represent a belief state?

In other words, can we compute the full joint distribution of the propositions from it?

Slide 54: Calculation of Joint Probability

Network: Burglary → Alarm, Earthquake → Alarm, Alarm → JohnCalls, Alarm → MaryCalls

P(B) = 0.001        P(E) = 0.002

B     E     P(A|B,E)
T     T     0.95
T     F     0.94
F     T     0.29
F     F     0.001

A     P(J|A)          A     P(M|A)
T     0.90            T     0.70
F     0.05            F     0.01

P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = ??

Slide 55

P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
  = P(J ∧ M | A, ¬B, ¬E) P(A ∧ ¬B ∧ ¬E)
  = P(J | A, ¬B, ¬E) P(M | A, ¬B, ¬E) P(A ∧ ¬B ∧ ¬E)    (J and M are independent given A)
P(J | A, ¬B, ¬E) = P(J | A)    (J and B, and J and E, are independent given A)
P(M | A, ¬B, ¬E) = P(M | A)
P(A ∧ ¬B ∧ ¬E) = P(A | ¬B, ¬E) P(¬B | ¬E) P(¬E) = P(A | ¬B, ¬E) P(¬B) P(¬E)    (B and E are independent)
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J | A) P(M | A) P(A | ¬B, ¬E) P(¬B) P(¬E)

Slide 56: Calculation of Joint Probability

[Network and CPTs as on Slide 54]

P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
  = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
  = 0.9 × 0.7 × 0.001 × 0.999 × 0.998
  = 0.00062

Slide 57: Calculation of Joint Probability

[Network and CPTs as on Slide 54]

P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
  = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
  = 0.9 × 0.7 × 0.001 × 0.999 × 0.998
  = 0.00062

In general, for the full joint distribution table:
P(x1 ∧ x2 ∧ … ∧ xn) = Πi=1,…,n P(xi | parents(Xi))
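A sketch of the product formula P(x1, …, xn) = Πi P(xi | parents(Xi)) for the burglary network, using the CPT numbers from Slide 54; it reproduces the 0.00062 computed above.

```python
# Burglary network CPTs from the slides; dict keys are variable values.
P_B = {True: 0.001, False: 0.999}
P_E = {True: 0.002, False: 0.998}
P_A_given_BE = {(True, True): 0.95, (True, False): 0.94,
                (False, True): 0.29, (False, False): 0.001}
P_J_given_A = {True: 0.90, False: 0.05}
P_M_given_A = {True: 0.70, False: 0.01}

def bern(p_true, value):
    return p_true if value else 1.0 - p_true

def joint(b, e, a, j, m):
    """P(b, e, a, j, m) = P(b) P(e) P(a|b,e) P(j|a) P(m|a)."""
    return (P_B[b] * P_E[e]
            * bern(P_A_given_BE[(b, e)], a)
            * bern(P_J_given_A[a], j)
            * bern(P_M_given_A[a], m))

# P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
print(joint(b=False, e=False, a=True, j=True, m=True))   # ≈ 0.00062
```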

Slide 58: Calculation of Joint Probability

[Network and CPTs as on Slide 54]

P(x1 ∧ x2 ∧ … ∧ xn) = Πi=1,…,n P(xi | parents(Xi))

P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
  = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
  = 0.9 × 0.7 × 0.001 × 0.999 × 0.998
  = 0.00062

Since a BN defines the full joint distribution of a set of propositions, it represents a belief state.

Slide 59: Homework

Read R&N 14.1-14.3.