Presentation Transcript

Slide1

Beam Sampling for the Infinite Hidden Markov Model

Van Gael et al., ICML 2008

Presented by Daniel Johnson

Slide2

Introduction

Infinite Hidden Markov Model (iHMM): a nonparametric approach to the HMM
New inference algorithm for the iHMM
Comparison with the Gibbs sampling algorithm
Examples

Slide3

Hidden Markov Model (HMM)

Markov chain with finite state space {1, …, K}
Hidden state sequence: s = (s_1, s_2, …, s_T)
Transition probabilities: π_ij = p(s_t = j | s_{t-1} = i)
Observation sequence: y = (y_1, y_2, …, y_T)
Emission parameters ϕ_{s_t} such that p(y_t | s_t) = F(ϕ_{s_t})
Known: y, π, ϕ, F
Unknown: s
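To make the model concrete, here is a minimal generative sketch in Python, assuming unit-variance Gaussian emissions (so each ϕ_k is a state mean) and a uniform initial state; both choices are illustrative assumptions, not from the slides.

```python
import numpy as np

def sample_hmm(pi, phi, T, rng=None):
    """Sample hidden states s and observations y from a finite HMM.

    pi  : (K, K) transition matrix, pi[i, j] = p(s_t = j | s_{t-1} = i)
    phi : (K,) per-state emission parameters (here, Gaussian means)
    """
    rng = np.random.default_rng(rng)
    K = pi.shape[0]
    s = np.empty(T, dtype=int)
    y = np.empty(T)
    s[0] = rng.integers(K)                 # uniform initial state (an assumption)
    y[0] = rng.normal(phi[s[0]], 1.0)      # F(phi) = Normal(phi, 1) here
    for t in range(1, T):
        s[t] = rng.choice(K, p=pi[s[t - 1]])
        y[t] = rng.normal(phi[s[t]], 1.0)
    return s, y

s, y = sample_hmm(np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([-2.0, 2.0]), T=100)
```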

Slide4

Infinite Hidden Markov Model (iHMM)

Known: y, F
Unknown: s, π, ϕ, K
Strategy: use BNP priors to deal with the additional unknowns:
β | γ ~ GEM(γ)
π_k | α, β ~ DP(α, β)
ϕ_k | H ~ H
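A hedged sketch of how these priors can be simulated with a truncated stick-breaking construction; the truncation level K and the choice H = N(0, 1) are illustrative assumptions, not part of the slides.

```python
import numpy as np

def gem(gamma, K, rng):
    """Truncated stick-breaking draw beta ~ GEM(gamma), kept to K sticks."""
    v = rng.beta(1.0, gamma, size=K)
    sticks = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return sticks / sticks.sum()           # fold the truncated tail back in

rng = np.random.default_rng(0)
K = 20                                     # truncation level, illustration only
gamma, alpha = 1.0, 1.0
beta = gem(gamma, K, rng)                  # shared base measure over states
pi = rng.dirichlet(alpha * beta, size=K)   # row k approximates pi_k ~ DP(alpha, beta)
phi = rng.normal(0.0, 1.0, size=K)         # phi_k ~ H, with H = N(0, 1) here
```

Sharing β across the rows of π is what couples the states: every row concentrates its transition mass on the same common set of next states.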

Slide5

Gibbs Methods

Teh et al., 2006: marginalize out π, ϕ
Update the prediction for each s_t individually
Computation of O(TK) per sweep
Non-conjugacy handled in the standard way (Neal, 2000)
Drawback: potentially slow mixing, since each s_t is updated with its neighbors held fixed
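For intuition about the mixing drawback, the sketch below shows a single-site update for a finite HMM with π instantiated; in Teh et al. (2006) π is marginalized out, so these π factors become Chinese-restaurant-franchise count ratios. Because each s_t is resampled with its neighbors fixed, long runs of strongly correlated states can change only one position at a time.

```python
import numpy as np

def gibbs_update_site(s, y, pi, phi, t, rng):
    """Resample s_t from p(s_t | s_{t-1}, s_{t+1}, y_t), finite-K illustration."""
    K = pi.shape[0]
    p = np.exp(-0.5 * (y[t] - phi) ** 2)   # Gaussian emission, unit variance
    if t > 0:
        p = p * pi[s[t - 1], :]            # probability of entering each state
    if t + 1 < len(s):
        p = p * pi[:, s[t + 1]]            # probability of leaving to s_{t+1}
    p /= p.sum()
    s[t] = rng.choice(K, p=p)

# One sweep applies this at every t, giving the O(TK) cost per iteration.
```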

Slide6

Beam Sampler

Introduce auxiliary variables u
Conditioned on u, the number of possible trajectories is finite
Use a dynamic programming (filtering) algorithm
Avoid marginalizing out π, ϕ
Iteratively sample u, s, π, ϕ, β, α, γ
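Schematically, one sweep interleaves the steps above. The sketch below reuses the sample_slices and ffbs helpers sketched after the next two slides, reduces parameter resampling to conjugate updates for a truncated model, and omits the β, α, γ updates and the growing of K that the full sampler performs.

```python
import numpy as np

def resample_params(s, y, K, alpha, beta, rng):
    """Truncated conjugate updates: Dirichlet rows for pi, Normal means for phi
    (unit-variance likelihood with an N(0, 1) prior; both are assumptions)."""
    counts = np.zeros((K, K))
    for t in range(1, len(s)):
        counts[s[t - 1], s[t]] += 1
    pi = np.vstack([rng.dirichlet(alpha * beta + counts[k]) for k in range(K)])
    phi = np.empty(K)
    for k in range(K):
        yk = y[s == k]
        phi[k] = rng.normal(yk.sum() / (1 + len(yk)), (1 + len(yk)) ** -0.5)
    return pi, phi

def beam_sweep(s, y, pi, phi, alpha, beta, rng):
    u = sample_slices(s, pi, rng)          # slice variables, sketched below
    s = ffbs(y, u, pi, phi, rng)           # dynamic-programming resample of s
    pi, phi = resample_params(s, y, pi.shape[0], alpha, beta, rng)
    return s, pi, phi
```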

Slide7

Auxiliary Variable u

Sample each u_t ~ Uniform(0, π_{s_{t-1} s_t})
u acts as a threshold on π
Only trajectories with π_{s_{t-1} s_t} ≥ u_t are possible
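A minimal sketch of the slice step; the paper also slices the initial time step against the initial-state distribution, which this sketch skips by fixing u[0] = 0 (an assumption).

```python
import numpy as np

def sample_slices(s, pi, rng):
    """Draw u_t ~ Uniform(0, pi[s_{t-1}, s_t]) for t = 1, ..., T-1."""
    u = np.zeros(len(s))                   # u[0] = 0: initial slice omitted
    for t in range(1, len(s)):
        u[t] = rng.uniform(0.0, pi[s[t - 1], s[t]])
    return u

# Conditioned on u, a transition i -> j survives at step t only when
# pi[i, j] >= u[t], so the dynamic program sums over finitely many paths.
```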

Slide8

Forward-Backward Algorithm

Forward: compute p(s_t | y_{1:t}, u_{1:t}) for t = 1, …, T
Backward: compute p(s_t | s_{t+1}, y_{1:T}, u_{1:T}) and sample s_t for t = T, …, 1
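A sketch of the resulting forward filtering, backward sampling pass for a fixed truncated set of K states with unit-variance Gaussian emissions (both assumptions); the full sampler instead instantiates new states until every state's leftover transition mass falls below the smallest threshold in u.

```python
import numpy as np

def ffbs(y, u, pi, phi, rng):
    """Resample the whole trajectory s given the slice thresholds u."""
    T, K = len(y), pi.shape[0]
    lik = np.exp(-0.5 * (y[:, None] - phi[None, :]) ** 2)  # (T, K) emissions
    fwd = np.empty((T, K))
    fwd[0] = lik[0] / lik[0].sum()         # flat initial distribution (assumed)
    for t in range(1, T):
        allowed = (pi >= u[t]).astype(float)    # indicator replaces pi itself
        fwd[t] = lik[t] * (fwd[t - 1] @ allowed)
        fwd[t] /= fwd[t].sum()
    s = np.empty(T, dtype=int)
    s[T - 1] = rng.choice(K, p=fwd[T - 1])
    for t in range(T - 2, -1, -1):         # walk backwards, sampling each s_t
        p = fwd[t] * (pi[:, s[t + 1]] >= u[t + 1])
        p /= p.sum()
        s[t] = rng.choice(K, p=p)
    return s
```

Note that the transition probability itself drops out of the recursion: multiplying π by the Uniform(0, π) slice density leaves only the indicator π ≥ u_t, which is why whole trajectories can be resampled in one block.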

Slide9

Non-Sticky Example

Slide10

Sticky Example

Slide11

Example: Well Data

Slide12

Issues/Conclusions

Beam sampler is elegant and fairly straightforward

Beam sampler allows for bigger steps in the MCMC state space than the Gibbs method

Computational cost similar to Gibbs method

Potential for poor mixing

Bookkeeping can be complicated