Slide 1: Wireless and Mobile Systems for the IoT
Nirupam Roy
M-W 2:00-3:15pm, CHM 1224
CMSC 715: Fall 2021
Lecture 3.1: Machine Learning for IoT
Slide 2: Happy or sad?
Slide 3: Happy or sad?
Slide 4: Happy or sad?
Slide 5: Happy or sad?
Past experience: P(The dolphin is happy | Experience)
Slide 6: Inference from sensor data
Tracking arm motion

Slide 7: Inference from sensor data
Tracking arm motion: partial information from sensors

Slide 8: Inference from sensor data
Tracking arm motion: partial information from sensors + knowledge about the arm's motion (probabilistic models)

Slide 9: Inference from sensor data
Tracking arm motion: partial information from sensors + knowledge about the arm's motion (probabilistic models) → arm's gesture and movement tracking
Slides 10-11: Sensors are not perfect

Slide 12: The physical world → statistical models, learning, perception → meaningful information

Slide 13: The physical world → statistical models, learning, perception → meaningful information
Sensor data are neither perfect nor adequate, and inherently probabilistic in nature
Slide 14: Probability refresher
Slide15A few basic probability rules
Likelihood of multiple events occurring simultaneously
Joint probability: P (
A
,
B
) = P (B
, A)
Conditional probability: P (
A
|B) = P(A
,
B)/P(B
)
Probability of an event, given another event has occurred
Marginal probability: P (
B
) =
A few basic probability rules
Likelihood of multiple events occurring simultaneously
Joint probability: P (
A
,
B
) = P (B
, A)
Conditional probability: P (
A
|B) = P(A
,
B)/P(B
)
Probability of an event, given another event has occurred
Marginal probability: P (
B
) =
A1
A2
A3
B
Slide17A few basic probability rules
Likelihood of multiple events occurring simultaneously
Joint probability: P (
A
,
B
) = P (B
, A)
Conditional probability: P (
A
|B) = P(A
,
B)/P(B
)
Probability of an event, given another event has occurred
Marginal probability: P (
B
) =
Chain rule: P (
A
,
B
,
C
) = P(
A
|
B
,
C
) P(
B
,
C
) = P(
A
|
B
,
C
) P(
B
|
C
) P(
C
)
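These rules can be checked numerically on a tiny joint distribution. A minimal sketch in Python; the probability values below are made up purely for illustration:

```python
# Hypothetical joint distribution over two binary events A and B,
# stored as P(A=a, B=b). Values are illustrative only.
joint = {
    (True, True): 0.2,
    (True, False): 0.3,
    (False, True): 0.1,
    (False, False): 0.4,
}

def marginal_B(b):
    # Marginal probability: P(B=b) = sum over a of P(A=a, B=b)
    return sum(p for (a, bb), p in joint.items() if bb == b)

def cond_A_given_B(a, b):
    # Conditional probability: P(A=a | B=b) = P(A=a, B=b) / P(B=b)
    return joint[(a, b)] / marginal_B(b)

p_B = marginal_B(True)                     # 0.2 + 0.1 = 0.3
p_A_given_B = cond_A_given_B(True, True)   # 0.2 / 0.3
```

Summing the conditional times the marginal recovers the joint entry, which is exactly the chain rule for two events.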
Slides 18-22: Bayes rule
Bayes rule: P(A_i | B) = P(B | A_i) P(A_i) / P(B) = P(B | A_i) P(A_i) / Σ_j P(B | A_j) P(A_j)
Posterior = Likelihood × Prior / Evidence
Bayes rule relates inverse representations of the probabilities concerning two events.
Slides 23-25: Bayes rule: View 1
Events: Healthy = A1, Diabetes = A2, Cancer = A3, Smoker = B
P(A_i | B) = P(B | A_i) P(A_i) / Σ_j P(B | A_j) P(A_j)
Posterior P(A_i | B), likelihood P(B | A_i), prior P(A_i): Bayes rule relates inverse representations of the probabilities concerning the two events.
Slides 26-28: Bayes rule: View 2
Events: Cancer = A, Not cancer = ~A, Smoker = B
P(A | B) = P(B | A) P(A) / [P(B | A) P(A) + P(B | ~A) P(~A)]
Posterior P(A | B), likelihood P(B | A), prior P(A): Bayes rule updates the belief about A, based on the evidence B.
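As a quick sanity check of View 2, the posterior P(cancer | smoker) can be computed from the inverse representation. A minimal sketch; every probability value below is hypothetical:

```python
# Bayes rule, View 2: P(cancer | smoker) from P(smoker | cancer).
# All numbers are hypothetical, chosen only to illustrate the update.
p_cancer = 0.01                      # prior P(A)
p_not_cancer = 1 - p_cancer          # P(~A)
p_smoker_given_cancer = 0.5          # likelihood P(B | A)
p_smoker_given_not = 0.2             # P(B | ~A)

# Evidence by marginalization: P(B) = P(B|A)P(A) + P(B|~A)P(~A)
p_smoker = (p_smoker_given_cancer * p_cancer
            + p_smoker_given_not * p_not_cancer)

# Posterior: P(A | B) = P(B | A) P(A) / P(B)
p_cancer_given_smoker = p_smoker_given_cancer * p_cancer / p_smoker
```

With these (made-up) numbers the evidence "smoker" raises the belief in cancer above its prior, which is the belief-update reading of the rule.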
Slide 29: Happy or sad?

Slide 30: Happy or sad?
Given the "sequence" of tricky questions asked today, what should be the answer?
Slide 31: Markov Model

Slide 32: Example sequence: Sunny, Sunny, Rainy, Sunny, Rainy, Rainy at times t1, t2, t3, t4, t5, t6

Slide 33: Two states: Sunny day and Rainy day

Slide 34: Transition probabilities: P(sunny after sunny), P(rainy after sunny), P(sunny after rainy), P(rainy after rainy)
Mth-order Markov assumption: the current state depends on the past M states

Slides 35-36: 1st-order Markov assumption: the future depends on the present only, not on the past

Slide 37: Each state also produces observations (Temp., Humidity, Wind) at every time step
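Under the 1st-order assumption, the probability of an entire weather sequence factors as P(s1) Π_t P(s_t | s_{t-1}). A minimal sketch of the sunny/rainy chain; the initial and transition probabilities are hypothetical values chosen for illustration:

```python
# First-order Markov chain for the sunny/rainy example.
# All probability values are hypothetical.
P_init = {"sunny": 0.5, "rainy": 0.5}
P_trans = {  # P_trans[prev][nxt] = P(nxt | prev)
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def sequence_probability(seq):
    """P(s1, ..., sT) = P(s1) * prod over t of P(s_t | s_{t-1})."""
    p = P_init[seq[0]]
    for prev, nxt in zip(seq, seq[1:]):
        p *= P_trans[prev][nxt]
    return p

# The lecture's sequence at t1..t6:
p = sequence_probability(["sunny", "sunny", "rainy", "sunny", "rainy", "rainy"])
```

Note that only adjacent pairs enter the product: that is precisely what "the future depends on the present only" buys us.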
Slide 38: Hidden Markov Model

Slide 39: Toy robot localization example
Find the location of the robot (hidden information); candidate locations S1, S2, S3, ...
Observations = sensor measurements

Slides 40-46: [Figure: probability (0 to 1) assigned to each grid cell at t = 0 through t = 5; the belief starts uniform and sharpens as sensor measurements arrive]
Slide 47: Toy robot localization example: state = location on the grid = S_i, for states S1, S2, S3, ...

Slide 48: Observations = sensor measurement = M_i

Slide 49: The observation M_i depends on the current state S_i only (Emission)

Slide 50: States change over time: S_i → S_i+1 → S_i+2 → ... (Transition)
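With these two ingredients, the robot's belief can be tracked recursively: predict with the transition model, then correct with the emission likelihood. A minimal 1-D sketch; the motion and sensor models below are hypothetical:

```python
# Recursive belief update for a toy robot on a 1-D grid of N cells.
# Motion model (transition) and sensor model (emission) are hypothetical.
N = 5
belief = [1.0 / N] * N            # t = 0: uniform, location unknown

def predict(belief, p_stay=0.2, p_right=0.8):
    """Transition step: the robot stays put or moves one cell right
    (wrapping around at the grid edge)."""
    new = [0.0] * N
    for i, b in enumerate(belief):
        new[i] += p_stay * b
        new[(i + 1) % N] += p_right * b
    return new

def correct(belief, likelihood):
    """Emission step: weight each cell by P(measurement | cell),
    then normalize so the belief sums to 1."""
    weighted = [b * l for b, l in zip(belief, likelihood)]
    total = sum(weighted)
    return [w / total for w in weighted]

# One time step: the sensor reading strongly suggests cell 2.
belief = predict(belief)
belief = correct(belief, likelihood=[0.1, 0.1, 0.9, 0.1, 0.1])
```

Repeating predict/correct over many steps is what concentrates the initially uniform belief onto the robot's true cell.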
Slides 51-53: Hidden Markov Model: Definition
Hidden states: S1 → S2 → S3 → S4
Observations: M1, M2, M3, M4, one emitted from each state

Slide 54: Hidden Markov Model: Definition
Emission: S1 → M1; Transition: S1 → S2
Slide 55: Hidden Markov Model: Definition
1st-order Markov assumption: the transition probability depends on the current state only.
Output independence assumption: the output/emission probability depends on the current state only.
Emission: S1 → M1; Transition: S1 → S2
Slides 56-58: Hidden Markov Model: Definition
Goal: the probability of a sequence of hidden states, given a sequence of observations:
P(S_1, ..., S_T | M_1, ..., M_T) ∝ P(M_1, ..., M_T | S_1, ..., S_T) P(S_1, ..., S_T)

Slide 59: The chain rule expands the joint probability of the states and observations.

Slide 60: Each observation depends on the current state only:
P(M_1, ..., M_T | S_1, ..., S_T) = Π_t P(M_t | S_t)

Slide 61: Each future state depends on the current state only:
P(S_1, ..., S_T) = P(S_1) Π_t P(S_t | S_t-1)

Slide 62: For N hidden states and a sequence of T observations, there are N^T different state-sequence combinations, so brute-force enumeration quickly becomes intractable.
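The N^T blow-up is avoided by dynamic programming: the forward algorithm computes the observation likelihood in O(N²T). A minimal sketch with a hypothetical two-state model, checked here against brute-force enumeration over all state sequences:

```python
# Forward algorithm: P(M_1..M_T) in O(N^2 * T) instead of summing over
# all N^T hidden-state sequences. The two-state model is hypothetical.
import itertools

P_init  = [0.6, 0.4]                   # P(S_1 = i)
P_trans = [[0.7, 0.3], [0.4, 0.6]]     # P(S_t = j | S_{t-1} = i)
P_emit  = [[0.9, 0.1], [0.2, 0.8]]     # P(M_t = m | S_t = i)

def forward_likelihood(obs):
    # alpha[i] = P(M_1..M_t, S_t = i), updated one observation at a time
    alpha = [P_init[i] * P_emit[i][obs[0]] for i in range(2)]
    for m in obs[1:]:
        alpha = [sum(alpha[i] * P_trans[i][j] for i in range(2)) * P_emit[j][m]
                 for j in range(2)]
    return sum(alpha)

def brute_force_likelihood(obs):
    """Sum over all N^T hidden-state sequences (for checking only)."""
    total = 0.0
    for seq in itertools.product(range(2), repeat=len(obs)):
        p = P_init[seq[0]] * P_emit[seq[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= P_trans[seq[t - 1]][seq[t]] * P_emit[seq[t]][obs[t]]
        total += p
    return total

obs = [0, 1, 0]
```

Both routines compute the same quantity; only the forward recursion scales to long sequences.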
Problems solved by an HMM:
1) Likelihood: determine the likelihood of an observation sequence. [Forward-backward algorithm]
2) Decoding: given an observation sequence, determine the best sequence of hidden states. [Viterbi algorithm]
3) Learning: given an observation sequence and a sequence of states, learn the HMM parameters: i) transition probabilities, ii) emission probabilities. [Baum-Welch algorithm]