Lecture 3 Math & Probability Background - PowerPoint Presentation


Presentation Transcript

Slide1

Lecture 3

Math & Probability Background

Ch. 1-2 of Machine Vision by Wesley E. Snyder & Hairong Qi

Slide2

General notes about the book

The book is an overview of many concepts

Top quality design requires:
Reading the cited literature
Reading more literature
Experimentation & validation

Slide3

Two themes

Consistency

A conceptual tool implemented in many/most algorithms
Often must fuse information from many local measurements and prior knowledge to make global conclusions about the image

Optimization

Mathematical mechanism

The “workhorse” of machine vision

Slide4

Image Processing Topics

Enhancement

Coding
Compression
Restoration
“Fix” an image

Requires model of image degradation

Reconstruction

Slide5

Machine Vision Topics

AKA:

Computer vision
Image analysis
Image understanding

Pattern recognition:

Measurement of features

Features characterize the image, or some part of it

Pattern classification

Requires knowledge about the possible classes

[Diagram] Our Focus: Original Image → Feature Extraction → Classification & Further Analysis
CNN: Convolutional Neural Network
FCN: Fully Connected (Neural) Network
…or “another” CNN, e.g. a U-Net decoder section

Slide6

Feature measurement

[Flowchart] Stages: Original Image, Noise removal, Restoration, Segmentation, Shape Analysis, Consistency Analysis, Matching, Features
Chapter labels on the diagram: Ch. 6-7, Ch. 8, Ch. 9, Ch. 10-11, Ch. 12-16; one block is marked “Varies Greatly”

Slide7

Probability

Probability of an event a occurring: Pr(a)

Independence
Pr(a) does not depend on the outcome of event b, and vice-versa

Joint probability
Pr(a, b) = Prob. of both a and b occurring

Conditional probability
Pr(a | b) = Prob. of a if we already know the outcome of event b
Read “probability of a given b”
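Not stated on the slide, but worth keeping in mind: the product rule ties these definitions together, Pr(a, b) = Pr(a | b) Pr(b), and independence is exactly the case Pr(a, b) = Pr(a) Pr(b), which makes Pr(a | b) = Pr(a).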

Slide8

Probability for continuously-valued functions

Probability distribution function: P(x) = Pr(z < x)

Probability density function: p(x) = dP(x)/dx
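As a quick numerical illustration of the relationship above (my own sketch, not from the slides; it assumes NumPy and SciPy are available), the density can be recovered by differentiating the distribution function:

import numpy as np
from scipy.stats import norm

x = np.linspace(-4, 4, 2001)
P = norm.cdf(x)                  # distribution function P(x) = Pr(z < x) for a standard normal
p_numeric = np.gradient(P, x)    # numerical derivative dP/dx
p_exact = norm.pdf(x)            # closed-form density

print(np.max(np.abs(p_numeric - p_exact)))   # tiny, confirming p(x) = dP(x)/dx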

Slide9

Linear algebra

Unit vector: |x| = 1

Orthogonal vectors: xTy = 0

Orthonormal: orthogonal unit vectors

Inner product of continuous functions: <f, g> = ∫ f(x) g(x) dx

Orthogonality & orthonormality apply here too
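A tiny NumPy check (my own illustration) of these definitions, including the continuous-function inner product approximated on a grid:

import numpy as np

x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
print(np.linalg.norm(x))    # 1.0 -> x is a unit vector
print(x @ y)                # 0.0 -> x and y are orthogonal (hence orthonormal)

t = np.linspace(0.0, 2.0 * np.pi, 10001)
print(np.trapz(np.sin(t) * np.cos(t), t))   # ~0: sin and cos are orthogonal on [0, 2*pi]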

Slide10

Linear independence

No one vector is a linear combination of the others

xj ≠ Σ ai xi for any ai, across all i ≠ j

Any linearly independent set of d vectors {xi, i = 1…d} is a basis set that spans the d-dimensional space ℜd
Any other vector in ℜd may be written as a linear combination of {xi}
Often convenient to use orthonormal basis sets

Projection: if y = Σ ai xi, then ai = yTxi (for an orthonormal basis)
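A short NumPy sketch (my own example) of this projection property, recovering the coefficients of y in an orthonormal basis as ai = yTxi:

import numpy as np

theta = 0.3
X = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # columns form an orthonormal basis of R^3

y = np.array([2.0, -1.0, 0.5])
a = X.T @ y             # a_i = y^T x_i for each basis vector x_i (column of X)
y_rebuilt = X @ a       # y = sum_i a_i x_i
print(np.allclose(y, y_rebuilt))   # True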

Slide11

Linear transforms

= a matrix, denoted e.g. A

Quadratic form: xTAx (a scalar)

Positive definite: applies to A if xTAx > 0 for every vector x ≠ 0
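A small NumPy check (my own sketch) of positive definiteness for a symmetric matrix, using the standard fact that A is positive definite exactly when all of its eigenvalues are positive:

import numpy as np

A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])        # symmetric matrix

print(np.linalg.eigvalsh(A))        # [1. 3.] -> all positive, so A is positive definite

x = np.array([0.7, -0.3])
print(x @ A @ x > 0)                # True: the quadratic form xTAx is positive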

Slide12

More derivatives

Of a scalar function of x: the gradient, ∇f = [∂f/∂x1, …, ∂f/∂xd]T
Really important!

Of a vector function of x: called the Jacobian, the matrix of partial derivatives ∂fi/∂xj

Hessian = matrix of 2nd derivatives of a scalar function, Hij = ∂²f/∂xi∂xj
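A minimal finite-difference sketch (my own illustration, not from the book) of the gradient of a scalar function of a vector:

import numpy as np

def f(x):
    return x[0]**2 + 3.0 * x[0] * x[1] + 2.0 * x[1]**2   # scalar function of a vector

def gradient(f, x, h=1e-5):
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)   # central difference for df/dx_i
    return g

print(gradient(f, np.array([1.0, -1.0])))   # ~[-1., -1.]; exact gradient is [2*x0 + 3*x1, 3*x0 + 4*x1]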

Slide13

Misc. linear algebra

Derivative operators

Eigenvalues & eigenvectors
Translate to the “most important vectors”
Of a linear transform (e.g., the matrix A)

Characteristic equation: Ax = λx
A maps x onto itself with only a change in length
λ is an eigenvalue
x is its corresponding eigenvector
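A quick NumPy check (my own example) of the characteristic equation for one eigenpair:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eig(A)               # eigenvalues and eigenvectors (columns of V)
x = V[:, 0]
print(np.allclose(A @ x, lam[0] * x))   # True: A x = lambda x, so x only changes in length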

Slide14

Function minimization

Find the vector x which produces a minimum of some function f(x)
x is a parameter vector
f(x) is a scalar function of x
The “objective function”

The minimum value of f is denoted: min_x f(x)
The minimizing value of x is denoted: argmin_x f(x)
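As a concrete illustration (my own sketch using SciPy's general-purpose minimizer, not a method from the book):

import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2   # objective function of a parameter vector x
res = minimize(f, x0=np.zeros(2))
print(res.x)     # minimizing value of x, approximately [1., -2.]
print(res.fun)   # minimum value of f, approximately 0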

Slide15

Numerical minimization

Gradient descent

The derivative points away from the minimum
Take small steps, each one in the “down-hill” direction

Local vs. global minima
Combinatorial optimization: use simulated annealing
Image optimization: use mean field annealing

More recent improvements to gradient descent: momentum, changing step size
Training CNNs: gradient descent with momentum, or else Adam
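A minimal gradient-descent-with-momentum sketch (my own illustration; the step size and momentum values are arbitrary, not taken from the book):

import numpy as np

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])   # gradient of (x0-1)^2 + (x1+2)^2

x = np.zeros(2)
v = np.zeros(2)
lr, momentum = 0.1, 0.9
for _ in range(200):
    v = momentum * v - lr * grad_f(x)   # accumulate velocity from past gradients
    x = x + v                           # step in the "down-hill" direction
print(x)   # close to the minimizer [1., -2.]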

Slide16

Markov models

For temporal processes:

The probability of something happening is dependent on a thing that just recently happened.

For spatial processes:
The probability of something being in a certain state is dependent on the state of something nearby.

Example: The value of a pixel is dependent on the values of its neighboring pixels.

Slide17

Markov chain

Simplest Markov model

Example: symbols transmitted one at a time
What is the probability that the next symbol will be w?

For a “simple” (i.e. first-order) Markov chain:
The probability conditioned on all of history is identical to the probability conditioned on the last symbol received:
Pr(wn | wn-1, wn-2, …, w1) = Pr(wn | wn-1)
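A tiny simulation sketch (my own example; the two-symbol transition probabilities are made up) of such a first-order chain:

import numpy as np

P = np.array([[0.9, 0.1],    # Pr(next symbol | current symbol = 0)
              [0.4, 0.6]])   # Pr(next symbol | current symbol = 1)

rng = np.random.default_rng(0)
state, chain = 0, []
for _ in range(10000):
    state = rng.choice(2, p=P[state])   # the next symbol depends only on the current one
    chain.append(state)

print(np.mean(np.array(chain) == 0))    # ~0.8, the long-run fraction of symbol 0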

Slide18

Hidden Markov models (HMMs)

[Diagram] Two hidden processes, a 1st Markov process and a 2nd Markov process; the observed output is f(t)

Slide19

HMM switching

Governed by a finite state machine (FSM)

[Diagram] FSM states: “Output 1st Process” and “Output 2nd Process”

Slide20

The HMM Task

Given only the output f(t), determine:

The most likely state sequence of the switching FSM
Use the Viterbi algorithm (much better than brute force)
Computational complexity:
Viterbi: (# state values)^2 * (# state changes)
Brute force: (# state values)^(# state changes)

The parameters of each hidden Markov model
Use the iterative process in the book
Better, use someone else’s debugged code that they’ve shared
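A compact Viterbi sketch (my own illustration; the two-state model and its probabilities are invented) showing where the (# state values)^2 * (# state changes) cost comes from: each step scores every (previous state, next state) pair once:

import numpy as np

# Hypothetical 2-state HMM: transition, emission, and initial probabilities
trans = np.array([[0.8, 0.2],
                  [0.3, 0.7]])   # trans[i, j] = Pr(next state j | current state i)
emit  = np.array([[0.9, 0.1],
                  [0.2, 0.8]])   # emit[i, k]  = Pr(observe symbol k | state i)
start = np.array([0.5, 0.5])
obs = [0, 0, 1, 1, 1, 0]         # observed output sequence

logp = np.log(start) + np.log(emit[:, obs[0]])   # best log-prob ending in each state at t = 0
back = []                                        # backpointers
for o in obs[1:]:
    cand = logp[:, None] + np.log(trans)         # score all (previous state, next state) pairs
    back.append(cand.argmax(axis=0))             # best predecessor for each next state
    logp = cand.max(axis=0) + np.log(emit[:, o])

state = int(logp.argmax())                       # trace back the most likely state sequence
path = [state]
for bp in reversed(back):
    state = int(bp[state])
    path.append(state)
print(path[::-1])   # [0, 0, 1, 1, 1, 0]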
