Presentation Transcript

Slide 1

Christine Lew
Dheyani Malde
Everardo Uribe
Yifan Zhang
Supervisors: Ernie Esser, Yifei Lou

BARCODE RECOGNITION TEAM

Slide 2

UPC Barcode
What type of barcode? What is a barcode? What is its structure?
Our barcode representation: a vector of 0s and 1s.

Slide 3

Mathematical Representation
Barcode distortion is modeled as b = k*u + n, where u is the clean barcode signal, k is the blur kernel, * denotes convolution, and n is noise.

What is convolution?
Every value in the blurred signal is given by the same combination of nearby values in the original signal, and the kernel determines these combinations.

Kernel
For our case, the blur kernel k, or point spread function, is assumed to be a Gaussian.

Noise
The noise we deal with is white Gaussian noise.
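A minimal sketch of this distortion model in NumPy; the sample grid, kernel width, and noise level are illustrative assumptions, not values from the slides:

import numpy as np

# Toy barcode: a vector of 0s and 1s, each bar 10 samples wide.
u = np.repeat(np.random.randint(0, 2, 15), 10).astype(float)

# Normalized Gaussian blur kernel (point spread function).
t = np.arange(-10, 11)
sigma_blur = 2.0
k = np.exp(-t**2 / (2 * sigma_blur**2))
k /= k.sum()

# Distortion model b = k*u + n with white Gaussian noise n.
sigma_noise = 0.05
b = np.convolve(u, k, mode="same") + np.random.normal(0, sigma_noise, u.size)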

Slide 4

0.2 Standard Deviation [figure]

Slide 5

0.5 Standard Deviation [figure]

Slide 6

0.9 Standard Deviation [figure]

Slide 7

Deconvolution
What is deconvolution? It is solving for the clean barcode signal u.
Difference between non-blind deconvolution and blind deconvolution:
Non-blind deconvolution: we know how the signal was blurred, i.e., we assume k is known.
Blind deconvolution: we may know some or no information about how the signal was blurred. Very difficult.

Slide 8

Simple Methods of Deconvolution
Thresholding: converting the signal to a binary signal by checking whether the amplitude at each point is closer to 0 or 1 and rounding to the closer value.
Wiener filter: a classical method of reconstructing a signal after distortion, using known knowledge of the kernel and noise.
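Thresholding in this sense is a one-liner; a minimal sketch:

import numpy as np

def threshold(signal):
    # Round each sample to whichever of 0 and 1 it is closer to.
    return (np.asarray(signal) > 0.5).astype(float)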

Slide 9

Wiener Filter
We have: b = k*x + n. The Wiener filter solves for an estimate of the original signal x.
The filter is easily described in the frequency domain. The Wiener filter defines a filter g such that x̂ = g*b, where x̂ is the estimated original signal. In the frequency domain,

G = conj(K) / (|K|^2 + r),

where K is the Fourier transform of the kernel and r is the noise-to-signal power ratio. Note that if there is no noise, r = 0, and G reduces to 1/K, the inverse filter.
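A minimal sketch of this frequency-domain filter, assuming the kernel k is known and r is a hand-chosen noise-to-signal ratio:

import numpy as np

def wiener_deconvolve(b, k, r=0.01):
    # Zero-pad the kernel to the signal length and center its peak at
    # index 0 so the filter introduces no shift.
    k_pad = np.zeros(len(b))
    k_pad[:len(k)] = k
    k_pad = np.roll(k_pad, -(len(k) // 2))
    K = np.fft.fft(k_pad)
    B = np.fft.fft(b)
    # G = conj(K) / (|K|^2 + r); with r = 0 this is the inverse filter 1/K.
    G = np.conj(K) / (np.abs(K) ** 2 + r)
    return np.real(np.fft.ifft(G * B))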

Slide 10

0.7 Standard Deviation, 0.05 Sigma Noise [figure]

Slide 11

0.7 Standard Deviation, 0.2 Sigma Noise [figure]

Slide 12

0.7 Standard Deviation, 0.5 Sigma Noise [figure]

Slide 13

Non-blind Deblurring using Yu Mao's Method
By: Christine Lew, Dheyani Malde

Slide 14

Overview
2 general approaches:
- Yifei (blind: we don't know the blur kernel)
- Yu Mao (non-blind: we know the blur kernel)
General goal:
- Take a blurry barcode with noise and make it as clear as possible through gradient projection.
- Find the method with the best results and the least error.

Slide 15

Data Model
The method's goal is to solve the convex model

minimize ||k*u - b||^2 over u in [0,1]

k: blur kernel
u: clear barcode
b: blurry barcode with noise, b = k*u + noise
Find the minimum through gradient projection: exactly like gradient descent, only we project onto [0,1] every iteration. Once we find the minimizing u, we can predict the clear signal (a sketch follows below).
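A minimal sketch of gradient projection for this model; the step size and iteration count are illustrative assumptions:

import numpy as np

def gradient_projection(b, k, steps=500, lr=0.5):
    u = np.clip(b.copy(), 0, 1)  # initialize from the observation
    k_flip = k[::-1]             # adjoint of convolution = convolution with the flipped kernel
    for _ in range(steps):
        residual = np.convolve(u, k, mode="same") - b
        grad = np.convolve(residual, k_flip, mode="same")  # gradient of ||k*u - b||^2 (up to a constant)
        u = np.clip(u - lr * grad, 0, 1)  # gradient step, then project onto [0,1]
    return u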

Slide 16

Classical Method
Compare with the Wiener filter in terms of error rate.
Error rate: the difference between the reconstructed signal and the ground truth.

Slide 17

Comparisons for Yu Mao's Method
[figures: Yu Mao's Gradient Projection, Wiener Filter]

Slide 18

Comparisons for Yu Mao's Method (Cont.)
[figures: Wiener Filter, Yu Mao's Gradient Projection]

Slide 19

Jumps
How does the number of jumps affect the result?
What happens if we apply different numbers of jumps to the different methods of de-blurring?
Compared Yu Mao's method & the Wiener Filter.
Wrote code to calculate the number of jumps.
3 levels of jumps:
Easy: 4 jumps
Medium: 22 jumps
Hard: 45 jumps (regular barcode)

Slide 20

Wrote code to calculate the number of jumps (a sketch follows below). A jump is when the binary signal goes from 0 to 1 or from 1 to 0.
3 levels of jumps: easy (4 jumps), medium (22 jumps), hard (45 jumps, a regular barcode).
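A minimal sketch of such a jump counter, assuming the signal is already a 0/1 vector:

import numpy as np

def count_jumps(u):
    # A jump is a transition from 0 to 1 or 1 to 0 between neighboring samples.
    u = np.asarray(u)
    return int(np.sum(u[1:] != u[:-1]))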

Slide 21

How does the number of jumps affect the result (clear barcode)? Compare Yu Mao's method & the Wiener Filter.

Slide 22

Comparison for Small Jumps (4 jumps)
[figures: Yu Mao's Gradient Projection, Wiener Filter]

Slide 23

Comparison for Medium Jumps (22 jumps)
[figures: Yu Mao's Gradient Projection, Wiener Filter]

Slide 24

Comparison for Hard Jumps (45 jumps)
[figures: Wiener Filter, Yu Mao's Gradient Projection]

Slide 25

Wiener Filter with Varying Jumps
- More jumps, greater error
- Gets drastically worse with more jumps

Slide 26

Yu Mao's Gradient Projection with Varying Jumps
- More jumps, greater error
- Gets only slightly worse with more jumps

Slide 27

Conclusion
Yu Mao's method is better overall: it produces less error across the jump cases, with a consistent error rate of 20%-30%.
The Wiener filter did not have a consistent error rate: it was consistent only for small/medium jumps; at 45 jumps, the error rate was 40%-50%.

Slide 28

Blind Deconvolution
Yifan Zhang, Everardo Uribe

Slide 29

Derivation of Model
We have: y = k*x + n, where y is the observed signal, k the kernel, x the clean barcode, and n the noise.
For our approach, we assume that k, the kernel, is a symmetric point-spread function. Since it is symmetric, flipping it produces an equivalent kernel. We flip the entire equation and begin reconfiguring it; Y and N are matrix representations of y and n.

Slide 30

Derivation of Model
Signal Segmentation & Final Equation:
The middle bars are always the same, represented as the vector [0 1 0 1 0] in our case.
We have to solve for x in the resulting linear system.

Slide 31

Gradient Projection
Projection of gradient descent (a first-order optimization method).
Advantage:
Allows us to set a range on the solution
Disadvantages:
Takes a very long time
Not extremely accurate results
Underestimates the signal

Slide 32 [figure]
Slide 33

Least Squares
Estimates the unknown parameters
Minimizes the sum of squared errors
Considers observational errors in the right-hand side only
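A minimal least-squares sketch; C is the matrix named on the Total Least Squares slide, while d is a hypothetical name for the right-hand side and the data here is random filler:

import numpy as np

# Solve C x ≈ d in the least-squares sense (errors assumed only in d).
C = np.random.randn(20, 5)
d = np.random.randn(20)
x, *_ = np.linalg.lstsq(C, d, rcond=None)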

Slide 34

Least Squares (cont.)
Advantages:
Returns results faster than the other methods
Easy to implement
Reasonably accurate results
Great results for low and high noise
Disadvantage:
Doesn't work well when there are errors in C

Slide 35 [figure]

Slide 36

Total Least Squares
A least-squares approach to data modeling
Also considers errors in C
Uses the SVD of C (Singular Value Decomposition, a matrix factorization)
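A minimal SVD-based TLS sketch, reusing the hypothetical C and d from the least-squares example (valid when the last component of the chosen singular vector is nonzero):

import numpy as np

def total_least_squares(C, d):
    # Stack [C | d] and take the right singular vector belonging to the
    # smallest singular value; TLS allows errors in both C and d.
    Z = np.column_stack([C, d])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]
    return -v[:-1] / v[-1]  # normalize so the last component is -1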

Slide 37

Total Least Squares (Cont.)
Advantages:
Works on data for which the other methods do not
Better than least squares when there are more errors in C
Disadvantages:
Doesn't work well for most data not in the extremities
Overfits the data
Not accurate
Takes a long time

Slide 38