
Slide1

Advanced Training Techniques

Prajit Ramachandran

Slide2

Outline

Optimization

Regularization

Initialization

Slide3

Optimization

Slide4

Optimization Outline

Gradient Descent

Momentum

RMSProp

Adam

Distributed SGD

Gradient Noise

Slide5

Optimization Outline

Gradient Descent

Momentum

RMSProp

Adam

Distributed SGD

Gradient Noise

Slide6

http://weandthecolor.com/wp-content/uploads/2014/11/Hills-in-the-fog-Landscape-photography-by-Kilian-Sch%C3%B6nberger-a-photographer-from-Cologne-Germany.jpg

Slide7

Gradient Descent

Goal: optimize parameters to minimize loss

Step along the direction of steepest descent (negative gradient)
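A minimal sketch of this update in NumPy (not from the slides; loss_grad stands for a hypothetical function that returns the gradient of the loss at the current parameters):

```python
import numpy as np

def gradient_descent(params, loss_grad, lr=0.1, steps=100):
    """Plain gradient descent: repeatedly step along the negative gradient."""
    for _ in range(steps):
        grad = loss_grad(params)      # gradient of the loss at the current parameters
        params = params - lr * grad   # move in the direction of steepest descent
    return params

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3)
x_min = gradient_descent(np.array([0.0]), lambda x: 2 * (x - 3), lr=0.1)
```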

Slide8

Gradient Descent

Andrew Ng’s Machine Learning Course

Slide9

https://www.cs.cmu.edu/~ggordon/10725-F12/slides/05-gd-revisited.pdf

Slide10

2nd order Taylor series approximation around x

Slide11

Wikipedia

Hessian measures curvature

Slide12

Slide13

Approximate Hessian with scaled identity

Slide14

Slide15

Set gradient of function to 0 to get minimum

Slide16

Take the gradient and set it to 0

Slide17

Solve for y

Slide18

Same equation!
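For reference, a reconstruction of the derivation these slides sketch (following the cited CMU notes): take the second-order Taylor approximation, replace the Hessian with the scaled identity (1/t) I, set the gradient with respect to y to zero, and solve:

```latex
f(y) \;\approx\; f(x) + \nabla f(x)^\top (y - x) + \tfrac{1}{2t}\,\lVert y - x \rVert_2^2
\quad\text{(Hessian approximated by } \tfrac{1}{t} I\text{)}

\nabla_y\!\left[\, f(x) + \nabla f(x)^\top (y - x) + \tfrac{1}{2t}\,\lVert y - x \rVert_2^2 \,\right]
= \nabla f(x) + \tfrac{1}{t}(y - x) = 0

\Longrightarrow\; y = x - t\,\nabla f(x)
\quad\text{(the gradient descent update, with step size } t\text{)}
```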

Slide19

Computing the gradient

Use backpropagation to compute gradients efficiently

Need a differentiable function

Can’t use non-differentiable functions like argmax or hard binary thresholds

Unless using a different way to compute gradients

Slide20

Stochastic Gradient Descent

Gradient over entire dataset is impractical

Better to take quick, noisy steps

Estimate gradient over a mini-batch of examples
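A rough sketch of mini-batch SGD (grad_fn is a hypothetical function that returns the gradient estimated on one mini-batch):

```python
import numpy as np

def sgd(params, grad_fn, X, Y, lr=0.01, batch_size=32, epochs=1):
    """Take quick, noisy steps using gradients estimated on mini-batches."""
    n = X.shape[0]
    for _ in range(epochs):
        perm = np.random.permutation(n)             # shuffle the dataset each epoch
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            grad = grad_fn(params, X[idx], Y[idx])  # noisy estimate of the full gradient
            params = params - lr * grad
    return params
```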

Slide21

Mini-batch tips

Use as large a batch as possible

Increasing batch size on GPU is essentially free up to a point

Crank up learning rate when increasing batch size

Trick: use small batches for small datasets

Slide22

How to pick the learning rate?

Slide23

Too big learning rate

https://www.cs.cmu.edu/~ggordon/10725-F12/slides/05-gd-revisited.pdf

Slide24

Too small learning rate

https://www.cs.cmu.edu/~ggordon/10725-F12/slides/05-gd-revisited.pdf

Slide25

How to pick the learning rate?

Too big = diverge, too small = slow convergence

No “one learning rate to rule them all”

Start from a high value and keep halving it if the model diverges

Learning rate schedule: decay learning rate over time
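As one illustration of a schedule, a simple step decay (the halving factor and interval here are arbitrary assumptions, not values from the slides):

```python
def step_decay(base_lr, step, factor=0.5, every=10000):
    """Decay the learning rate over time: cut it by `factor` every `every` updates."""
    return base_lr * (factor ** (step // every))

# e.g. step_decay(0.1, 25000) == 0.025
```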

Slide26

http://cs231n.github.io/assets/nn3/learningrates.jpeg

Slide27

Optimization Outline

Gradient Descent

Momentum

RMSProp

Adam

Distributed SGD

Gradient Noise

Slide28

What will SGD do?

Start

Slide29

Zig-zagging

http://dsdeepdive.blogspot.com/2016/03/optimizations-of-gradient-descent.html

Slide30

What we would like

Avoid sliding back and forth along high curvature

Go fast along the consistent direction

Slide31

Slide32

Same as vanilla gradient descent

Slide33

https://91b6be3bd2294a24b7b5-da4c182123f5956a3d22aa43eb816232.ssl.cf1.rackcdn.com/contentItem-4807911-33949853-dwh6d2q9i1qw1-or.png

Slide34

SGD with Momentum

Move faster in directions with consistent gradient

Damps oscillating gradients in directions of high curvature

Friction / momentum hyperparameter μ, typically set to {0.50, 0.90, 0.99}

Nesterov’s Accelerated Gradient is a variant
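A minimal sketch of the classical momentum update (grad_fn is a hypothetical gradient function; Nesterov's variant instead evaluates the gradient at the look-ahead point params + mu * v):

```python
import numpy as np

def sgd_momentum(params, grad_fn, lr=0.01, mu=0.9, steps=1000):
    """Accumulate a velocity that speeds up along consistent gradient directions
    and damps oscillations along directions of high curvature."""
    v = np.zeros_like(params)
    for _ in range(steps):
        grad = grad_fn(params)
        v = mu * v - lr * grad   # decayed history of steps plus the current step
        params = params + v
    return params
```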

Slide35

Momentum

Cancels out oscillation

Gathers speed in direction that matters

Slide36

Per parameter learning rate

Gradients of different layers have different magnitudes

Different units have different firing rates

Want different learning rates for different parameters

Infeasible to set all of them by hand

Slide37

Slide38

Adagrad

Gradient update depends on history of magnitude of gradients

Parameters with small / sparse updates have larger learning rates

Square root important for good performance

More tolerance for learning rate

Duchi et al 2011. “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization”
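A sketch of the Adagrad update described above (the epsilon value is a common default, not taken from the slides):

```python
import numpy as np

def adagrad(params, grad_fn, lr=0.01, eps=1e-8, steps=1000):
    """Scale each parameter's step by the square root of its accumulated squared
    gradients, so parameters with small / sparse updates get larger learning rates."""
    cache = np.zeros_like(params)
    for _ in range(steps):
        grad = grad_fn(params)
        cache += grad ** 2                                   # full history of squared gradients
        params = params - lr * grad / (np.sqrt(cache) + eps)
    return params
```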

Slide39

What happens as t increases?

Slide40

Maintain entire history of gradients

Sum of squared gradients is always increasing

Forces learning rate to 0 over time

Hard to compensate for in advance

Adagrad learning rate goes to 0

Slide41

Don’t maintain all history

The accumulator is monotonically increasing because we keep the entire history

Instead, forget gradients far in the past

In practice, downweight previous gradients exponentially

Slide42

Slide43

Slide44

RMSProp

Only cares about recent gradients

Good property because optimization landscape changes

Otherwise like Adagrad

Standard gamma is 0.9

Hinton et al. 2012, http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf
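A sketch of the RMSProp update with gamma = 0.9 as on the slide (the epsilon value is a common default, an assumption here):

```python
import numpy as np

def rmsprop(params, grad_fn, lr=0.001, gamma=0.9, eps=1e-8, steps=1000):
    """Like Adagrad, but the squared-gradient history is an exponential moving
    average, so only recent gradients influence the per-parameter scaling."""
    cache = np.zeros_like(params)
    for _ in range(steps):
        grad = grad_fn(params)
        cache = gamma * cache + (1 - gamma) * grad ** 2      # forget old gradients exponentially
        params = params - lr * grad / (np.sqrt(cache) + eps)
    return params
```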

Slide45

Momentum

RMSProp

Slide46

Slide47

Adam

Essentially, combine RMSProp and Momentum

Includes bias correction term from initializing m and v to

0

Default parameters are surprisingly good

Trick: learning rate decay still helps

Trick: train with Adam first, then switch to SGD
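A sketch of Adam combining the two ideas, with the commonly used defaults (beta1 = 0.9, beta2 = 0.999, eps = 1e-8) and the bias-correction terms mentioned above:

```python
import numpy as np

def adam(params, grad_fn, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    """Momentum on the gradient (m) plus RMSProp-style scaling (v), with bias
    correction to compensate for initializing m and v to 0."""
    m = np.zeros_like(params)
    v = np.zeros_like(params)
    for t in range(1, steps + 1):
        grad = grad_fn(params)
        m = beta1 * m + (1 - beta1) * grad           # first moment (momentum)
        v = beta2 * v + (1 - beta2) * grad ** 2      # second moment (RMSProp)
        m_hat = m / (1 - beta1 ** t)                 # bias correction
        v_hat = v / (1 - beta2 ** t)
        params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params
```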

Slide48

What to use

SGD + momentum and Adam are good first steps

Just use default parameters for Adam

Learning rate decay always good

Slide49

Alec Radford

Slide50

Optimization Outline

Gradient Descent

Momentum

RMSProp

Adam

Distributed SGD

Gradient Noise

Slide51

How to scale beyond 1 GPU:

Model parallelism: partition model across multiple GPUs

Dean et al. 2012. “Large scale distributed deep learning”.

Slide52

Hogwild!

Lock-free update of parameters across multiple threads

Fast for sparse updates

Surprisingly can work for dense updates

Niu et al. 2011. “Hogwild! A lock-free approach to parallelizing stochastic gradient descent”

Slide53

Data Parallelism and Async SGD

Dean et al. 2012. “Large scale distributed deep learning”.

Slide54

Async SGD

Trivial to scale up

Robust to individual worker failures

Equally partition variables across parameter server

Trick: at the start of training, slowly add more workers

Slide55

Stale Gradients

These are gradients for w, not w’

Slide56

Stale Gradients

Each worker has a different copy of parameters

Using old gradients to update new parameters

Staleness grows as more workers are added

Hack: reject gradients that are too stale

Slide57

Chen et al. 2016. “Revisiting Distributed Synchronous SGD”

Sync SGD

Wait for all gradients before update

Slide58

https://github.com/tensorflow/models/tree/master/inception

Slide59

Sync SGD

Equivalent to increasing the batch size N times, but faster

Crank up learning rate

Problem: have to wait for slowest worker

Solution: add extra backup workers, and update when N gradients received

Chen et al. 2016. “Revisiting Distributed Synchronous SGD”
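A toy sketch of one synchronous update, assuming worker_grads lists gradients in order of arrival; the backup-worker trick simply ignores the stragglers:

```python
import numpy as np

def sync_sgd_step(params, worker_grads, lr, n_required):
    """Average the first n_required gradients to arrive and discard the rest,
    which is why extra backup workers hide the slowest machines."""
    avg_grad = np.mean(worker_grads[:n_required], axis=0)  # acts like an N-times-larger batch
    return params - lr * avg_grad
```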

Slide60

https://research.googleblog.com/2016/04/announcing-tensorflow-08-now-with.html

Slide61

Gradient Noise

Add Gaussian noise to each gradient

Can be a savior for exotic models

Neelakantan et al. 2016 “Adding gradient noise improves learning for very deep networks”

Anandkumar and Ge, 2016 “Efficient approaches for escaping higher order saddle points in non-convex optimization”

http://www.offconvex.org/2016/03/22/saddlepoints/
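A sketch of annealed gradient noise in the spirit of Neelakantan et al. 2016; the eta and gamma values are typical settings from that paper and should be treated as assumptions:

```python
import numpy as np

def noisy_gradient(grad, step, eta=0.3, gamma=0.55):
    """Add zero-mean Gaussian noise to the gradient; the noise variance
    decays as eta / (1 + step)^gamma, so early training is noisiest."""
    sigma = np.sqrt(eta / (1.0 + step) ** gamma)
    return grad + np.random.normal(0.0, sigma, size=grad.shape)
```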

Slide62

Regularization

Slide63

Regularization Outline

Early stopping

L1 / L2

Auxiliary classifiers

Penalizing confident output distributions

Dropout

Batch normalization + variants

Slide64

Regularization Outline

Early stopping

L1 / L2

Auxiliary classifiers

Penalizing confident output distributions

Dropout

Batch normalization + variants

Slide65

Stephen Marsland

Slide66

L1 / L2 regularization

Slide67

L1 / L2 regularization

L1 encourages sparsity

L2 discourages large weights

L2 is equivalent to a Gaussian prior on the weights

http://www.efunda.com/math/hyperbolic/images/tanh_plot.gif
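Illustrative penalty terms added to the training loss (lam is a hypothetical regularization strength):

```python
import numpy as np

def l2_penalty(weights, lam):
    """L2: discourages large weights; its gradient contribution is 2 * lam * w."""
    return lam * np.sum(weights ** 2)

def l1_penalty(weights, lam):
    """L1: encourages sparsity; its gradient contribution is lam * sign(w)."""
    return lam * np.sum(np.abs(weights))

# total_loss = data_loss + l2_penalty(W, 1e-4)   # lam = 1e-4 is just an example value
```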

Slide68

Lee et al. 2015. “Deeply Supervised Nets”

Auxiliary Classifiers

Slide69

Regularization Outline

Early stopping

L1 / L2

Auxiliary classifiers

Penalizing confident output distributions

Dropout

Batch normalization + variants

Slide70

Penalizing confident distributions

Do not want overconfident model

Prefer smoother output distribution

Invariant to model parameterization

(1) Train towards a smoother target distribution

(2) Penalize confident (low-entropy) outputs with an entropy term in the loss

Pereyra et al. 2017 “Regularizing neural networks by penalizing confident output distributions”
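Two illustrative options in NumPy: label smoothing toward the uniform distribution (option 1, in the spirit of Szegedy et al. 2015) and an entropy-based confidence penalty (option 2, in the spirit of Pereyra et al. 2017); the eps and beta values are assumptions:

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """(1) Train towards a smoother target: mix the one-hot label with uniform."""
    k = one_hot.shape[-1]
    return (1.0 - eps) * one_hot + eps / k

def confidence_penalty(probs, beta=0.1):
    """(2) Penalize overconfidence: add -beta * H(p) to the loss, so
    low-entropy (peaky) output distributions cost more."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=-1)
    return -beta * entropy
```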

Slide71

Szegedy et al. 2015 “Rethinking the Inception architecture for computer vision”

(Figure: true label, baseline one-hot target, and smoothed “mixed” target distribution.)

Slide72

Szegedy et al. 2015 “Rethinking the Inception architecture for computer vision”

When is the uniform distribution a good choice? When is it bad?

Slide73

http://3.bp.blogspot.com/-RK86-WFbeo0/UbRSIqMDclI/AAAAAAAAAcM/UI6aq-yDEJs/s1600/bernoulli_entropy.png

Slide74

http://3.bp.blogspot.com/-RK86-WFbeo0/UbRSIqMDclI/AAAAAAAAAcM/UI6aq-yDEJs/s1600/bernoulli_entropy.png

Enforce entropy to be above some threshold

Slide75

Regularization Outline

Early stopping

L1 / L2

Auxiliary classifiers

Penalizing confident output distributions

Dropout

Batch normalization + variants

Slide76

Srivastava et al. 2014. “Dropout: a simple way to prevent neural networks from overfitting”

Dropout

Slide77

Dropout

Complex co-adaptations probably do not generalize

Forces hidden units to derive useful features on their own

Sampling from 2^n possible related networks

Srivastava et al. 2014. “Dropout: a simple way to prevent neural networks from overfitting”
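A sketch of the inverted-dropout variant commonly used in practice (the original paper instead rescales at test time):

```python
import numpy as np

def dropout(x, p_drop=0.5, train=True):
    """Randomly zero units during training and rescale by 1/(1 - p_drop),
    so the expected activation matches inference (where this is a no-op)."""
    if not train or p_drop == 0.0:
        return x
    mask = (np.random.rand(*x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask
```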

Slide78

Srivastava et al. 2014. “Dropout: a simple way to prevent neural networks from overfitting”

Slide79

Bayesian interpretation of dropout

Variational inference for Gaussian processes

Monte Carlo integration over GP posterior

http://mlg.eng.cam.ac.uk/yarin/blog_3d801aa532c1ce.html

Slide80

Dropout for RNNs

Can apply dropout to layer-to-layer connections as usual

Recurrent connections should use the same dropout mask across time steps

Or apply dropout to a specific part of the recurrent cell

Zaremba et al. 2014. “Recurrent neural network regularization”

Gal 2015. “A theoretically grounded application of dropout in recurrent neural networks”

Semeniuta et al. 2016. “Recurrent dropout without memory loss”

Slide81

Regularization Outline

Early stopping

L1 / L2

Auxiliary classifiers

Penalizing confident output distributions

Dropout

Batch normalization + variants

Slide82

https://img.rt.com/files/2016.10/original/57f28764c36188fc0b8b45e8.jpg

Slide83

Internal Covariate Shift

Distribution of inputs to a layer is changing during training

Harder to train: smaller learning rate, careful initialization

Easier if distribution of inputs stayed same

How to enforce same distribution?

Ioffe and Szegedy, 2015. “Batch normalization: accelerating deep network training by reducing internal covariate shift”

Slide84

Fighting internal covariate shift

Ioffe and Szegedy, 2015. “Batch normalization: accelerating deep network training by reducing internal covariate shift”

Whitening would be a good first step

Would remove nasty correlations

Problems with whitening?

Slide85

Problems with whitening

Ioffe and Szegedy, 2015. “Batch normalization: accelerating deep network training by reducing internal covariate shift”

Slow (have to do PCA for every layer)

Cannot backprop through whitening

Next best alternative?

Slide86

Normalization

Ioffe and Szegedy, 2015. “Batch normalization: accelerating deep network training by reducing internal covariate shift”

Make mean = 0 and standard deviation = 1

Doesn’t eliminate correlations

Fast and can backprop through it

How to compute the statistics?

Slide87

How to compute the statistics

Ioffe and Szegedy, 2015. “Batch normalization: accelerating deep network training by reducing internal covariate shift”

Going over the entire dataset is too slow

Idea: the batch is an approximation of the dataset

Compute statistics over the batch

Slide88

Ioffe and Szegedy, 2015. “Batch normalization: accelerating deep network training by reducing internal covariate shift”

Over a mini-batch of size m:

Mean: μ_B = (1/m) Σᵢ xᵢ

Variance: σ_B² = (1/m) Σᵢ (xᵢ − μ_B)²

Normalize: x̂ᵢ = (xᵢ − μ_B) / √(σ_B² + ε)

Slide89

Distribution of an activation before normalization

Slide90

Distribution of an activation after normalization

Slide91

Not all distributions should be normalized

A rare feature should not be forced to fire 50% of the time

Let the model decide how the distribution should look

Even undo the normalization if needed
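Putting the pieces together, a sketch of the batch-norm forward pass with the learned scale (gamma) and shift (beta) that let the model reshape, or even undo, the normalization:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then apply a learned scale and shift."""
    mu = x.mean(axis=0)                     # per-feature mean over the batch
    var = x.var(axis=0)                     # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance per feature
    return gamma * x_hat + beta             # model decides how the distribution should look
```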

Slide92

Ioffe and Szegedy, 2015. “Batch normalization: accelerating deep network training by reducing internal covariate shift”

Slide93

Test time batch normalization

Want deterministic inference

Different test batches will give different results

Solution: precompute mean and variance on training set and use for inference

Practically: maintain running average of statistics during training

Ioffe and Szegedy, 2015. “Batch normalization: accelerating deep network training by reducing internal covariate shift”
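A sketch of the running-average bookkeeping and the deterministic inference-time transform (the momentum value here is a typical choice, not from the slides):

```python
import numpy as np

def update_running_stats(batch_mean, batch_var, run_mean, run_var, momentum=0.9):
    """During training, maintain exponential moving averages of the batch statistics."""
    run_mean = momentum * run_mean + (1 - momentum) * batch_mean
    run_var = momentum * run_var + (1 - momentum) * batch_var
    return run_mean, run_var

def batch_norm_inference(x, gamma, beta, run_mean, run_var, eps=1e-5):
    """At test time, normalize with the precomputed statistics for deterministic output."""
    return gamma * (x - run_mean) / np.sqrt(run_var + eps) + beta
```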

Slide94

Advantages

Enables higher learning rate by stabilizing gradients

More resilient to parameter scale

Regularizes model, making dropout unnecessary

Most SOTA CNN models use BN

Ioffe and Szegedy, 2015. “Batch normalization: accelerating deep network training by reducing internal covariate shift”

Slide95

Batch Norm for RNNs?

Naive application doesn’t work

Compute different statistics for different time steps?

Ideally should be able to reuse existing architectures like LSTM

Laurent et al. 2015 “Batch normalized recurrent neural networks”

Slide96

Cooijmans et al. 2016. “Recurrent Batch Normalization”

Slide97

Cooijmans et al. 2016. “Recurrent Batch Normalization”

Slide98

Recurrent Batch Normalization

Maintain independent statistics for the first T steps

For t > T, use the statistics from time T

Have to initialize 𝛾 to ~0.1

Cooijmans et al. 2016. “Recurrent Batch Normalization”

Slide99

Layer Normalization

Ba et al. 2016. “Layer Normalization”

(Diagram: Batch Normalization normalizes each hidden unit across the batch dimension; Layer Normalization normalizes each example across the hidden dimension.)

Slide100

Advantages of LayerNorm

Don’t have to worry about normalizing across time

Don’t have to worry about batch size
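A sketch of layer normalization; unlike the batch-norm sketch earlier, the statistics are computed over the hidden dimension of each example, so they do not depend on the batch size or the time step:

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize each example over its hidden units, then scale and shift."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta
```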

Slide101

Practical tips for regularization

Batch normalization for feedforward structures

Dropout still gives good performance for RNNs

Entropy regularization good for reinforcement learning

Don’t go crazy with regularization

Slide102

Initialization

Slide103

Initialization Outline

Basic initialization

Smarter initialization schemes

Pretraining

Slide104

Initialization Outline

Basic initialization

Smarter initialization schemes

Pretraining

Slide105

Baseline Initialization

Weights cannot be initialized to the same value because all the gradients will be the same

Instead, draw from some distribution

Uniform from [-0.1, 0.1] is a reasonable starting spot

Biases may need special constant initialization
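A sketch of this baseline scheme (the zero bias is just one reasonable constant; some units may want a different one):

```python
import numpy as np

def baseline_init(fan_in, fan_out, scale=0.1):
    """Break symmetry by drawing weights uniformly from [-scale, scale];
    biases start at a constant (zero here)."""
    W = np.random.uniform(-scale, scale, size=(fan_in, fan_out))
    b = np.zeros(fan_out)
    return W, b
```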

Slide106

Initialization Outline

Basic initialization

Smarter initialization schemes

Pretraining

Slide107

He initialization for ReLU networks

Call the variance of the input Var[y_0] and of the last layer’s activations Var[y_L]

What if Var[y_L] >> Var[y_0]? What if Var[y_L] << Var[y_0]?

He et al. 2015. “Delving deep into rectifiers: surpassing human level performance on ImageNet classification”

Slide108

He initialization for ReLU networks

Call the variance of the input Var[y_0] and of the last layer’s activations Var[y_L]

If Var[y_L] >> Var[y_0]: exploding activations → divergence

If Var[y_L] << Var[y_0]: diminishing activations → vanishing gradients

Key idea: initialize so that Var[y_L] = Var[y_0]

He et al. 2015. “Delving deep into rectifiers: surpassing human level performance on ImageNet classification”

Slide109

He initialization

Initialize weights with variance 2/n, where n is the number of inputs to the neuron

He et al. 2015. “Delving deep into rectifiers: surpassing human level performance on ImageNet classification”
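A sketch of He initialization for a fully connected ReLU layer: draw weights from a zero-mean Gaussian with variance 2/n, where n is the fan-in:

```python
import numpy as np

def he_init(fan_in, fan_out):
    """Variance 2 / fan_in keeps the activation variance roughly constant
    across ReLU layers (Var[y_L] ≈ Var[y_0])."""
    std = np.sqrt(2.0 / fan_in)
    return np.random.normal(0.0, std, size=(fan_in, fan_out))
```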

Slide110

Slide111

Identity RNN

Basic RNN with ReLU as nonlinearity (instead of tanh)

Initialize hidden-to-hidden matrix to identity matrix

Le et al. 2015. “A simple way to initialize recurrent neural networks of rectified linear units”
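A sketch of the IRNN initialization (the small Gaussian scale for the input weights is an assumption in the spirit of the paper, not a value quoted on the slides):

```python
import numpy as np

def irnn_init(hidden_size, input_size, input_scale=0.001):
    """ReLU RNN with the hidden-to-hidden matrix initialized to the identity,
    so by default the hidden state is copied forward unchanged."""
    W_hh = np.eye(hidden_size)                                               # identity recurrence
    W_xh = np.random.normal(0.0, input_scale, size=(hidden_size, input_size))
    b_h = np.zeros(hidden_size)
    return W_hh, W_xh, b_h
```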

Slide112

Initialization Outline

Basic initialization

Smarter initialization schemes

Pretraining

Slide113

Pretraining

Initialize with weights from a network trained for another task / dataset

Much faster convergence and better generalization

Can either freeze or finetune the pretrained weights

Slide114

Zeiler and Fergus, 2013. “Visualizing and Understanding Convolutional Networks”

Slide115

Pretraining for CNNs in vision

http://cs231n.github.io/transfer-learning/

New dataset is small, pretrained dataset is similar to the new dataset: freeze the pretrained weights and train a linear classifier on the top-level features

New dataset is large, pretrained dataset is similar to the new dataset: fine-tune all layers (pretraining gives faster convergence and better generalization)

New dataset is small, pretrained dataset is different from the new dataset: freeze the pretrained weights and train a linear classifier on non-top-level features

New dataset is large, pretrained dataset is different from the new dataset: fine-tune all the layers (pretraining improves convergence speed)

Slide116

Razavian et al. 2014. “CNN features off-the-shelf: an astounding baseline for recognition”

Slide117

Pretraining for Seq2Seq

Ramachandran et al. 2016. “Unsupervised pretraining for sequence to sequence learning”

Slide118

Progressive networks

Rusu et al 2016. “Progressive Neural Networks”

Slide119

Key Takeaways

Adam and SGD + momentum address key issues of SGD, and are good baseline optimization methods to use

Batch norm, dropout, and entropy regularization should be used for improved performance

Use smart initialization schemes when possible

Slide120

Questions?

Slide121

Appendix

Slide122

Proof of He initialization


