Eigensystems, SVD, PCA - Big Data Seminar, Dedi Gadot, December 14th, 2014



Presentation Transcript

Slide1

Eigensystems, SVD, PCA

Big Data Seminar, Dedi Gadot, December 14th, 2014

Slide2

Eigvals and eigvecs

Slide3

Eigvals + Eigvecs

An eigenvector of a square matrix A is a non-zero vector v that, when multiplied by A, yields a scalar multiple of itself: Av = λv, where λ is the corresponding eigenvalue.

If A is a square, diagonalizable matrix, it can be factored as A = QΛQ⁻¹, where the columns of Q are the eigenvectors of A and Λ is the diagonal matrix of its eigenvalues.
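A minimal sketch (my addition, not from the original slides) that checks the definition Av = λv and the diagonalization A = QΛQ⁻¹ with NumPy; the matrix A below is an arbitrary example:

```python
import numpy as np

# An arbitrary example matrix, chosen only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of Q are the eigenvectors; lam holds the eigenvalues.
lam, Q = np.linalg.eig(A)

# Check the defining property A v = lambda v for each eigenpair.
for i in range(len(lam)):
    v = Q[:, i]
    assert np.allclose(A @ v, lam[i] * v)

# Check the diagonalization A = Q diag(lam) Q^-1.
assert np.allclose(A, Q @ np.diag(lam) @ np.linalg.inv(Q))
```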

Slide4

Eigvecs – Toy Example

For example, in 2D space: the vectors v are eigenvectors of A (you can prove it to yourself). Thus, say we use A to transform a set of data points:

Points that lie on the line from the origin through an eigenvector remain on that line after the transformation

Vectors parallel to the eigenvectors keep their direction

Other vectors' angles get altered
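A small numeric illustration of these points (the matrix A below is made up for this sketch, not taken from the slides):

```python
import numpy as np

# Hypothetical 2x2 matrix used only to illustrate the behaviour above.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
lam, Q = np.linalg.eig(A)

v = Q[:, 0]                     # an eigenvector direction
p_on_axis = 2.5 * v             # a point on the line through that eigenvector
p_other = np.array([1.0, 0.0])  # a generic point

# The on-axis point is only scaled (by the eigenvalue), so it stays on the line.
assert np.allclose(A @ p_on_axis, lam[0] * p_on_axis)

# A generic point changes direction under A.
print(A @ p_other)  # no longer parallel to [1, 0]
```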

Slide5

Eigvecs – Toy Example

Slide6

Geometric Transformations

Slide7

SVD

Slide8

SVD

Singular Value Decomposition

A factorization of a given matrix into its components:

M = UΣV∗

where:

M – an m × n real or complex matrix

U – an m × m unitary matrix, whose columns are called the left singular vectors

V – an n × n unitary matrix, whose columns are called the right singular vectors

Σ – an m × n rectangular diagonal matrix, whose diagonal entries are the singular values
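A minimal sketch (my addition, assuming NumPy) of computing this factorization and rebuilding M from its components:

```python
import numpy as np

# A small example matrix with arbitrary values.
M = np.random.default_rng(0).normal(size=(4, 3))

# U is m x m, Vt is n x n, and s holds the singular values.
U, s, Vt = np.linalg.svd(M, full_matrices=True)

# Rebuild the m x n rectangular diagonal Sigma and check M = U Sigma V*.
Sigma = np.zeros(M.shape)
Sigma[:len(s), :len(s)] = np.diag(s)
assert np.allclose(M, U @ Sigma @ Vt)
```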

Slide9

Applications and Intuition

If M is a real, square matrix, U and V can be interpreted as rotation matrices and Σ as a scaling matrix:

M = UΣV∗

Slide10

Applications and Intuition

The columns of U and the columns of V each form an orthonormal basis

The singular vectors (of a square matrix) can be interpreted as the semi-axes of an ellipsoid in n-dimensional space

SVD can be used to solve homogeneous linear equations Ax = 0: when A is a square matrix, x is the right singular vector corresponding to a singular value of A that is zero

Low-rank matrix approximation: take Σ of M and keep only the r largest singular values, then rebuild the matrix using U and V and you get a low-rank approximation of M (see the sketch below)
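A short sketch of the rank-r approximation described above (an illustration I added, assuming NumPy):

```python
import numpy as np

def low_rank_approx(M: np.ndarray, r: int) -> np.ndarray:
    """Rebuild M keeping only the r largest singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

M = np.random.default_rng(1).normal(size=(6, 4))
M_approx = low_rank_approx(M, r=2)
print(np.linalg.matrix_rank(M_approx))  # 2
```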

Slide11

SVD and Eigenvalues

Given an SVD of M, the following relations hold:

The columns of V are eigenvectors of M∗M

The columns of U are eigenvectors of MM∗

The non-zero elements of Σ are the square roots of the non-zero eigenvalues of M∗M (equivalently, of MM∗)
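A quick numeric check of these relations (my own sketch, assuming NumPy):

```python
import numpy as np

M = np.random.default_rng(2).normal(size=(5, 3))
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Non-zero singular values are the square roots of the eigenvalues of M*M.
evals = np.linalg.eigvalsh(M.T @ M)
assert np.allclose(np.sort(s**2), np.sort(evals))

# Each column of V is an eigenvector of M*M, with eigenvalue s_i^2.
for i in range(len(s)):
    v = Vt[i]
    assert np.allclose(M.T @ M @ v, (s[i] ** 2) * v)
```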

Slide12

PCA

Slide13

PCA

Principal Components Analysis

PCA can be thought of as fitting an n-dimensional ellipsoid to the data, such that each axis of the ellipsoid represents a principal component, i.e. an axis of maximal variance

Slide14

PCA

[Figure: data plotted along axes X1 and X2]

Slide15

PCA – the algorithm

Step A – subtract the mean of each data dimension, so that all data points are centered around the origin

Step B – calculate the covariance matrix of the data
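A minimal NumPy sketch of steps A and B (my own illustration; the data matrix X below is hypothetical, with one sample per row):

```python
import numpy as np

# Hypothetical data matrix: one row per sample, one column per dimension.
X = np.random.default_rng(3).normal(size=(100, 2))

# Step A: subtract the per-dimension mean so the data is centered at the origin.
X_centered = X - X.mean(axis=0)

# Step B: covariance matrix of the centered data (dimensions x dimensions).
cov = np.cov(X_centered, rowvar=False)
```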

Slide16

PCA – the algorithm

Step C – calculate the eigenvectors and the eigenvalues of the covariance matrix

The eigenvectors of the covariance matrix are orthonormal (see below)

The eigenvalues tell us the 'amount of variance' of the data along each specific new dimension/axis (eigenvector)
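Continuing the same sketch for step C (again my own illustration; `eigh` is used because the covariance matrix is symmetric):

```python
import numpy as np

# Centered data and its covariance matrix, as in the step A/B sketch.
X_centered = np.random.default_rng(3).normal(size=(100, 2))
X_centered -= X_centered.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)

# Step C: eigendecomposition of the symmetric covariance matrix.
# eigvals[i] is the variance of the data along the axis eigvecs[:, i].
eigvals, eigvecs = np.linalg.eigh(cov)

# The eigenvectors are orthonormal.
assert np.allclose(eigvecs.T @ eigvecs, np.eye(cov.shape[0]))
```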

Slide17

PCA – the algorithm

Step D – sort the eigenvalues in descending order

Eigvec #1, which corresponds to Eigval #1, is the 1st principal component, i.e. the (new) axis with the highest variance

Step E (optional) – keep only the 'strong' principal components

Step F – project the original data onto the newly created basis (the PCs, the eigenvectors) to get a rotated, translated coordinate system in which each axis is aligned with a direction of highest remaining variance (see the sketch below)
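A sketch of steps D through F (my own illustration, assuming NumPy and the hypothetical data matrix from the earlier sketches):

```python
import numpy as np

# Hypothetical data matrix, one sample per row, centered as in step A.
X = np.random.default_rng(3).normal(size=(100, 2))
X_centered = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(X_centered, rowvar=False))

# Step D: sort eigenvalues (and the matching eigenvectors) in descending order.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Step E (optional): keep only the 'strong' principal components, e.g. the top k.
k = 2
components = eigvecs[:, :k]

# Step F: project the centered data onto the new basis (the principal components).
X_projected = X_centered @ components
```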

Slide18

PCA – the algorithm

For dimensionality reduction – keep only some of the new principal components to represent the data, namely those accounting for the highest amount of variance (and hence for most of the information in the data); see the sketch below
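A final sketch (my own addition) of choosing how many principal components to keep based on the fraction of variance they account for:

```python
import numpy as np

X = np.random.default_rng(4).normal(size=(200, 5))
X_centered = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(X_centered, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Fraction of total variance captured by the first r principal components.
explained = np.cumsum(eigvals) / eigvals.sum()
r = int(np.searchsorted(explained, 0.95) + 1)  # keep ~95% of the variance

# Reduced representation: 200 samples x r dimensions instead of 5.
X_reduced = X_centered @ eigvecs[:, :r]
```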