Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations
Author : cheryl-pisano | Published Date : 2014-10-06
…stanford.edu, Roger Grosse (rgrosse@cs.stanford.edu), Rajesh Ranganath (rajeshr@cs.stanford.edu), Andrew Y. Ng (ang@cs.stanford.edu). Computer Science Department, Stanford University.
The PPT/PDF document "Convolutional Deep Belief Networks for S..." is the property of its rightful owner. Permission is granted to download and print the materials on this website for personal, non-commercial use only, and to display it on your personal computer provided you do not modify the materials and that you retain all copyright notices contained in the materials. By downloading content from our website, you accept the terms of this agreement.
Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations: Transcript
Stanford, CA 94305, USA. Abstract: There has been much interest in unsupervised learning of …

Excerpts from related presentations:

- Quoc V. Le (Stanford University and Google): purely supervised deep learning was almost abandoned between 2000 and 2006 because of overfitting, slow training, many local minima, and vanishing gradients. In 2006, Hinton et al. proposed RBMs to …
- CS 678 – Deep Learning: early work; why deep learning; stacked auto-encoders; deep belief networks. Deep learning overview: train networks with many layers (vs. shallow nets with just a couple of layers); multiple layers work together to build an improved feature space.
- Oriol Vinyals (UC Berkeley), EE 225D – Audio Signal Processing in Humans and Machines, "Deep Learning Applied to Speech": "This is my biased view about deep learning and, more generally, machine learning past and current research!"
- Aaron Crandall (2015), "What is Deep Learning?": architectures with more mathematical transformations from source to target; sparse representations; stacking-based learning approaches; more focus on handling unlabeled data; etc.
- Adversarial examples (Goodfellow et al., Szegedy et al.): convnets optimize weights to predict "bus", or optimize the input to predict "ostrich"; generative adversarial networks (GANs) …
- Ishay Be'ery and Elad Knoll, model compression (mimicking large networks): "FitNets: Hints for Thin Deep Nets" (A. Romero, 2014) and "Do Deep Nets Really Need to Be Deep?" (Rich Caruana and Lei Jimmy Ba, 2014).
- Yanming Guo (adviser: Dr. Michael S. Lew): deep learning; human vs. computer, 1:4. Why is deep learning better?
- Sabareesh Ganapathy, Manav Garg, and Prasanna Venkatesh Srinivasan, "Convolutional Neural Network": state of the art in image classification; terminology (feature maps, weights); layers (convolution, …).
- Sruthi Moola, "Convolution": convolution is a common image-processing technique that changes the intensity of a pixel to reflect the intensities of the surrounding pixels. A common use of convolution is to create image filters.
- Munif, "CNN": a CNN consists of convolutional layers, subsampling layers, and fully connected layers, and has achieved state-of-the-art results for the recognition of handwritten digits.
- Justin Salamon and Juan Pablo Bello (presented by Dhara Rana): classify environmental sound from an audio clip; other sound-classification methods include (1) dictionary learning and (2) wavelet filter banks.
- Convolutional codes: instead of forming an (n, k) block code by adding r parity digits, an alternative scheme groups the data stream into much smaller blocks of k digits (of order 1, 2, or 3 digits at most) and encodes them into n digits; such a code can be realized with a convolutional structure over the data digits.
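The convolution excerpt above describes replacing each pixel's intensity with a function of its neighbourhood, which is how image filters are built. A minimal NumPy sketch of that idea (the `convolve2d` helper and the 3x3 averaging kernel are illustrative, not taken from any of the decks listed):

```python
import numpy as np

def convolve2d(image, kernel):
    """'Valid' 2D convolution: slide the flipped kernel over the image
    and sum the element-wise products at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

# A 3x3 averaging (blur) kernel: each output pixel becomes the mean
# of its 3x3 neighbourhood -- a simple smoothing image filter.
blur = np.full((3, 3), 1.0 / 9.0)
image = np.arange(25, dtype=float).reshape(5, 5)
blurred = convolve2d(image, blur)  # shape (3, 3)
```

With a symmetric kernel like this one, flipping is a no-op; it matters for edge-detection kernels, where convolution and cross-correlation differ.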
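The CNN excerpt above names the standard layer stack: convolutional layers, subsampling layers, fully connected layers. A toy NumPy forward pass under assumed choices (ReLU nonlinearity, 2x2 max-pooling as the subsampling step, random weights; none of these specifics come from the source decks):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_layer(x, kernels):
    """'Valid' 2D cross-correlation of one input map with each kernel,
    followed by a ReLU nonlinearity (an assumed choice)."""
    kh, kw = kernels.shape[1:]
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((len(kernels), oh, ow))
    for k, w in enumerate(kernels):
        for i in range(oh):
            for j in range(ow):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return np.maximum(out, 0.0)

def subsample(maps, p=2):
    """2x2 max-pooling: the 'subsampling' layer in the excerpt."""
    c, h, w = maps.shape
    trimmed = maps[:, :h - h % p, :w - w % p]
    return trimmed.reshape(c, h // p, p, w // p, p).max(axis=(2, 4))

x = rng.standard_normal((8, 8))            # toy 8x8 input image
kernels = rng.standard_normal((4, 3, 3))   # 4 feature maps, 3x3 kernels
feat = subsample(conv_layer(x, kernels))   # conv -> subsample: (4, 3, 3)
W = rng.standard_normal((10, feat.size))   # fully connected layer, 10 classes
logits = W @ feat.ravel()                  # class scores, shape (10,)
```

Each feature map is produced by one shared kernel slid over the whole input, which is what makes the convolutional layer's parameter count independent of image size.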
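The convolutional-codes excerpt above contrasts block codes with encoders that emit n output digits per small group of k input digits using a shift register. A sketch of a rate-1/2 encoder, assuming constraint length 3 with the common generator polynomials 7 and 5 in octal (these specific parameters are my assumption, not stated in the excerpt):

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3.
    Generators (assumed): G1 = 1 + D + D^2 (octal 7), G2 = 1 + D^2 (octal 5).
    Each input bit yields two output bits computed from the current bit
    and the two previous bits held in the shift register."""
    s1 = s2 = 0  # shift-register state: previous bit, bit before that
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # parity from G1
        out.append(b ^ s2)       # parity from G2
        s1, s2 = b, s1           # shift the register
    return out

encoded = conv_encode([1, 0, 1, 1])  # 4 input bits -> 8 output bits
```

Unlike an (n, k) block code, the output at each step depends on neighbouring input digits through the register, so the code has memory; decoding is typically done with the Viterbi algorithm.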