UFLDL Recommended Readings

* [http://www.cs.toronto.edu/~hinton/science.pdf] Hinton, G. E. and Salakhutdinov, R. R. Reducing the dimensionality of data with neural networks. Science 2006. If you want to play with the code, you can also find it at [http://www.cs.toronto.edu/~hinton/MatlabForSciencePaper.html].
* [http://www-etud.iro.umontreal.ca/~larocheh/publications/greedy-deep-nets-nips-06.pdf] Bengio, Y., Lamblin, P., Popovici, P., Larochelle, H. Greedy Layer-Wise Training of Deep Networks. NIPS 2006.
* [http://www.cs.toronto.edu/~larocheh/publications/icml-2008-denoising-autoencoders.pdf] Pascal Vincent, Hugo Larochelle, Yoshua Bengio and Pierre-Antoine Manzagol. Extracting and Composing Robust Features with Denoising Autoencoders. ICML 2008.
** (They have a nice model, but then backwards rationalize it into a probabilistic model. Ignore the backwards-rationalized probabilistic model.) (Someone please clarify exactly which section of the paper this is.)
* Larochelle, Erhan, Courville, Bergstra, Bengio. An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation. ICML 2007. (Someone read this and let us know if this is worth keeping.)
* [http://www.jmlr.org/papers/volume11/erhan10a/erhan10a.pdf] Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. Why Does Unsupervised Pre-training Help Deep Learning? JMLR 2010.
* [http://cs.stanford.edu/~ang/papers/nips09-MeasuringInvariancesDeepNetworks.pdf] Ian J. Goodfellow, Quoc V. Le, Andrew M. Saxe, Honglak Lee and Andrew Y. Ng. Measuring invariances in deep networks. NIPS 2009.
RBMs:
* [http://deeplearning.net/tutorial/rbm.html] Tutorial on RBMs.
** But ignore the Theano code examples.
** (Someone tell us if this should be moved later. Useful for understanding some of the DL literature, but not needed for many of the later papers?)
