UFLDL Recommended Readings

Analyzing deep learning/why does deep learning work:
* [http://www.cs.toronto.edu/~larocheh/publications/deep-nets-icml-07.pdf] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation. ICML 2007.
** (Someone read this and let us know if it is worth keeping.)
* [http://www.jmlr.org/papers/volume11/erhan10a/erhan10a.pdf] Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. Why Does Unsupervised Pre-training Help Deep Learning? JMLR 2010.
* [http://cs.stanford.edu/~ang/papers/nips09-MeasuringInvariancesDeepNetworks.pdf] Ian J. Goodfellow, Quoc V. Le, Andrew M. Saxe, Honglak Lee and Andrew Y. Ng. Measuring invariances in deep networks. NIPS 2009.