# Exercise: Sparse Autoencoder

==Download Related Reading==
* [http://nlp.stanford.edu/~socherr/sparseAutoencoder_2011new.pdf sparseae_reading.pdf]
* [http://www.stanford.edu/class/cs294a/cs294a_2011-assignment.pdf sparseae_exercise.pdf]

==Sparse autoencoder implementation==

We will use the L-BFGS algorithm.  This is provided to you in a function called minFunc (code provided by Mark Schmidt) included in the starter code.  (For the purpose of this assignment, you only need to call minFunc with the default parameters. You do not need to know how L-BFGS works.)  We have already provided code in train.m

should work, but feel free to play with different settings of the parameters as well.

'''Implementational tip:''' Once you have your backpropagation implementation correctly computing the derivatives (as verified using gradient checking in Step 3), when you are now using it with L-BFGS to optimize $J_{\rm sparse}(W,b)$, make sure you're not doing gradient checking on every step.  Backpropagation can be used to compute the derivatives of $J_{\rm sparse}(W,b)$ fairly efficiently, and if you were additionally computing the gradient numerically on every step, this would slow down your program significantly.
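The tip above can be sketched in Python. This is not the course's MATLAB starter code: scipy's L-BFGS-B stands in for minFunc, and a toy quadratic stands in for $J_{\rm sparse}(W,b)$; the function name `cost_and_grad` is hypothetical. The point is the workflow: check the gradient once, then hand the analytic gradient to the optimizer.

```python
import numpy as np
from scipy.optimize import check_grad, minimize

def cost_and_grad(theta):
    # Toy stand-in for J_sparse(W, b): a quadratic bowl with a
    # closed-form gradient, returned as (cost, gradient) the way the
    # exercise's cost function does (this name is hypothetical).
    cost = 0.5 * np.sum(theta ** 2)
    grad = theta
    return cost, grad

rng = np.random.default_rng(0)
theta0 = rng.standard_normal(5)

# Gradient-check ONCE, before optimizing -- not on every L-BFGS step.
err = check_grad(lambda t: cost_and_grad(t)[0],
                 lambda t: cost_and_grad(t)[1], theta0)

# Then let L-BFGS use the analytic gradient (jac=True means the
# objective returns the gradient alongside the cost).
res = minimize(cost_and_grad, theta0, jac=True, method='L-BFGS-B',
               options={'maxiter': 400})
```

The finite-difference check (`err`) is run a single time on the initial point; inside the optimization loop only the cheap analytic gradient is evaluated.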
===Step 5: Visualization===

Our implementation took around 5 minutes to run on a fast computer. In case you end up needing to try out multiple implementations or different parameter values, be sure to budget enough time for debugging

[[Category:Exercises]]

{{Sparse_Autoencoder}}
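A minimal Python/NumPy sketch of the visualization step, assuming each row of <code>W1</code> holds one hidden unit's incoming weights for a square image patch. This is only a rough stand-in for the starter code's display routine; the function name <code>tile_weights</code> is hypothetical.

```python
import numpy as np

def tile_weights(W, patch_side, grid):
    """Arrange each row of W as a patch_side x patch_side image patch
    in a rows x cols grid, normalizing each patch to [0, 1] so the
    learned features are visible regardless of their scale."""
    rows, cols = grid
    out = np.zeros((rows * patch_side, cols * patch_side))
    for i in range(rows * cols):
        patch = W[i].reshape(patch_side, patch_side)
        # Per-patch rescale to [0, 1]; the epsilon guards flat patches.
        patch = (patch - patch.min()) / (np.ptp(patch) + 1e-12)
        r, c = divmod(i, cols)
        out[r * patch_side:(r + 1) * patch_side,
            c * patch_side:(c + 1) * patch_side] = patch
    return out

# Example: 25 hidden units on 8x8 patches, tiled into a 5x5 grid.
W1 = np.random.default_rng(0).standard_normal((25, 64))
img = tile_weights(W1, 8, (5, 5))  # 40x40 image, ready for imshow
```

The resulting array can be displayed with any image viewer (e.g. <code>matplotlib.pyplot.imshow(img, cmap='gray')</code>) to inspect what the hidden units have learned.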