Neural Network Vectorization

From Ufldl

== Sparse autoencoder ==

The [http://ufldl.stanford.edu/wiki/index.php/Autoencoders_and_Sparsity sparse autoencoder] neural network has an additional sparsity penalty that constrains neurons' average firing rate to be close to some target activation <math>\rho</math>. We take the sparsity penalty into account by computing the following:
:<math>\begin{align}
\delta^{(2)}_i = \left( \left( \sum_{j=1}^{s_3} W^{(2)}_{ji} \delta^{(3)}_j \right) + \beta \left( - \frac{\rho}{\hat\rho_i} + \frac{1 - \rho}{1 - \hat\rho_i} \right) \right) f'(z^{(2)}_i)
\end{align}</math>
Recall that when we vectorized the gradient computations, <tt>delta2</tt> became a matrix with <math>m</math> columns corresponding to the <math>m</math> training examples. Furthermore, notice that the <tt>sparsity_delta</tt> term is the same regardless of the example we are processing. This suggests that the computation can be vectorized by adding the same value to each column when constructing the <tt>delta2</tt> matrix. Thus, to vectorize the above computations, we can simply add <tt>sparsity_delta</tt> (e.g., using <tt>repmat</tt>) to <tt>delta2</tt>.

Revision as of 18:42, 29 April 2011
