Neural Network Vectorization

== Backpropagation ==

Recall that, when working with a single training example <math>(x, y)</math>, backpropagation computes the error terms

:<math>\begin{align}
\delta^{(3)} &= - (y - h_{W,b}(x)) \bullet f'(z^{(3)}), \\
\delta^{(2)} &= \left( (W^{(2)})^T \delta^{(3)} \right) \bullet f'(z^{(2)}).
\end{align}</math>
Here, <math>\bullet</math> denotes element-wise product.  For simplicity, our description here will ignore the derivatives with respect to <math>b^{(l)}</math>, though your implementation of backpropagation will have to compute those derivatives too.
Suppose we have already implemented the vectorized forward propagation method, so that the matrix-valued <tt>z2</tt>, <tt>a2</tt>, <tt>z3</tt> and <tt>h</tt> are computed as described above. We can then implement an ''unvectorized'' version of backpropagation as follows:
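A minimal sketch of such a loop, assuming the targets are stored as the columns of a matrix <tt>y</tt>, the inputs as the columns of <tt>a1</tt>, the weight matrices are <tt>W1</tt> and <tt>W2</tt>, and <tt>fprime</tt> applies <math>f'</math> element-wise (these names are illustrative, not fixed by the text above):

<syntaxhighlight>
gradW1 = zeros(size(W1));
gradW2 = zeros(size(W2));
for i=1:m,
  % error terms for the i-th training example (column vectors)
  delta3 = -(y(:,i) - h(:,i)) .* fprime(z3(:,i));
  delta2 = (W2' * delta3) .* fprime(z2(:,i));

  % accumulate this example's contribution to the weight gradients
  gradW2 = gradW2 + delta3 * a2(:,i)';
  gradW1 = gradW1 + delta2 * a1(:,i)';
end;
</syntaxhighlight>

Each iteration processes one training example, mirroring the per-example equations above; the bias gradients are omitted, as noted earlier.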
== Sparse autoencoder ==
The [[Autoencoders_and_Sparsity|sparse autoencoder]] neural network has an additional sparsity penalty that constrains neurons' average firing rate to be close to some target activation <math>\rho</math>. When performing backpropagation on a single training example, we took the sparsity penalty into account by computing the following:
:<math>\begin{align}
\delta^{(2)}_i = \left( \left( \sum_{j=1}^{s_2} W^{(2)}_{ji} \delta^{(3)}_j \right) + \beta \left( - \frac{\rho}{\hat\rho_i} + \frac{1-\rho}{1-\hat\rho_i} \right) \right) f'(z^{(2)}_i)
\end{align}</math>
In the ''unvectorized'' case, this was computed as:
<syntaxhighlight>
% Sparsity Penalty Delta
sparsity_delta = - rho ./ rho_hat + (1 - rho) ./ (1 - rho_hat);
for i=1:m,
   ...
   % the same sparsity term is added in for every training example
   delta2 = (W2' * delta3 + beta * sparsity_delta) .* fprime(z2(:,i));
   ...
end;
</syntaxhighlight>
The code above still had a <tt>for</tt> loop over the training set, and <tt>delta2</tt> was a column vector.
In contrast, recall that in the vectorized case, <tt>delta2</tt> is now a matrix with <math>m</math> columns, one per training example. Notice that the <tt>sparsity_delta</tt> term is the same regardless of which training example we are processing. This suggests that we can vectorize the computation by adding the same <tt>sparsity_delta</tt> vector (scaled by <math>\beta</math>) to every column while constructing the <tt>delta2</tt> matrix, e.g., using <tt>repmat</tt>, as in the sketch below.
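A minimal sketch of this vectorization, assuming <tt>delta3</tt> has already been computed as a matrix with one column per training example, and that <tt>W2</tt>, <tt>beta</tt> and <tt>fprime</tt> denote the second weight matrix, the sparsity penalty weight, and the element-wise derivative of the activation function (illustrative names, not fixed by the text above):

<syntaxhighlight>
% sparsity_delta is a column vector, identical for every training example
sparsity_delta = - rho ./ rho_hat + (1 - rho) ./ (1 - rho_hat);

% replicate sparsity_delta across all m columns and add it in one step
delta2 = (W2' * delta3 + repmat(beta * sparsity_delta, 1, m)) .* fprime(z2);
</syntaxhighlight>

Alternatively, <tt>bsxfun(@plus, W2' * delta3, beta * sparsity_delta)</tt> adds the vector to each column without materializing the replicated matrix.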
{{Vectorized Implementation}}

{{Languages|神经网络向量化|中文}}
