Neural Network Vectorization
From Ufldl
== Sparse autoencoder ==
The [[Autoencoders_and_Sparsity|sparse autoencoder]] neural network has an additional sparsity penalty that constrains neurons' average firing rate to be close to some target activation <math>\rho</math>. When performing backpropagation on a single training example, we took the sparsity penalty into account by computing the following:
:<math>\begin{align}
\delta^{(2)}_i = \left( \left( \sum_{j=1}^{s_2} W^{(2)}_{ji} \delta^{(3)}_j \right) + \beta \left( -\frac{\rho}{\hat\rho_i} + \frac{1-\rho}{1-\hat\rho_i} \right) \right) f'(z^{(2)}_i)
\end{align}</math>
In the ''unvectorized'' case, this was computed as:
<syntaxhighlight>
% Sparsity Penalty Delta
sparsity_delta = - rho ./ rho_hat + (1 - rho) ./ (1 - rho_hat);

for i=1:m,
  ...
  delta2 = ((W2'*delta3(:,i)) + beta*sparsity_delta) .* fprime(z2(:,i));
  ...
end;
</syntaxhighlight>
The code above still had a <tt>for</tt> loop over the training set, and <tt>delta2</tt> was a column vector.

In contrast, recall that in the vectorized case, <tt>delta2</tt> is now a matrix with <math>m</math> columns corresponding to the <math>m</math> training examples. Notice that the <tt>sparsity_delta</tt> term is the same regardless of which training example we are processing. This suggests a simple vectorization: when constructing the <tt>delta2</tt> matrix, add the same <tt>sparsity_delta</tt> vector (e.g., using <tt>repmat</tt>) to each of its columns.
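To make the column-wise equivalence concrete, here is a small self-contained sketch in Python/NumPy rather than the course's Octave. All dimensions and values are made up for illustration, and <tt>f</tt> is assumed to be the sigmoid; NumPy broadcasting (or an explicit <tt>np.tile</tt>, the analogue of Octave's <tt>repmat</tt>) adds the same <tt>sparsity_delta</tt> column to every column of <tt>delta2</tt>:

```python
import numpy as np

# Made-up small dimensions: m examples, layer sizes s1 -> s2 -> s3.
m, s2, s3 = 5, 3, 4
rng = np.random.default_rng(0)

W2 = rng.standard_normal((s3, s2))       # weights from layer 2 to layer 3
delta3 = rng.standard_normal((s3, m))    # layer-3 deltas, one column per example
z2 = rng.standard_normal((s2, m))        # layer-2 pre-activations
a2 = 1.0 / (1.0 + np.exp(-z2))           # sigmoid activations
fprime = a2 * (1.0 - a2)                 # f'(z2) for the sigmoid

rho, beta = 0.05, 3.0
rho_hat = a2.mean(axis=1, keepdims=True)               # average activations, shape (s2, 1)
sparsity_delta = -rho / rho_hat + (1 - rho) / (1 - rho_hat)

# Unvectorized: loop over the m training examples, one column at a time.
delta2_loop = np.zeros((s2, m))
for i in range(m):
    delta2_loop[:, i] = (W2.T @ delta3[:, i]
                         + beta * sparsity_delta[:, 0]) * fprime[:, i]

# Vectorized: broadcasting adds sparsity_delta to every column of the matrix.
delta2_vec = (W2.T @ delta3 + beta * sparsity_delta) * fprime

# Equivalent, with the repmat-style explicit tiling spelled out.
delta2_tiled = (W2.T @ delta3 + beta * np.tile(sparsity_delta, (1, m))) * fprime

assert np.allclose(delta2_loop, delta2_vec)
assert np.allclose(delta2_loop, delta2_tiled)
```

The broadcasting form avoids materializing the tiled matrix at all, but both produce the same <tt>delta2</tt> as the per-example loop.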
{{Vectorized Implementation}}
{{Languages|神经网络向量化|中文}}