Neural Network Vectorization
\end{align}
</math>
Here, <math>\bullet</math> denotes element-wise product. For simplicity, our description here will ignore the derivatives with respect to <math>b^{(l)}</math>, though your implementation of backpropagation will have to compute those derivatives too.
Suppose we have already implemented the vectorized forward propagation method, so that the matrix-valued <tt>z2</tt>, <tt>a2</tt>, <tt>z3</tt> and <tt>h</tt> are computed as described above. We can then implement an ''unvectorized'' version of backpropagation as follows:
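An unvectorized backpropagation pass of this kind can be sketched in Python/NumPy as follows. This is a minimal sketch, not the original exercise code: it assumes one hidden layer, sigmoid activations, and a squared-error cost, and the names <tt>W1</tt>, <tt>b1</tt>, <tt>W2</tt>, <tt>b2</tt> are illustrative, not identifiers from the original.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_unvectorized(W1, b1, W2, b2, X, Y):
    """Unvectorized backprop: loop over training examples (columns of X).

    Hypothetical parameter names; assumes sigmoid activations and
    squared-error cost J = sum_i 0.5 * ||h_i - y_i||^2.
    """
    m = X.shape[1]
    gradW1 = np.zeros_like(W1)
    gradb1 = np.zeros_like(b1)
    gradW2 = np.zeros_like(W2)
    gradb2 = np.zeros_like(b2)
    for i in range(m):
        x, y = X[:, i], Y[:, i]
        # Forward pass for a single example.
        z2 = W1 @ x + b1
        a2 = sigmoid(z2)
        z3 = W2 @ a2 + b2
        h = sigmoid(z3)
        # Backward pass: delta3 = -(y - h) .* f'(z3), with f'(z) = f(z)(1 - f(z)).
        delta3 = -(y - h) * h * (1 - h)
        delta2 = (W2.T @ delta3) * a2 * (1 - a2)
        # Accumulate the per-example gradient contributions.
        gradW2 += np.outer(delta3, a2)
        gradb2 += delta3
        gradW1 += np.outer(delta2, x)
        gradb1 += delta2
    return gradW1, gradb1, gradW2, gradb2
```

The inner loop processes one example at a time, which is exactly the pattern the vectorized version later eliminates by operating on all columns of <tt>X</tt> at once.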
== Sparse autoencoder ==
The [http://ufldl/wiki/index.php/Autoencoders_and_Sparsity sparse autoencoder] neural network has an additional sparsity penalty that constrains neurons' average firing rate to be close to some target activation <math>\rho</math>. We take the sparsity penalty into account by computing the following:
:<math>\begin{align}