Autoencoders and Sparsity

So far, we have described the application of neural networks to supervised learning, in which we have labeled training examples.  Now suppose we have only a set of unlabeled training examples <math>\textstyle \{x^{(1)}, x^{(2)}, x^{(3)}, \ldots\}</math>, where <math>\textstyle x^{(i)} \in \Re^{n}</math>.  An '''autoencoder''' neural network is an unsupervised learning algorithm that applies backpropagation, setting the target values to be equal to the inputs.
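
To make this concrete, here is a minimal NumPy sketch (not part of the original tutorial) of a single-hidden-layer autoencoder trained by backpropagation with the targets set equal to the inputs.  The layer sizes, sigmoid activation, random data, learning rate, and iteration count are all illustrative assumptions.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: 64-dimensional inputs, 25 hidden units, 500 examples.
n, hidden, m = 64, 25, 500
rng = np.random.default_rng(0)
X = rng.random((n, m))                     # unlabeled examples, one per column

# Randomly initialized encoder (W1, b1) and decoder (W2, b2) parameters.
W1 = rng.normal(0, 0.01, (hidden, n)); b1 = np.zeros((hidden, 1))
W2 = rng.normal(0, 0.01, (n, hidden)); b2 = np.zeros((n, 1))

alpha = 0.5                                # learning rate (assumed)
for it in range(1000):
    # Forward pass: the target output is the input itself, y = x.
    a2 = sigmoid(W1 @ X + b1)              # hidden activations (the encoding)
    a3 = sigmoid(W2 @ a2 + b2)             # reconstruction of the input
    # Backpropagate the squared reconstruction error (1/m) * sum ||a3 - x||^2 / 2.
    delta3 = (a3 - X) * a3 * (1 - a3)          # output-layer error term
    delta2 = (W2.T @ delta3) * a2 * (1 - a2)   # hidden-layer error term
    # Gradient-descent step on the averaged gradients.
    W2 -= alpha * (delta3 @ a2.T) / m
    b2 -= alpha * delta3.mean(axis=1, keepdims=True)
    W1 -= alpha * (delta2 @ X.T) / m
    b1 -= alpha * delta2.mean(axis=1, keepdims=True)
</pre>

After training, the hidden activations <code>a2</code> serve as a learned lower-dimensional encoding from which the network tries to reconstruct its own input.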
<math>\textstyle J_{\rm sparse}(W,b)</math>.  Using the derivative checking method, you will be able to verify this for yourself as well.
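
The derivative checking referred to here compares an analytically computed gradient against a finite-difference estimate.  The following NumPy sketch (not from the original tutorial) illustrates the idea on a toy cost function standing in for <math>\textstyle J_{\rm sparse}(W,b)</math>; the perturbation size <code>epsilon = 1e-4</code> is an assumed choice.

<pre>
import numpy as np

def numerical_gradient(J, theta, epsilon=1e-4):
    """Estimate dJ/dtheta by central differences:
    grad_i ~= (J(theta + eps*e_i) - J(theta - eps*e_i)) / (2*eps)."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = epsilon
        grad[i] = (J(theta + e) - J(theta - e)) / (2.0 * epsilon)
    return grad

# Toy example: check a hand-coded gradient against the numerical estimate.
J = lambda theta: np.sum(theta ** 2)        # toy cost standing in for J_sparse(W,b)
analytic_grad = lambda theta: 2.0 * theta   # its analytic gradient
theta = np.random.randn(10)

num = numerical_gradient(J, theta)
ana = analytic_grad(theta)
diff = np.linalg.norm(num - ana) / np.linalg.norm(num + ana)
print(diff)   # should be very small (e.g. < 1e-9) if the gradient code is correct
</pre>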
{{Sparse_Autoencoder}}
{{Languages|自编码算法与稀疏性|中文}}
