Autoencoders and Sparsity
From Ufldl
So far, we have described the application of neural networks to supervised learning, in which we have labeled training examples. Now suppose we have only unlabeled training ...
pixels) so <math>\textstyle n=100</math>, and there are <math>\textstyle s_2=50</math> hidden units in layer <math>\textstyle L_2</math>. Note that
we also have <math>\textstyle y \in \Re^{100}</math>. Since there are only 50 hidden units, the
network is forced to learn a ''compressed'' representation of the input.
I.e., given only the vector of hidden unit activations <math>\textstyle a^{(2)} \in \Re^{50}</math>,
it must try to '''reconstruct''' the 100-pixel input <math>\textstyle x</math>. If the input were completely
is large.

Informally, we will think of a neuron as being "active" (or as "firing") if
its output value is close to 1, or as being "inactive" if its output value is
close to 0. We would like to constrain the neurons to be inactive most of the
time.\footnote{This discussion assumes a sigmoid activation function. If you are
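As a concrete sketch, the forward pass of the 100&ndash;50&ndash;100 autoencoder described above can be written in a few lines of NumPy. The weights below are random placeholders, not trained parameters; in practice they would be learned by minimizing the reconstruction error.

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation, mapping each unit's output into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, s2 = 100, 50                      # input size n and hidden layer size s_2

# Placeholder (untrained) parameters for layers 1->2 and 2->3.
W1 = rng.normal(0.0, 0.01, (s2, n))
b1 = np.zeros(s2)
W2 = rng.normal(0.0, 0.01, (n, s2))
b2 = np.zeros(n)

x = rng.random(n)                    # a single 100-"pixel" input
a2 = sigmoid(W1 @ x + b1)            # hidden activations a^(2) in R^50
h = sigmoid(W2 @ a2 + b2)            # reconstruction of x, in R^100

# Training the autoencoder would minimize this squared reconstruction error,
# forcing the 50 hidden units to encode a compressed version of x.
error = 0.5 * np.sum((h - x) ** 2)
print(a2.shape, h.shape)
```

Because there are only 50 hidden activations in `a2`, the network cannot simply copy its 100-dimensional input through; any structure it exploits to keep `error` small is, informally, the compressed representation.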