Linear Decoders

== Sparse Autoencoder Recap ==
In the sparse autoencoder implementation, we had three layers of neurons: an input layer, a hidden layer and an output layer. Recall that the neurons in the output layer compute the following (written here in vector form):
<math>
\begin{align}
z^{(3)} &= W^{(2)} a^{(2)} + b^{(2)} \\
a^{(3)} &= f(z^{(3)})
\end{align}
</math>

where <math>a^{(3)}</math> is the reconstruction of the input (layer <math>a^{(1)}</math>).

Notice that due to the choice of the sigmoid function for <math>f(z^{(3)})</math>, the reconstruction <math>a^{(3)}</math> always lies in <tt>[0,1]</tt>; since the autoencoder tries to reproduce its input, we therefore need to constrain (or scale) the inputs to lie in the range <tt>[0,1]</tt> as well.

While some datasets, like MNIST, fit this constraint well, it can sometimes be awkward to satisfy. For example, if one applies PCA whitening, the input is no longer constrained to <tt>[0,1]</tt>, and it is not clear what kind of scaling would be appropriate to fit the data back into the constrained range.
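
As a purely illustrative sketch of this forward pass, here is a minimal NumPy snippet; the layer sizes, variable names and random initialization are assumptions made for the example, not part of the exercise code. Note how the sigmoid decoder forces every entry of the reconstruction <math>a^{(3)}</math> into the <tt>[0,1]</tt> range:

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy sizes for the sketch: 64 inputs, 25 hidden units.
n_input, n_hidden = 64, 25
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.01, size=(n_hidden, n_input)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.01, size=(n_input, n_hidden)); b2 = np.zeros(n_input)

x  = rng.normal(size=n_input)   # e.g. a PCA-whitened input, not restricted to [0,1]
a2 = sigmoid(W1 @ x + b1)       # hidden activations a^(2)
z3 = W2 @ a2 + b2
a3 = sigmoid(z3)                # sigmoid decoder: every entry lies in (0, 1)
</pre>
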
== Linear Decoder ==

One easy fix for the aforementioned problem is to use a ''linear decoder'': we simply set <math>a^{(3)} = z^{(3)}</math>.

For a linear decoder, the activation function of the output units is effectively the identity function. Formally, to reconstruct the input from the features using a linear decoder, we simply set <math>\hat{x} = a^{(3)} = z^{(3)} = W^{(2)}a^{(2)} + b^{(2)}</math>, without applying the sigmoid function. The reconstructed output <math>\hat{x}</math> is now a linear function of the hidden activations, which means that by varying <math>W^{(2)}</math> each output unit can be made to produce any value in <math>(-\infty, \infty)</math>, free of the previous <tt>[0,1]</tt> constraint. This allows us to train the sparse autoencoder on real-valued inputs without any additional pre-processing. (Note that the hidden units are '''still sigmoid units''', that is, <math>a^{(2)} = \sigma(W^{(1)}x + b^{(1)})</math>, where <math>x</math> is the input and <math>W^{(1)}</math>, <math>b^{(1)}</math> are the weight and bias terms for the hidden units.)
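
Continuing the illustrative sketch from the previous section (same assumed variable names), the only change is dropping the sigmoid on the output layer; the hidden layer keeps its sigmoid:

<pre>
# Linear decoder: the hidden layer is still sigmoid, the output layer is the identity.
a2 = sigmoid(W1 @ x + b1)   # hidden activations a^(2), unchanged
z3 = W2 @ a2 + b2
a3 = z3                     # identity activation: a^(3) can take any real value
</pre>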

Of course, now that we have changed the activation function of the output units, the gradients of the output units must be changed accordingly. Recall that for each output unit, we set the error as follows:
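
(As a hedged illustration, assuming the squared-error reconstruction cost from the earlier exercise and the hypothetical variable names used in the sketches above: for a sigmoid output the error term carries the derivative factor <math>f'(z^{(3)}) = a^{(3)}(1 - a^{(3)})</math>, whereas for a linear decoder <math>f'(z^{(3)}) = 1</math> and that factor disappears.)

<pre>
import numpy as np

def output_delta(x, z3, linear_decoder):
    """Output-layer error term delta^(3) for the squared-error reconstruction cost.

    Illustrative sketch only, not the exercise's reference implementation.
    """
    if linear_decoder:
        a3 = z3                         # identity activation, so f'(z3) = 1
        return -(x - a3)
    a3 = 1.0 / (1.0 + np.exp(-z3))      # sigmoid activation
    return -(x - a3) * a3 * (1.0 - a3)  # extra factor f'(z3) = a3 * (1 - a3)
</pre>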
