Linear Decoders
In the autoencoder exercises so far, you have been using '''sigmoid decoders''' to reconstruct the input from the activations of the hidden units (features). That is, the activation function of the output units has been a sigmoid function. Formally, taking <math>a</math> to be the activations of the hidden units, <math>W^{(2)}, \, b^{(2)}</math> to be the weight and bias terms for the output units, and <math>\hat{x}</math> to be the output of the output units, you have been reconstructing the input as <math>\hat{x} = \sigma(W^{(2)}a + b^{(2)})</math>, where <math>\sigma(\cdot)</math> is the sigmoid function.

However, in practice, sigmoid decoders are rarely used, because of one major limitation: the limited range of the sigmoid function. Since the range of the sigmoid function is the interval <math>[0, 1]</math>, for datasets whose inputs do not naturally fall in this range, additional pre-processing must be done to scale the data into it. While it may appear that such pre-processing would involve simply scaling the data down linearly into the interval <math>[0, 1]</math>, this does not always work, and the extra pre-processing step is an unnecessary complication. (If you are curious about what other kinds of pre-processing might be necessary, look at the pre-processing done for the natural image dataset used in the first sparse autoencoder assignment.)

Hence, in practice, '''linear decoders''' are often used instead. For a linear decoder, the activation function of the output units is simply the identity function. Formally, to reconstruct the input from the features using a linear decoder, we set

:<math>\hat{x} = W^{(2)}a + b^{(2)}</math>

without applying the sigmoid function. The reconstructed output <math>\hat{x}</math> is now a linear function of the activations of the hidden units, so by varying <math>W^{(2)}</math> and <math>b^{(2)}</math>, each output unit can be made to produce any value in <math>(-\infty, \infty)</math>. This allows us to train the sparse autoencoder on real-valued inputs without any additional pre-processing. (Note that the hidden units are '''still sigmoid units''', that is, <math>a = \sigma(W^{(1)}x + b^{(1)})</math>, where <math>x</math> is the input, and <math>W^{(1)}</math> and <math>b^{(1)}</math> are the weight and bias terms for the hidden units.)

Of course, now that we have changed the activation function of the output units, the gradients of the output units must be changed accordingly. Recall that for each output unit, we set the error term as follows:

:<math>
\begin{align}
\delta_i = \frac{\partial}{\partial z_i} \;\; \frac{1}{2} \left\|y - \hat{x}\right\|^2 = - (y_i - \hat{x}_i) \cdot f'(z_i)
\end{align}
</math>

(where <math>y = x</math> is the desired output, <math>\hat{x}</math> is the reconstructed output of our autoencoder, <math>z</math> is the input to the output units, and <math>f</math> is the activation function of the output units). Since the activation function for a linear decoder is the identity function <math>f(z_i) = z_i</math>, its derivative is <math>f'(z_i) = 1</math>, and the above reduces to:

:<math>
\begin{align}
\delta_i = - (y_i - \hat{x}_i)
\end{align}
</math>
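To make the change concrete, here is a minimal NumPy sketch of one forward pass and the resulting error terms for an autoencoder with sigmoid hidden units and a linear decoder. The dimensions, random initialization, and variable names are illustrative assumptions, not part of the exercise code, and the sparsity penalty term of the sparse autoencoder objective is omitted for brevity.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: 64 input units, 25 hidden units.
n_in, n_hid = 64, 25
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.01, size=(n_hid, n_in))  # W^(1), hidden-layer weights
b1 = np.zeros(n_hid)                             # b^(1)
W2 = rng.normal(scale=0.01, size=(n_in, n_hid))  # W^(2), output-layer weights
b2 = np.zeros(n_in)                              # b^(2)

# Real-valued input: no scaling into [0, 1] is required.
x = rng.normal(size=n_in)

# Forward pass. The hidden units are still sigmoid units ...
a = sigmoid(W1 @ x + b1)           # a = sigma(W^(1) x + b^(1))
# ... but the decoder is linear: the output activation is the identity.
x_hat = W2 @ a + b2                # x_hat = W^(2) a + b^(2)

# Output-layer error term. With a linear decoder f(z) = z, so f'(z) = 1
# and the delta reduces to -(y_i - x_hat_i).
y = x                              # autoencoder target is the input itself
delta_out = -(y - x_hat)

# Hidden-layer error term keeps the usual sigmoid derivative a * (1 - a).
# (The sparsity penalty term is omitted here for brevity.)
delta_hid = (W2.T @ delta_out) * a * (1 - a)
</pre>

Had we kept a sigmoid output layer, <code>delta_out</code> would carry the extra factor <code>x_hat * (1 - x_hat)</code>; dropping that factor is the only change the linear decoder introduces into backpropagation.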