Linear Decoders
== Sparse Autoencoder Recap ==

In the sparse autoencoder, we had 3 layers of neurons: an input layer, a hidden layer and an output layer. In our previous description of autoencoders (and of neural networks), every neuron in the network used the same activation function.
In these notes, we describe a modified version of the autoencoder in which some of the neurons use a different activation function.
This will result in a model that is sometimes simpler to apply, and can also be more robust to variations in the parameters.

Recall that each neuron (in the output layer) computed the following:

<math>
\begin{align}
z^{(3)} &= W^{(2)} a^{(2)} + b^{(2)} \\
a^{(3)} &= f(z^{(3)})
\end{align}
</math>

where <math>a^{(3)}</math> is the output. In the autoencoder, <math>a^{(3)}</math> is our approximate reconstruction of the input <math>x = a^{(1)}</math>.

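As an illustrative sketch (not part of the original exercise code), a forward pass through such an autoencoder might look like the following Python/NumPy snippet; the parameter names <code>W1</code>, <code>b1</code>, <code>W2</code>, <code>b2</code> are assumptions standing in for <math>W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)}</math>:

<pre>
import numpy as np

def sigmoid(z):
    # Logistic sigmoid, maps any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def autoencoder_forward(x, W1, b1, W2, b2):
    # Hypothetical forward pass of the 3-layer sparse autoencoder,
    # with sigmoid activations in both the hidden and output layers.
    a2 = sigmoid(W1 @ x + b1)   # hidden activations a^{(2)}
    z3 = W2 @ a2 + b2           # output pre-activations z^{(3)}
    a3 = sigmoid(z3)            # reconstruction a^{(3)}, constrained to (0, 1)
    return a2, z3, a3
</pre>
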
Because we used a sigmoid activation function for <math>f(z^{(3)})</math>, we needed to constrain or scale the inputs to be in the range <math>[0,1]</math>, since the sigmoid function outputs numbers in the range <math>[0,1]</math>. While some datasets fit this scaling naturally, it can sometimes be awkward to satisfy. For example, if one applies PCA whitening to the input, the data is
no longer constrained to <math>[0,1]</math> and it's not clear what the best way is to scale the data to ensure it fits into the constrained range.

== Linear Decoder ==

One easy fix for this problem is to set <math>a^{(3)} = z^{(3)}</math>. Formally, this is achieved by having the output units use the identity function <math>f(z) = z</math> as their activation function, so that <math>a^{(3)} = f(z^{(3)}) = z^{(3)}</math>; this is often called the '''linear activation function'''. Note, however, that in the ''hidden'' layer of the network we still use a sigmoid (or tanh) activation function, so that the hidden unit activations are given by, say, <math>a^{(2)} = \sigma(W^{(1)}x + b^{(1)})</math>, where <math>\sigma(z) = 1/(1+\exp(-z))</math> is the sigmoid function,
<math>x</math> is the input, and <math>W^{(1)}</math> and <math>b^{(1)}</math> are the weight and bias terms for the hidden units.
It is only in the ''output'' layer that we use the linear activation function.

An autoencoder in this configuration--with a sigmoid (or tanh) hidden layer and a linear output layer--is called a '''linear decoder'''.
In this model, we have <math>\hat{x} = a^{(3)} = z^{(3)} = W^{(2)}a^{(2)} + b^{(2)}</math>. Because the output <math>\hat{x}</math> is now a linear function of the hidden unit activations, by varying <math>W^{(2)}</math> each output unit <math>a^{(3)}_i</math> can be made to produce values greater than 1 or less than 0 as well. This allows us to train the sparse autoencoder on real-valued inputs without needing to pre-scale every example to a specific range.

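To make the contrast concrete, here is a rough sketch of the linear decoder's forward pass, reusing the <code>sigmoid</code> helper and the assumed parameter names from the snippet above; only the output layer changes:

<pre>
def linear_decoder_forward(x, W1, b1, W2, b2):
    # Hypothetical forward pass of a linear decoder: the hidden layer keeps
    # the sigmoid, but the output layer uses the identity activation.
    a2 = sigmoid(W1 @ x + b1)   # hidden activations a^{(2)}, still in (0, 1)
    xhat = W2 @ a2 + b2         # reconstruction \hat{x} = z^{(3)}, unconstrained
    return a2, xhat
</pre>
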
Since we have changed the activation function of the output units, the gradients of the output units also change. Recall that for each output unit, we had set the error terms as follows:

<math>
\begin{align}
\delta_i^{(3)} = \frac{\partial}{\partial z_i} \; \frac{1}{2} \left\| y - \hat{x} \right\|^2 = - (y_i - \hat{x}_i) \cdot f'(z_i^{(3)})
\end{align}
</math>

where <math>y = x</math> is the desired output (i.e., the input we are trying to reconstruct), <math>\hat{x}</math> is the output of our autoencoder, and <math>f(\cdot)</math> is our activation function. Because in the output layer we now have <math>f(z) = z</math>, we get <math>f'(z) = 1</math> and the expression above simplifies to:

<math>
\begin{align}
\delta_i^{(3)} = - (y_i - \hat{x}_i)
\end{align}
</math>

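As a minimal sketch of this change (continuing the hypothetical variables from the snippets above), the output-layer error term simply drops the <math>f'(z^{(3)})</math> factor when the output is linear:

<pre>
# Output-layer error terms delta^{(3)}, with the target y = x (the input itself).
# General form: delta^{(3)} = -(y - a^{(3)}) * f'(z^{(3)}).
delta3_sigmoid = -(x - a3) * a3 * (1.0 - a3)   # sigmoid output: f'(z) = f(z) * (1 - f(z))
delta3 = -(x - xhat)                           # linear output:  f'(z) = 1
</pre>
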
Of course, when using backpropagation to compute the error terms for the ''hidden'' layer:

<math>
\begin{align}
\delta^{(2)} = \left( (W^{(2)})^T \delta^{(3)} \right) \bullet f'(z^{(2)})
\end{align}
</math>

Because the hidden layer is using a sigmoid (or tanh) activation function <math>f</math>, the <math>f'(\cdot)</math> in the equation above should still be the
derivative of the sigmoid (or tanh) function.

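A corresponding sketch under the same assumed variable names, using the fact that the sigmoid's derivative can be written as <math>a^{(2)} (1 - a^{(2)})</math>; the gradient expressions at the end follow the usual single-example backpropagation formulas (the sparsity penalty and weight decay terms of the full sparse autoencoder objective are omitted here for brevity):

<pre>
# Hidden-layer error terms: backpropagate delta^{(3)} through W2, then multiply
# element-wise by the sigmoid derivative f'(z^{(2)}) = a^{(2)} * (1 - a^{(2)}).
delta2 = (W2.T @ delta3) * a2 * (1.0 - a2)

# Gradients of the (unregularized) reconstruction error for a single example x
# (x, a2, delta2, delta3 are assumed to be 1-D NumPy arrays here).
grad_W2 = np.outer(delta3, a2)
grad_b2 = delta3
grad_W1 = np.outer(delta2, x)
grad_b1 = delta2
</pre>
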
{{Languages|线性解码器|中文}}