Linear Decoders

:<math>
\begin{align}
z^{(3)} &= W^{(2)} a^{(2)} + b^{(2)} \\
a^{(3)} &= f(z^{(3)})
\end{align}
</math>
where <math>a^{(3)}</math> is the output.  In the autoencoder, <math>a^{(3)}</math> is our approximate reconstruction of the input <math>x = a^{(1)}</math>.
Because we used a sigmoid activation function for <math>f(z^{(3)})</math>, we needed to constrain or scale the inputs to be in the range <math>[0,1]</math>, since that is the only range of values the sigmoid output can take.
One easy fix for this problem is to set <math>a^{(3)} = z^{(3)}</math>.  Formally, this is achieved by having the output nodes use an activation function that's the identity function <math>f(z) = z</math>, so that <math>a^{(3)} = f(z^{(3)}) = z^{(3)}</math>.
This particular activation function <math>f(\cdot)</math> is called the '''linear activation function''' (though perhaps "identity activation function" would have been a better name).  Note however that in the ''hidden'' layer of the network, we still use a sigmoid (or tanh) activation function, so that the hidden unit activations are given by (say) <math>\textstyle a^{(2)} = \sigma(W^{(1)}x + b^{(1)})</math>, where <math>\sigma(\cdot)</math> is the sigmoid function, <math>x</math> is the input, and <math>W^{(1)}</math> and <math>b^{(1)}</math> are the weight and bias terms for the hidden units.  It is only in the ''output'' layer that we use the linear activation function.

An autoencoder in this configuration--with a sigmoid (or tanh) hidden layer and a linear output layer--is called a '''linear decoder'''.
In this model, we have <math>\hat{x} = a^{(3)} = z^{(3)} = W^{(2)}a^{(2)} + b^{(2)}</math>. Because the output <math>\hat{x}</math> is now a linear function of the hidden unit activations, by varying <math>W^{(2)}</math>, each output unit <math>a^{(3)}</math> can be made to produce values greater than 1 or less than 0 as well.  This allows us to train the sparse autoencoder on real-valued inputs without needing to pre-scale every example to a specific range.
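
To make the forward pass concrete, here is a minimal sketch in Python/NumPy of a linear decoder as described above (sigmoid hidden layer, linear output layer). The layer sizes, random initialization, and variable names (<code>W1</code>, <code>b1</code>, <code>W2</code>, <code>b2</code>) are illustrative assumptions for this sketch only, not prescribed values.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes and random parameters (assumptions for this sketch).
n_input, n_hidden = 8, 3
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(n_hidden, n_input))   # hidden-layer weights
b1 = np.zeros(n_hidden)                                # hidden-layer bias
W2 = rng.normal(scale=0.1, size=(n_input, n_hidden))   # output-layer weights
b2 = np.zeros(n_input)                                 # output-layer bias

x = rng.normal(size=n_input)   # real-valued input; no scaling to [0,1] required

# Hidden layer uses a sigmoid: a2 = sigma(W1 x + b1)
z2 = W1 @ x + b1
a2 = sigmoid(z2)

# Output layer uses the identity (linear) activation: xhat = a3 = z3 = W2 a2 + b2
z3 = W2 @ a2 + b2
xhat = z3                      # reconstruction may lie outside [0,1]
</pre>

Note that <code>xhat</code> can take any real value, which is exactly the point of using a linear output layer.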
Since we have changed the activation function of the output units, the gradients of the output units also change. Recall that for each output unit, we had set the error terms as follows:
:<math>
\begin{align}
\delta_i^{(3)} = \frac{\partial}{\partial z_i^{(3)}} \;\; \frac{1}{2} \left\|y - \hat{x}\right\|^2 = - (y_i - \hat{x}_i) \cdot f'(z_i^{(3)})
\end{align}
</math>
where <math>y = x</math> is the desired output, <math>\hat{x}</math> is the output of our autoencoder, and <math>f(\cdot)</math> is our activation function.  Because in the output layer we now have <math>f(z) = z</math>, this implies <math>f'(z) = 1</math>, and thus the above simplifies to:
:<math>
\begin{align}
\delta_i^{(3)} = - (y_i - \hat{x}_i)
\end{align}
</math>
Of course, when using backpropagation to compute the error terms for the hidden layer:
:<math>
\begin{align}
\delta^{(2)} &= \left( (W^{(2)})^T\delta^{(3)}\right) \bullet f'(z^{(2)})
\end{align}
</math>
Because the hidden layer is using a sigmoid (or tanh) activation <math>f</math>, in the equation above <math>f'(\cdot)</math> should still be the derivative of the sigmoid (or tanh) function.
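
As a sketch of the error-term computations above, here is a self-contained Python/NumPy snippet under the same illustrative assumptions as the earlier forward-pass sketch (arbitrary layer sizes, random parameters); it computes <math>\delta^{(3)}</math> for the linear output layer and <math>\delta^{(2)}</math> for the sigmoid hidden layer exactly as in the two equations above.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative setup (assumptions for this sketch, not prescribed values).
n_input, n_hidden = 8, 3
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(n_hidden, n_input))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_input, n_hidden))
b2 = np.zeros(n_input)

x = rng.normal(size=n_input)
y = x                              # the autoencoder's target is the input itself

# Forward pass: sigmoid hidden layer, linear output layer.
z2 = W1 @ x + b1
a2 = sigmoid(z2)
z3 = W2 @ a2 + b2
xhat = z3                          # linear output: a3 = z3

# Output-layer error term: f'(z3) = 1, so delta3 = -(y - xhat).
delta3 = -(y - xhat)

# Hidden-layer error term: delta2 = (W2^T delta3) .* f'(z2),
# where f is the sigmoid, so f'(z2) = a2 * (1 - a2).
delta2 = (W2.T @ delta3) * a2 * (1.0 - a2)
</pre>

With a sigmoid output layer, <code>delta3</code> would instead carry an extra factor of <code>sigmoid(z3) * (1 - sigmoid(z3))</code>; the linear output layer is what makes that factor disappear.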
